Android-Based Articulate Storyline Interactive Media in IPAS Subjects: The limited availability of learning media, especially technology-based media, leaves students passive, so they feel bored and less interested in learning, which results in low student learning outcomes. This research aims to develop and test the feasibility, practicality, and effectiveness of Android-based interactive media for the IPAS subject on interaction material in the fifth grade of elementary school. The research was developed using the Borg and Gall model. The data collection techniques used were observation, interviews, document data, and tests.

Introduction

Education is the main factor in creating a quality next generation of the nation, one able to develop skills according to each individual's potential. This is in accordance with the Regulation of the Minister of Education, Culture, Research and Technology of the Republic of Indonesia number 16 of 2022, which states that Process Standards serve as guidelines for carrying out an effective and efficient learning process that develops students' abilities, initiative, skills, and independence to the maximum (Nurrita, 2018; Silalahi, 2020; Saputra & Filahanasari, 2020). Effective, efficient, and independent learning processes also apply to learning in elementary schools (Arwanda et al., 2020; Firmadani, 2020).

Learning at the elementary school level cannot be separated from its subjects. Natural science is one of the mandatory curriculum subjects in elementary school education. As a compulsory subject, natural science is very useful for students' lives because it is closely related to human life, the natural environment, and even the universe (Siswanto & Susanto, 2022; Pratiwi, 2021). In the independent curriculum, natural science is combined with social science to become Natural and Social Sciences (IPAS). According to the Decree of the Head of the Educational Standards, Curriculum, and Assessment Agency number 033/H/KR/2022, Natural and Social Sciences (IPAS) is a science that studies living and inanimate things in the universe and their interactions, as well as human life as individuals and as social creatures interacting with their environment. It is hoped that combining these two subjects can help students better understand and manage the natural and social environment around them simultaneously (Nugraha et al., 2021; Pratiwi, 2021). Even though the new curriculum is already in use, the implementation of IPAS learning in elementary schools is not free from problems. These problems include learning that is still dominated by teacher lectures, which leaves students passive
in learning activities and prone to boredom (Tiyas et al., 2020; Zakirman et al., 2022). Students also consider IPAS difficult because it involves so much memorization. In addition, the learning media available for delivering the material are limited (Nugroho & Arrosyad, 2020; Sari & Harjono, 2021; Widya et al., 2022), and many teachers find it difficult to develop and operate learning media for IPAS, especially technology-based media (Donna et al., 2021; Rohmah & Bukhori, 2020). Learning media makes learning interesting and therefore makes students active (Karo & Rohani, 2018; Tafonao, 2018; Wahid et al., 2020). One way to achieve quality learning is to develop technological innovation, namely learning media, so that learning continues to develop and does not fall behind world education (Bayu & Wibawa, 2021; Devi & Bayu, 2020). Media has many functions, including motivating students to learn, increasing their interest in learning, fostering focus, preventing boredom, and improving students' retention of the material (Fisnani et al., 2020; Munandar et al., 2021).

Observations and interviews conducted with the class V teacher at SDN Kemiri Lor revealed similar problems: existing teaching aids and media were starting to break down, learning resources for students were limited, and little learning media had been developed, especially technology-based media. These limitations reduce students' interest in learning, particularly in subjects such as IPAS, because students have to memorize and understand the material in depth while it is delivered through monotonous lecture methods. As a result, students are less interested, bored, and have difficulty understanding the material. Document data on the learning outcomes of class V students at SDN Kemiri Lor show that outcomes for IPAS content are still low. On the ecosystem material, of the 25 class V students at SDN Kemiri Lor, 16 students (64%) had not yet reached the minimum criterion for achievement of learning objectives (KKTP), while 9 students (36%) had. The KKTP for IPAS is set at 70.

The solution to this problem is to develop technology-based interactive learning media. Interactive media is a combination of several media designed as one whole, for example a combination of images, text, audio, animation, and simulation, which can be used in learning to help turn abstract material or lesson concepts into concrete ones and is equipped with control tools (Hasan, 2021; Deliany, 2019). Through innovative and practical interactive media, students become more interested and can learn anytime and anywhere (Oktafiani et al., 2020; Septiani et al., 2021). Interactive learning media also contains learning materials, questions, and more interesting exercises that motivate students to learn and improve both the quality of learning and student learning outcomes (Jannah et al., 2020; Sari & Harjono, 2021).
Learning media is a tool for delivering teaching material efficiently (Nurdyansyah, 2019). Innovative interactive learning media can be developed for Android. Android is a Linux-based operating system used for mobile devices such as smartphones, tablet computers, and PDAs (Prabowo et al., 2020). Using Android to develop learning media is efficient and makes use of the facilities and infrastructure already available at schools in the form of tablets, while gadgets are already available in each student's home (Aranta et al., 2021; Jubaerudin et al., 2021).

This Android-based interactive learning media can be developed with the help of Articulate Storyline software. Interactive learning media built with Articulate Storyline has many advantages, so it is appropriate to use it as a learning medium to increase students' interactivity and understanding (Deliany et al., 2019; Ariani, 2020). Articulate Storyline provides easy design features similar to Microsoft PowerPoint, making it accessible for beginners who want to develop learning media (Amiroh, 2020; Lestari, 2021). In addition, interactive media created with Articulate Storyline is easy to operate during learning and increases students' understanding, because the material can be presented through writing, sound, and animation so that learning becomes communicative and interactive (Sari et al., 2023; Nugroho & Arrosyad, 2020).

Interactive media designed with Android-based Articulate Storyline software uses a background with an attractive theme and colors suited to the material and the characteristics of the students (Wulandari et al., 2019; Yamin, 2020). Videos, audio, animations, stories or text, and interesting pictures are then added so that students do not feel bored (Rahayu et al., 2023; Widya et al., 2022; Setyaningsih et al., 2020). Digital devices attract a high level of interest in learning, so this media offers learning variation that is adaptive to the characteristics of generation Z students in the digital era (Pratama & Sakti, 2020). The media is also designed so that it can be used individually or in groups, anywhere and at any time (Marpelin et al., 2023; Septiani et al., 2021). A further advantage of this interactive media is that the quizzes and evaluation questions are presented as multiple choice with pictures and as drag-and-drop items, which motivates students to work on them while saving teachers time and paper (Lestarani et al., 2023; Oktafiani et al., 2020; Septiani et al., 2020). The results of evaluations or quizzes include feedback, making the media more interesting and interactive for students.

The use of interactive media supports IPAS learning: students understand material that is far from their own environment better by viewing the videos or images presented and by actively operating the media (Fitriyah et al., 2023; Rahayu et al., 2023). This is in accordance with the essence of natural science as product, process, attitude, and technology; in science learning students cannot merely acquire knowledge (products) but must be actively involved. Based on this description, interactive media built with Android-based Articulate Storyline software is a learning medium that can increase students' motivation to be more active in understanding the lesson in a more interesting and efficient way.
Research conducted by Jubaerudin et al. (2021), Kartika Sari et al. (2023), and Widya et al. (2022) states that interactive media built with Android-based Articulate Storyline software is suitable for delivering material to elementary school students because of its attractive colors and ease of operation. The media developed in those studies also made it easier for students to understand the material and attracted their interest, so that their learning outcomes improved. This is in line with the research by Marpelin (2023) entitled "Interactive Multimedia Based on Project Based Learning Model Using Articulate Storyline 3 Application on the Topic of the Human Digestive System," which states that media built with Articulate Storyline effectively improves the learning outcomes of fifth-grade students. Previous research has also stated that interactive media created with Articulate Storyline is rated very good, so it is effective and suitable for use at the elementary school level (Arwanda et al., 2020; Nugroho & Arrosyad, 2020; Sari & Harjono, 2021). Learning with interactive media can also increase students' critical thinking skills (Arwanda et al., 2020; Fitriyah et al., 2023; Heliawati et al., 2022), and its use can motivate students and improve their learning outcomes (Kumullah & Tayibu, 2021; Mumlahana et al., 2020; Sari & Harjono, 2021).

Based on this, it can be concluded that interactive media created with Articulate Storyline can be used to motivate students and improve their learning outcomes in IPAS. However, there has been no study of Android-based Articulate Storyline interactive media for the material on interactions in ecosystems (IDE IPAS V). The novelty of this media lies in its more interactive features equipped with audio, video, images, and animation packaged as an attractive application. The media also contains buttons equipped with audio commands, making it easier for students to operate the application. Because it is Android-based, it can be used anywhere and at any time, making it easier for students to learn. A further novelty is the presence of explanatory videos and quizzes for each piece of material. Based on this, this research aims to develop interactive media with Android-based Articulate Storyline software for the material on interactions in ecosystems (IDE IPAS V) that can increase students' motivation and learning outcomes in IPAS. The development was also carried out to test the feasibility, practicality, and effectiveness of the product.

Method

The type of research used is research and development (R&D), which produces interactive media with an Android-based Articulate Storyline for the material on interactions in ecosystems in class V of SD N Kemiri Lor, Purworejo Regency. In developing this interactive media, the researcher followed the procedure developed by Sugiyono (2019), which consists of 10 steps, but limited it to step 8, trial use, due to time and cost constraints. The steps in this research are: potential and problems; data collection; product design; design validation; design revision and product development; product testing; product revision; and implementation/trial use (Figure 1).
Figure 1. Borg and Gall Development Model

The initial stage carried out by the researchers was to identify the potential and problems in the school by conducting observations, interviews, and documentation of the learning outcomes of fifth-grade students at SD N Kemiri Lor. The next stage was collecting data and information to plan the product to be developed, using a student and teacher needs questionnaire. After analyzing the needs questionnaire, the researcher designed the product, starting with the design, materials, and language to be used. The product design was aligned with the IPAS learning outcomes (CP) for phase C, namely that students investigate how the interdependent relationship between biotic and abiotic components can influence the stability of an ecosystem in the surrounding environment. Once the product had been designed, it was validated by experts competent in their fields, namely a media expert and a material expert, who filled out validation sheets prepared by the researcher using a Likert scale.

The next stage was design revision. The product assessed by the expert validators was revised based on the input they provided. After revision, the product was tested on a small scale with 12 class VI students, selected with a purposive sampling technique based on different levels of cognitive ability. At this product trial stage, learning was carried out using the Android-based interactive media on tablets at the school. After the lesson, the teacher and students were asked to fill out a response questionnaire on the use of the Android-based interactive media. The results of the response questionnaire were then analyzed, and any input was used to revise the product that had been tested. The final trial stage tested the product on a larger scale: the researchers conducted a trial with all 21 students in class V of SD N Kemiri Lor to determine the effectiveness of the product based on student learning outcomes.

The data used in this research are primary data, that is, data obtained directly while conducting the research, in this case both qualitative and quantitative. The qualitative data were obtained from observations, questionnaires, and teacher interviews conducted at SD N Kemiri Lor. The quantitative data were obtained from the learning results of fifth-grade students at SD N Kemiri Lor on the interaction material in the IPAS ecosystem topic, as well as the results of the pretest and posttest assessments.
The research design used is a pre-experimental design with a one-group pretest-posttest model, namely a pretest before treatment and a posttest after treatment. The aim is to determine the results of the treatment more precisely, because conditions before and after treatment with the Android-based interactive media can be compared (Sugiyono, 2019). Data were collected with test and non-test techniques. The test technique consisted of 30 multiple-choice questions, while the non-test techniques consisted of observation, questionnaires, interviews, and document data. To determine the feasibility of the product, the assessments of the material and media expert validators were analyzed using a Likert scale. To determine the practicality of the product, student and teacher response questionnaires using the Guttman scale were analyzed after the product was developed. To determine the effectiveness of the product, a gain test was carried out based on students' pretest and posttest scores in the large-scale trial.

Potential and Problems

Based on the pre-research results, several problems were found: existing teaching aids and media were starting to become damaged, learning resources for students were limited, and little learning media had been developed, especially technology-based media. These limitations reduce students' interest in learning, particularly in subjects such as IPAS, because students have to memorize and understand the material in depth while it is delivered through monotonous lecture methods, leaving them less interested, bored, and struggling to understand the material. In addition, the learning outcome data for IPAS content are still low and do not meet the minimum criteria for achievement of learning objectives (KKTP). The KKTP value set is 70, but of the 25 students, only 9 (36%) reached the KKTP, while the remaining 16 (64%) did not.

Initial Data Collection

Data were collected using a questionnaire on teachers' and students' needs for the desired learning tools. The results show that learning resources such as teaching aids, media, and reference books are minimal and many are damaged. This lack of varied learning resources makes students less enthusiastic about participating in learning and makes it difficult for them to understand the material presented. The use of technology-based media is also still very limited, which further reduces students' interest in learning, especially in subjects such as IPAS that require memorization and deep understanding but are delivered through monotonous lectures. Teachers therefore need additional learning media to increase students' knowledge of the material on interactions in ecosystems. Interactive learning media needs to be developed with carefully chosen designs, images, animations, and video inserts to increase students' interest in learning.
Teachers need interactive media that can make use of the tablets available at school, namely Android-based interactive media. The media is adapted to the students' abilities and language and is equipped with videos and images to increase understanding of the material presented. The learning media is made interactive so that students play an active role in learning and are more motivated to learn. Students also agreed and were happy to use Android-based interactive media containing interactive material, quizzes, and evaluation questions.

Product Design

The interactive media design is tailored to the learning outcomes (CP) and learning objectives (TP) to be achieved. The interactive media is developed with an attractive design consisting of text, audio, video, images, and animation suited to the characteristics of the students. The interactive media design was created in Canva and then assembled with Articulate Storyline software into an application that can be used on Android, with text, audio, video, images, and animation added. It is also equipped with an interactive quiz, and the evaluation questions are made interactive with two types of items: multiple-choice and drag-and-drop questions. The final result is an application that can be used on Android, both smartphones and tablets, and does not require an internet connection to operate. The product design steps carried out by the researchers were: preparing the materials, format, and layout for the customized design; creating the product design; and building the interactive media with Articulate Storyline software so that it becomes an application.

Feasibility of Android-Based Interactive Media: Product Design Validation

At this stage, the researcher had the product validated by competent expert validators: a media expert, a lecturer in learning media in the elementary school teacher education study program, and a material expert, a lecturer in IPAS in the same study program, to test the feasibility of the product. After the assessment, the expert validators provided input so that the researchers could revise the product. Table 1 shows that the validation score given by the media expert was 91.3%, in the very appropriate category, and the material validation score given by the material expert was 91.2%. The average validation value obtained was 91.25%. The product is therefore categorized as valid, because a value above 70% falls within the feasible criteria (Arikunto, 2018). The Android-based interactive media is declared valid in terms of content or material, appearance or media, and language, and is ready to be tested. This is in line with the research results of Jubaerudin (2021) and Rahayu (2023), which obtained scores of more than 70%, so that the media is feasible and can be used as an additional, alternative teaching material in the IPAS learning process in elementary schools.
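To make the feasibility calculation concrete, the following minimal Python sketch shows how a Likert-based validity percentage of this kind is typically computed and combined across validators. The helper name is hypothetical; only the published percentages (91.3% and 91.2%) and the 70% threshold from the text are used, since the raw item scores are not reported in the paper.

```python
# Hedged sketch: Likert validity percentage and the average of the two expert validators.
def validation_percentage(total_score: float, max_score: float) -> float:
    """Validity expressed as a percentage of the maximum possible Likert score."""
    return total_score / max_score * 100

# The paper reports 91.3% (media expert) and 91.2% (material expert);
# the raw item scores are not published, so only the percentages are combined here.
media_pct = 91.3
material_pct = 91.2
average_pct = (media_pct + material_pct) / 2

print(f"Average validity: {average_pct:.2f}%")            # 91.25%
print("feasible" if average_pct > 70 else "not feasible")  # 70% threshold (Arikunto, 2018)
```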
Design Revision

The researchers revised the design according to the suggestions provided by the media and material experts. The media expert suggested changing the title letters on the cover to all capital letters. The material expert suggested adding slides to make the media more interactive and making the characters displayed consistent.

In the small-scale trial, 12 class VI students were selected heterogeneously based on ability level: 4 students with low scores, 4 with medium scores, and 4 with high scores. After the lesson, the students and the teacher were given a response sheet containing 15 questions on a Guttman scale, to be filled in based on their experience of using the product. The questionnaire has the following assessment criteria: (1) very positive if the value is 76%-100%; (2) positive if the value is 51%-75%; (3) negative if the value is 26%-50%; and (4) very negative if the value is 0%-25%. The percentage of response questionnaire answers is calculated as in Equation 1, where NP is the response percentage, R is the total score obtained, and SM is the maximum possible score:

NP = (R / SM) x 100% (1)

In testing the practicality of the Android-based interactive media, teacher and student response questionnaires were distributed covering three technical qualities: appearance, presentation of material content, and language. These three aspects are described by six indicators: media display, instructions for use, presentation of material, presentation of practice questions, use of language in the media, and display of images, text, and colors. Table 2 shows that the responses of teachers and students to the Android-based interactive media were very positive, with scores above 85%, so the media can be concluded to be practical for use in learning activities. Because all 15 Guttman-scale questions received a score of 1, no product revision was made after the small-scale trial. Table 3 shows that in the large-scale trial the responses of teachers and students were also very positive, with scores above 85%; based on the 15 Guttman-scale questions, almost all received a score of 1, showing that the Android-based interactive media received a very positive and practical response. This is in line with previous research, in which teacher and student response questionnaires for developed Android-based interactive media scored above 85%, indicating a very positive response and practical use in learning activities (Lestarani et al., 2023; Rahayu et al., 2023).

Based on Table 4, the average student learning outcomes increased by 31.75 in the small-scale product trial, while in the large-scale trial the increase was 19.67. The data show a difference in student learning outcomes on the IPAS material on interactions in ecosystems in class V of SD N Kemiri Lor before and after using the Android-based interactive media, with an increase in the average learning outcomes after its use. To determine the criteria for the increase from pretest to posttest, an N-gain analysis was carried out by comparing the gain with the difference between the ideal maximum score (SMI) and the pretest score.
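As an illustration of the two calculations just described, the sketch below assumes the standard formulas: the response percentage NP = R / SM x 100% (Equation 1) and the normalized gain g = (posttest - pretest) / (SMI - pretest). The function names, the category cut-offs (the common 0.3/0.7 convention), and the example scores are illustrative assumptions, not values taken from the study's raw data.

```python
# Hedged sketch of the response-percentage and N-gain calculations described above.
def response_percentage(score_obtained: float, max_score: float) -> float:
    """NP = R / SM * 100 (Guttman questionnaire response percentage, Equation 1)."""
    return score_obtained / max_score * 100

def n_gain(pretest: float, posttest: float, ideal_max: float = 100.0) -> float:
    """Normalized gain: raw gain divided by the room left below the ideal maximum score (SMI)."""
    return (posttest - pretest) / (ideal_max - pretest)

def gain_category(g: float) -> str:
    """Common criteria: g >= 0.7 high, 0.3 <= g < 0.7 medium, otherwise low."""
    return "high" if g >= 0.7 else "medium" if g >= 0.3 else "low"

# Illustrative values only (not the study's raw data).
print(response_percentage(14, 15))        # e.g. 14 of 15 Guttman items scored 1
g = n_gain(pretest=52.0, posttest=83.5)   # hypothetical class averages
print(round(g, 2), gain_category(g))      # 0.66 medium
```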
Effectiveness of the Trial Use of Android-Based Interactive Media Products

The effectiveness test began with a normality test; after the normality test, a homogeneity check was carried out before the t-test, followed by the N-gain test. The results of the t-test, on both the small and the large scale, gave a sig. value of 0.00 < 0.05, which means that there is a significant difference between learning outcomes before and after the treatment. From Tables 8 and 9, the difference between the average pretest and posttest scores in the large-scale trial is 19.67. This shows that the scores of class V students at SD N Kemiri Lor increased, with a gain of 0.59, which falls within the medium criteria. This increase shows that using the Android-based interactive media on the IPAS material on interactions in ecosystems succeeded in improving student learning outcomes. This is in line with previous research showing that Android-based interactive media built with Articulate Storyline software can motivate students and improve their learning outcomes with an N-gain in the medium category (Septiani et al., 2021), while research by Heliawati (2022) obtained an N-gain in the high category. Based on this, it can be concluded that Android-based interactive media is suitable and effective for use in learning because it can improve student learning outcomes.

Conclusion

Based on the results of the research, it can be concluded that Android-based interactive media can motivate students and improve their learning outcomes in IPAS, including the material on interactions in ecosystems. This is shown by the expert product validation, with an average assessment of 91.25%. The pretest and posttest results show that the Android-based interactive media is effective in improving student learning outcomes, as seen from the t-test and N-gain tests. The t-test gives Sig. < 0.05, meaning there is a significant difference between learning outcomes before and after the treatment. The N-gain test shows that scores increased by 19.67 with an N-gain of 0.73, which is classified as high; on the small scale, scores increased by 31.75 with an N-gain of 0.59, which is classified as medium. The response questionnaires distributed to teachers and students produced very positive responses. From these results, it can be concluded that Android-based interactive media is effective for improving IPAS learning outcomes and is feasible and practical to use in the learning of fifth-grade elementary school students.

Figure 14. Quiz
Figure 15. Multiple-choice evaluation questions
Figure 22. Large-scale trial. A large-scale trial using the Android-based interactive media in IPAS learning on the material on interactions in ecosystems was conducted to determine the effectiveness of the product based on student learning outcomes. The design used is a pre-experimental design with a one-group pretest-posttest model, namely a pretest before the treatment and a posttest afterwards.

Table 1. Android-Based Interactive Media Expert Validator Assessment Results
Table 2. Results of Teacher and Student Responses to Small-Scale Android-Based Interactive Media
Table 3. Results of Teacher and Student Responses to Large-Scale Android-Based Interactive Media
Table 4. Pretest and Posttest Results of Students in the Small-Scale Product Use Trial
Table 5. Pretest and Posttest Results of Students in the Large-Scale Product Use Trial
Table 6. Normality Test in Small Groups
Table 7. Normality Test in Large Groups

The normality test used was of the Kolmogorov-Smirnov type, with 12 respondents on the small scale and 21 on the large scale. Data are said to be normally distributed if Sig. > 0.05. The normality test result obtained on the small scale (Table 6) is 0.162, while on the large scale (Table 7) it is 0.054, indicating that the learning outcome data are normally distributed.

Table 8. Small-Scale N-gain Test Results
Table 9. Large-Scale N-gain Test Results
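For readers who want to reproduce this kind of effectiveness analysis, the sketch below runs a Kolmogorov-Smirnov-style normality check and a paired t-test with SciPy. The score arrays are hypothetical placeholders, since the paper reports only summary statistics (Sig. values, means, and N-gain).

```python
# Hedged sketch of the normality test and paired t-test described above, with SciPy.
import numpy as np
from scipy import stats

# Hypothetical pretest/posttest scores for 12 students (placeholders, not the study's data).
pretest = np.array([50, 55, 60, 45, 65, 70, 52, 58, 62, 48, 66, 54], dtype=float)
posttest = np.array([80, 85, 88, 75, 90, 92, 82, 86, 84, 78, 91, 83], dtype=float)

# Normality check against a normal distribution fitted to the sample
# (an approximation; packages such as SPSS apply the Lilliefors correction here).
for name, scores in [("pretest", pretest), ("posttest", posttest)]:
    z = (scores - scores.mean()) / scores.std(ddof=1)
    stat, p = stats.kstest(z, "norm")
    print(f"{name}: KS p = {p:.3f} (treated as normal if p > 0.05)")

# Paired t-test: a significant pretest-posttest difference if p < 0.05.
t_stat, t_p = stats.ttest_rel(posttest, pretest)
print(f"paired t = {t_stat:.2f}, p = {t_p:.4f}")
```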
Original Sin, Prophets, Witches, Communists, Preschool Sex Abuse, and Climate Change: Many theologians, including Pope Francis, assert that the increase of carbon dioxide in the atmosphere, caused by burning fossil fuel, endangers the planet, and urge us to stop. This article notes that fossil fuel has helped civilization advance worldwide, has alleviated abject poverty for billions, and that there is no substitute for it at this time. Thus there is a strong moral component on this side of the issue as well, a moral component which many theologians, politicians, commentators, and scientists neglect. The bulk of this paper concerns assertions of damage from climate change, and then checks them against available measurements in a way which anyone can do. While increasing CO2 in the atmosphere may be a concern, it is hardly a planetary emergency. It is very likely treated as such by some because of a new set of modern-day 'prophets' who claim that they have access to knowledge that ordinary people cannot have. The article compares climate 'prophets' to other such 'prophets' in American history.

I. INTRODUCTION

The climate change controversy has a scientific, religious, and historical side. This article attempts to explore all three aspects, with emphasis on the scientific side. It starts with a brief discussion of original sin and biblical prophets. It continues with a discussion of several false 'prophets' in American history and compares them to biblical prophets, in that they claim knowledge ordinary people cannot have; however, unlike their biblical predecessors, these modern 'prophets' have no direct pipeline to God. This article asserts that those calling for a nearly immediate end to the use of fossil fuel fall into this category.

Regarding the science, this publication is the attempt of an experienced scientist, although not a climate scientist, to navigate through piles of universally available data so as to evaluate the claims that believers in, and alarmists about, human-induced climate change are making. This paper lists some of the claims the believers and alarmists have been making and uses an Internet search to find the appropriate data to check them. The author used Google, and more often Google Images, to search for a graph for this or that. This is something anyone can do. While this is not characteristic of the way scientific papers do referencing, there is an overwhelming advantage to it for our purposes here: anyone can do this anywhere, anytime. One does not have to go to, say, the Library of Congress and search out a pile of dusty, obscure journals. However, there is one word of caution. A Google search is not constant; occasionally data change from search to search. On several occasions in the course of preparing this paper, I had to eliminate a graph that seemed particularly convincing and important because, a day or so later, I could not find it again on Google Images. Generally I have listed the link along with any graph presented and, to the extent possible, have used links of well-known organizations: NOAA, NASA, the Institute of Energy Research, various government statistics, etc. The graphs presented here did seem to occur regularly in the search, and generally there were many similar graphs to choose from. I have been as careful as possible and trust that no substantial distortion has occurred. It is important to note that all such a search can do is give information up to the present; it cannot predict the future.
There may be many theories that predict disaster if we follow our present course; they may be correct, or they may not be. Such a Google search has nothing to say about these predictions of the future. However, it does give an accurate picture of the past and present, and often there are obvious extrapolations of present data which give important indications as well. In a nutshell, this simple search shows that the claims of the believers and alarmists are for the most part wildly exaggerated. To this author, it is rather amazing that the mainstream media has not performed this simple check. Any competent science reporter for any major media outlet could do this and almost certainly come up with these same results. Instead, almost all of the major media outlets have simply swallowed the spoon-fed claims of the alarmists hook, line, and sinker. It is very likely that this will damage the media's reputation for decades to come.

II. ORIGINAL SIN AND PROPHETS

One does not have to read very far into the Bible to see that God was often quite dissatisfied with his creation and was more than willing to punish. He had hardly finished with creation when he told Adam and Eve in the Garden of Eden, "But from the tree of knowledge of good and evil you shall not eat …" (Genesis 2:17). As we know, the serpent tempted Eve to eat the fruit, and this is often regarded as original sin. As punishment, God banished Adam and Eve from the garden and forced the serpent to crawl only on its belly. Not too many generations had passed before God again grew dissatisfied: "Now the earth was corrupt in the sight of God, and the earth was filled with violence" (Genesis 6:11). God resolved to destroy the earth. However, at this point something new arose: God decided to take a particular person, a person we will call a prophet, into his confidence, warn him of the disaster, and give him instructions on how to save himself and his family. Then God said to Noah, "The end of all flesh has come before Me, for the earth is filled with violence…. I am about to destroy them with the earth" (Genesis 6:13). As we know, He told Noah to build an ark. While this author is hardly a biblical scholar, the concept of human sin, and of prophets who communicated directly with God, is very much a recurring theme of the Bible. But are there prophets in the modern era who use their specialized training to see sins that nobody else can see? Our theme is that this concept is very much alive in the modern era, and that generally these are false prophets with the capacity to do tremendous harm.

III. WITCHES

One of the strangest incidents in American history is the Salem witchcraft trials (Starkey, 1949). Marion Starkey's "The Devil in Massachusetts" (1949) is a very authoritative account. The contagion began in the house of Reverend Samuel Parris, where his daughter Betty, 9, and her cousin Abigail, 11, lived. Also living there was a slave woman, Tituba, whom the family had acquired in Barbados. Tituba regaled the girls with stories of voodoo and witchcraft. In January 1692, the girls began to have frequent fits of hysteria, and soon other town girls began to join in. Conferring with other clergy, Reverend Parris concluded that the devil and witches haunted the girls. While Ms. Starkey wrote a decade or so before Elvis or the Beatles, she likely would have compared the Salem girls to those at one of these more contemporary concerts. In any case, encouraged by Reverend Parris, the town became convinced that witches haunted the girls.
But who were the witches? The only way to find out was to have the girls point them out. It took some convincing, but finally the girls pointed out Tituba and two other lower-class women. But how do you prove witchcraft? Surely there was no physical evidence. The examinations and trials relied on what was called spectral evidence. It is not easy to explain this to a sophisticated 20th- and 21st-century audience, and in fact Ms. Starkey had a hard time doing so. The girls claimed they saw the specter, or essence, or spirit of the person performing witchcraft. In one instance at church they fell into a fit, claiming they saw a witches' Sabbath in the rafters above them. Others looked but saw nothing. Yet the girls' words were taken as absolute gospel. The spectral forms, for late 17th-century Puritans in Salem, were as real to them as your husband or wife sitting with you at the dinner table is to you today. The girls accused more and more people during the winter, spring, and summer, including respectable people. One was Rebecca Nurse, a 70-year-old woman who worked a farm with her husband and her 8 children. She was tried as a witch and went to the gallows denying her guilt. Challenging the girls in any way could get you accused of witchcraft. One courageous man who did was John Proctor. He and his wife Elizabeth were jailed, creating 5 orphans. John was executed, but Elizabeth was spared due to her pregnancy. An image from the time depicts the execution of John Proctor. By September 1692, 20 people had been executed and over 150, including several children, had been jailed. Conditions in the jail were horrible; the people who built the jail had never anticipated such a gigantic crime wave. Furthermore, the time spent on the panic was time taken away from work; fields lay fallow, and starvation was a real possibility. At this point the new governor, William Phips, had no choice but to take an interest, even though his main responsibilities lay elsewhere. He conferred with ministers not only from Puritan Massachusetts but also from New York, where the Dutch influence was still strong. The upshot was that he forbade spectral evidence. Without spectral evidence, the cases all collapsed. Confessed witches were also allowed to recant their confessions. The panic was over; it had lasted less than a year. So here we have our first example of a self-appointed prophet, Reverend Parris and his team of assistants, pointing out sin which nobody could see except them. He created only chaos in his wake. History lists him as a sinner, not a prophet.

IV. COMMUNISTS

Another 'witch-hunt' in American history, involving another false 'prophet' who saw human sin before anyone else could, is the McCarthy era, from about 1950 to 1954. On February 9, 1950, Senator Joseph McCarthy gave a speech in Wheeling, West Virginia, in which he asserted that he had in his hand a list of 205 known Communists working in the State Department. Later that number changed to 57, then to 284, then 79, then 81, then 108; the number kept changing from one speech to another. But he never revealed the names on the various lists. It reminds one of the 1962 movie The Manchurian Candidate, starring Angela Lansbury, Laurence Harvey, and Frank Sinatra. The movie was about a senator like McCarthy who kept asserting that he had lists of a large, and always varying, number of Communists in the United States government. While McCarthy was a bachelor until 1953, the evil genius in the movie was Angela Lansbury, the senator's wife.
The movie senator (not too bright) kept asking his wife why he could not just give a single number. Angela Lansbury kept insisting that the varying numbers were vital: they kept people interested, and nobody disputed the presence of Communists in government, only the number. But he kept badgering his wife, and finally she reluctantly agreed; while he was shaking Heinz ketchup onto his dinner, she allowed him to say that, okay, the number would be 57. Incidentally, the movie had dream sequences that constituted some of the most spectacular filmmaking ever, as the scene shifted back and forth from dream to reality.

To get back to the actual Senator McCarthy, he grabbed more and more power in the Senate and used it to investigate Communist infiltration. He publicly accused many, and many lives were ruined by these accusations. He finally came undone when the Army accused him and his chief counsel, Roy Cohn, of improperly pressuring the Army to give a former associate, David Schine, favorable treatment. McCarthy's Senate committee (actually chaired by South Dakota Republican Karl Mundt) investigated this. The hearings were televised and they transfixed the country. They went on for 36 days and involved 32 witnesses and millions of words. McCarthy's bullying tactics finally turned off the country. The key moment came when McCarthy asked the Army's chief counsel, Joseph Welch, about the communist leanings of one of his junior associates, Fred Fisher. Here is Welch's response:

Welch: Until this moment, Senator, I think I have never really gauged your cruelty or your recklessness. Fred Fisher is a young man who went to the Harvard Law School and came into my firm and is starting what looks to be a brilliant career with us. Little did I dream you could be so reckless and so cruel as to do an injury to that lad. It is true he is still with Hale and Dorr. It is true that he will continue to be with Hale and Dorr. It is, I regret to say, equally true that I fear he shall always bear a scar needlessly inflicted by you. If it were in my power to forgive you for your reckless cruelty I would do so. I like to think I am a gentleman, but your forgiveness will have to come from someone other than me.

After the hearings, McCarthy had lost whatever support he had in the Senate and had lost the trust of the country. He was censured by the Senate after the hearings and died of cirrhosis of the liver (he was a very heavy drinker) in 1957. So here we have another example in American history of a false prophet (McCarthy) convincing a large number of people that he had access to knowledge that ordinary people could not have. He used this 'knowledge' to create chaos in his wake and in the process ruined countless lives.

V. PRESCHOOL SEX ABUSE

In the 1980s and 1990s, another hysteria gripped the United States, brought on by another group of false prophets: the prosecution of preschool teachers for sexual abuse of their students. The similarities between the trials of these day-care workers in the 1990s and the Salem witchcraft trials of the 1690s are so close as to be almost spooky. The original accusation was made by a McMartin mother, one diagnosed with acute paranoid schizophrenia who later died of chronic alcoholism. In all cases the children (then 6 or 7, trying to recall events from when they were 3 or 4) were prodded by social workers and psychologists, in some cases for months, before they told about the abuse these interrogators wanted to hear about. The stories the children told were fantastic.
From one court record: "Gerald Amirault had plunged a wide-blade butcher knife into the rectum of a 4-year-old boy, which he then had trouble removing." Other children told about satanic rituals in secret and magic rooms and in tunnels beneath the schools; they said they were forced to drink urine, were tied to a tree, were taken up and tortured in balloons, and so on. Who in his right mind would believe this? A large number of teachers were arrested and brought to trial. In the McMartin school case, all were acquitted or had hung juries; however, many of the teachers were jailed as long as 5 years awaiting trial. Those in Edenton and Malden were not so lucky. They were mostly convicted, several being handed multiple consecutive life sentences. Gerald Amirault served the longest sentence, 18 years. Ultimately all convictions were overturned as the various communities gradually came to their senses.

It is difficult to escape the conclusion that Salem in the 1690s handled its panic better than Los Angeles, Edenton, or Malden did in the 1990s. In Salem, the panic lasted less than a year; these others lasted for years, even decades. After the panic, Reverend Parris was fired. To my knowledge, the psychologists, social workers, and prosecutors have not been. Quite the contrary: Martha Coakley, one of the lead prosecutors in the Amirault cases, won the Democratic nomination for the 2010 Massachusetts Senate race; Republican Scott Brown defeated her. After Reverend Parris left, a new minister was hired, one who attempted to bring the community together and largely succeeded. Years later, the Massachusetts Bay colony provided partial compensation to some of the victims and their relatives. But most important, none of the 1990s governors of Massachusetts, California, or North Carolina showed the wisdom and courage that Governor Phips showed in the 1690s. Confronted with what was obviously the 20th-century version of spectral evidence, they could have devised reasonable rules of evidence for such cases. Instead they did nothing.

There is one thing which the prosecutors got right: these children were abused and even brutalized, but not by their teachers. They were brutalized by the real 20th-century witches, the psychologists and social workers who, with their anatomically correct dolls and pseudoscience, forced fantastic, untrue testimony of abuse from innocent children. None of this evidence would pass the laugh or smell test. These children, now adults, all know that their testimony sent many innocent people to prison, some for long periods of time. How can they possibly live with themselves knowing that? Fortunately, there is one good witch in the story: Dorothy Rabinowitz, a reporter for the Wall Street Journal. From the beginning she perceived what was happening and recognized the tremendous injustice involved. She wrote many columns exposing the fraud, a series which ultimately won her a Pulitzer Prize. Finally, and largely due to her efforts, everyone wrongly convicted was freed, the last one being Gerald Amirault after he had served 18 years. Her description of her meeting with him after he was released from prison could bring tears to the eyes of the most hardened cynic (Rabinowitz, 2004). Figure 3 is a photo of Gerald Amirault reunited with his family after 18 years. So here we are again. There are different prophets, this time the psychologists and social workers. They see what others cannot.
Using their specialized training, they could interview children and get them to recall what never happened, and in doing so send many innocent people to prison. They were not prophets but villains; better that they should have been jailed.

VI. CLIMATE CHANGE

Carbon dioxide in the atmosphere and the 'unanimous' scientific consensus

One can hardly open a newspaper or turn on the TV these days without seeing claims of the damage that carbon dioxide in the atmosphere is doing to the environment; we must end the use of fossil fuel, sooner rather than later. But who can observe this damage or understand the detailed science? Since most cannot, we rely on another set of prophets, this time the scientists and their spokesmen, politicians, and commentators. But are these people false prophets? There is a good argument that for the most part they are. It is also worth pointing out, however, that there are many climate scientists who do their job, earn their living, and let the science, however they see it, one way or the other, speak for itself. They do not insist that society must do this or that to avoid catastrophe. By no means does this article imply that they are false prophets.

Since the beginning of the industrial age, humans have been burning coal, oil, and natural gas, and as such have been putting carbon dioxide into the atmosphere. It is a greenhouse gas, which tends to warm the atmosphere in a way that is easily understandable to most scientists. During the industrial age, the CO2 content of the atmosphere has risen from about 280 to about 400 parts per million. But the atmosphere is very complicated, and there is much more going on than just the greenhouse effect; excess CO2 in the atmosphere is just one of the many things that can cause climate change. Carbon dioxide is an odorless, colorless gas, harmless in small quantities. Every breath we inhale has less than 0.1% carbon dioxide; every breath we exhale, about 4%. It is not a pollutant in the sense of sulfur dioxide or mercury. It is a vital nutrient for plants; greenhouses generally operate with carbon-dioxide-rich atmospheres. Without atmospheric carbon dioxide, life on earth would not be possible. Nearly every carbon atom in our bodies and in the food we eat has its origin in carbon atoms in plants and decayed organic matter in the soil, which in turn have their origin in atmospheric CO2. (The carbon atoms in the fish we eat had their origin in CO2 dissolved in the oceans.) In fact, it is almost certain that the added CO2 has aided plant life over the era of satellite measurements. Figure 4 is a plot of the earth provided by NASA, with the greener areas in green and the less green areas in red. Clearly the earth has supported much more plant life since the advent of satellite measurements, very likely largely because of the increase of atmospheric CO2; at the very least, this increase in plant life is coincident with the increase of atmospheric CO2.

Furthermore, there are claims of great unanimity within the scientific community on the human fingerprint on climate change and global warming. This author asserts that these claims do not stand up to careful analysis. For want of a better word, I will call those who believe in human-induced climate change believers, or more emphatically alarmists. Most of the American mainstream media, the New York Times, The Washington Post, NBC and CBS News, etc.,
express the believers' point of view so emphatically that they sweep away the views of skeptics like so much dust. It is important to note that no skeptic denies climate change; everyone agrees that the earth's climate has been changing for billions of years. What they are skeptical of is the human cause of climate change. Believers point out that 97% of scientists who publish in the scientific journals on the subject are themselves believers. They get this figure by skimming a large number of scientific articles in the major journals and counting those that see a human fingerprint on climate change and those that do not; they come up with the 97% figure. But what are the editorial policies of the journals? As we will see, at least one very prestigious, high-impact journal makes no bones about it: it will not accept articles by skeptics. And what about the policy of those in the government who sponsor the scientific research? If you are a scientist and apply for government support of your research, your chances will be slim if you are a skeptic. This author personally knows of one extremely capable scientist at a major Ivy League university, a skeptic of human-induced global warming (Bernstein, 2010), whose grant was suddenly canceled for whatever reason (Popkin, 2015). Like oil and coal, green is big business now, with lots of very powerful, well-funded interests protecting it. Perhaps it is even too big to fail.

"The public is largely unaware of the intense debates within climate science. At a recent national laboratory meeting, I observed more than 100 active government and university researchers challenge one another as they strove to separate human impacts from the climate's natural variability. At issue were not nuances but fundamental aspects of our understanding, such as the apparent, and unexpected, slowing of global sea level rise over the past two decades."

So much for the '97%' and "the science is settled". In this author's opinion, the reluctance of the mainstream press to further investigate the validity of these claims of scientific unanimity is one of the greatest examples of journalistic irresponsibility and dereliction of duty he has ever seen.

On the effort to replace fossil fuel with solar power

This author, and many others, are disturbed that those he calls alarmists are almost always concerned only with ending fossil fuel, but show little or no concern for what would replace it. Furthermore, they have little appreciation of the fact that fossil fuels have lifted billions out of abject poverty in the past few generations. The replacements they do propose (solar, wind, and biofuel) are very unlikely, any time soon, to be able to fill the hole they are attempting to create, and they show little appreciation for that reality. How will we get the power we need? Modern civilization depends critically on fossil fuel to power it. But they cannot be concerned with such trivia; they are too busy saving the planet. Powering it without fossil fuel is someone else's problem; it is not their department! It reminds one of the rhyme from the old Tom Lehrer song about Wernher von Braun: Once the rockets go up, who cares where they come down? That's not my department, says Wernher von Braun!

The total power the world uses now is roughly 13-14 terawatts. Roughly a billion people in the US, Europe, Russia, and Japan each use about 6 kW (i.e., 6 terawatts total), leaving about 1 kW for each of the other 6 billion people on the planet.
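These per-capita figures are easy to verify with back-of-the-envelope arithmetic. The short Python sketch below reproduces them; the inputs are simply the round numbers quoted above, with 13.5 TW taken as the midpoint of the stated 13-14 TW range (an assumption of this sketch, not a figure from the article).

```python
# Back-of-envelope check of the per-capita power figures quoted above.
# Inputs are the round numbers stated in the text, not precise statistics.
world_power_tw = 13.5            # midpoint of the stated 13-14 terawatts
developed_population = 1.0e9     # US, Europe, Russia, Japan (order of magnitude)
developed_per_capita_kw = 6.0    # about 6 kW each

developed_total_tw = developed_population * developed_per_capita_kw / 1e9   # kW -> TW
rest_population = 6.0e9
rest_per_capita_kw = (world_power_tw - developed_total_tw) * 1e9 / rest_population

print(f"Developed world total: {developed_total_tw:.1f} TW")       # ~6 TW
print(f"Rest of world per capita: {rest_per_capita_kw:.2f} kW")    # ~1.2 kW, i.e. about 1 kW
```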
Currently the average Chinese uses about 25% of the power of the average American; in 2000 this figure was about 10%. In 2009 I was at a scientific meeting where a high-ranking member of the Chinese Academy of Sciences remarked on this and said that they would not rest until their per capita power use is about the same as ours. They know that there is an unbreakable link between power and prosperity. What is important is that fossil fuel cannot and will not be eliminated until another power source becomes available in about the same quantity and at about the same price. The Chinese, Indians, Brazilians, Mexicans, Indonesians, Nigerians, … understand this unbreakable link between fossil fuel and prosperity, no, not just prosperity, human civilization, even if we do not. They are sick of poverty, and who are we to blame them? Who are we to condemn them for escaping poverty the only way anyone knows how, namely by using fossil fuels?

To illustrate how unlikely it is that renewable solar power can play any significant role in the world energy budget anytime soon, and the fact that the less developed world will not heed our advice to move away from fossil fuel, consider Figures 5 and 6, taken from the BP Statistical Review of World Energy 2013. Clearly renewables have a long, long way to go before they can supplant fossil fuel. Also, it is the less developed parts of the world that are increasing their use of fossil fuel; the use of fossil fuel by the more developed parts of the world has leveled off.

Figure 5: Clearly it is extremely unlikely that solar power can replace fossil fuel any time soon, as many insist. Simply ending fossil fuel without a replacement would impoverish the world and set civilization back centuries. https://ourfiniteworld.com/2015/06/23/bp-data-suggests-weare-reaching-peak-energy-demand

Figure 6: It is the less developed parts of the world that are increasing energy use as they struggle to end their persistent poverty. https://ourfiniteworld.com/2015/06/23/bp-data-suggests-weare-reaching-peak-energy-demand

A major effort has been made to support renewable solar power; it has been heavily subsidized for at least a quarter of a century. American federal support for climate change research over the past 20 years is shown in Figure 7. Some have argued that fossil fuel receives larger federal subsidies than renewables. While every industry, including fossil fuel, gets a variety of tax breaks (e.g., business expenses, depreciation, …), fossil fuel receives far less in direct subsidies than solar power. The American Energy Information Administration publishes data on federal support for various energy options; their chart is shown in Figure 8. Renewable power is subsidized at about $15B (a bit more than Figure 7 indicates), and fossil fuel at about $3B. However, since renewable power only produces about 1% of the world's power, it gets roughly 500 times as much subsidy per unit of energy produced. Furthermore, fossil fuel pays taxes, as anyone driving up to a gas station to fill his or her tank knows. An enormous effort has been made to bring solar power up to a point where it can contribute to the world economy; clearly it has failed so far.

The assertions of the climate 'establishment'

A good place to start is with former President Obama. Apparently he saw a good portion of his legacy as his fight against climate change.
On the White House web site was an entry, recently removed by President Trump, the clean power plan, asserting THE CLEAN POWER PLAN The Clean Power Plan sets achievable standards to reduce carbon dioxide emissions by 32 percent from 2005 levels by 2030. By setting these goals and enabling states to create tailored plans to meet them, the Plan will: SAVE THE AVERAGE AMERICAN FAMILY: • Nearly $85 a year on their energy bills in 2030 • Save enough energy to power 30 million homes in 2030 Save consumers $155 billion from 2020-2030 Also, in the summer of 2015, President Obama was in Alaska inspecting the retreat of glaciers, especially on a boat ride in Resurrection Bay. He pointed out the recent retreat of glaciers, arguing that this is proof of climate change caused by fossil fuel, and argued that government action can somehow prevent this in the future. Now take a look at a December, 2014 speech of Hillary Clinton, who had hoped to succeed him as president, to the league of conservation voters (Pantsios, 2014 ). "The science of climate change is unforgiving, no matter what the deniers may say. Sea levels are rising; ice caps are melting; storms, droughts and wildfires are wreaking havoc. … If we act decisively now we can still head off the most catastrophic consequences." Another claim (McNutt, 2015), is in the editorial of Science Magazine, the prestigious magazine of the American Academy for the Advancement of Science (AAAS). But now with climate change, we face a slowly escalating but long-enduring global threat to food supplies, health, ecosystem services, and the general viability of the planet to support a population of more than 7 billion people. The time for debate has ended. Action is urgently needed. (we must) set more aggressive targets, developed nations need to reduce their per-capita fossil fuel emissions even further… Notice that she claims that 'the time for debate has ended'. But in view of her editorial, can anyone believe that a skeptic would be able to publish a skeptical article in Science? Does the 97% really have any meaning in view of her statement? But in case anyone still does not get the idea, Dr. McNutt says that skeptics belong in one of the circles of Dante's inferno. Figure 9, is her picture of this. The previous three authorities are moderate. At least they do not seem to insist upon an immediate, or nearly immediate end to the use of fossil fuel. Now let us take a look at a few of the more extreme alarmists. Another candidate who hoped to succeed President Obama is Bernie Sanders. At the first Democratic presidential debate in October 2015, the last question asked, was what is the biggest national security threat facing the United States. You might think there are many such threats, North Korean nuclear weapons, ISIS, a nuclear Iran, and aggressive Russia, China building arming and claiming islands in the South China Sea. However to Bernie Sanders, the greatest national security www.ijeas.org threat the United States faces is climate change! Another organization that advocates a nearly immediate break away from fossil fuels is 350.org, (web site at www.350.org), an organization led by Bill McKibben. Its goal is to reduce the concentration of CO 2 in the atmosphere to 350 parts per million. Considering that it is now over 400, and the CO 2 in the atmosphere lasts for centuries, it is unlikely to achieve this goal any time soon. 
On their web site, they state their goals: 1) Keep carbon in the ground • Revoke the social license of the fossil fuel industry • Fight iconic battles against fossil fuel infrastructure Counter industry/government narratives They illustrate this in Figure 10, taken from their web site. To accomplish their goals, they use political pressure and protest marches that have attracted large crowds. But how many come to these protest marches by car, bus, or airplane; instead of by foot, bicycle, or on horseback? How does Bill McKibben get to them? And how do they propose to find the energy that powers modern civilization? Again, that is not their department! Another organization advocating a nearly immediate abandonment of coal, oil and natural gas is the Sierra club, whose web site has links to 'beyond coal', 'beyond oil', and 'beyond natural gas', http://www.sierraclub.org. For instance on their web site they state in the Beyond Oil part, they clearly state that "where innovative green industries provide good jobs and supply 100 percent of our energy needs" Apparently they wrongly believe that the world can convert to solar and wind right now, this only being prevented by corrupt coal, oil and gas companies. Powering civilization? A secondary consideration, and anyway, not their department! Al Gore, the former American vice president has gone one step further. He suggests a specific time for ending the use of fossil fuel. In 2008, he called for completely ending the use fossil fuels in 10 years, by 2018! (Schor, 2008). What about his mansion and private jet? The Paris Agreement Recently the world has come together to sign a UN sponsored Paris agreement to limit climate change by restricting the use of fossil fuels. This has received a great deal of publicity; and recently even more as the President Trump has withdrawn from the agreement. Here is a link to the statement. http://unfccc.int/resource/docs/2015/cop21/eng/l09.pdf. Among other things, the agreement states: "Also recognizing that deep reductions in global emissions will be required in order to achieve the ultimate objective of the Convention and emphasizing the need for urgency in addressing climate change,". It continues "Emphasizing with serious concern the urgent need to address the significant gap between the aggregate effect of Parties' mitigation pledges in terms of global annual emissions of greenhouse gases by 2020 and aggregate emission pathways consistent with holding the increase in the global average temperature to well below 2 °C above preindustrial levels and pursuing efforts to limit the temperature increase to 1.5 °C…". It assumes that an increase of 1.5 degrees centigrade, or at most 2 degrees will be calamitous. Does the claim that a one and a half degree temperature rise will cause calamity make any sense at all? Where the temperature has already risen by one degree centigrade since the start of the industrial age, and there is no sign of any impending calamity, will another half degree really produce one? In fact, in all likelihood, this one-degree rise has been beneficial. Over the millennia of human civilization, warm periods have been beneficial; cold, harmful. If a degree and a half rise would cause a calamity, I would think that once the temperature rose one degree, as it already has, things would be pretty bad. 
Notice that the agreement gives no recognition to the role fossil fuel has played in advancing modern civilization; 'global emissions' instead are portrayed as something more like smoking, something one can just quit. There is no recognition of the fact that without fossil fuel, or a different energy source available at about the same quantity and price, the world will sink back into abject poverty, for all but the privileged few, as had been humanity's fate for most of its existence. No recognition that even if their assessment of the climate threat is correct, there are competing priorities. No recognition that these competing priorities would have to be balanced in some way. No recognition that it is extremely unlikely that what it calls sustainable power (solar thermal, solar photovoltaic, wind and biofuel) can come anywhere near filling the void the agreement is attempting to create. No recognition of the wisdom of Richard Feynman when he said regarding the Challenger disaster: "For a successful technology reality must take precedence over public relations, for nature cannot be fooled." The consequences of enacting the treaty are major for human civilization, lifestyle, health and prosperity. Is it really necessary, or are they shouting "FIRE" in a crowded theater? Is it worth changing the lifestyle of billions, forcing most of International Journal of Engineering and Applied Sciences (IJEAS) ISSN: 2394-3661, Volume-4, Issue-7, July 2017 www.ijeas.org the world back into abject poverty? because of these theories, which, as we will see, have little data confirming them? But the main question is whether the Paris agreement has its facts and assertions right. The rest of this paper addresses this extremely crucial point. With the liquid fuel equivalent to only 2% of our gasoline, there certainly will not be enough to power very many cars or airplanes. Hence no cars or airline travel for anyone except for society's grand pooh-bahs. Getting more than 20 miles from your house will be a real challenge. Every few years you might be able to take a trip on a crowded, uncomfortable railroad car. Never mind airplanes, what about cars powered by electricity? Take a look at Figure 5. If we eliminate fossil fuel, only about 1/3 of electric power will remain; if the anti nuclear activists have their way as well, that 1/3 becomes 1/6. Think of what this would mean for your life style. Air conditioning will be gone and space heating in the winter will be greatly reduced. Everyone will be cold all winter, indoors and out, and hot all summer. Getting to the store for food and clothing will be a difficult and time-consuming process. Modern high tech health care will be gone except for the very wealthy, as few people will have the time or energy to make the difficult trip to the doctors or dentists. Your house might have a small refrigerator and a few low wattage light bulbs. Manufacturing, which takes a lot of power will come to a nearly crashing halt. So will construction, especially large buildings in large cities, and large ships. This takes vast amounts of energy which solar and wind are unlikely to be able to supply. Look around your house at all the manufactured items; few of them will remain. As figure 5 shows, solar power (i.e solar photovoltaic, solar thermal, wind and biofuel) hardly registers as an electric power source, in fact about the only solar source which produces any significant electric power is hydro electric, a power source we have been utilizing for about a century. 
Other solar sources are stuck at the few percent level, even after a quarter century of heavily subsidized development. Is there any possibility that these sources can provide power, any time soon, at the same quantity and price as fossil fuel? Judging from Figure 5, the answer has to be no. 6.6 The world temperature record We start with the temperature record. For years NOAA developed the graph shown in Figure (11), along with the link. The obvious conclusion is that there has been a nearly 20 year hiatus in the increase of the world's ground based temperature measurements. http://www.carlineconomics.com/archives/303 Figure 11: NOAA data on ground based worldwide temperature measurements showing a recent 20 year hiatus in warming. The temperature has risen about one degree centigrade since the start of the industrial age However NOAA now claims that there is no pause in global temperature rise and offers a new graph shown in Fig. (12), along with the link. Note Fig 12 is in Fahrenheit. http://blogs.discovermagazine.com/collideascape/2013/04/0 8/about-that-global-warming-pause/#.VkHZFoRhNSU This latest graph shows data which could present a convincing case that man made global warming might well be happening. But what is striking to this author is that after nearly 20 years of measurements, NOAA decided that its measurements are incorrect. It suddenly presents new measurements much more in line with the attitude of its political bosses. Notice that both Figures (11 and 12) have a NOAA seal affixed. This is extremely important. For this author, who spent a career as a civil service scientist, it is vital that civil service labs, NOAA, NASA, NIH, NRL, …maintain their integrity regardless of the wishes of their political bosses. In this author's opinion, www.ijeas.org NOAA's ground based temperature measurements have lost all credibility; the data should be reexamined by a different expert organization, one with no position on climate change. So far NOAA has refused to make its data and new methodology publically available (Tollefson, 2015), asserting: "Because the confidentiality of these communications among scientists is essential to frank discourse among scientists, those documents were not provided to the Committee," the agency said. "It is a long-standing practice in the scientific community to protect the confidentiality of deliberative scientific discussions." This author has been a practicing scientist for over 50 years and this is the first he has ever heard of confidentiality of deliberative scientific discussions. Are we doctors, lawyers or priests all of a sudden? This is 'confidentially' is especially inappropriate because these 'discussions' could have a major impact on the lives of billions of people. Perhaps there has been a pause in the ground based world temperature rise, perhaps not. It will take more than this changing NOAA data to convince this author one way or another. However it is important to note that ground based measurements are not the only way to measure temperature. They can also be measured from space, and this has certain advantages. It uses a single suite of instruments and samples the entire world simultaneously. NASA has been taking space based temperature measurements since 1979 and the record, archived by Roy Spencer at the University of Alabama Huntsville, is in Figure 13, along with the link. The space based measurements show a series of oscillations of varying periods. The raw data is shown in blue. 
A 13 month running average shows an oscillation with a period of about 5 years. Superimposed on this, in black is a much longer period oscillation of about 45 years. The space based measurement do show an increase in temperature, but a considerably smaller increase than the ground based measurements. Furthermore, this increase may not be a secular increase at all, but may result from the fact that they do not yet have data on a full period of the 45 year oscillation. Future measurements will answer this. https://wattsupwiththat.com/2012/10/09/uah-global-temperat ure-up-slightly-in-september/ Figure 13: NASA data on space based temperature measurements. Raw data is in blue, a 13 month average showing a rough 5 year oscillation is in red, and a rough 45 year oscillation in black. An Internet check on the assertions of the 'alarmists' Let us go through the assertions of the climate change 'alarmists'. First consider President Obama's assertion that reducing fossil fuel use by 30% will lower the utility cost for Americans. A useful data point here is Germany. It has decided to embark on an energiewende, or energy transition. It has heavily subsidized solar and wind power; not only that, it has decided to phase out its 17 nuclear reactors. It has succeeded in transitioning about 25-30% of its electrical power to solar and wind, just as President Obama hopes to do in the United States. But despite the large government subsidy, the price of electricity in Germany is now at least triple its price in the United States, and it is rising fast. Shown in Figure 14 is a plot of the price of a kilowatt of electricity in many different countries, along with the link. http://www.theenergycollective.com/lindsay-wilson/279126/ average-electricity-prices-around-world-kwh Based on this, the author believes that with President Obama's plan, it is much more likely that the American consumers will be hit with large price hikes, just like their brethren in Germany. But even with the energiewende, Germany still needs coal fired power for when the sun does not shine, the wind does not blow, or to replace lost nuclear power. Shown in Fig 15 is a plot, along with the link, of per capita carbon input into the atmosphere of a bunch of countries. German carbon input is considerably greater than that of its European neighbors. If powering the country without carbon dioxide input into the atmosphere is the goal, isn't nuclear powered France a better example than solar powered Germany? The French pay about half for their electric power and input just over half the carbon dioxide per capita into the atmosphere as the Germans. that this is something the government can control. Again, this is something one can check out with a Google or Google image search. Simply search 300 years of glacial retreat. The results are shown in Figure 16 along with the link. Clearly, worldwide, glaciers have been retreating at about the same rate for at least 200 years. As an example of a single Alaskan glacier system, consider Glacier Bay. This had been explored many times since the 1700's. Shown in Figure 17 is a map of Glacier Bay with red lines indicating the glacier's edge at various times. Clearly most of the glacial retreat in Glacier Bay took place before 1907. In other words glaciers have been retreating at about the same rate both before and after a great deal of carbon dioxide had been emitted into the atmosphere. 
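Returning briefly to the satellite record in Figure 13: the red curve there is just a centered 13-month running mean of the monthly anomalies. A minimal sketch of that smoothing step is below; the series used is synthetic, since the actual UAH data are not reproduced here.

```python
# Minimal sketch: centered 13-month running mean of a monthly anomaly series,
# the smoothing used for the red curve in Figure 13. The input series here is
# synthetic (a slow oscillation plus noise), not the actual UAH record.
import numpy as np

rng = np.random.default_rng(0)
n_months = 12 * 38                                    # monthly data from 1979 onward
t = np.arange(n_months)
anomaly = 0.1 * np.sin(2 * np.pi * t / 60.0) + rng.normal(0.0, 0.1, n_months)

window = 13
smoothed = np.convolve(anomaly, np.ones(window) / window, mode="valid")

print(f"{anomaly.std():.3f} -> {smoothed.std():.3f}")  # smoothing damps month-to-month noise
```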
Next consider Hillary Clinton's December 2014 speech where she made many assertions about climate change: Sea levels are rising; ice caps are melting; storms, droughts and wildfires are wreaking havoc… . There have always been storms and wildfires, so let us assume that she meant that these problems are getting worse because of the emission of carbon dioxide into the atmosphere. Let us check these assertions out one by one. Her first assertion is that sea levels are rising. This is very simple to check. Figure 18 is a graph of sea level rise, along with the link. Note that this is IPCC data, the very data the UN uses to produce its reports on climate change. Clearly sea levels have been rising at about 20 cm per century since about 1920. There is no indication of an acceleration in the rate of rise as more carbon dioxide has been emitted into the atmosphere. Her next assertion is that ice caps are melting, and this is somewhat more difficult to check. First of all, one must be careful to distinguish between floating ice in the Arctic and land-based ice in Greenland and Antarctica. If the former melts, there will be no rise in sea level. If the latter were to melt, there could be an enormous rise, and this is what we consider here. However, it is very cold in these two places. It has long been known that in Greenland and Antarctica ice has been melting in some places and thickening in others, but it has been difficult to measure the net effect (Graham, 1999). However, these days you can hardly turn on your TV without seeing a gigantic ice mass, thousands of years old, breaking off and floating into the sea to begin its melt, with the commentator saying doom is at hand. Nevertheless, a study (NASA, 2015) seems to indicate that melting ice in some places (for instance near the Antarctic Peninsula) is more than balanced by thickening ice in others (eastern and interior western Antarctica). Here is a quote from Jay Zwally, the leader of the NASA study: "We're essentially in agreement with other studies that show an increase in ice discharge in the Antarctic Peninsula and the Thwaites and Pine Island region of West Antarctica," said Jay Zwally, a glaciologist with NASA Goddard Space Flight Center in Greenbelt, Maryland, and lead author of the study, which was published on Oct. 30 (2015) in the Journal of Glaciology. "Our main disagreement is for East Antarctica and the interior of West Antarctica; there, we see an ice gain that exceeds the losses in the other areas." Zwally added that his team "measured small height changes over large areas, as well as the large changes observed over smaller areas." Figure 19 is a summary of ice melting and forming provided by NASA's most recent measurements. Clearly ice is melting in some places (in fact it seems to be melting fast in a few regions) and thickening in others, but the net effect is an increase in ice of about 82 gigatons (billion tons) per year. https://www.nasa.gov/feature/goddard/nasa-study-mass-gains-of-antarctic-ice-sheet- Now let's take a look at the data for droughts, which she also claims are wreaking havoc. Figure 24 shows the percentage of American land suffering extreme drought over the past century, taken from the National Climatic Data Center of NOAA. The worst droughts were in the 1930s and 1950s. Other than that, there has been no particular observable increase in droughts, at least up to now. What about Marcia McNutt?
In addition to preemptively rejecting a paper like this for the journal Science, and saying that this author belongs in one of the circles of Dante's Inferno, she also said that man made climate change will cause slowly escalating but long-enduring global threat to food supplies. Let's see what the data says. One graph is shown in Figure 25 (Max Roser (2015)). If there is to be any "escalating but long-enduring global threat to food supplies", there is no evidence of it yet. http://ourworldindata.org/data/food-agriculture/food-per-per son/ To summarize, not a single one of the assertions quoted here by President Obama, Hillary Clinton or Marcia McNutt, which can be checked out by measured data up to now, can stand up to serious scrutiny. Specular evidence in the climate change discussions? One question is whether there is an analog to specular evidence in the global warming controversy. Obviously there is not in the literal sense. However broadening the definition to include evidence, which seems reasonable, but on closer examination is meaningless, there is specular evidence. Either side can use it, but so far the believers have used it more, perhaps because it is more difficult for the skeptics to use it to prove a negative. The data set describing the earth's climate is vast, but we know that over the last century the earth warmed by about 1 o C. However a believer might point out that one large country has seen a temperature rise of 10 o C and say it proves global warming. True, but meaningless. Given the average, some other part of the planet about the same size must have cooled by 9 o . The Antarctic ice is a very rich area for specular evidence. As Figure 19 shows, ice is melting rapidly on a portion of the coast, regions 21 and 22 on the map. Surely this ice melt makes for dramatic footage on the evening news. However looking at all of Antarctica, the net effect is ice forming, not melting. A recent instance involved no less a climate observer than President Obama. In the winter of 2013-14, he pointed out that in the west, the winter was very mild and there was virtually no snowpack in either the Rockies or Sierras. He www.ijeas.org used this to argue the case for government action on global warming. However had he expanded his view, he would have seen that the east and Midwest had a very cold, snowy winter. Chicago did not get warmer than 0 o F for 23 days, and every state in the eastern half of the country, except Florida, was completely or partially snow covered for weeks. For those of us in the east, all we could talk about was the 'polar vortex'. Would the believers seriously claim that the extra CO 2 in the atmosphere is responsible for both the heat in the west and the freeze in the east? Let's get real! The lesson: If there is a vast data set, it is always possible to pick out one small subset, which agrees with your case. To this author's mind, it is the equivalent of spectral evidence in the physical world. Numerical simulations of climate So the actual data on what is claimed to be the many effects of climate change up to now, gives no support to notion that we are anywhere near a calamity. These assertions then rest entirely on theory. The theory of greenhouse warming is simple, and most reading this paper in this journal know it. However, the earth's atmosphere is extremely complicated and much more is going on besides the greenhouse effect. To do theory requires that one perform computer simulations. 
However these computer simulations are difficult to do, and depending on the assumptions the modeler makes, one can get many different answers. A typical graph showing the enormous variation in the results of numerical simulations is shown in Figure 26. All of these calculations show more temperature rise than were measured from 1975 to 2012 (i.e. the present). This makes the case that climate computer simulations have a long way to go before one can base public policy on them, especially public policy that would have a major effect on the lifestyle of billions of people. Amazingly, (Vossen) in Science Magazine has recently published what can only be characterized as an expose on the fact that many different groups doing numerical simulations use many different 'fudge factors' in their codes in order to get their desired result. These simulations are far from being first principle simulations. VII. THE CLIMATE 'PROPHETS' So here we are with what may be another group of self appointed 'prophets', these claiming that we have to cease use of fossil fuel immediately so as to 'save the planet'. However unlike their biblical predecessors, these prophets have no direct pipeline to God. They claim that their assertions are based on the nearly unanimous conclusion of scientists. Never mind that the scientific community is far from united on this issue. Also they point out that we are sinners. We burn coal, oil and gas and despoil the natural environment in doing so. All we have to do is stop doing this and 'leave the carbon in the ground'. What could be easier? Or as God herself said "Now what I am commanding you today is not too difficult for you or beyond your reach". (Deuteronomy 30.11). Never mind that this coal, oil and gas have allowed civilization to flourish in many parts of the world, producing a more prosperous, healthier, longer lived, and better educated population; as well as a cleaner environment. It has alleviated abject poverty for billions. Turn off the oil, coal and natural gas, and the poverty comes roaring back for all but the privileged few. The world would then be as it has been for most of human history, the privileged few living well off of animal and human energy, that is the energy of other humans, while the rest of us live in squalor. Following their guidance would create only chaos and poverty, but this time for the entire civilized world, not just a few as was the case for Reverend Parris, Joe McCarthy and the psychologists and social workers. There is an incredibly important moral issue here too. But more realistically, there is no need to panic and end fossil fuel use anytime soon. The measurements today simply do not indicate the need to; and the computer simulations of the future cannot even predict the present. In a nutshell, neither is reliable enough to justify an enormous change in lifestyle for billions of people. Even in a worst-case scenario, there is plenty of time to react. After all, over the centuries, the Dutch have reclaimed thousands of square miles from the sea, and it is possible, given time, to develop economical carbon free fuel, most likely nuclear. Nevertheless, according to these new 'prophets', we are all guilty of an original sin, which only they can discern. I will bet that nobody reading this can say for sure that he or she has actually observed climate change in his or her lifetime. I'll bet that anyone can recall intense summer heat spells, and freezing, as well as very mild winters, as far back as they can remember. 
But these new 'prophets' see what we cannot. Unless we drastically change our ways, these modern prophets warn us of impending heat waves, floods, intense storms throwing down fire from the heavens, rising sea levels, wildfires…. What could be more biblical? As Jules Winnfield (AKA Samuel L. Jackson) interpreted God's will in Pulp Fiction: "And I will execute great vengeance upon them with furious rebuke; and they shall know that I am the Lord, when I shall lay my vengeance upon them" (Ezekiel 25:17). Or as God himself said, "But if your heart turns away and you are not obedient, and if you are drawn away to bow down to other gods (i.e., material prosperity) and worship them, I declare to you this day that you will certainly be destroyed" (Deuteronomy 30.17 and 18). VIII. CONCLUSIONS Since solar power is so far from becoming an important player in the world energy budget, and since the scientific community is far from united on whether excess CO2 in the atmosphere will have a significant environmental effect, and since none of the assertions of imminent climate crisis can stand up to serious scrutiny, the question is: should we be on a breakneck pace to reduce fossil fuel use in the hope that solar power can replace it? This author's answer is no. The cost to civilization would be catastrophic if solar power should fail, as it has so far. ACKNOWLEDGMENT This paper is an essay on both physical and social science; it has not been supported by any outside agency, public or private. This paper is based largely on three of the author's works: a paper with the same name as this in the International Journal of Advance Research (IJAR), June 2016, http://www.journalijar.com/article/9945/original-sin,-prophets,-witches,-communists,-preschool-sex-abuse-and-climate-change/ ; an earlier paper in IJEAS, "A simple way to check on the assertions of damage from climate change," Dec 2015, https://www.ijeas.org/download_data/IJEAS0212036.pdf ; and the American Physical Society Forum on Physics and Society, July 2017, http://www.aps.org/units/fps/newsletters/201707/climate.cfm . Wallace Manheimer received his SB and Ph.D. degrees in physics at MIT. Since 1970 he has been a physicist (initially employed full time, currently a consultant) in the Plasma Physics Division at the US Naval Research Laboratory in Washington, DC, USA. He has published over 150 scientific papers on such topics as magnetic fusion, inertial fusion, advanced microwave tubes, advanced radar systems, intense electron and ion beams, plasma processing, and the nuclear-disturbed upper atmosphere. For about the past 15 years, he has researched fusion breeding, that is, the use of fusion reactors to generate nuclear fuel for conventional thermal nuclear reactors. He asserts that this could be a sustainable, mid-century, carbon-free, affordable power source with little or no proliferation risk. More recently he has taken an interest in the religious, sociological, and scientific aspects of the human-induced climate change debate.
13,785.8
2016-06-30T00:00:00.000
[ "Environmental Science", "Political Science", "Philosophy" ]
Data Collection Techniques for Forensic Investigation in Cloud Internet plays a vital role in providing various services to people all over the world. Its usage has been increasing tremendously over the years. In order to provide services efficiently at a low cost, cloud computing has emerged as one of the prominent technologies. It provides on-demand services to users by allocating virtual instances and software services, thereby reducing customers' operating costs. The availability of massive computation power and storage facilities at very low cost motivates a malicious individual or an attacker to launch attacks from machines either inside or outside the cloud. This causes high resource consumption and also results in prolonged unavailability of cloud services. This chapter surveys the systematic analysis of the forensic process, challenges in cloud forensics, and in particular the data collection techniques in the cloud environment. Data collection techniques play a major role in identifying the source of attacks by acquiring evidence from various sources such as cloud storage (Google Drive, Dropbox, and Microsoft SkyDrive), cloud log analysis, the Web browser, and the physical evidence acquisition process. Introduction In today's world, users are highly dependent on cyberspace to perform all day-to-day activities. With the widespread use of Internet technology, cloud computing plays a vital role by providing services to users. Cloud computing services enable vendors (Amazon EC2, Google, etc.) to provide on-demand services (e.g., CPU, memory, network bandwidth, storage, applications, etc.) to users by renting out physical machines on an hourly basis or by dynamically allocating virtual machine (VM) instances and software services [1][2][3]. Cloud computing moves application software and databases to large data centers, where the outsourcing of sensitive data and services is not always trustworthy. This poses various security threats and attacks in the cloud. For instance, attackers use employee login information to access accounts remotely in the cloud [4]. Besides attacking cloud infrastructure, adversaries can also use the cloud to launch attacks on other systems. For example, an adversary can rent hundreds of virtual machine (VM) instances to launch a distributed denial-of-service (DDoS) attack. A criminal can also keep secret files such as child pornography, terrorist documents, etc. in cloud storage to remain clean. To investigate such crimes in the cloud, investigators have to carry out forensic investigations in the cloud environment. This gives rise to the need for cloud forensics, which is a subset of network forensics. Cloud forensics Types of forensics The forensic process is initiated after a crime occurs, as a post-incident activity. It follows a set of predefined steps to identify the source of evidence. It is categorized into five groups, namely digital forensics, network forensics, Web forensics, cloud forensics, and mobile forensics. • Digital forensics: According to National Institute of Standards and Technology (NIST) standards, it is the application of science to the identification, collection, examination, and analysis of data while preserving the integrity of the information and maintaining a strict chain of custody for the data. • Network forensics: It identifies and analyzes evidence from the network. It retrieves information on which network ports were used to access the information.
• Web forensics: It identifies the evidence from the user history, temporary log files, registry, chat logs, session log, cookies, etc. as digital crimes occur on the client side with the help of Web browser. • Cloud forensics: It is the application of digital forensics in the cloud and it is a subset of network forensics. It is harder to identify evidence in cloud infrastructure since the data are located in different geographical areas. Some examples of evidence sources are system log, application log, user authentication log, database log, etc. • Mobile forensics: It is the branch of digital forensics that identifies evidence from mobile devices. The evidence is collected from the mobile device as call history, SMS, or from the memory. Cloud forensic process flow The cloud forensic process flow is shown in Figure 1, which is described as follows: • Identification: The investigator identifies whether crime has occurred or not. • Evidence collection: The investigator identifies the evidence from the three different sources of cloud service model (SaaS, IaaS, and PaaS) [8]. The SaaS model monitors the VM information of each user by accessing the log files such as application log, access log, error log, authentication log, transaction log, data volume, etc. The IaaS monitors the system level logs, hypervisor logs, raw virtual machine files, unencrypted RAM snapshots, firewalls, network packets, storage logs, backups, etc. The PaaS model identifies the evidence from an application-specific log and accessed through API, patch, operating system exceptions, malware software warnings, etc. • Examination and analysis: The analyst inspects the collected evidence and merges, correlates, and assimilates data to produce a reasoned conclusion. The analyst examines the evidence from physical as well as logical files where they reside. • Preservation: The information is protected from tampering. The chain of custody has been maintained to preserve the log files since the information is located in a different geographical area. • Presentation and reporting: An investigator makes an organized report to state his findings about the case. Evidence collection Evidence collection plays a vital role to identify and access the data from various sources in the cloud environment for forensic investigation. The evidence is no longer stored in a single physical host and their data are distributed across a different geographical area. So, if a crime occurs, it is very difficult to identify the evidence. The evidence is collected from various sources such as router, switches, server, hosts, VMs, browser artifacts, and through internal storage media such as hard disk, RAM images, physical memory, etc., which are under forensic investigation. Evidence is also collected through the analysis of log files, cloud storage data collection, Web browser artifacts, and physical memory analysis. Cloud log analysis Logging is considered as a security control which helps to identify the operational issues, incident violations, and fraudulent activities [9,10]. Logging is mainly used to monitor the system and to investigate various kinds of malicious attacks. Cloud log analysis helps to identify the source of evidence generated from various devices such as the router, switches, server, and VM instances and from other internal components, namely hard disk, RAM images, physical memory, log files etc., at different time intervals. 
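Before turning to the individual log types listed next, it is worth noting that most of these sources are analyzed by first parsing each entry into structured fields. As a concrete illustration, the sketch below does this for the common Apache/NCSA access-log format discussed later in this section; the sample entry is invented and is not taken from the chapter's figures.

```python
# Minimal sketch: parse an Apache/NCSA "combined" access-log entry into fields a
# forensic analyst can filter on (client IP, timestamp, request, status, user agent).
# The sample entry is invented; it is not taken from Figure 2.
import re

LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+)'
    r'(?: "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)")?'
)

sample = ('203.0.113.7 - - [10/Oct/2019:13:55:36 +0000] '
          '"GET /login.php?id=1%27%20OR%20%271%27=%271 HTTP/1.1" 200 2326 '
          '"-" "Mozilla/5.0"')

match = LOG_PATTERN.match(sample)
if match:
    entry = match.groupdict()
    # Crude red flag for SQL-injection-style probes in the request string.
    suspicious = "%27" in entry["request"].lower()
    print(entry["ip"], entry["time"], entry["status"], "suspicious:", suspicious)
```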
The information about different types of attacks is stored in various log files such as application logs, system logs, security logs, setup logs, network logs, Web server logs, audit logs, VM logs, etc., which are given as follows: • Application log is created by the developers through inserting events in the program. Application logs assist system administrators to know about the situation of an application running on the server. • System log contains the information regarding date and time of the log creation, type of messages such as debug, error, etc., system-generated messages related to the occurrence, and processes that are affected by the occurrence of an event. • Firewall log provides information related to source routed packets, rejected IP addresses, outbound activities from internal servers, and unsuccessful logins. • Network log contains detailed information related to different events that happened on the network. The events include recording malicious traffic, packet drops, bandwidth delays, etc. The network administrator monitors and troubleshoots daily activities by analyzing network logs for different intrusion attempts. • Web server log records entries related to the Web pages running on the Web server. The entries contain history for a page request, client IP address, date and time, HTTP code, and bytes served for the request. • Audit log records unauthorized access to the system or network in a sequential order. It assists security administrators to analyze malicious activities at the time of attack. The information in audit log files includes source and destination addresses, user login information, and timestamp. • VM log records information specific to instances running on the VM, such as startup configuration, operations, and the time VM instance finishes its execution. It also records the number of instances running on VM, the execution time of each application, and application migration to assist CSP in finding malicious activities that happen during the attack. Due to the increase in usage of network or new release of software in the cloud, there is an increase in the number of vulnerabilities or attacks in the cloud and these attacks are reflected in various log files. Application layer attacks are reflected in various logs, namely access log, network log, authentication log, etc., and also reflected in the various log file traces stored on Apache server. These logs are used for forensic examination to detect the application layer attacks. Table 1 indicates the various attack information and the tools used for log analysis of different types of attacks. Figure 2 shows the sample access log trace ( Evidence collection from cloud storage It is the process of collecting evidence from cloud storage such as Dropbox, Microsoft SkyDrive, Google drive, etc., using the Web browser and also by downloading files using existing software tools [11][12][13]. This helps to identify the illegal modification or access of cloud storage during the uploading or downloading of file contents in storage media and also checks whether the attacker alters the timestamp information in user's accounts. The Virtual Forensic Computing (VFC) tool is used by forensic investigators to identify evidence from VM image file. The evidence is accessed for each account using the Web browser running in the cloud environment by recording the encoded value of VM image. The packets are captured using network packet tools, namely Wireshark, snappy, etc., of each VM instance running in hosts. 
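Whichever of these acquisition routes is used, the integrity of a captured image or downloaded file is normally fixed at collection time with a cryptographic hash so that later tampering can be detected and the chain of custody supported. A minimal sketch is below; the file name is a placeholder, and hashing is done in chunks so that large VM images need not fit in memory.

```python
# Minimal sketch: SHA-256 digest of an acquired image or file, computed in chunks so
# that large VM disk images do not have to be loaded into memory at once.
# "evidence.vmdk" is a placeholder path, not a file referenced in this chapter.
import hashlib

def hash_evidence(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# The digest is recorded in the chain-of-custody log at acquisition and re-checked later:
# print(hash_evidence("evidence.vmdk"))
```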
The account information is synchronized and downloaded using the client software of each device, which is used to identify the source of evidence. The evidence is isolated from the files found in the VM using "C:\Users\[username]\Dropbox\" for Dropbox, as shown in Figure 3. The zip file contains the name of the folder that can be accessed via the browser to determine whether the timestamps in the drive have been altered. If an attacker modifies the contents of a file, the evidence is found by analyzing the VM hard drive, the history of files stored in the cloud, and the cache. It can also be analyzed by computing the hash value of the VM image. The evidence of Google Drive cloud storage is depicted in Figure 4. Evidence collection via a Web browser Clients communicate with the server in the cloud environment with the help of a Web browser to perform various tasks, namely checking email and news, online shopping, information retrieval, etc. [14][15][16][17][18]. Web browser history is a critical source of evidence. Evidence is found by analyzing the URLs in Web browser history, timeline analysis, user browsing behavior, and URL encoding, and by recovering deleted information. Here is an example of Web browser URLs: https://www.nitt.edu/en#files:/Documents/<Folder name>, https://www.nitt.edu/en#files:/E:<Folder ID>. Similarly, the evidence stored in the Web browser cache at the root directory of a Web application is used to identify the source of an attack. Table 3 indicates the evidence collection process and recovery method for various Web browsers. Here is an example of a Chrome forensic tool that captures and analyzes data stored by the Google Chrome browser. It analyzes the data from the history, web logins, bookmarks, cookies, and archived history. It identifies the evidence from C:\Users\USERNAME\AppData\Local\Google\Chrome\User Data\Default. Figure 5 depicts the Google Chrome analysis forensic tool. Table 4 (evidence collection process for cloud forensics) covers, for evidence analysis, identifying patterns from the evidence collection process to determine the source of attacks in the cloud environment and determining attack patterns from cloud log files, analyzing these patterns using a cloud traceback mechanism to identify the source of evidence; and, for evidence presentation and reporting, the forensic investigator examining the evidence and presenting it in court, identifying the evidence from the analysis and reporting it. Physical memory analysis Physical memory analysis can recover traces of cloud computing usage that would otherwise be lost without passive monitoring, such as network socket information, encryption keys, and in-memory databases. These are analyzed from the physical memory dump using the "pslist" function, which recovers the process name, process identifier, parent process identifier, and process initiation time. The processes can be differentiated using the process names "exe" on Windows and "sync" on Ubuntu and Mac OS. Table 4 indicates the evidence collection process for cloud forensics in cloud storage and cloud log analysis. Cloud forensics challenges This section elucidates the forensic challenges in the private and public cloud. It is observed from the literature that most of the challenges are applicable to the public cloud, while fewer challenges are applicable to the private cloud environment. Accessibility of logs Logs are generated in different layers of the cloud infrastructures [2][3][4][5][6][7].
System administrators require relevant logs to troubleshoot the system, developers need logs for fixing up the errors, and forensic investigators need relevant logs to investigate the case. With the help of an access control mechanism, the logs can be acquired from all the parties, that is, from a user, CSP, and forensic investigator. Physical inaccessibility The data are located in different geographical areas of the hardware device. It is difficult to access these physical access resources since the data reside in different CSPs and it is impossible to collect the evidence from the configured device. If an incident occurs, all the devices are acquired immediately in case of a private cloud environment since an organization has full control over the resources. The same methods cannot be used to access the data in case of a public cloud environment. Volatility of data Data stored in a VM instance in a cloud will be lost when the VM is turned off. This leads to the loss of important evidence such as syslog, network logs, registry entries, and temporary Internet files. It is important to preserve the snapshot of the VM instance to retrieve the logs from the terminated VMs. The attacker launches an attack and turns off the VM instance, hence these traces are unavailable for forensic investigation. Identification of evidence at client side The evidence is identified not only in the provider's side but also the client side. The user can communicate with the other client through the Web browser. An attacker sends malicious programs with the help of a Web browser that communicates with the third parties to access the services running in the cloud. This, in turn, leads to destroying all the evidence in the cloud. One way of collecting the evidence is from the cookies, user agent, etc., and it is difficult to obtain all the information since the client side VM instance is geographically located. Dependence of CSP trust The consumers blindly depend on CSPs to acquire the logs for investigation. The problem arises when CSPs are not providing the valid information to the consumer that resides in their premises. CSPs sign an agreement with other CSPs to use their services, which in turn leads to loss of confidential data. Multitenancy In cloud infrastructures, multiple VMs share the same physical infrastructure, that is, the logs are distributed across various VMs. The investigator needs to show the logs to court by proving the malicious activities occurring from the different service providers. Moreover, it also preserves the privacy of other tenants. Decentralization In cloud infrastructures, the log information is located on different servers since it is geographically located. Multiple users' log information may be collocated or spread across several layers and tiers in the cloud. The application log, network log, operating system log, and database log produce valuable information for a forensic investigation. The decentralized nature of the cloud brings the challenge for cloud synchronization. Absence of standard format of logs Logs are available in heterogeneous formats from different layers of a cloud at CSP. The logs provide information such as by whom, when, where, and why some incidents occurred. This is an important bottleneck to provide a generic solution for all CSPs and all types of logs. Table 5 indicates the survey of literature that deals with the challenges of cloud forensics mainly for evidence collection process. Table 5. Challenges of cloud forensics. 
Forensic tools There are many tools to identify, collect, and analyze forensic data for investigation. Juels et al. developed PORs (proofs of retrievability) for verifying the integrity and privacy of files in online archives [19]. Dykstra et al. proposed a forensic tool for acquiring cloud-based data in the management plane [6]. It ensures trust in cloud infrastructures. Moreover, the EnCase and AccessData FTK toolkits are used for the identification of trusted data to acquire evidence. Similarly, tools such as Evidence Finder and F-Response are used to find evidence related to social networks. Dykstra et al. proposed FROST, an open-source tool for OpenStack clouds, for the identification of evidence from virtual disks, API logs, firewall logs, etc. [20]. Open research problems in cloud forensics Many researchers have proposed various solutions to mitigate the challenges of cloud forensics. Some researchers have proposed new approaches to test attacks in a real-time environment. CSPs have not adopted the proposed solutions yet. Customers or investigators rely on CSPs to collect the necessary logs since they do not have direct physical access. Customers or investigators depend on the CSP to collect information from the registry, hard disk, memory, log files, etc. Even though various forensic acquisition processes have been proposed, the dependence on the CSP remains unsolved. Another critical issue is the usage of bandwidth resources: if the volume of cloud storage to be acquired is large, acquisition consumes a correspondingly large amount of bandwidth. There is insufficient work on preserving the chain of custody to secure provenance. There is no ideal solution for cybercrime scene reconstruction and preservation of evidence. Another critical issue is that modifying existing forensic tools may lose evidence. Some researchers have proposed logging as a service to provide confidentiality, integrity, and authentication [3]; this solution is not suitable for the IaaS cloud. Case study This section introduces a hypothetical forensic case study related to a cloud storage service and also describes a forensic investigation of the case. Case study: cloud storage The organization "X" found that a document named "X_new.pdf" about the new release of a product had been leaked to their competitor [21][22][23][24]. "Mr. Morgan" was managing the credential files of the document stored in the cloud. At the initial stage of the investigation process, the suspect in the leaked file case was "Mr. Morgan." The forensic investigator has to identify the suspect by checking the organization's network, by analyzing log files, or by collecting traces of the relevant file on the network. Mr. Morgan's network activity gave no clue about the secrets, since he uses only a personal computer (PC) and an Android phone for business. To identify the suspect, the forensic investigator seized the PC and the Android phone, since these are the target devices used by the adversary. The leaked file was not detected on the seized devices. Later, the investigator started analyzing the unallocated areas of the file system, the operating system, external devices such as hard drives, tablets, etc., and the Web service, but no evidence was found in the investigation. The investigator found that Dropbox was installed on the PC and that five config.db files had been accessed recently.
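Artifacts like the config.db files found on the PC are ordinary SQLite databases, so a first pass is usually a schema-agnostic dump of their tables. The sketch below does exactly that; it assumes nothing about Dropbox's actual schema, and the file path is a placeholder.

```python
# Minimal, schema-agnostic sketch: enumerate the tables in a seized SQLite artifact
# (for example a Dropbox config.db) and dump a few rows from each. The path is a
# placeholder; no assumptions are made about the database's actual schema.
import sqlite3

ARTIFACT = r"C:\forensics\case_morgan\config.db"   # placeholder path to the seized copy

con = sqlite3.connect(ARTIFACT)
tables = [row[0] for row in
          con.execute("SELECT name FROM sqlite_master WHERE type='table'")]
for table in tables:
    print(f"-- {table}")
    for row in con.execute(f"SELECT * FROM {table} LIMIT 5"):
        print("  ", row)
con.close()
```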
Author details: Thankaraja Raja Sree and Somasundaram Mary Saira Bhanu, Department of Computer Science and Engineering, National Institute of Technology, Tiruchirappalli, India. Conclusion Cloud computing offers on-demand services (CPU, memory, network bandwidth, storage, applications, etc.) to users by allocating virtual instances and software services. Security is a major concern in the cloud, where the investigation of security attacks and crimes is very difficult. Due to the distributed nature of attacks and crimes in the cloud, there is a need for efficient security mechanisms. As cloud logs are spread across different virtual/physical machines (VM instances), switches, routers, etc., and the customer (end user) is not aware of the activities of VM instances, cybercriminals exploit these sources to exhaust the resources running in the cloud. Hence, evidence collection plays a crucial role in identifying suspects. However, collecting logs from the cloud infrastructure is extremely difficult because the investigator/security analyst has to depend on CSPs for collecting the logs and has little control over the infrastructure. So, in order to identify suspicious activity in the cloud, this chapter surveys the various forensic processes, evidence collection techniques for cloud forensics, and the various challenges faced in the cloud environment for forensic investigation.
4,859.4
2020-09-30T00:00:00.000
[ "Computer Science" ]
Noise Is Not Error: Detecting Parametric Heterogeneity Between Epidemiologic Time Series Mathematical models play a central role in epidemiology. For example, models unify heterogeneous data into a single framework, suggest experimental designs, and generate hypotheses. Traditional methods based on deterministic assumptions, such as ordinary differential equations (ODE), have been successful in those scenarios. However, noise caused by random variations rather than true differences is an intrinsic feature of the cellular/molecular/social world. Time series data from patients (in the case of clinical science) or number of infections (in the case of epidemics) can vary due to both intrinsic differences or incidental fluctuations. The use of traditional fitting methods for ODEs applied to noisy problems implies that deviation from some trend can only be due to error or parametric heterogeneity, that is noise can be wrongly classified as parametric heterogeneity. This leads to unstable predictions and potentially misguided policies or research programs. In this paper, we quantify the ability of ODEs under different hypotheses (fixed or random effects) to capture individual differences in the underlying data. We explore a simple (exactly solvable) example displaying an initial exponential growth by comparing state-of-the-art stochastic fitting and traditional least squares approximations. We also provide a potential approach for determining the limitations and risks of traditional fitting methodologies. Finally, we discuss the implications of our results for the interpretation of data from the 2014-2015 Ebola epidemic in Africa. INTRODUCTION Mathematical models play an increasingly central role in the analysis of infectious disease data at both the within-host and epidemiological levels (Perelson et al., 1996;Heesterbeek, 2000;Molina-París and Lythe, 2011). The traditional modeling approach involves formulating a set of structural assumptions about the processes involved, such as infection, recovery, death, etc. Often, these structural assumptions are then implemented in terms of differential equations, predominantly ordinary (ODE), but sometimes partial (PDE), or delayed (dODE) differential equations. The advantage of this approach is its amenability for both analytical treatment and powerful numerical and fitting algorithms even for non-linear problems. We will refer to those approaches collectively as deterministic. However, stochasticity is an intrinsic feature of infections at multiple levels from the cellular/molecular world to the level of epidemics (Süel et al., 2006;Bressloff and Newby, 2013). The deterministic framework conceptualizes all deviation from the model prediction as error. For example, in a simple univariate linear regression we say that the data are equal to a linear predictor plus some error. Put another way, we can say that error is the density of the data conditional on the model. However, stochasticity generates intrinsic fluctuations in the underlying dynamics of a system (for instance, in the number of secondary cases an incident case generates), even when the process follows the structural model envisaged. That is, stochasticity generates noise, which we define as the set of outcomes that are consistent with a fixed set of assumptions (i.e., a model). One of the central challenges of using the deterministic framework is to delineate its limitations (Roberts et al., 2015). 
If the world and its data truly are stochastic, then how much of a problem is it to conflate noise with error? Likewise, how much information in the data are we neglecting by treating all deviation as uninformative error? To what extent is the assumption of deterministic dynamics plus error providing misleading results? This question is not gratuitous as some parameters estimated within the deterministic framework, such as the basic reproduction number (R 0 ), are often invoked to quantify the aggressiveness of a pathogen and to determine the conditions under which a pathogen will go extinct (Dietz, 1993;Heffernan et al., 2005) or to create public health information such as risk maps (Hartemink et al., 2011). The potential problems in applying the deterministic framework can become even more pronounced when we have data that represent multiple realizations of a heterogeneous stochastic process. For example, a set of viral load profiles in different infected individuals (e.g., primary HIV infection; Ribeiro et al., 2010) or epidemic curves in different regions (e.g., cases of Ebola in multiple counties of the same country ; Krauer et al., 2016), that is, any data that can be represented as a panel over discrete units. In those scenarios, an important question is whether the variability seen between units can be attributed to a genuine difference in the process that generated the data (e.g., some parameters of the dynamics are different for each unit), simple stochastic fluctuation, or a mixture of the two, in addition to measurement error. Given a common error model across the units, the deterministic framework assumes that all deviation that cannot be explained by error must be due to parametric variability between units, that is the units are fundamentally different from one another. For this reason, the deterministic framework is ill-suited to tackle the question of stochastic effects. We address in this paper two related questions regarding modeling of panel data: (i) can we use a stochastic modeling approach to partition variability into stochastic and parametric components? and (ii) can we quantify the bias induced by modeling the data by a deterministic approach with error? Put in other words, is there a best and a good-enough fitting method for the practitioner? In section 2.1, we consider two simple structural models that will help us emphasize the essence of the problem without having to invoke unnecessary complexities that may cloud our main arguments. In section 3.1, we present our approach to analyze those models, which will then be used to benchmark comparisons between traditional (deterministic) fitting methods and more sophisticated stochastic ones, that we explore in section 3.2. As a case study, in section 4, we compare deterministic and stochastic modeling approaches to data from the 2014-2015 Ebola epidemic in West Africa. We use epidemic data from multiple counties of those countries that were most heavily affected. If one thinks of each county as a realization of some epidemic generating process, then the relevant question is whether differences between the counties can be accounted for by stochastic variability or if it is possible to detect a signal for different growth rates of the epidemic in different counties. Finally, in section 5 we summarize our results and discuss the implications of our work. 
Simulated Data The general framework we employ is to simulate data in silico from two structural models, birth-only or birth-death process (see Karlin, 2014), by a discrete-time stochastic simulation, and then fit those data using both deterministic and stochastic methods under a variety of assumptions. The code used to generate the data and fit the models is given in Appendix A (a simplified illustration of the data-generating step is also sketched below). We simulate panel data according to a discrete-time process in which U is the number of units in the panel, O is the number of observations (time points) per unit, and x_{j,k} is the number of new infected cases in time period k for unit j; this is the output of the simulation used for the fits described below. If the number of deaths exceeds the infected population size, I, this variable is set to 0. These simple models capture both the initial exponential growth phase when infected population sizes are small and the stochastic die-out that is common in many epidemiological processes. For simplicity, we focus only on the early stages of the epidemic, i.e., the approximately exponential phase in the growth of infected individuals. Note that throughout we use arbitrary time units. Each simulated data set is specified by 6 parameters: mean growth rate, µ_A; standard deviation of the growth rate, σ_A; mean death rate, µ_B; standard deviation of the death rate, σ_B; the number of units in the panel, U; and the number of observations per unit, O. From this we consider 4 possible scenarios: birth-only without parametric variability (µ_B = σ_B = σ_A = 0), birth-only with parametric variability (µ_B = σ_B = 0), birth-death without parametric variability (σ_A = σ_B = 0), and birth-death with parametric variability. In all cases with parametric variability, we assume a Gamma distribution for the respective parameter (where µ and σ are the corresponding mean and standard deviation). We chose the Gamma distribution because it can easily be re-parameterized into its mean and standard deviation, which makes interpreting the parameters straightforward. We set up four sets of simulated experiments to explore the effects of (1) model misspecification, (2) the number of observations per unit, (3) the number of units in the panel, and (4) the heterogeneity in parameters (growth rates) between units (see Table 1 for reference). In the first set of experiments, we simulate data from a birth-only process without parametric variability (µ_A = 0.15), birth-only with parametric variability (µ_A = 0.15, σ_A = 0.02), birth-death without parametric variability (µ_A = 0.25, µ_B = 0.1), and birth-death with parametric variability (µ_A = 0.25, σ_A = 0.02, µ_B = 0.1, σ_B = 0.01). In each case, we assume (U =) 20 units per panel and (O =) 20 observations per unit, at equal time intervals. We then fit each of these four data sets using each of four possible models (birth or birth-death, with and without random effects) with both stochastic and deterministic approaches, for a total of 32 fits. In the next three sets of experiments we use the birth-only model with parametric variability and the default parameters given above. Parameter Inference To infer the parameter values, we use a fitting scheme based on simulations that can account both for the intrinsic stochasticity of the process and the potential variation among individuals.
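Before describing the fitting machinery, here is a minimal sketch of the data-generating step outlined in the Simulated Data section. The authors' actual code is in Appendix A; this illustration assumes a simple discrete-time update in which the numbers of new infections and deaths per period are Poisson draws with per-unit Gamma-distributed rates, and all function and variable names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_draw(mean, sd, size):
    """Gamma variates parameterized by mean and standard deviation.
    If sd == 0 (no parametric variability), return the mean for every unit."""
    if sd == 0:
        return np.full(size, mean)
    shape = (mean / sd) ** 2
    scale = sd ** 2 / mean
    return rng.gamma(shape, scale, size)

def simulate_panel(U=20, O=20, mu_A=0.25, sigma_A=0.02, mu_B=0.1, sigma_B=0.01):
    """Simulate panel data x[j, k]: new cases in time period k for unit j.
    Each epidemic starts from one infected case; extinct units stay at 0."""
    alpha = gamma_draw(mu_A, sigma_A, U)   # per-unit birth (growth) rates
    gamma = gamma_draw(mu_B, sigma_B, U)   # per-unit death rates (0 for birth-only)
    x = np.zeros((U, O), dtype=int)
    I = np.ones(U, dtype=int)              # currently infected in each unit
    for k in range(O):
        births = rng.poisson(alpha * I)
        deaths = rng.poisson(gamma * I)
        x[:, k] = births
        I = np.maximum(I + births - deaths, 0)   # deaths exceeding I floor at 0
    return x

panel = simulate_panel()
print(panel.shape)  # (20, 20): 20 units, 20 observations per unit
```

The birth-only scenarios correspond to calling the same function with mu_B = sigma_B = 0.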
Here all model formulations (both stochastic and deterministic versions) are fit using the iterated filtering method implemented in the R library pomp (King et al., 2016). In all cases (in particular in Experiment 4), we compare fitted parameters using the stochastic and deterministic methods described in section 3.1. In all cases, we made use of simulated data with and without random effects to account for the impact of parametric variance. This approach allows us to fit all the models to the data using the same framework and likelihood functions, such that the model fits are all comparable. We specifically use the iterated filtering for panel data (IFPD) formulation detailed in Romero-Severson et al. (2015). Code used to specify the pomp process is given in Appendix A. Models were fit using 5,000 or 15,000 particles for the deterministic and stochastic models, respectively. For stochastic fits, the density of the number of incident cases in the kth time period of the jth unit, x_{j,k}, is assumed to be Poisson(x_{j,k} | I_{j,k−1} α), where I_{j,k} is the simulated number of extant infected cases in the kth time period of the jth unit and α is the growth rate, which itself may be sampled from a Gamma distribution. For the deterministic fits, x_{j,k} is simply x_{j,k} = α I_{j,k−1}. To obtain confidence intervals (CIs) for the parameters, we used a profile likelihood method (Romero-Severson et al., 2014) in which the parameter of interest was varied over a grid of values and the likelihood was calculated by refitting the data, allowing all other parameters to be free. We used the mif2 method (King et al., 2016) in pomp. A local regression (loess) curve was fitted to the profile likelihood curve, and both the MLE and CIs were calculated from the interpolated curve (King et al., 2015, 2016). Ebola Data and Analysis The Ebola case count data were compiled from publicly available datasets published by the World Health Organization (from the "Ebola Data and Statistics" section of the WHO website). Case counts were stratified by country and county of origin. All descriptive analyses were done on the full data. However, to fit the models to the data using the simulation-based method described, we restricted the data in the following way. (i) For every county, we define time = 1 as the first week where the total number of cases is larger than or equal to 1. (ii) We truncated the data at 10 weeks after that time, in order to have homogeneous sets (the same number of points) during the approximately exponential initial growth of the epidemic. To emphasize this latter point, we re-plot the data in linear-log scale. (iii) Finally, we removed those counties where the data do not include at least 10 data points. Note that in the simulated data, we assumed no measurement error in time or in the number of infected. However, this is not a good assumption for real epidemiological data. Thus, for the Ebola data, we fit a modified version of both the deterministic and stochastic birth-only models accounting for measurement error (e.g., missed cases and reporting delays) in a simple way, by assuming that the number of new cases is distributed according to a Negative Binomial, rather than a Poisson, conditional on the simulated state of the system at the previous time. We reparameterize the typical NB(n, p) as NB(δ, µ/(µ+δ)), where µ is the mean number of new cases and δ is an overdispersion parameter such that lim_{δ→∞} NB(δ, µ/(µ+δ)) = Poisson(µ); a quick numerical check of this parameterization is sketched below.
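The following snippet (my own illustration, not from the paper's Appendix) maps the NB(δ, µ/(µ+δ)) parameterization onto SciPy's (n, p) convention and checks numerically that the mean stays at µ, that the variance is µ + µ²/δ, and that the distribution approaches Poisson(µ) as δ grows; the value of µ is an arbitrary placeholder.

```python
import numpy as np
from scipy import stats

mu = 5.0  # mean number of new cases (illustrative value only)

for delta in (1.0, 10.0, 1000.0):
    # scipy's nbinom(n, p) has mean n*(1-p)/p; choosing n = delta and
    # p = delta/(mu + delta) gives mean mu and variance mu + mu**2/delta.
    nb = stats.nbinom(delta, delta / (mu + delta))
    print(delta, nb.mean(), nb.var())

# For very large delta the NB probabilities are numerically close to Poisson(mu):
k = np.arange(20)
big = 1e6
print(np.max(np.abs(stats.nbinom.pmf(k, big, big / (mu + big))
                    - stats.poisson.pmf(k, mu))))
```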
Therefore, the mass of the data conditional on the simulated state of the system is NB(y_{j,k} | δ, x_{j,k−1}α / (x_{j,k−1}α + δ)). The parameter δ controls the level of overdispersion (smaller values, more overdispersion) in the data conditional on the simulated state and is free (estimated) for each point in the likelihood profiles. This formulation puts the stochastic and deterministic models on a level playing field, in that the deterministic model can model variance between epidemic trajectories with increased overdispersion rather than increased population-level heterogeneity. The deterministic and stochastic models were fit with 5,000 and 15,000 particles, respectively, for each value in the profiles (Figure 10). Motivation: Noise as Parametric Heterogeneity Traditional inference is based on maximum likelihood estimates of some well-defined functions. For instance, for the cases considered here (pure birth and birth-death), an ODE-based deterministic approximation provides differential equations that, upon solving, can be fit to the data to determine the parameters (µ_A = α and µ_B = γ) that best describe the data (see Table 2, and Appendix B for a succinct derivation for the pure birth case). Similarly, the stochastic version of those models can be solved, and in that case one could also fit the mean and variance of a given observable (last two rows in Table 2), and indeed higher moments. In these cases, as the models are linear, both deterministic and stochastic predictions for the average are the same (because averaging and integrating the evolution equation are exchangeable operations). However, the latter has the benefit that it also allows one to fit the variance of the data (thus, in principle, increasing the reliability of the inferred parameters). The main point that we wish to address is how to interpret different trajectories of an intrinsically stochastic process. To illustrate this point, Figure 1 shows 100 realizations of the simple stochastic pure birth model with rate parameter α = 0.1 per time unit, measured without error at integer times. If we use a naive deterministic approach (top of Table 2), we fit I(t) = e^{αt} to each trajectory (data set) and estimate α independently, obtaining a distribution for this parameter (Figure 1, bottom panel). (Table 2 compares the birth and birth-death processes; in all cases the epidemic starts with one infected case, namely I(0) = 1, and only models without parametric variability, σ_A = σ_B = 0, are considered there.) If this process were observed at time 25, it would be tempting to conclude that there is a high degree of heterogeneity in the growth rates of these epidemics. Even by time 75, when the expected population size is over 1,000, we still see a large heterogeneity in the estimated rates. If we used the stochastic version of the pure birth process (bottom of Table 2), by definition we would assume that there was just one value for the α parameter and could fit the mean and variance (and possibly other moments) of the trajectories to estimate that growth rate. Another possible deterministic fitting approach is to allow for random effects, where we assume an underlying distribution (e.g., normal) for the growth rate parameter (α) and allow each trajectory to be the realization of a pure birth process with parameter drawn from that distribution (Gelman and Hill, 2007). In this case, the estimation method yields the parameters of the distribution (i.e., the mean and variance).
This is a mixed effects approach, where we still assume no stochasticity and that all differences are due to parametric variability. This approach of assuming parametric variability can also be used with the stochastic version of the model. In fact, it is instructive to analyze such a situation in more detail by calculating analytically the distribution of the number of infected, accounting both for the stochasticity of the process and the parameter distribution for the pure birth process. If we assume that the growth rate, α, is distributed according to a normal, α ∼ N(µ_A, σ_A), then the probability of having I(t) total infected is the product of the geometric distribution for fixed α, which is the solution of the pure birth process (see Allen, 2010), and the normal distribution for α (Equation 1). From this expression, we can obtain the mean and variance of I, including the contributions of both stochasticity and parametric variability (Equations 2 and 3; see also Appendix C). These expressions reduce to the forms in Table 2 when σ_A = 0. It is worth noting that both the mean and the variance of I depend on µ_A and σ_A, suggesting that an ODE or stochastic fit to the mean ignoring parametric variability would estimate the growth rate incorrectly. These four different ways to fit the same data set (e.g., Figure 1) beg the question of which one is the best approach and whether that depends on the data containing actual parametric variability or not. On the other hand, the explicit knowledge of the stochastic form of σ_I, both in the presence of parametric variability (Equation 3) and under pure stochastic variability (Table 2), suggests the definition of a quantity R² (analogous to a coefficient of determination) as the fraction of the total variance of I that is attributable to parametric variability (Equation 4); for the pure birth process this takes an explicit form in terms of µ_A, σ_A, and t (Equation 5; see Appendix C for details). This expression helps us to determine (in a prescriptive way) whether the process is governed by stochasticity (R² → 0) or by parametric variability (R² → 1). Also, as can be expected, the variance at shorter times is governed by pure random fluctuations, but as time proceeds, parametric variance, if present, becomes increasingly more relevant. We plot R² as a function of time in Figure 2. To analyze these issues in more detail, we now use in silico generated data fitted in multiple ways, with and without stochastic effects and with and without assuming parametric variability, to assess the quality of the parameter estimation. Comparison of Fitting Methods With Simulated Data In Appendix D (Tables I to IV) we summarize the fitted parameters discussed in Sections 3.2.1 to 3.2.4. Experiment 1: Model Misspecification We fit 4 models (birth-only and birth-death, with and without random effects) using both deterministic and stochastic model formulations, allowing us to consider the effect of both model structure misspecification and other model assumptions. Parameter estimates for each data set are given in Table I in the Appendix. Also, in Figure 3 we summarize succinctly the main conclusions of this section. Correct model When the data are generated without population heterogeneity (i.e., σ_A = σ_B = 0) and fit with the correct structural model, both the deterministic and the stochastic fits have reasonable point estimates (Table I in Appendix). (Figure 2 caption: Plot of R² as a function of time for heterogeneous stochastic exponential growth. Each line shows R² for the specified level of σ_A assuming µ_A = 0.1; the horizontal gray line indicates 90% of the variance being due to parametric heterogeneity, and the dashed vertical gray lines indicate the time at which each line reaches 90%.)
However, CIs on the death rates are very broad, suggesting that the incidence data are only weakly informative. When we introduce population heterogeneity into both the data and the fits, the stochastic fit still contains the true parameter values in its CIs, although fitting all 4 parameters leads to very broad estimates for the mean and standard deviation of the death rate. The deterministic model, however, is unable to estimate either the mean or the standard deviation of the growth rates correctly. Random effects in the model but not the data When the fit attempts to estimate random effects when no parametric variability is actually present, the CIs for the estimated standard deviation of the parameters in the stochastic fits contain 0, while the deterministic CIs do not. That is, the deterministic model finds evidence for population-level heterogeneity when none actually exists. Random effects in the data but not the model When there is population-level heterogeneity in the data but the model assumes that there is none, the stochastic fit still obtains correct point estimates and CIs of the mean effects for both the birth-only and birth-death models. However, in the deterministic fits the CIs for the mean effects did not contain the true values of the growth rates. Death in the data but not in the model When fitting the birth-death data with a birth-only model, we found that, in both the stochastic and deterministic fits, the estimate of the growth rate is close to the net growth rate (i.e., birth rate minus death rate). (Figure 3 caption: The symbols correspond to the estimates of the growth rate (circles) and death rates (diamonds) under different scenarios; in the left column the data were generated without parametric variability and in the right column with parametric variability. Top row: results of fits with a birth-death model to data generated by a pure birth process, using stochastic or deterministic fits, without ("No RE") or with ("RE") random effects; the horizontal dashed blue lines indicate the value of µ_A (birth rate) and the dot-dashed red line the value of σ_A used in the data generation. Bottom row: results of fits with a pure birth model to data generated by a birth-death process; the horizontal dashed blue lines stand for µ_A and the dot-dashed green lines for µ_B. In all cases, the vertical whiskers are the 95% CIs obtained in the fits. The estimates of the death rate for the random effects fits in the top panel are off the plot, and only the bottom segments of the whiskers are visible.) However, if we allow random effects on the growth rate, the deterministic fit finds a very high level of heterogeneity in the growth rate when none actually exists. The CI for the standard deviation of the parameter in the stochastic fit correctly contains 0, suggesting limited evidence for heterogeneity in growth rates. Death in the model but not in the data Conversely, if there is death in the model but not in the data, in both the fixed-effects stochastic and deterministic fits the CIs for the death rate correctly contained 0. However, the deterministic fit overestimated the growth rate while the stochastic fit did not.
Experiment 2: Number of Units in the Panel Results for data generated by a pure birth process, with different numbers of units in the panel, are shown in Figure 4. Using the stochastic or the deterministic fits resulted in point estimates for the mean growth rates that were very close to the mean value, and the CIs contain the true value in all cases. Increasing the number of units in the panel also yields slightly narrower CIs for the mean growth rate. The standard deviation of the growth rates was correctly estimated in the stochastic model for all but one case; however, the deterministic model overestimated the population-level heterogeneity in all cases. Also, as the number of units in the panel increases, the CIs narrow, suggesting a higher degree of certainty in an incorrect conclusion. Experiment 3: Number of Observations Per Unit The effect of increasing the number of observations per unit was similar to that of increasing the number of units in the panel. For both the stochastic and deterministic fits, the mean growth rates were correctly estimated. As before, the deterministic fit consistently overestimated the standard deviation in the growth rates, and increasing the number of observations per unit led to narrower but wrong CIs. For the stochastic model, increasing the number of observations per unit is more efficient at improving the accuracy of the estimation than increasing the number of units in the panel. Results are shown in Figure 5. (Figure 5 caption: Results of fits when there is a variable number of observations in each unit. The data in all cases were generated by a pure birth process with parametric variability. The top row shows the estimates for the mean growth rate with stochastic or deterministic fits, and the bottom row the estimate of the standard deviation of the growth rate. The horizontal dashed blue lines indicate the parameter values used in the data generation, and the vertical whiskers are the 95% CIs obtained in the fits. In each case, the number of units was U = 20, the growth rate was α = 0.15, and the standard deviation of the growth rate was σ = 0.02.) Experiment 4: Increasing Heterogeneity Between Units We also analyzed the effect of different values for the heterogeneity of the parametric variability. As before, the deterministic fit consistently overestimated the level of heterogeneity regardless of the actual value of the standard deviation of the growth rate; however, these estimates became closer to the true value with increasing heterogeneity in the data. In the stochastic fits, when the heterogeneity was less than 0.04 the estimated CIs included the true parameter, and increasing heterogeneity led to narrower CIs. At the highest heterogeneity levels the CI did not contain the true value; we found that using a stochastic fit to data with high levels of parametric heterogeneity leads to numerical instability, making estimation of the CIs difficult. Results are shown in Figure 6. Quantifying Parametric Variability With R² As shown in Figure 6, the deterministic CIs do not include the real value of σ_A, albeit the estimate of µ_A is accurate enough. To test the ability of different methods to quantify the relevance of parametric variance vs. noise (through R²), we use the estimates of σ_A from the different methods with Equation (5), at the final observation, t = 20. The results are shown in Figure 7. Note that the stochastic prediction, at least, is able to include the real R² inside the whisker, especially at low values of parametric variability.
This means that this fitting method is able to capture (in a probabilistic way) the cases where parametric variance is not as relevant as fluctuations. (Figure 6 caption: Results of fits with increasing standard deviation of the growth rate. The data in all cases were generated by a pure birth process with parametric variability. The top row shows the estimates for the mean growth rate with stochastic or deterministic fits, and the bottom row the estimate of the standard deviation of the growth rate. The horizontal dashed blue lines indicate the parameter values used in the data generation, and the vertical whiskers are the 95% CIs obtained in the fits. In the right panels, the red empty squares are the estimated values obtained from standard linear mixed-effect models (regression). In each case, the number of observations per unit was O = 20 and the number of units was U = 20.) (Figure 7 caption: Estimated R² with increasing standard deviation of the growth rate. The data in all cases were generated by a pure birth process with parametric variability. The horizontal short dashed lines indicate the parameter values used in the data generation. The left panel corresponds to the stochastic fits and the right panel to the deterministic fits, where the vertical whiskers are the 95% CIs obtained in the fits. The red empty squares in the right panel stand for the value of R² calculated with Equation (5) at time t = 20 with parameters estimated using standard linear mixed-effect models (regression). The empty green diamonds are an alternative way to estimate R², using the empirical data variance and the theoretical (stochastic) noise variance, Equation (8). In each case, the number of observations per unit was O = 20.) We have used simulation-based inference throughout, because it allows us to compare likelihood profiles directly between stochastic and deterministic implementations of the models. Nevertheless, it is worth remembering that traditional methods (based, loosely speaking, on regression) are usually the preferred way to estimate parameters from data. This is not a matter of taste but of computational efficiency. Even for the simple models in the present work, simulation-based inference is computationally expensive (and, as such, not suitable as of this writing for models with many parameters). Thus, for the sake of completeness, we briefly discuss the role of regression-based methods in our framework and fit the data in Experiment 4 using a standard linear mixed-effect model (Gelman and Hill, 2007); a minimal sketch of such a regression-based fit is given below. We find that this fit results in a systematic underestimation of the mean, µ_A (red squares in Figure 6, top), and in an overestimation of the standard deviation, σ_A (red squares in Figure 6, bottom). While Equation (5) was derived under the assumption of an underlying stochastic process, and traditional methods ignore the stochasticity of the underlying process, we can still use hybrid information to obtain a rough estimate of the relative weight between noise and parametric variance. We can mix both approaches (linear mixed-effect models and stochastic predictions) in two ways. In the first one (corresponding to the red empty squares in Figure 7), we use µ_A and σ_A from the linear mixed-effects model fit to the data in Equation (5). The second method consists of calculating the empirical variance of the data and the expected value of the noise variance from Equation (8), and then calculating R² using Equation (4).
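As a rough illustration of the regression-based alternative referred to above, a linear mixed-effects model with a random slope on time can be fit to log-transformed counts to obtain analogues of µ_A and σ_A. The sketch below is not the authors' code; it assumes the statsmodels library, cumulative infected counts as input, and placeholder names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def mixed_effects_growth(counts):
    """counts: array of shape (U, O) with cumulative infected counts per unit.
    Returns rough analogues of (mu_A, sigma_A) from a random-slope model."""
    U, O = counts.shape
    df = pd.DataFrame({
        "unit": np.repeat(np.arange(U), O),
        "t": np.tile(np.arange(1, O + 1), U),
        "log_I": np.log(np.clip(counts, 1, None)).ravel(),  # avoid log(0)
    })
    # Random slope on t per unit: log I ~ alpha_j * t, with alpha_j varying by unit.
    model = smf.mixedlm("log_I ~ t", df, groups=df["unit"], re_formula="~t")
    fit = model.fit()
    mu_A = fit.fe_params["t"]                       # mean growth rate (fixed effect)
    sigma_A = float(np.sqrt(np.diag(fit.cov_re)[-1]))  # SD of the random slope on t
    return mu_A, sigma_A
```

These estimates could then be plugged into the paper's Equation (5) to obtain the regression-based R² values shown as red squares in Figure 7.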
Remarkably, inspection of Figure 7 (green empty diamonds) suggests that, using this second method, the estimated value of R² is sometimes closer to the original one. In summary, combining standard methods with analytical results coming from the exact solution of the stochastic process might be useful for estimating the level of noise in the data. Notwithstanding, in all cases this hybrid method used to calculate R² also overestimates the true value. Heterogeneity of Epidemic Spread of Ebola In Figure 8 we show the total number of cases reported for the 2014-15 Ebola epidemic in Guinea, Liberia, and Sierra Leone. In each case, the solid line is the fit of an exponential function to the data for the first 29 weeks. Despite the fluctuations (especially in the first days), the fit provides an (apparently) accurate account of the growth during those early weeks. Note that the estimated slopes are highly variable among countries. Since, for simple models, the slope in the exponential fit (α) is proportional to the basic reproductive number minus one (R_0 − 1) (Heffernan et al., 2005), with this approach one would conclude that the severity of Ebola in different countries is highly variable. Indeed, this variability has been reported for the 2014-15 epidemic (with R_0 ranging between 1.51 and 2.53; Althaus, 2014; Kucharski et al.; Krauer et al., 2016), as well as for earlier outbreaks (Chowell et al., 2004). From a traditional deterministic approach we might come to two conclusions: (1) the Ebola epidemic is well described by a deterministic model that accurately predicts the initial exponential growth, and (2) the epidemic was more aggressive in Guinea, followed by Sierra Leone and Liberia. However, a closer inspection of the data (collected by counties) before the aggregation shows a different picture. In Figure 9, we plot the same dataset (for Liberia and Sierra Leone) but separately for the different counties. Now, the conclusions that can be drawn are more nuanced and perhaps contrary to the picture of uniform growth suggested by Figure 8. On the one hand, the starting dates of the epidemic in different counties are highly variable, and the initial slopes (the plot is in logarithmic scale) also display a large variability. This suggests that assigning a single value per country (and, consequently, a single R_0) can be misleading and lead to erroneous interpretations and, more importantly, erroneous interventions or policies. On the other hand, and this is what we are interested in, this fine-grained view of the data begs for a stochastic approach to fitting. Even when the data are aggregated (which tends to smooth the underlying stochasticity), the initial parts of the curves are reminiscent of the trajectories in Figure 1 (left panel). Ebola Model Fits We fitted both deterministic and stochastic versions of a birth-only model with random effects to the Ebola data, allowing for negative binomial measurement error (see section 2.3 for details). The stochastic model was, in terms of the likelihood values, objectively better than the deterministic model (−556.4 vs. −565.0), despite being identical in all respects except stochasticity. The estimate of the mean growth rate was nearly identical in both models: 0.62 with CI (0.53, 0.73) deterministic and 0.59 with CI (0.52, 0.67) stochastic (Figure 10). However, the deterministic model found a very high level of heterogeneity, 0.16 with CI (0.11, 0.25), while the stochastic model found low levels of heterogeneity, 0.03 with CI (0, 0.15).
In the stochastic model, the profile likelihood for the standard deviation in growth rates, σ_A, suggests that the likelihood surface is virtually flat around very small values of σ_A (see Figure 10, right). (Figure 10 caption: Profile likelihood plots for the parameter estimates for the Ebola data. The left panel shows the profile likelihood for the mean growth rate, µ_A, and the right panel the profile likelihood for the standard deviation of growth rates over counties, σ_A. Horizontal dashed lines indicate the MLE and 95% CIs for the parameter estimates; the overdispersion parameter was free to vary in the calculation of each point along the profile.) However, in the deterministic model, even when we allow variable levels of overdispersion, the likelihood rapidly drops off as the heterogeneity decreases from the MLE. Overall, these results show that, while deterministic fitting is as good as stochastic fitting for estimating the mean growth rates, it performs poorly as a predictor of the parametric variability. Specifically, using our definition of R² and the MLE of σ_A = 0.03 obtained with the stochastic method, we can estimate the contribution of parametric variability to overall variability in the data. Using Equation (5) results in R² ≃ 0.21. This analysis would suggest that, in the case of Ebola, 10 weeks after the start of the epidemic, around 79% of the measured variability could be attributed to noise rather than to inter-county differences. Taking into account that, as we showed in Figure 7, this empirical way of calculating R² overestimates the true coefficient, the conclusion is even more substantiated. Doing the same calculation with the value obtained in the deterministic fitting, σ_A = 0.16, we get R² ≃ 0.88, so we would conclude that 88% of the variability is due to true differences among counties. DISCUSSION AND CONCLUSIONS The aim of modeling is not to capture every specific feature of the system under consideration but, rather, to describe succinctly the main mechanisms of the process and, ideally, to be able to differentiate among competing hypotheses (Ganusov, 2016). The art of modeling involves balancing multiple levels of complexity to achieve predictability, accuracy, and tractability. In this context, here we have added another concern: is the methodological approach suitable? Following an approach of keeping things simple, we have shown that even for the most basic cases, deterministic fitting methods, which assume that all variability is either error or parametric, provide misleading results. Not all aspects of the models were sensitive to the assumption of determinism, however, since, for example, the mean of a parameter was usually reasonably estimated. This study is not a purely academic exercise on the role of fluctuations in small populations, because our results point to important practical implications. A case in point is our example of the initial spread of the Ebola epidemic. Although different counties seem to have different growth rates, our fitting indicates that the variability is also well explained by stochastic (i.e., non-systematic) differences among the counties. This does not mean that there are no differences in epidemic spread among the counties, only that stochasticity alone is a statistically better and more parsimonious explanation. That is, when stochasticity is taken into account, the evidence for differences in early growth rates is negligible. The ability to accurately detect and measure heterogeneity is an important topic with practical implications. Take, for example, the expanding field of personalized medicine, where individual treatment plans may be designed under the potentially faulty assumption that there is heterogeneity in response to treatment regimes.
Likewise, scientific resources may be wasted in a quest to search for individual-level correlates of heterogeneity that may not exist. Our results suggest that measuring heterogeneity in panel data time series is prone to bias and misinterpretation, and that including more data, in terms of additional observations per unit or an increased number of units, will not alleviate this bias caused by methodological misspecification. In this regard, it is important to note that the simulation-based stochastic fits, generally speaking, appropriately partitioned variability into stochastic and parametric components even with relatively short time series. This means that such methods should be preferred for fitting data. However, there are practical issues with implementing stochastic fitting methods when the models are complex (e.g., multiple populations or many parameters) or the populations involved are large. This is because the computational resources needed and the time to fit a given model would be, in most cases, prohibitive. As an alternative, if a fully stochastic model is not possible, one could explore the possibility of using stochastic models for a limited time window (for instance, early on), although this will require the development of hybrid fitting methodologies. Generally, one should be cautious when interpreting the fit of deterministic models to panel data, since the observation of parametric heterogeneity, or even structural heterogeneity in terms of model selection, may be the result of overfitting stochastic fluctuation. Also, the term R² can be estimated numerically for a given model to provide a warning of potential problems based on deterministic model fits (a minimal numerical sketch is given below). In summary, here we analyzed the effect of neglecting stochastic noise (i.e., in addition to the error term) in panel data of biological time series. We found that deterministic approaches usually overestimate the parametric variability, although (at least in our simple models) the parameter average is less difficult to estimate. On the other hand, stochastic fitting, in general, did a good job of dividing variability between its stochastic and parametric components.
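As an illustration of that last point, the fraction of between-trajectory variance attributable to parametric heterogeneity can be estimated numerically from simulations by a law-of-total-variance decomposition. This is a sketch of the general idea only (it does not reproduce the paper's Equations 4, 5, or 8); the discrete-time simulator and parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_final_size(alpha, t_max=20):
    """One discrete-time pure-birth trajectory starting from I = 1; returns I(t_max)."""
    I = 1
    for _ in range(t_max):
        I += rng.poisson(alpha * I)
    return I

def r2_parametric(mu_A=0.15, sigma_A=0.02, n_param=100, n_rep=200, t_max=20):
    """Monte Carlo estimate of Var(E[I | alpha]) / Var(I) (law of total variance)."""
    shape, scale = (mu_A / sigma_A) ** 2, sigma_A ** 2 / mu_A
    alphas = rng.gamma(shape, scale, n_param)          # one growth rate per unit
    cond_means, all_I = np.empty(n_param), []
    for i, a in enumerate(alphas):
        Is = np.array([simulate_final_size(a, t_max) for _ in range(n_rep)])
        cond_means[i] = Is.mean()                      # E[I | alpha] for this unit
        all_I.append(Is)
    return cond_means.var() / np.concatenate(all_I).var()

print(r2_parametric())                 # heterogeneity present: R^2 noticeably above 0
print(r2_parametric(sigma_A=0.001))    # much weaker heterogeneity: R^2 drops toward 0
```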
9,186.8
2018-07-12T00:00:00.000
[ "Mathematics" ]
A multi-organization epigenetic age prediction based on a channel attention perceptron networks DNA methylation tracks individual aging (so-called epigenetic clocks), and investigating the correlation between methylation loci and human aging will improve the research and diagnosis of aging diseases. Although this discovery has inspired many researchers to develop traditional computational methods to quantify the correlation and predict chronological age, a performance bottleneck has delayed practical application. Since artificial intelligence technology has brought great opportunities to this research, we propose a perceptron model, named PerSEClock, that integrates a channel attention mechanism. The model was trained on the 24,516 CpG loci shared by all types of methylation identification platforms, so samples from any of these platforms can be used, and it was tested on 15 independent datasets against seven methylation-based age prediction methods. PerSEClock demonstrated the ability to assign varying weights to different CpG loci. This feature allows the model to enhance the weight of age-related loci while reducing the weight of irrelevant loci. The method is free to use for academics at www.dnamclock.com/#/original. Introduction Methylation occurs mainly at the 5th carbon atom of cytosine in CpG dinucleotides, and changes in methylation are directly correlated with changes in gene expression (Sun et al., 2011). Recently, there has been increasing interest in the relationship between aging and methylation (Tang et al., 2018; Luo et al., 2023; Sinha et al., 2023). Many studies have reported that increasing age is reflected in DNA methylation changes (Fraga et al., 2007; Christensen et al., 2009; Horvath et al., 2012). Since DNA methylation levels at specific sites vary over time, DNA methylation-based epigenetic clocks can be used to effectively quantify biological aging, with wide application in anti-aging research. Genome-wide DNA methylation is widely measured using microarray-based technology, including Illumina HumanMethylation27 (27 K), HumanMethylation450 (450 K) and HumanMethylationEPIC (850 K) (Moran et al., 2016). By calculating the beta value of DNA methylation for each specific cytosine locus (Levy et al., 2020), the methylation level of each CpG can be quantified. Most of the emerging age prediction methods have been developed based on the transformed data from these techniques. Machine learning methods have been relatively well developed in this field (Wang et al., 2023a; Lv et al., 2023). In 2013, Horvath (2013) developed a 353-locus age prediction model using data from 51 different tissues, and the difference between the model-predicted age and chronological age was around 3.6 years. Although the error was high in some tissues, such as the breast, it is still considered the most accurate pan-tissue epigenetic clock by far (Chen et al., 2019; Fahy et al., 2019; Fitzgerald et al., 2021). Hannum et al.
(2013) developed an age prediction model using 71 CpG loci with blood data. It first revealed that factors such as sex and weight affect the prediction of methylation age. Most published methods use linear regression (Zou and Hastie, 2005) for age prediction (Lin et al., 2016; Shireby et al., 2020; Zhang et al., 2019). They collect loci with a high impact on predicting age to form an epigenetic clock for methylation age prediction. The linear regression approach is computationally simple and can predict age using fewer loci, but it ignores the effect of the remaining loci on the predicted age. Linear regression also has some limitations in predicting methylation age, which can lead to high prediction errors. During the past few years, deep learning has been introduced to address this challenge. Compared to machine learning methods, deep learning is emerging as a promising approach for improving this area, since it is better suited to multi-feature tasks and can achieve higher accuracy (Wang et al., 2023b; Ispano et al., 2023; Qi and Zou, 2023; Sreeraman et al., 2023). Thong et al. (2021) demonstrated an artificial neural network model using three genes that outperformed linear regression models. Levy et al. (2020) used the MethylNet deep learning model to predict DNA methylation age and demonstrated its significant advantage over machine learning models. Li et al. (2021) used correlated pre-filtered neural networks (CPFNN) for age prediction and found that appropriately weighting features highly correlated with the prediction results is a critical factor in improving prediction accuracy. de Lima Camillo et al. (2022) proposed the AltumAge model using deep neural networks, building on DeepMAge, a model trained on blood samples by Galkin et al. (2021); they reduced the prediction error to 2.153 and discussed the correlation between relevant CpG loci in some detail. However, in the experiments it was found that the deep learning models that have emerged so far have poor generalization ability on independent datasets and poor prediction accuracy for independent samples. So, there is still room for further optimization in the prediction of methylation age using deep learning models. In this study, we propose a perceptron prediction model based on the channel attention mechanism, which is a nonlinear regression algorithm. This paper uses the 24,516 CpG loci common to all 3 Illumina platforms for age prediction to ensure that all CpG loci are able to participate in the task. The model uses a channel attention module to assign different weights to individual loci so that the model focuses on task-relevant CpG features and reduces the weights of irrelevant CpG features, providing more valid information for the age prediction task. Compared with the simple perceptron model, the inclusion of the channel attention mechanism in our method leads to a greater improvement in the generalization ability and prediction accuracy of the model.
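For context, the elastic net clocks cited above share the same basic recipe: regress chronological age on the beta-value matrix with a combined L1/L2 penalty so that only a subset of CpG loci receives nonzero weights. A minimal sketch with scikit-learn (not any specific published clock; the data arrays below are random placeholders) looks as follows.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

# X: (n_samples, n_cpg) matrix of beta values in [0, 1]; y: chronological ages.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2000))   # placeholder beta values
y = rng.uniform(0, 100, size=500)         # placeholder ages

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clock = ElasticNetCV(l1_ratio=0.5, cv=5, random_state=0)  # L1/L2 mix, alpha chosen by CV
clock.fit(X_tr, y_tr)

selected = np.flatnonzero(clock.coef_)    # CpG loci retained by the sparsity penalty
mae = np.mean(np.abs(clock.predict(X_te) - y_te))
print(len(selected), "loci selected; MAE =", round(mae, 2))
```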
Datasets We collected 50 datasets from GEO (Barrett et al., 2012) with a total of 13,658 healthy samples from the Infinium 27 K, 450 K, and 850 K platforms (Zhang et al., 2019; Qi and Zou, 2023; Sreeraman et al., 2023), of which 35 datasets were used for model construction and the remaining 15 datasets were used for independent testing. The raw data were separately saved as clinical data and a beta value matrix using the R package GEOquery (Davis and Meltzer, 2007), then filtered to remove samples with more than 50% of beta values missing. Finally, a method based on simple linear regression (methyLImp) (Di Lena et al., 2019) was used to fill in some of the missing values in the beta value matrix. Figure 1 shows an overview of the 35 datasets by tissue. Different tissues may require different markers to achieve a high level of prediction accuracy. Woźniak et al. (2021) developed the VISAGE enhanced tool and statistical models based on blood, oral cells, and bone; they used three different combinations of loci to construct models that provide accurate DNA methylation age estimates for each tissue separately. The numbers of CpG loci in 27 K, 450 K, and 850 K data are usually 27,578, 485,577, and 868,564, respectively. Since many datasets are missing CpG loci, this paper takes the 24,516 loci common to the three platforms for model training. The beta value at each CpG locus indicates the degree of DNA methylation: a beta value of 1 indicates that a CpG locus is fully methylated on the allele, while a beta value of 0 indicates that the CpG locus is entirely unmethylated. Model construction Figure 2 shows the neural network structure of the model in this paper, which mainly consists of a channel attention module and a perceptron module. The channel attention module assigns different weights to the data according to the importance of the CpG loci. (Figure 1 caption: Organizational chart of the data sample. The figure divides the dataset into nine parts according to tissue, with blood data and buccal data accounting for a larger share. Tissue data that account for less than one percent are summarized in the "Others" set, for a total of 9%. The remaining 6 tissue data sets are sorted in order of their share.) In contrast, the perceptron module uses a 4-layer network for continuous fitting to accomplish the age prediction. The first reshape operation in the attention module transforms a column of data into a block structure, where 1,362 denotes the number of channels, each containing 3 × 6 beta values. The one-dimensional data with 24,516 components are transformed into three-dimensional data with the structure 3 × 6 × 1,362 by placing the beta values of the input data into different channels in order. Global Average Pooling denotes the calculation of the average of the 18 beta values in each channel; each channel yields one value, for a total of 1,362 values. 2fc(·, W) denotes two fully connected layers, which update the 1,362 values. The updated data are subjected to a Scale operation: the updated 1,362 values are assigned to the block structure obtained after the first reshape, specifically by multiplying the value of each channel by the 18 beta values of the corresponding channel. This yields feature values weighted by the channel attention module. The perceptron module feeds the processed feature values into a four-layer fully connected network for regression and finally outputs a methylation age prediction.
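A compact PyTorch sketch of the architecture described above is given below. It follows the stated dimensions (24,516 inputs reshaped into 1,362 channels of 3 × 6 values, squeeze-and-excitation-style channel weights, then a four-layer MLP with 32-unit hidden layers), but details such as the reduction ratio in the two fully connected attention layers and the sigmoid gate are assumptions in the spirit of SENet, not values reported by the authors.

```python
import torch
import torch.nn as nn

class PerSEClockSketch(nn.Module):
    """Channel-attention (SE-style) perceptron for methylation age prediction (sketch)."""
    def __init__(self, n_cpg=24516, channels=1362, reduction=16):
        super().__init__()
        assert n_cpg == channels * 18          # 1,362 channels x (3 x 6) beta values
        self.channels = channels
        # Squeeze-and-excitation: per-channel average, two FC layers, sigmoid gate.
        self.attention = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        # Four-layer perceptron head (32 units, LeakyReLU, BatchNorm, Dropout) -> age.
        layers, in_dim = [], n_cpg
        for _ in range(4):
            layers += [nn.Linear(in_dim, 32), nn.BatchNorm1d(32),
                       nn.LeakyReLU(), nn.Dropout(0.1)]
            in_dim = 32
        layers.append(nn.Linear(32, 1))
        self.mlp = nn.Sequential(*layers)

    def forward(self, beta):                       # beta: (batch, 24516) beta values
        x = beta.view(-1, self.channels, 18)       # (batch, 1362 channels, 18 values)
        w = self.attention(x.mean(dim=2))          # squeeze: per-channel mean -> weights
        x = x * w.unsqueeze(2)                     # scale: reweight each channel
        return self.mlp(x.flatten(1)).squeeze(1)   # predicted age
```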
For the DNA methylation age prediction task, an exact age value is required, so a regression model was used to construct the network. Initially, the model consisted only of a perceptron network built from five fully connected layers, a simple model structure. However, it was found in the experiments that using a simple perceptron model was not sensitive enough to the epigenetic factors (CpG loci), and the model training was more likely to be overfitted. (Figure 2 caption: Model architecture. The model has four components: Input, Channel Attention, MLP, and Output. The CpG loci of a sample are input and reshaped as 3D data, where the number of channels is 1,362. Global Average Pooling is calculated as one value per channel, and the data are fitted in a two-layer network to scale the original 3D data; this process continually enhances the features that are relevant to the prediction results. Afterwards, the 3D data are transformed into one-dimensional data and fed into a four-layer MLP network to output the predicted age values.) Experimental platform Our model is built with Python 3.7.11. We use PyTorch and the scikit-learn library to build the network and for data preprocessing. The AltumAge method is based on Python 3.8, and its network is built using the TensorFlow framework for testing. Several other methods were reproduced using R code. Among them, the Horvath method applies a data normalization operation when predicting; the prediction time with normalization is longer and the result is not much different from that without normalization, so the data were not normalized in the comparison experiments. 3 Results and discussions Data sample arrangements We preprocess the data before loading it into the model. First, the read-in data are shuffled, and then the shuffled data are divided into training, validation, and test sets with 8,577, 953, and 1,059 samples, respectively. Finally, the data are converted to tensor format and fed into the model for training. Since the beta value (0-1) in the beta value matrix represents the degree of methylation, which has a special meaning in DNA methylation age prediction, the data in this paper were not normalized. Evaluation indicators To evaluate the accuracy of the model in predicting age, four evaluation metrics were used: the coefficient of determination (R-squared), mean absolute error (MAE), mean squared error (MSE), and median absolute error (Med). R-squared is defined as R² = 1 − SSE/SST (Eq. 1), where SSE is the error sum of squares and SST is the total sum of squares. The MAE and MSE are defined as MAE = (1/m) Σ_i |preAge_i − chrAge_i| (Eq. 2) and MSE = (1/m) Σ_i (preAge_i − chrAge_i)² (Eq. 3), where m denotes the number of predicted samples, preAge_i denotes the predicted age of a single sample, and chrAge_i denotes the actual age of a single sample. Med is defined as Med = median(|preAge − chrAge|) (Eq. 4), where preAge indicates the predicted ages of all test data, chrAge indicates the actual ages of all test data, and the median function takes the median of a set of numbers.
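The four metrics just defined are straightforward to compute; a small NumPy helper (my own illustration, not the authors' evaluation script) is shown below.

```python
import numpy as np

def age_metrics(pred_age, chr_age):
    """R-squared, MAE, MSE, and median absolute error (Med) for age predictions."""
    pred_age, chr_age = np.asarray(pred_age, float), np.asarray(chr_age, float)
    err = pred_age - chr_age
    sse = np.sum(err ** 2)                          # error sum of squares
    sst = np.sum((chr_age - chr_age.mean()) ** 2)   # total sum of squares
    return {
        "R2": 1.0 - sse / sst,
        "MAE": np.mean(np.abs(err)),
        "MSE": np.mean(err ** 2),
        "Med": np.median(np.abs(err)),
    }

print(age_metrics([30.2, 41.5, 66.0], [32, 40, 70]))
```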
The evaluation metrics of the model on the training, validation, and test sets are shown in Table 1, where R-squared is only used to show the correlation between the actual age and the predicted age, and the other three metrics are used to evaluate the accuracy of the model in predicting DNAm age. All_data indicates all the data used for development, and Train_data, Val_data, and Test_data represent the data for training, validation, and testing, respectively. Considering the evaluation metrics commonly used in other literature, Med is finally used as the accuracy metric in this paper. Training and testing performance To verify the prediction accuracy of the model, we first tested on the data used for model development. The test results are shown in Figure 3. Figure 3A shows the prediction results of all samples, including the training, validation, and test sets (age correlation = 0.95, median absolute error = 1.65). The horizontal axis is the actual age of the samples, and the vertical axis represents the age predicted by the model. Samples from different datasets are distinguished by color, which shows that the model performs well on most of the datasets, with significant errors only for some samples. Figures 3B-D show the prediction results of the training, validation, and test data, respectively. The actual age and the predicted age show a high correlation in all plots; the Med on the training set is 1.57, and the Med on the test set is 2.04. Overall, the model's prediction accuracy is high, and the prediction performance is relatively stable across datasets. Performance comparison with peer methods To verify the prediction accuracy and generalization ability of the model, 15 separate datasets with a total of 3,069 healthy samples are used in this paper, and our model is tested against other methods on these datasets. The datasets were divided into two batches for separate comparisons. The seven datasets in Table 2 are contained in the training sets of certain other methods, which is indicated by * in the table; the eight datasets in Table 3 are independent test sets for these methods and are not in the training sets of any of the compared methods. The Horvath method, LinAge method, and PhenoAge method (Levine et al., 2018) use the traditional elastic net approach for age prediction with different combinations of CpG sites. Zhang et al. (2019) used an elastic net model to screen 514 CpG sites for prediction in order to improve prediction accuracy. The Cortical Pred method is a methylation age prediction clock for brain tissue developed by Shireby et al. (2020); since it performed well in other tissues, it was compared with this paper's model. AltumAge and BNN are methods that use neural networks for prediction. In this paper, comparison experiments are conducted with the above seven methods. Since the experimental code is not given for DeepMAge by Galkin et al. (2021), and the 71-CpG clock of Hannum et al. (2013) was developed only for the 450 K blood dataset and cannot be tested on datasets with missing loci, our model was not compared with these two methods. The comparison results with the seven different prediction methods are shown in Tables 2 and 3.
Column one in the tables is the name of the dataset, columns two and three indicate the number of samples and the sample tissue of the dataset, and the last eight columns are the Med values of each method on the test dataset. The bolded font in Table 2 is the best predicted result apart from results on a method's own training set, and the bolded font in Table 3 is the best-performing result among the eight methods. From the tested Med results, the model in this paper outperformed the other models on 10 datasets, and although it performed slightly worse on three datasets, GSE19711, GSE42861, and GSE61431, it was not much different from the best model in terms of Med. Due to the small number of brain tissue samples in the training dataset of the model in this paper, the Med tested on brain tissue datasets is slightly larger. To ensure that the platform of the test data has a small effect on the test results, an 850 K dataset was used for testing, and the test results are shown in the GSE152026 row of Table 3. The BNN method cannot make predictions on this dataset due to the missing CpG sites, so a # sign is used. Overall, the model in this paper has a low Med and a relatively stable prediction ability across datasets, and this is also confirmed on the 850 K dataset (Hannon et al., 2021). Differences among multiple tissues To test the prediction accuracy of the model in different tissues, the model was compared with four methods, and Figure 4 shows the prediction error of each model in eight tissues. The horizontal coordinates in the figure represent eight different tissues, and the vertical coordinates indicate the error between the predicted age and the actual age, calculated by subtracting the actual age from the predicted age. The comparison results of our model in different tissues showed that the prediction error in the entorhinal cortex and dorsolateral prefrontal cortex was slightly larger, but the average error in the other tissues was around 0 and the error span was relatively small. Combining the error results of each method, we found that whole blood samples performed relatively stably with each technique, while the entorhinal cortex and dorsolateral prefrontal cortex had more significant errors with each method. This also confirms the significant differences in the epigenomes of different tissue types suggested by previous researchers (Illingworth et al., 2008; Li et al., 2010). Conclusion Accurate age prediction can help clinicians determine whether the body's tissues are normal or not. By identifying changes in the genetic characteristics of human tissues, individual disease risk can be effectively reduced. We discuss the usability, accuracy, advantages, and significance of deep learning-based multi-tissue methylation age prediction methods compared to other methods. Although prediction with the elastic net method has been very accurate among human aging prediction methods, its exploration of the correlation between CpG loci is not detailed. Deep learning models can not only outperform linear regression models in terms of accuracy, but are also more helpful in investigating the linkages between loci.
In this paper, we propose a perceptron prediction model based on the channel attention mechanism, which has better learning ability and can improve prediction accuracy compared with a simple perceptron model. It can be seen from the Med on the test datasets that the model in this paper predicts more accurately than most current methods. After experimental validation, this model outperforms other methods on most datasets. Although the error of the model in this paper is slightly larger in the frontal cortex, it performs well in various tissues such as blood and saliva. Therefore, compared with other methods, the model in this paper has better predictive power and model generalization ability. In future work, the changes of CpG loci after adding the attention mechanism and the interconnections between CpG loci need to be further explored. The CpG loci in different tissue samples have different degrees of influence on the prediction results, and comparing them will help to improve the accuracy in different tissues. In the future, a self-attention mechanism module will be used to update the model parameters, which will make it easier and more interpretable to explore the changes of CpG locus coefficients. Since the number of feature loci used makes network learning more difficult, attention mechanisms are introduced into the deep learning model, specifically by combining the perceptron model (MLP) with the channel attention mechanism SENet. The channel attention module makes the model pay more attention to those CpG loci with a high correlation with age, automatically adjusts the weights according to the loss values of model training, and then assigns the weights to the initial features, thus speeding up the model fitting.
Firstly, the beta values of the CpG loci of the training samples are loaded into the model in batches. The input data in Figure 2 take one sample as an example: the one-dimensional data with 24,516 components are reshaped into three-dimensional data with the structure 3 × 6 × 1,362 (adjacent CpG loci are put into different channels). After the global average pooling operation, 1 × 1 × 1,362 values are calculated to obtain the initial weight of each channel. These 1 × 1 × 1,362 values are then fed into two fully connected layers, and the weights are updated according to the loss values of model training. The updated 1,362 weights are assigned to the corresponding channels of the original 3D data. Finally, the data are reshaped back into one-dimensional form and sent to the four-layer perceptron model for the regression operation. Training ends when the training loss becomes stable; the validation and test sets are then evaluated and the trained model parameters are saved. The network model in this paper consists of an input layer, a channel attention module, 4 hidden layers, and 1 output layer. Each hidden layer consists of 32 neurons, a LeakyReLU activation function, BatchNorm1d(32) batch normalization, and Dropout(0.1) regularization, and the network is optimized using the Adam algorithm. The model training process is monitored using the MSELoss loss function, and the decreasing trend of the validation loss is observed together with the training loss to decide when to stop training.

FIGURE 3 Model visualisation metric results. The horizontal coordinates in the figure represent the actual age of the samples and the vertical coordinates represent the model-predicted age. (A) represents the prediction results of all samples, and different colors represent the sample data in different datasets. (B) represents the prediction results of the samples used for model training. (C) represents the prediction results of the samples used for model validation. (D) represents the prediction results of the samples used for model testing.

FIGURE 4 Prediction errors of different methods in eight tissues, where (A) represents our method, (B) represents the AltumAge method, (C) represents the eight different tissues, (D) represents the Horvath method, and (E) represents the BNN method. There are eight different colored boxes in each figure representing data from different tissues. We calculated the mean error between the predicted age and the actual age for the different tissues. The closer the mean error is to the 0-axis, the better the prediction; the further it is from the 0-axis, the worse the prediction.

TABLE 1 Metric results from model training, validation, and testing. The model updates the weights based on the loss function. By weighting the CpG loci according to their different levels of importance, the important features on different channels are strengthened and the invalid features are weakened, enhancing the feature representation of the feature map. Using the re-weighted feature data for age prediction reduces the number of model parameters and the computational load to a certain extent, thus enabling the model to perform age prediction more effectively and improving model accuracy.

TABLE 2 Comparison of the Med of the seven methods on the independent datasets (training sets containing the other methods). a Indicates the presence of this dataset in the training set of the method. The bolded font is the best predicted result apart from results on a method's own training set.
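As a rough illustration of the pipeline walked through above (Figure 2), the following PyTorch sketch shows one possible way to combine a squeeze-and-excitation channel attention block with the four-layer perceptron regressor. The layer sizes follow the description in the text (24,516 beta values reshaped into 1,362 channels of 3 × 6 values, four hidden layers of 32 units, LeakyReLU, BatchNorm1d(32), Dropout(0.1), Adam, MSELoss); the channel-first memory layout, the sigmoid gating, the squeeze-excitation reduction ratio and the learning rate are our assumptions, not details given in the paper.

```python
# Hypothetical sketch of a channel-attention (SENet-style) + MLP age regressor.
import torch
import torch.nn as nn

class ChannelAttentionMLP(nn.Module):
    def __init__(self, n_cpg=24516, channels=1362, spatial=(3, 6), r=16):
        super().__init__()
        self.channels, self.spatial = channels, spatial
        # Squeeze-and-excitation block: global average pooling + two fully connected layers
        self.se = nn.Sequential(
            nn.Linear(channels, channels // r),   # reduction ratio r is an assumption
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),
            nn.Sigmoid(),                         # gating function assumed, not stated in the paper
        )
        # Four hidden layers of 32 neurons with LeakyReLU, BatchNorm1d(32) and Dropout(0.1)
        layers, in_dim = [], n_cpg
        for _ in range(4):
            layers += [nn.Linear(in_dim, 32), nn.LeakyReLU(),
                       nn.BatchNorm1d(32), nn.Dropout(0.1)]
            in_dim = 32
        layers.append(nn.Linear(32, 1))           # output layer: predicted age
        self.mlp = nn.Sequential(*layers)

    def forward(self, beta):                      # beta: (batch, 24516) methylation beta values
        x = beta.view(-1, self.channels, self.spatial[0] * self.spatial[1])
        w = self.se(x.mean(dim=2))                # global average pooling -> per-channel weights
        x = x * w.unsqueeze(2)                    # re-weight each channel of the original data
        return self.mlp(x.flatten(1)).squeeze(1)  # reshape back to 1D and regress the age

model = ChannelAttentionMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # learning rate assumed
loss_fn = nn.MSELoss()
```

A training loop would feed batches of beta values, compute the MSE loss against chronological age, and stop once the validation loss plateaus, as described above.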
TABLE 3 Comparison of the Med of the seven methods on the independent datasets. a Indicates that the method could not be measured due to the lack of CpG sites. The bolded font is the best-performing result among the eight methods.
5,427.8
2024-04-24T00:00:00.000
[ "Computer Science", "Medicine" ]
An Assessment of the Impact of Information Technology on Marketing and Advertising The present study aimed to evaluate the impact of information technology on marketing and advertising using structural equation modeling. To do this, 200 marketing and information technology (IT) experts participated in the study. They answered questionnaires regarding IT, the marketing mix and advertising. For data analysis, structural equation modeling with SMARTPLS was used. The results showed that the effect of IT on the marketing mix and on advertising was positive and significant. The effect of the marketing mix on advertising was positive and significant. The indirect effect of IT on advertising via the marketing mix was therefore also positive and significant. Overall, the results emphasize the effect of technology on the marketing mix and advertising. Keywords-technology; mass media; marketing; advertising

INTRODUCTION

Technology is defined as a combination of the information equipment, techniques and processes needed to convert data into output [1]. The main technology components, such as software, hardware, mind mapping software and the support network, interact with each other [2]. Adopting appropriate technology does not mean applying the most advanced technology, but rather the most appropriate one, consistent with a country's development objectives and available resources. Transfer of technology can be the most useful or the most useless of transactions, and the effective use of transferred technology depends on the recipient country's efforts to adapt [3]. New technologies have a profound impact on structure and production. In recent decades, different systems and management tools have been provided to improve structures, goals, and strategies [2]. "Market uncertainty" and "uncertainty in technology" are two properties proposed for technology. Technological uncertainty refers to doubt about a technology's achievement and its capability to respond to customer needs [4]. Hence it is necessary to examine the industries and areas changed by technology in order to enhance its chances of success. One of these areas is marketing. Marketing is a crucial factor for the economy and society. As long as the gap between the producer and the consumer is slight and the manufacturer is familiar enough with customers' needs and interests, there is no need for distribution and marketing agents. Moreover, when production is limited, factors like production planning, distribution strategy, marketing system completion, etc. are of no concern for manufacturers. However, in the case of mass production, or when manufacturers target global markets and plan to distribute their products on a global scale, a wider range of marketing activities is required to deliver the product to the consumer. In changing market conditions, the existing marketing processes are expected to face transformations, and similarly, it is the market conditions that determine the success of a marketing plan. Thus, identifying and analyzing environmental factors such as the impact of new technologies in this field is of great importance.
The advertisement phenomenon was first performed through face-to-face verbal messages in the streets. Later, books became the most important means of disseminating information and advertisement, while religious propaganda was conducted in mosques and minarets, churches, temples, schools and books. The progress of science, industry, and modern technology led to new ways of advertising with new equipment and tools [5]. In fact, advertising as a marketing technique plays a considerable role in the transfer of information (about products, services and business) to the consumer. Over time, advertising methods have changed with the arrival of new technology. The internet and mobile technology were presented as new advertising media. Mobile technology is able to create new markets, change an organization's competitive vision, create new opportunities and change the status of communication and market structures. Considering the above, a question emerges: what are the changes imposed on the marketing and advertising industry by the entrance of new technology?

II. LITERATURE REVIEW

With the arrival of communication technologies, the design of the marketing mix has more or less been altered. Basic marketing elements have been fundamentally transformed and continue to be so.

A. Product Mix
Product packaging and the presentation of product information on the internet reduce the need for physical testing and lead to saving time and money [7]. In e-commerce, learning the customer's likes and designing products according to the customer's opinion is facilitated [8].

B. Price Mix
Customers' quick access to competitors and their capability to quickly compare prices increased price competition between suppliers [10]. Also, when it comes to virtual store design, the lack of opening costs consequently reduces product cost and thus the selling price [8].

C. Distribution Mix
For most buyers, the most important advantage of new technologies like the internet is the ease of access to the purchase place. Using new technology, home becomes a place to buy products [8]. In addition, via the internet a manufacturer can remove the distribution networks and connect to the final customer. A customer can gather information, negotiate with vendors, order products and pay via the internet. These distribution network duties are now the responsibility of the customer [9].

D. Advertisement Mix
Advances in propagation tools have changed data transfer from its most basic form of verbal messages and face-to-face interaction to e-mail or real-time messages. New methods of advertising and marketing use the latest developments in technology to send advertising messages. New technologies have affected advertising in several domains.

E. Advertisement Instruments
TV used to be an effective mass medium for presenting ads. Messages, letters, brochures and e-magazines accessible through the internet are the most recent advertisement instruments of the modern era.
F. Advertisement Content
Although information could essentially function as an ad, it must be mentioned that in ads the information content intends to affect the audience's attitudes and values. This impact occurs in two ways. On the one hand, information presented in the form of specific ads can introduce new values into society and create a desire towards these new values while making existing values insignificant. On the other hand, ads reflect current societal values along with individuals' ideas and desires. For years advertising has been involved more in transferring social values and attitudes and less in the notification of basic information about goods and services [10].

G. Effectiveness of Advertising
One of the most critical features of the internet is its reciprocal nature. When reviewing the content of the web, customers can establish a mutual relationship and determine how much and what kind of information they require [11]. The change of communication patterns from in-person to group-to-group and group-to-individual are two other ad properties. Due to new technologies like the internet, ad effectiveness and cost are affected. By changing the communication models, the efficiency of the ads will increase considerably [11].

III. RESEARCH METHODOLOGY

The study method is descriptive (non-experimental), and the correlation of structural equations was estimated by the partial least squares method, along with the relationships between the variables in the causal model [12]. The present study applies partial least squares due to its benefits compared to the covariance-based approach.

A. Participants
The participants are marketing and IT experts in Iran. 220 questionnaires were distributed by purposeful sampling, of which 207 questionnaires were answered; 7 questionnaires were excluded from the analysis and, finally, 200 questionnaires entered the analysis.

B. Data Collection Measure
To measure IT capability, the questionnaire presented in [6] was used. This questionnaire consists of 16 items: 8 relational items, 3 technical capability items and 5 managerial capability items. To measure the marketing mix, a researcher-built questionnaire consisting of 14 questions was used: 4 items regarded product, 3 items price, 4 items distribution and 3 items promotion. Another researcher-built questionnaire was applied regarding advertisement. This questionnaire consists of 9 items: 3 items regarded content, 3 items tools and 3 items effectiveness. All questions were answered on a five-point Likert scale from strongly disagree (1) to strongly agree (5).
A. Measures Validity and Reliability
The measurement test includes the evaluation of the reliability (internal consistency) and validity (discriminant validity) of the study constructs. To evaluate the reliability of the constructs, the authors in [13] proposed three criteria: the reliability of each item, the composite reliability of each construct, and the average variance extracted (AVE). Regarding the reliability of each item, a factor loading of 0.6 and above in confirmatory factor analysis shows suitability (Figures 1, 2, 3). The factor loadings of the items should also be significant at the 0.01 level [14]. To calculate the t-statistics used to determine the significance of the factor loadings, bootstrapping (with 500 subsamples) was applied. To evaluate the composite reliability of each construct, Dillon-Goldstein's ρc is applied; acceptable values of ρc should be 0.7 or higher. The third reliability criterion is the average variance extracted [13], which is recommended to be 0.50 or above [15]. Tables I and II show the factor loadings, ρc and AVE of the study variables. These values show suitable reliability of the constructs. To evaluate divergent (discriminant) validity, the author in [15] recommended that the items of a construct should have their highest factor loading on that construct. The authors in [14] proposed that the factor loading of each item on its construct should be at least 0.1 higher than its factor loading on the other constructs. Another criterion states that the square root of a construct's AVE should be higher than its correlations with the other constructs; this shows that the correlation of the construct with its indicators is higher than its correlation with the other constructs. Table III shows the cross-loadings of the items on the study constructs. As shown in Table III, all dimensions have their highest factor loading on their own construct, the gap with the loadings on the other constructs is higher than 0.1, and the constructs have good validity. Table IV shows the correlation results and the second criterion of validity, the square root of the AVE.

Note: All factor loadings are significant at the 0.01 level and above. Fig. 1. The second-order factor analysis of IT. Fig. 2. The second-order factor analysis of the marketing mix. Fig. 3. The second-order factor analysis of advertising.

As shown in Table IV, the square root of the AVE of all study variables is higher than their correlations with the other variables. Thus, the second criterion of divergent validity of the study variables is established. In addition, the values below the diagonal of the correlation matrix are reported to evaluate the relationships between the variables. As shown, the correlation coefficients between the variables are positive and significant.
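For readers unfamiliar with these reliability criteria, the short sketch below shows how composite reliability and the average variance extracted can be computed from standardized factor loadings. The loadings used here are invented for illustration and are not values from the study data.

```python
# Illustrative computation of composite reliability (rho_c) and AVE from
# standardized factor loadings; the example loadings are placeholders.
def composite_reliability(loadings):
    s = sum(loadings)
    e = sum(1 - l ** 2 for l in loadings)      # error variance of each indicator
    return s ** 2 / (s ** 2 + e)

def average_variance_extracted(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

loadings = [0.72, 0.81, 0.68, 0.77]            # hypothetical indicators of one construct
print(f"rho_c = {composite_reliability(loadings):.3f}")       # should exceed 0.70
print(f"AVE   = {average_variance_extracted(loadings):.3f}")  # should exceed 0.50
```

The square root of the AVE obtained this way is what is compared against the inter-construct correlations in the discriminant validity check described above.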
B. Structural Equations Modeling Test
To predict advertising, the conceptual model is evaluated via structural equation modeling based on the study hypotheses, and the partial least squares method is used to estimate the model. The bootstrap method (with 300 subsamples) is used to calculate the t-values that determine the significance of the path coefficients. Figure 4 shows the tested model of the relationships between the study variables. As shown, the effect of IT on the marketing mix and on advertising is positive and significant, and the effect of the marketing mix on advertising is positive and significant. Table V shows the estimated path coefficients and the explained variance of the study variables; the values inside the parentheses are t coefficients. Figure 4. The tested model of the study. In addition to the indices of Table VI, the total GOF (Goodness Of Fit) index of the PLS model is used to evaluate the reliability or quality of the PLS model as a whole. This index evaluates the overall model prediction and whether the tested model is successful in predicting the endogenous latent variables or not [16]. The present study achieved a GOF index of 0.55, and this value indicates a good fit of the tested model. The present study aimed to evaluate the effect of IT on marketing and advertising through structural equations. The results showed that the proposed model fits the study data well and can explain 43% of the variance of advertising and 31% of the variance of the marketing mix. The results showed that technology had a positive and significant effect on the marketing mix and advertising. We can say that the adoption of new technologies, the promotion of IT projects, the increase in the use of key commercial technologies, the support in assigning the resources required to implement the IT strategy, and the consideration of IT training for employees lead to the improvement of the marketing mix and advertising. The results also showed that the marketing mix had a positive and significant impact on advertising. We can say that if an organization decides to outperform its rivals, it should have suitable marketing strategies for success. The marketing mix provides a set of controllable marketing tools to respond to the target market; it includes all the tasks a company performs to affect demand and leads to advertising improvement.
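Regarding the GOF index mentioned above, a common way to compute a global goodness-of-fit for PLS models is the geometric mean of the average communality (AVE) and the average R². The snippet below is only an illustration of that formula: the R² values come from the text (0.43 for advertising, 0.31 for the marketing mix), while the AVE values are placeholders and not the study's actual figures.

```python
# Sketch of the global GOF index often used with PLS models:
# GOF = sqrt(mean communality (AVE) * mean R^2).
from statistics import mean

ave_values = [0.55, 0.60, 0.58]   # hypothetical AVEs of the three constructs
r2_values = [0.43, 0.31]          # explained variance reported in the text

gof = (mean(ave_values) * mean(r2_values)) ** 0.5
print(f"GOF = {gof:.2f}")         # the study reports a GOF of 0.55 for its own data
```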
V. CONCLUSION

Recent media technological advances have led to great changes in the contemporary world, and new concepts like information technology, the explosion of information and the information community have entered world languages. Within less than three decades, many outstanding transformations have occurred in the field of mass media. These transformations have also maximized the capacity of data transfer, in terms of both information volume and transfer speed. The main concern nowadays is not the lack of information, but rather finding strategies to manage and regulate a large bulk of information. All social, political and economic institutions are bound to take big steps and try to adjust to the rapid pace of technological advances. The development of communicative technologies in the form of satellites, computers, cable TV, video conferences and computer networks has enabled people to access information more rapidly. Therefore, it is important to investigate the impact of technology on different parts of the market and of marketing. The present research findings indicate that new technology and its entrance into the marketing area have affected different aspects of the marketing mix. According to the marketing experts, the advertising mix is the first part affected by the use of new technologies. The major role of advertising is to introduce a broad range of goods to the public and thus to reinforce the market economy. Accordingly, as the results and the fit of the different aspects of the marketing mix show, the impact of technology on this part has influenced other aspects of the marketing mix such as product, price and distribution. Moreover, the results indicate that advertising and its components are affected by the new technologies. According to the marketing experts, the advertising instruments received the highest rate of effect.

TABLE I. FACTOR LOADING, COMPOSITE RELIABILITY AND AVE OF THE SECOND-ORDER FACTOR ANALYSIS OF IT AND ADVERTISING. Note: All factor loadings are significant at the 0.01 level and above.
TABLE II. FACTOR LOADING, COMPOSITE RELIABILITY AND AVE OF THE SECOND-ORDER FACTOR ANALYSIS OF THE MARKETING MIX.
TABLE III. CROSS-LOADINGS OF THE ITEMS ON THE STUDY CONSTRUCTS.
TABLE IV. CORRELATION MATRIX AND SQUARE ROOT OF AVE OF THE STUDY VARIABLES.
3,274.6
2018-02-20T00:00:00.000
[ "Business", "Computer Science" ]
Characterization of Kitchen Utensils Used as Materials for Local Cooking in Two Culinary Media : This study is part of the framework of the valorization of traditional kitchen utensils recycled from aluminum waste in Burkina Faso. These traditional kitchen utensils made of recycled aluminum alloys occupy a very important place in Burkina Faso's cuisine. The effect of foods intended for consumption on these local utensils was studied using a non-stationary technique, electrochemical impedance spectroscopy. For this purpose, a utensil sample was collected from a traditional production site. The corrosion behavior of the recycled aluminum alloy of known chemical composition was evaluated by analyzing the impedance spectra obtained at the open-circuit potential, in a salt medium titrated at 3 g·L−1 and in a rice medium. Modeling the electrical properties with a simple equivalent circuit made it possible to interpret the results obtained by impedance spectroscopy. The results showed a susceptibility to pitting corrosion, which was confirmed by the electrochemical impedance spectroscopy method.

Introduction

The pots made by craftsmen from recycled aluminum alloys play an important role in the cooking process in Burkina Faso. These alloys are very reactive materials and react instantly in media containing oxygen; this is why their outer surface is covered with an insulating oxide film. The thickness of this film reaches around 10 nm, and it protects these materials against the corrosion generally observed in some aggressive media. The state in which these materials are found is called the passivity state. The passivity condition can be interrupted at any time when defects are found in the oxide film (discontinuity and heterogeneity) or when aggressive ions are present in the electrolytic media (halogens, cyanide, etc.). This can lead to the onset of localized attack [1]. Aluminum alloys have a low density (2.7 g·cm−3), good thermal and electrical conductivity, a low melting point, are easy to shape and have a relatively low price, which is advantageous for local people [2]. Moreover, they have high mechanical characteristics, which allows them to be used as structural materials. In Africa, and particularly in Burkina Faso, the craft industry turns these properties to profit by recycling aluminum alloys for the manufacture of kitchen utensils; the raw material used in this field is made of aluminum waste, combined or not, from old car spare parts, beverage cans and tins [3]. Manufacturing techniques remain empirical, and the recycled aluminum alloys are not homogeneous. Corrosion phenomena are favored when the utensils are used to cook food at high temperature or for long cooking times, and when acidic or alkaline foods are stored in these same containers for a long time [4,5]. Humidity, high temperature, and long cooking times are factors that favor the corrosion of metallic materials, through which some of the component elements of the corroded material pass into the surrounding aqueous media. Despite the numerous studies related to the corrosion of aluminum and its alloys, few rigorous and comprehensive scientific studies have been conducted on the corrosion behavior of the alloys recycled by the craft industry. The objective of this work was to study the corrosion behavior of a recycled aluminum alloy collected in the city of Ouagadougou (Burkina Faso) in various culinary media and to evaluate the anti-corrosion effect in these media.
This study was carried out using an electrochemical technique: electrochemical impedance spectroscopy (EIS).

Sample preparation

The chemical composition of the recycled aluminum alloy is shown in Table 1 [6]. Before each measurement, the surface of the aluminum alloy discs used for the electrochemical tests was prepared as follows: the discs were first ground with 400 to 4000 grit SiC papers, then polished with diamond down to 6 μm, followed by a 1 μm alumina - 30% chromium oxide suspension, and finished with a 5% oxalic acid solution. Each polished sample was then rinsed with acetone and placed in an ultrasonic cleaner for 10 min. Subsequently, it was rinsed with milliQ water (with a conductivity of 0.7 μS/cm) and ethyl alcohol and finally dried under a flow of hot air. The contact area with the culinary media is 3.46 cm².

Culinary media preparation

To simulate operation similar to that in Burkina Faso, the testing media were local culinary media whose composition is given below. The media used in this study are: salt water (WS) titrated at 3 g·L−1 and broken rice (WR) in tap water (5 g of broken rice in 250 ml of water) intended for local consumption. The selection was made based on the fact that rice is the most consumed cereal in Burkina Faso; in this country, people consume it on average once daily, prepared with vegetables, fish, and meat. These media were chosen to simulate a cooking process similar to that of Burkina Faso. All electrochemical measurements were performed in five replicates for each cooking medium and show a reproducibility of around 3-9%. Before each test, we made sure that all the electrodes were submerged in the media, at the same depth in the electrochemical cell. As cooking is mostly performed at high temperature, the media were tested at boiling temperature (100°C) in order to simulate real cooking conditions [7,8].

Principle

Eriochrome Black T is a (3-hydroxy-4-[(1-hydroxy-2-naphthalenyl)azo]-7-nitro-1-naphthalenesulphonic) acid sodium salt, Mordant Black 11. In the presence of this colored indicator [9], aluminum diluted in the buffer solution forms a complex which turns wine-colored. The complex formed is very stable, and the acidity of the obtained solution depends on the aluminum content. The various measurements of aluminum content in the culinary media were done by colorimetry with a JEWAY 7315 spectrophotometer at a wavelength of 560 nm [10,11].

Aluminum quantitative analysis method

The local kitchen utensil samples were thoroughly washed, rinsed with distilled water and then air dried. Each kitchen utensil was filled with the studied solution (WS or WR) and then heated to boiling temperature. To compensate for evaporation during the heating phase, the final volume was adjusted at the end of each operation with distilled water [12].

Colorimetric dosage of aluminum released from local kitchen cooking

The amount of aluminum released from the two local kitchen utensils was determined by colorimetric dosage of a 5 ml sample of each cooking solution. For this, the sample was placed in a 50.0 ml graduated flask containing 10.0 ml of distilled water, 5.0 ml of EBT solution, 20.0 ml of acetyl acetic acid (C4H6O3) buffer solution to hold the pH at 6, 1.0 ml of 2% ascorbic acid and a volume of solution S0 specified in Table 2, and then filled up to the gauge line with distilled water. After stirring and resting for 20 minutes, the samples were analyzed with the spectrophotometer. The standard was measured with a solution without aluminum and corresponds to zero absorbance.
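As a simple illustration of how such a colorimetric dosage is usually turned into a concentration, the sketch below fits a linear calibration line (absorbance at 560 nm versus known aluminum content) and converts a sample absorbance into mg/L. All numerical values are invented placeholders; the study's actual calibration standards are not reported here.

```python
# Illustrative calibration-curve treatment for a colorimetric aluminum dosage.
import numpy as np

std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])       # mg/L aluminum standards (hypothetical)
std_abs  = np.array([0.00, 0.06, 0.12, 0.25, 0.49])  # measured absorbances (hypothetical)

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # linear Beer-Lambert-type fit

def concentration(absorbance):
    """Convert a sample absorbance into mg/L using the calibration line."""
    return (absorbance - intercept) / slope

print(f"Sample at A = 0.31 -> {concentration(0.31):.2f} mg/L")
```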
The concentration of aluminum in the different solutions was expressed in mg/L.

Data analysis

The data obtained were analyzed for duration and temperature variations using Student's t-test and the XLSTAT 7.5.2 statistical software. Mean parameter concentrations were compared according to the Ryan-Einot-Gabriel-Welsch (REGWQ) test.

EIS - electrochemical impedance techniques

Electrochemical impedance spectroscopy (EIS) is a well-established quantitative method for the accelerated evaluation of the anti-corrosion performance of protective coatings. Within short testing times, EIS measurements provide reliable data, allowing the long-term performance of coatings to be predicted. The result of EIS is the impedance of the electrochemical system as a function of frequency. EIS is a versatile testing procedure and can be performed under different stress conditions, depending on the performance of the tested coatings. It is a powerful technique that uses a small-amplitude alternating current (AC) signal to probe the impedance characteristics of a cell. The AC signal is scanned over a wide range of frequencies to generate an impedance spectrum for the electrochemical cell under test. EIS differs from direct current (DC) techniques in that it allows the study of the capacitive, inductive, and diffusion processes taking place in the electrochemical cell.

Electrochemical measurements

The electrochemical measurements were conducted in the Analytical Chemistry and Interfacial Chemistry (CHANI) laboratory of the University of Brussels (ULB). The EIS measurements were carried out in a three-electrode electrochemical cell containing the culinary media. The three electrodes are the reference electrode, the auxiliary electrode, and the working electrode: a saturated calomel electrode (SCE) was used as the reference electrode, a platinum grid as the auxiliary electrode, and the recycled aluminum alloy, mounted in the laboratory, as the working electrode (WE). The EIS measurements were performed with a Princeton Applied Research potentiostat (model PGSTAT 50), and a microcomputer was used for data acquisition. The measurements were carried out after 60 minutes of cooking.

Open-circuit potential measurements

Open-circuit potential (Eoc) changes were measured against a standard saturated calomel electrode placed in the same compartment. The recycled aluminum alloy was immersed in the culinary media, exposing a circular area of about 3.46 cm². A copper wire was soldered to the rear of the electrode, which was housed in a glass tube to protect it from the test culinary media.

Results

The open-circuit potential (Eoc) curves are shown in Figure 1. The open-circuit potential was followed for 15 min of cooking in the two culinary media (salt water at 3 g·L−1 and broken rice): a rapid increase of the open-circuit potential was observed, followed by a decrease of its value in both media. The curves move towards higher values during the first 150 seconds of cooking, after which an almost steady decrease of the potential is observed. In this case, we can observe a tendency of aluminum towards passivation, which could take several forms: passivation caused by hydroxides adsorbed at the metal surface, passivation caused by the adsorption of components present in the two cooking media, or a combination of both.
A comparison of the behavior of the recycled alloy in the two media (broken rice and salt water) indicated that a significantly higher corrosion potential was recorded in salt water than in the broken rice medium. This could be explained by a negative effect capable of influencing passivation during the first minutes of cooking. According to the literature, the chloride ions present in the study media can compete with the hydroxide ions adsorbed at the surface, allowing localized corrosion and then a deterioration of the passive film [13]. In order to better understand the behavior of the metal/medium interface in the cooking media, a series of curves was obtained by electrochemical impedance spectroscopy in a comparative study of the different media.

Electrochemical impedance spectroscopy (EIS) measurements

The corrosion behavior of the recycled alloy in the two cooking media, simulating a process similar to Burkina Faso cooking habits, was studied by electrochemical impedance spectroscopy at 100°C and different cooking times. The frequency ranged from 100 kHz to 100 mHz, and the amplitude was set at 10 mV. Nyquist and Bode plots were recorded in the broken rice medium and in the salt water medium titrated at 3 g·L−1, at boiling temperature, after various cooking times at the open-circuit potential. Data acquisition and analysis were performed with a microcomputer, and the spectra were interpreted using the ZSimpWin program. These measurements were performed in five replicates to ensure the reproducibility of the results.

Effect of cooking times

Measuring the electrochemical impedance consists in studying the response of the electrochemical system to a disturbance, which is most often a low-amplitude sinusoidal signal. The strength of this technique is its ability to differentiate the reaction phenomena by their relaxation times: only fast processes are characterized at high frequencies, and as the applied frequency decreases, the contribution of slower steps, such as transport phenomena or diffusion in solution, appears. To evaluate the behavior of the passive layer in the various culinary media, the aluminum alloy sample was immersed continuously for 60 minutes (0, 15, 30, and 60 minutes) in broken rice and in salt water. During these cooking times, only impedance measurements were performed regularly, since they do not disturb the system. The Nyquist graphs (Figures 2 and 3) show the experimental impedance diagrams at the corrosion potential obtained for the aluminum alloy in the studied culinary media. Figure 3 shows a progressive decrease in the size of the impedance spectrum, with a more or less flattened semicircle shape characteristic of the protective layer (alumina, Al2O3); this corresponds to a decrease of the total resistance of the recycled aluminum with cooking time. In contrast to the salt water medium, the broken rice medium (Figure 2) shows an increase in the size of the spectra, confirming the resistance of the sample in this medium [14]. The semicircles are depressed with respect to the real axis (Figures 2 and 3), which may be explained by surface non-homogeneity. However, for a better correlation between the experimental data and the simulation, a constant phase element was introduced into the fitting procedure, and the surface non-homogeneity is accounted for through this constant phase element as follows (Eq. 1) [15-18].
Z_CPE = 1 / [Q(jω)^n]    (1)

where Q is the constant phase element coefficient, ω is the angular frequency, and n is the CPE exponent; for n = 1 the element behaves as an ideal capacitor and corresponds to the double-layer capacitance. Although a constant phase element was used for data fitting instead of an ideal capacitor, since the n values obtained from the fitting were in the range from 0.85 to 0.95, the value obtained from the fitting was taken as the capacitance (Eq. (1)). A true capacitive behavior is rarely obtained; n values close to 1 represent the deviation from ideal capacitive behavior [21]. The best simulation was obtained using the equivalent circuit proposed for the metal/electrolyte interface and illustrated in Figure 4. This equivalent circuit was proposed by Zhang et al. [9] to describe the bi-layer oxide film formed on aluminum and aluminum corrosion in aqueous media. This circuit is valid for all determinations. In the equivalent circuit, R is the resistance of the salt water or broken rice medium, R1 is the polarization resistance, C is the capacitance corresponding to the dense oxide layer, R2 is the resistance of the porous oxide region, and Q is the constant phase element corresponding to the porous oxide region. The values of the parameters of the equivalent circuit are shown in Table 3. For the recycled aluminum alloy, different resistivity profiles are observed in the two media, regardless of the cooking time, as the impedance diagrams vary with the immersion time (Figures 5 and 6). The parameters in the salt water medium decrease, in contrast to those in the broken rice medium, for the different cooking times up to 60 minutes. This behavior may be associated with the physicochemical changes which occurred in the oxide film (alumina) during cooking in the salt water medium containing chloride ions (penetration of the electrolyte into the oxide layer and hydration of the alumina). Comparison of the curves (Figures 5 and 6) clearly shows that the resistivities of the alumina layer developed on the recycled aluminum alloy are higher in the broken rice medium than in the salt water. This could be explained by the presence of a more homogeneous and dense layer on the recycled aluminum in this medium, and also by the presence of chloride ions in the salt water, because the behavior of the interface/medium is completely different with the latter. The overall behavior is reflected in the impedance diagram by a decrease in the size of the capacitive loop. This can be explained by the weakening and destruction of the film likely to develop on the surface of the studied alloy, leading to the disappearance of the distributed phenomenon and a decrease of the resistance. These differences may be explained by the composition of the oxide layer developed on the alloy, which is influenced by the chemical composition of the material and of the media, and by the chemical composition of the intermetallic particles [22-24]. In conclusion, the resistivity profiles obtained for the recycled aluminum alloy showed that the oxide layer developed is less protective in the salt water medium than in the broken rice medium. This result may be linked to the presence of zinc, which makes the system less resistant [25-27]. The negative effect of the chlorides in the salt water medium is presented in Table 3; it is reflected in the decrease of the polarization resistance. An increase in the capacitance associated with the polarization resistance also appears, and this increase may reflect the dissolution of the recycled alloy in the salt water medium.
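To make the equivalent-circuit discussion above more tangible, the sketch below computes the impedance of a circuit of this general kind: a solution resistance R in series with a dense-layer element (R1 in parallel with a capacitance C) and a porous-layer element (R2 in parallel with a constant phase element Q of exponent n). The series arrangement of the two time constants and all parameter values are assumptions made for illustration; they are not the fitted values of the study, and the sketch is not the ZSimpWin model itself.

```python
# Rough numerical sketch of an R + (R1||C) + (R2||CPE) equivalent-circuit impedance.
import numpy as np

R, R1, C = 50.0, 2.0e3, 1.0e-5           # ohm, ohm, farad (placeholder values)
R2, Q, n = 2.0e4, 5.0e-5, 0.90           # ohm, S*s^n, dimensionless (placeholders)

f = np.logspace(5, -1, 200)              # 100 kHz down to 100 mHz, as in the measurements
w = 2 * np.pi * f

z_dense  = 1.0 / (1.0 / R1 + 1j * w * C)           # R1 in parallel with C
z_porous = 1.0 / (1.0 / R2 + Q * (1j * w) ** n)    # R2 in parallel with the CPE, Z_CPE = 1/(Q (jw)^n)
z_total  = R + z_dense + z_porous

# Nyquist representation: real part vs. negative imaginary part
for zr, zi in list(zip(z_total.real, -z_total.imag))[::50]:
    print(f"Z' = {zr:10.1f} ohm   -Z'' = {zi:10.1f} ohm")
```

Fitting such a model to measured spectra (here done with ZSimpWin) yields the R, R1, R2, C, Q and n values that are then compared across cooking media and cooking times.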
The polarization resistance corresponds to the sum of the resistance of the dense oxide layer and that of the cooking medium (salt water or broken rice) [28]. In this case, R2 is much larger than R1 and can therefore be considered as the polarization resistance. Table 3 shows the simulation parameters. The polarization resistance increases gradually with cooking time up to 60 minutes for the broken rice medium, while for the salt water medium a decrease is observed, followed by a slight increase. The higher values of the polarization resistance in the broken rice medium compared to the salt water can be explained, first, by the chemical composition of the recycled alloy, which can modify the physical and chemical properties of the oxide layer and make it more or less noble depending on the studied medium, and second, by the charge transfer resistance (R), which is not identical in the two media. Figure 7 shows a clear difference between the polarization resistance values in the two cooking media: the curves show that the sample in the broken rice medium is less corroded than the one in the salt water medium. This confirms the destructive effect of the salt water medium on our sample [29,30].

Influence of the media WS and WR

The aluminum content released after 30 and 60 minutes in the various cooking media is given in Table 4. The absorbance measured after 30 and 60 minutes of cooking in the WS medium (titrated at 3 g·L−1 of salt) gave higher results than in the other study medium (WR). The larger quantity of aluminum in this medium is probably linked to the presence of chloride ions and also to the pH of the environment. This result agrees with the studies conducted by Bommersbach and Duggan [31,32], who reported a similar increase of aluminum loss with increasing acidity of alcohol-free drinks packaged in aluminum bottles. These contents are very comparable with those obtained using tap water, concentrated tomato, and the WR medium under the same conditions. The other minerals in the tap water, added to the chloride ions, have a significant influence on aluminum leaching from local kitchen utensils. For the WR medium, substantial amounts of aluminum were released into the cooking medium after 30-60 minutes in the four local kitchen utensils. These results are similar to those described by other authors [33,34], who found that cooking broken rice (WR) is not an aggressive operation for samples containing more silicon. Studies have shown that concentrated tomatoes have a stronger effect on cooking utensils [35]; the acidity of this product is probably equivalent to that of fresh tomatoes, which is surely again a consequence of their origin and mode of production. The contributions of water at room temperature and of tomato are so low that the quantities of aluminum ingested are relatively independent of the proportion of rice water. However, the toxicity norms given by some authors [26,36] state an acceptable daily dose of 1 mg per kilogram of body weight for humans; this dose is the maximum quantity tolerable by the human organism, above which aluminum becomes toxic [22,37]. This simplified assessment shows that we are far from the critical threshold at which human health is in danger. From this study, we can conclude that the kitchen cooking utensils used in Burkina Faso do not present a particular toxicological danger.

Conclusions

This study contributed to the characterization, by electrochemical impedance spectroscopy, of the local kitchen utensils used for cooking.
From this study, we conclude that the variations of the impedance spectra in the Nyquist diagram as a function of cooking time confirm the development of a protective oxide layer (alumina) on this alloy during the electrochemical tests, resulting in an increase of the polarization resistance together with a decrease in the capacitance of the double layer. The electrochemical tests showed a good behavior of the sample in the broken rice medium, with a good resistance to corrosion compared to the salt water medium. The low corrosion resistance of the sample in the salt water medium is most likely caused by the chloride ions. The susceptibility to pitting corrosion was confirmed by the electrochemical impedance spectroscopy method. The analysis of two local kitchen utensils of known composition, with the various cooking media frequently used in Burkina Faso, showed that the aluminum content released increases with temperature, cooking time, and medium. However, the insignificant aluminum concentrations released at room temperature in all the solutions may be caused by the short storage time, by a lower storage temperature, or by another factor not dealt with in this study. This study makes it possible to update the literature data and supports the agribusiness and socio-economic interest of the local kitchen utensils made in Burkina Faso, according to the area. As precautions to limit the risk of aluminum migration into foods:
• Avoid using worn-out kitchen utensils; aluminum migrates more easily when kitchen utensils are worn;
• Avoid cooking or preserving food in aluminum kitchen utensils; food will absorb more aluminum if it is cooked or preserved in a utensil (pan, foil, …) made of this material;
• Avoid cooking vegetables or acidic foods such as tomatoes or citrus fruit in aluminum utensils; these products absorb this material more easily.
5,028.8
2020-12-16T00:00:00.000
[ "Materials Science" ]
Content Adaptation and Depth Perception in an Affordable Multi-View Display : We present SliceView, a simple and inexpensive multi-view display made with multiple parallel translucent sheets that sit on top of a regular monitor; each sheet reflects different 2D images that are perceived cumulatively. A technical study is performed on the reflected and transmitted light for sheets of different thicknesses. A user study compares SliceView with a commercial light-field display (LookingGlass) regarding the perception of information at multiple depths. More importantly, we present automatic adaptations of existing content to SliceView: 2D layered graphics such as retro-games or painting tools, movies and subtitles, and regular 3D scenes with multiple clipping z-planes. We show that it is possible to create an inexpensive multi-view display and automatically adapt content for it; moreover, the depth perception on some tasks is superior to the one obtained in a commercial light-field display. We hope that this work stimulates more research and applications with multi-view displays. Introduction From old black and white televisions to 3D cinemas and virtual reality, displays have revolutionized the way we learn, work and entertain ourselves; they are present in almost every aspect of our life.Visual displays target our most developed sense, leveraging our inherent skills to interpret shapes and depth.However, most displays do not take advantage of all the features of our vision. Common displays like the monitor from a computer or the touch-screen from a mobile phone can provide static monocular cues such as occlusion, distance-size relationship, shadows or texture gradients [1].However, oculomotor cues such as binocular disparity, accommodation of the focal point or convergence cannot be generated. Head Mounted Displays (HMD) or glasses render different images for each eye, but current commercial products cannot provide oculomotor cues.Furthermore, they require wearing an extra device that can burden some interactions, disable the "come and interact" paradigm, or produce motion sickness.Volumetric and lightfield displays are capable of providing most of the visual cues, but creating content for them is more complicated; they are also not as affordable or widespread as regular displays. The main contribution of this paper is providing guidelines and software to adapt existing content to a multilayered display with proof-of-concept applications such as a 2D painting software, a 3D scene renderer, a volumetric data visualizer, a retro-gaming emulator and subtitled video. Secondly, we introduce SliceView, a simple and affordable multi-layered display inspired by the Pepper's Ghost illusion [2,3].The original Pepper's Ghost relies on projecting the image of a person in a translucent surface, which is superimposed with the real-world behind the surface.Instead of using a single reflector sheet, we use multiple reflector sheets at different depths: each sheet reflects different images that stack and are viewed additively.We made SliceView as simple as possible to enable everyone to build it using a regular computer screen and basic materials. Finally, a user study comparing SliceView with a commercial lightfield display (LookingGlass) revealed that 3D charts and text can be correctly perceived and analyzed at different depths when using SliceView. Related Work Multiple display technologies have been developed to provide the same visual cues that we perceive from the real world. 
Wearables and Head-Mounted Displays Wearables and HMDs provide depth information by showing different images to each eye and creating binocular disparity.Nowadays, Oculus Rift or Valve Index are widely commercially available.However, even novel HMDs can cause motion sickness [4,5] and they are relatively uncomfortable to wear.Furthermore, each user needs their own head-mounted display to observe the same scene. Autostereoscopic Displays On the other hand, autostereoscopic displays also provide binocular disparity without the need to wear any device [6]; they are commonly divided into volumetric displays and multi-view displays. Volumetric Displays A volumetric display emits points of light from each position within a volume.Swept volumetric displays, such as the Perspectra [7] or Voxon Photonics VX1 [8] use high-speed moving parts which can be noisy, expensive and dangerous.Plasma displays induce electric breakdown in mid-air using a powerful focused laser [9]; they involve severe dangers and have limitations in the size, resolution or color gamut of the graphics that can be displayed.Other approaches to volumetric displays have been studied, such as projecting on a vibrating bubble [10] or using a fast-moving particle held with optical [11] or acoustic [12] tweezers, but all of them present important challenges to rendering colorful, high-resolution, fast-changing images.Volumetric displays provide all the visual cues without the need to wear a device.However, when compared to common displays, most of them are complex, expensive and scarce. Multi-View and Lightfield Displays In a multi-view display, two-dimensional projections such as photographs or synthetic images are projected in different directions.Depth is implicitly encoded as positional disparity between different projections [6].The most common type of multi-view displays are the light-field displays, in which microlens arrays direct individual pixels into different directions, producing different points of view of the graphics.Commercial models such as LookingGlass [13] or FOVI3D [14] are available.Multi-view displays suffer from a dramatic loss of resolution [15] as they use the same display to provide multiple views (e.g., 48 different views in the case of the LookingGlass), and most of these displays only offer horizontal parallax with a limited viewing angle [6].Lightfield displays are not easy to scale up and require precise alignment of their micro-lens arrays or their multiple projectors.Tensor displays [16] provide better resolution, but they need complex software and alignment of the multiple Liquid Crystal Displays and lenses. Projection-Based Displays Projection-based displays emit different images at different planes.The Virtual Showcase [17] is a truncated pyramid of half-silvered mirrors that merges real and virtual content using stereo shutter glasses and head tracking.Pepper's Cone [18] uses a curved transparent surface to reflect a pre-distorted image.DepthCube [19] consists of a high-speed projector and a stack of liquid crystal panels that can switch between a transparent and light-scattering state; this concept has also been applied to mixed reality [20].Projection-based displays based on stacking require calibration between the slices for complex scenes and are affected by ambient light; the background should be rendered in black, as shadow-casting objects can hinder the projections, and can suffer from the cardboard effect [21]. 
Pepper's Ghost

Previous work has explored extensions of Pepper's Ghost which use multiple layers instead of a single one. Using two layers in a large-scale display can put the presenter between the layers and present information on two differentiated levels [22]. A head-mounted display was designed such that each eye would have a set of three translucent layers reflecting at different depths [23]; the speed at which the eye can focus at different depths was studied. Differently, SliceView is a desktop display focused on ease of assembly and the automatic adaptation of existing content. The conducted studies, user studies, target higher-level activities, namely text readability and chart analysis at different depths.

Hardware

SliceView does not contain moving parts and is a down-to-up projection-based autostereoscopic display that consists of two main components: a regular computer monitor acting as the light source and a multi-planar optical element composed of tilted translucent sheets placed at different depths. A scene is sliced by our software into layers, the layers are transformed and mirrored, and each slice is projected onto a translucent sheet. The light from the monitor below is reflected on each sheet and directed towards the user's eyes, which perceive the accumulated images. A diagram of SliceView is shown in Figure 1. It must be noted that the image on the first slice is reflected directly towards the user, but the light reflected on the other sheets refracts through the slices in front of it until it reaches the user. We use a Lenovo L2251p Wide TFT 22" monitor and three 297 × 210 × 2 mm acrylic-polyethylene transparent sheets. Sheets should ideally be placed at 45° to maximize the projection area and maintain image proportions. However, they can be used at wider angles with more layers stacked to provide more depth levels in the displayed graphics, with the downside of reducing resolution along one of the planar axes. This reduction occurs because the image of the projecting monitor needs to be split into different slices, i.e., if the original monitor has a resolution of 1920 × 1200 and is divided into three slices, the resolution for each slice would be 640 × 1200. Pre-rendering transformations are needed to maintain proportions and avoid distortions in the perceived graphics; they are detailed later and shown in Section 3.2. A regular PC (i5 with 8 GB of RAM) is used to render the 3D scenes, and a mouse and keyboard are situated in front of SliceView when user interaction is required. Alternative USB-powered interaction devices like a graphics tablet or a gaming controller are used for some applications. All the applications are rendered in real time at 60 Hz. With the hardware used, no rendering speed differences were perceived between the original content and the automatically adapted content. Furthermore, the required image transformation operations were achievable within this 60 Hz budget.

Software

We modified and employed different libraries (see Figure 2). Following our objective of low-cost and easily accessible automatic content adaptation, the software is open source and cross-platform. More explanations and images are provided in Section 6.
SliceViewer is a ThreeJS component that slices the 3D scene at different depths and renders the final images to be projected by the monitor. Different settings can be tuned, such as the number of layers, the perspective of the camera, and the occlusion behaviour. It can run on multiple devices through a web browser. SlicePainter is a web-browser multi-depth canvas painting application developed with the P5JS library. We have modified the MegaDrive (aka Genesis) emulator Helios so that it renders each layer of the video games separately; therefore, many video games are suitable to be played on SliceView. VLC Media Player has been configured to test the use of subtitles overlaid at different depths on video.

Some basic transformations are required to correctly render the final image (see Figure 3). Firstly, the layers are mirrored vertically to counter the image inversion produced by the reflective sheets. Secondly, as the layers are not completely parallel, there could be misalignment and deformations in the projected images, so keystone correction was employed in such cases. Afterwards, it is necessary to re-scale the layers proportionally to their distance. We made calibration templates to facilitate the assembly of SliceView by end users. OBS Studio is used to apply the transformations and calibration to the slices; an OBS plugin known as StreamFX provides the mirror and keystone transformations. Stretch deformation is required when the layers are not placed at 45° over the screen: when placing them at narrower angles, the projected images should be vertically shrunk; conversely, when placing the layers at wider angles to stack more layers on the same surface area, the images should be enlarged vertically.

Technical Evaluation

In this section, we measure the light reflection and light transmission of sheets of different thicknesses. SliceView takes advantage of the characteristics of the employed monitor, i.e., the color range, refresh rate, size and resolution depend on the monitor used, although an important amount of the emitted light is lost in the form of refracted and residual light (Figure 1). The amount of reflected light varies depending on the thickness and material chosen for the semitransparent sheets. As shown in Figure 4, reflectors that are too thick result in double reflections, which reduce the sharpness of the perceived image; using 2-mm sheets produces only minor double reflections. On the other hand, sheets which are too thin are hard to place uniformly stretched; our prototypes using 0.5-mm and 1-mm acetate sheets resulted in wobbly images, as shown in Figures 4 and 5.
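As an illustration of the per-slice transformations described earlier in this section (vertical mirroring followed by keystone correction), the sketch below applies equivalent operations to a single rendered slice with OpenCV. This is not the authors' OBS/StreamFX pipeline; the corner offsets, slice size and file names are arbitrary placeholder values.

```python
# Minimal sketch: vertically mirror one rendered slice and apply a keystone
# (perspective) correction. All calibration values are made up for the example.
import cv2
import numpy as np

slice_img = np.zeros((1200, 640, 3), dtype=np.uint8)   # one slice of a 1920x1200 frame
cv2.putText(slice_img, "LAYER 1", (80, 600),
            cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 3)

mirrored = cv2.flip(slice_img, 0)                       # flip around the horizontal axis

h, w = mirrored.shape[:2]
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
# Pull the top corners inwards to compensate a trapezoidal (keystone) distortion;
# the 40-pixel offset stands in for a value obtained with the calibration templates.
dst = np.float32([[40, 0], [w - 40, 0], [w, h], [0, h]])
M = cv2.getPerspectiveTransform(src, dst)
corrected = cv2.warpPerspective(mirrored, M, (w, h))

cv2.imwrite("slice_corrected.png", corrected)
```

The same kind of warp, with per-layer scale factors, can also account for the vertical stretch needed when the sheets are placed at angles other than 45°.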
Using 2-mm sheets provides a good balance between image quality and ease of assembly. Light transmission and reflection were tested on the two most promising translucent sheets. The light source was a 1-mW red laser attenuated with an ND36 filter. The reflective sheet is placed at 45° in front of the laser. A white surface is placed in front of the sheet to receive the transmitted light, and another surface is placed to receive the reflected light. Transmitted and reflected light are captured using a camera (GM1903) with manual settings (f/1.75, 1/100 s, ISO-400). The results are shown in Figure 6. In both cases, most of the light traverses the sheet and only a small fraction is reflected back towards the user. For a 4-mm sheet, the brightness of the reflected light is 20% of the transmitted one; this ratio is 30% for a 2-mm sheet. Therefore, using a 2-mm sheet instead of a 4-mm sheet still reflects enough light while reducing double reflections, resulting in sharper images and brighter colours. Double reflections are almost nonexistent on 0.25-mm sheets, but the projection surface suffers from distortion caused by the non-uniform stretching of the sheet, as shown in Figure 5. All configurations are affected by environmental light; this issue is discussed further in Section 7 of this paper.

User Studies

We want to compare SliceView with a commercial lightfield display regarding the performance of tasks that require observing information at multiple depths. We selected the LookingGlass 8.9" (the Looking Glass Factory, NY, USA [13]), since it is the most popular and available lightfield display. We acknowledge that SliceView may have advantages over LookingGlass in the selected tasks, since they are based on discrete depths, but the obtained results are only intended to serve as a baseline. Additionally, the results of these tasks are transferable to multiple application scenarios, like the ones exemplified in Section 6. Finally, we also note that none of the tasks could be completed on a regular monitor with a static camera. The tasks are the analysis of 3D scatterplots and the reading of overlapping texts placed at different depths. A within-subject design with two conditions (SliceView and LookingGlass) was employed, and the order was counterbalanced.

Participants

A total of 22 participants (six females and 16 males) aged between 21 and 44 years (M = 28.3, SD = 6.46) took part in this study. All of them had perfect or corrected vision. The participants had experience using computers but had no previous knowledge of visualisation design or the use of autostereoscopic displays. Before the experiment, they were instructed to sit on a chair in front of the setup table and conduct the experiment in a comfortable position after receiving an explanation of the evaluation method and the experimental procedure (see Figure 7).
Conditions

SliceView was assembled using a regular computer monitor, three acrylic sheets and a dark enclosure; the resulting dimensions were 25 × 18 cm with 47 cm of depth. LookingGlass was used as-is on its 8.9" Development Kit. The users were seated during the experiment, in a dimly lit environment, with their eyes slightly above the displays. Both devices were connected to the same computer and keyboard. A paper questionnaire was provided for each task, to be filled in by the participants during the challenges. To obtain completion times, participants were requested to press a key to pass on to the next trial each time they answered a question. No camera movement or interaction with the 3D scene was allowed, since we were interested in the inherent perception of depth.

3D Chart Analysis

For the Chart Analysis task, scatterplots were displayed in both conditions using similar ThreeJS scenes rendered through perspective cameras. The charts consisted of six horizontal columns and three depth rows. Rows were color-coded (i.e., red, green and blue) according to the depth row in which they were displayed. Spheres were randomly placed at continuous heights between 0 and 100. All axis scales were removed, since we are not interested in absolute values but in relative estimations. A yellow cuboid wireframe enclosing the chart was also rendered as a helper to identify the maximum and minimum possible values. The employed datasets were randomly generated but manually inspected to ensure that they were adequate for the task. An example of this task, both in SliceView and LookingGlass, can be seen in Figure 8. The subtasks for this task were derived from Jansen et al. [24], who compared the exploration of a 3D-printed physical chart with a digital version of it. The subtasks are:
1. Order task: order the values in ascending order for a given column (i.e., three values at different depths);
2. Range task: estimate the range of the values of a given column;
3. Compare task: a pair of values (column and color) are given and the user must determine which one is larger.

Overlapped Text Readability

For the readability task, different pairs of short sentences (between four and seven words) were rendered overlapping at different depths. For both conditions, the same text type, font, and color (white Arial, 24 pt = 0.848 cm height) was used to render each sentence over a black background. We used the first and third layers of SliceView for this task (see Figure 9). In the case of LookingGlass, the text was placed at 20% distance from the closest and furthest depths. We introduced word substitution errors in both conditions, i.e., between zero and two words were replaced by similar but incoherent words in each sentence. All words used for substitutions rhymed with the original word. The texts were designed and curated by one individual to be as similar as possible between conditions. The same number of substitutions was introduced in both conditions, providing a similar challenge. The task of the users was to read several pairs of overlapping sentences at different depths and detect any substituted words in each of them. We took this approach to analyse readability as well as comprehension [25]. Additionally, we based this task on Jankowski et al.'s study on overlapped text readability for traditional displays [26].
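For reference, a minimal sketch of how the randomly generated scatterplot datasets described above (six columns, three colour-coded depth rows, heights in [0, 100]) could be produced, together with the three subtasks expressed over such a dataset. This is an illustration, not the exact generator used in the study, and the manual inspection step is omitted.

```python
import random

COLUMNS = 6          # horizontal positions
DEPTH_ROWS = 3       # one row per depth layer
ROW_COLOURS = ["red", "green", "blue"]  # colour-coded by depth row

def generate_chart_dataset(seed=None):
    """Return a dict mapping (column, row) -> value in [0, 100]."""
    rng = random.Random(seed)
    return {
        (col, row): rng.uniform(0.0, 100.0)
        for col in range(COLUMNS)
        for row in range(DEPTH_ROWS)
    }

dataset = generate_chart_dataset(seed=42)

# Example subtask helpers (Order / Range / Compare, as defined above):
column = 2
values = [dataset[(column, row)] for row in range(DEPTH_ROWS)]
ascending = sorted(range(DEPTH_ROWS), key=lambda r: values[r])   # Order task
value_range = max(values) - min(values)                          # Range task
larger = max([(3, 0), (5, 2)], key=lambda cr: dataset[cr])       # Compare task
```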
Procedure The participants were asked to read the initial instructions and forms, then the different tasks and conditions were presented.Before starting each condition, participants were asked to do a test trial to ensure that they understood the tasks and what was allowed in both systems.After the tasks, participants filled in the answers using a paper questionnaire attached to the setup table. To summarize, the study involved 22 participants × 2 conditions × 5 scatter plot datasets = 220 scatter plot datasets and, therefore, 220 sorting tasks, 220 ranging tasks, and 220 × 4 = 880 comparison tasks.Regarding readability, the study involved the same 22 participants × 2 conditions × 10 pairs of sentences = 440 pairs of sentences.We recorded Task Completion Time and accuracy for each trial.Additionally, after completing all the tasks on both conditions, the participants filled in the System Usability Scale (SUS) [27] and NASA TLX [28] questionnaires, since they are standard questionnaires which are widely applied for human-computer-interaction evaluations. Results Each participant took an average of 50 min to perform the evaluation and complete the questionnaires.We performed t-paired test on objective measures of user performance, i.e., Task Completion Time (TCT) and Accuracy between the two conditions (SliceView and LookingGlass) split by task and subtask.Wilcoxon tests were performed on the subjective reportings from the questionnaires SUS and NASA TLX score. Task Completion Time Task completion times are displayed in Figure 10 split by condition and subtask.In the Order subtask, completion time was 18.33 s (SD = 11.03) for SliceView display and 17.04 s (SD = 8.94) for LookingGlass; no significant difference was found (t(19) = 0.668, p = 0.512).In the Range subtask, it was 15.18 s (SD = 3.88) for SliceView and 15.79 s (SD = 4.81) for LookingGlass; the difference was not significant (t(19) = −0.569,p = 0.576).In the Comparison subtask, it was 11.44 s (SD = 3.06) for SliceView and 11.44 s (SD = 3.65) for LookingGlass; no significant difference was found (t(19) = 0.007, p = 0.994).In the Readability task, completion time was 22.66 s (SD = 12.06) for SliceView and 32.11 s (SD = 2.45) for LookingGlass; there was a significant difference (t(19) = −3.768,p = 0.001).Tasks completion times were similar in both conditions when performing chart analysis subtasks, but significant differences were observed in the readability tasks.This difference could be explained by the different depth-placing capabilities of both devices: SliceView has a larger depth span and LookingGlass suffers from severe loss of resolution when items are placed next to its front or back display area. Accuracy was similar in both conditions when performing ranging and ordering subtasks, but significant differences were observed when performing the comparing subtasks.We hypothesise that these differences may be caused by the differences in resolution between both displays and by the capabilities of both devices when providing vertical motion parallax. 
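The analysis pipeline described above can be sketched as follows, assuming per-participant measures have already been aggregated into arrays; the numbers below are illustrative placeholders, not study data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant mean completion times in seconds (illustrative values only).
tct_sliceview    = np.array([22.1, 18.7, 25.3, 20.4, 19.8, 23.5, 21.0, 24.2])
tct_lookingglass = np.array([30.5, 28.9, 33.1, 29.7, 31.2, 34.0, 30.1, 32.6])

# Paired t-test on an objective measure (here: readability TCT), as in the text.
t_stat, p_value = stats.ttest_rel(tct_sliceview, tct_lookingglass)

# Wilcoxon signed-rank test on ordinal questionnaire scores (e.g., one SUS item).
sus_sliceview    = np.array([3, 4, 2, 3, 4, 3, 2, 4])
sus_lookingglass = np.array([4, 4, 3, 4, 5, 4, 3, 5])
w_stat, p_wilcoxon = stats.wilcoxon(sus_sliceview, sus_lookingglass)

print(f"paired t-test: t = {t_stat:.3f}, p = {p_value:.3f}")
print(f"Wilcoxon:      W = {w_stat:.1f}, p = {p_wilcoxon:.3f}")
```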
Subjective Questionnaires

The System Usability Scale (SUS) [27] questionnaire was reduced to seven questions answered on a 5-point Likert scale; we selected the ones relevant to our evaluation, removing those related to interaction. Q1: I think that I would like to use this system frequently; Q2: I found the system unnecessarily complex; Q3: I thought the system was easy to use; Q4: I think that I would need the support of a technical person to be able to use this system; Q5: I would imagine that most people would learn to use this system very quickly; Q6: I found the system very cumbersome to use; and Q7: I felt very confident using the system. The average results can be seen in Figure 11. The Wilcoxon test results were Q1 (W = 46, p = 0.264), Q2 (W = 52, p = 0.014), Q3 (W = 34.5, p = 0.151), Q4 (W = 16, p = 0.292), Q5 (W = 7, p = 0.269), Q6 (W = 112, p = 0.003) and Q7 (W = 3.5, p = 0.710). There were significant differences in Q2 and Q6, meaning that the users found SliceView more complicated to use than LookingGlass. We attribute these differences to the visual appearance of the devices, as the users were only requested to observe the displays and not to interact with them directly: SliceView is made of cardboard and affordable materials, whereas LookingGlass is a sleek, finished commercial product.

Potential Applications

In this section, we exemplify some applications that can take advantage of SliceView. These potential applications have been partially developed and automatically adapt existing content and applications to take advantage of the discrete multi-depth capabilities of SliceView.

SlicePainter: Layer-Based Painting Tool

SlicePainter is a painting tool based on the layer system commonly used in image manipulation software, such as Photoshop or GIMP, providing a more physical representation of the layers. The user can use different colors and brush sizes to draw on different layers. The user can set the number of layers and then hide, show, and select them individually. SlicePainter was developed in JavaScript with the P5.JS library so it can run in regular web browsers.

SlicePainter can be used with a keyboard, a mouse or a drawing tablet. The keyboard is used to navigate through layers and change their opacity, the right-click is used to choose a color or paint, and the left-click is the eraser. With the tablet, layer navigation is done with the pen buttons and the tablet's active area is mapped onto the current layer.

SlicePainter offers some novel experiences. The parallax effect can be exploited to create innovative drawings and optical illusions. Scenes can be painted to be perceived from different points of view (Figure 12) and to explore color-mixing, as light from the layers is accumulated.

Retro-Gaming Platform

We have modified an open-source MegaDrive (Genesis) emulator to render each 2D layer separately. The chosen emulator is Helios, coded in Java.

The original MegaDrive employs five layers in which games draw their tiles and sprites; these layers are Background, Plane B, Plane A, Sprites, and Window. We exploit this feature and, instead of combining all the layers on the screen, we render each layer at a different slice and depth. The resulting parallax effect is appropriate for most retro games and provides a novel experience of classic games.
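Because light from aligned pixels on different layers is perceived cumulatively (as noted above for SlicePainter's color-mixing), a rough way to preview how separated planes will read from directly in front is simply to sum them. The additive model below is an approximation for illustration; it ignores the per-sheet reflection losses measured in the technical evaluation.

```python
import numpy as np

def preview_front_view(layers):
    """Approximate the image perceived from directly in front of SliceView.

    layers -- list of float images in [0, 1], ordered back to front; light
              from aligned pixels accumulates, so bright backgrounds can
              bleed into darker foregrounds.
    """
    accumulated = np.zeros_like(layers[0])
    for layer in layers:
        accumulated += layer          # additive light model (approximation)
    return np.clip(accumulated, 0.0, 1.0)

# Hypothetical 2x2 grayscale planes: bright background, dark foreground sprite.
background = np.full((2, 2), 0.8)
sprites    = np.array([[0.0, 0.1],
                       [0.1, 0.0]])
print(preview_front_view([background, sprites]))
```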
Many MegaDrive games can be played in SliceView, but some were not completely adaptable. Some games employ a sprite priority mechanism that switches the rendering order of specific parts of the layers. On some occasions, layers contain glitchy graphics because they are not meant to be seen, since they are totally occluded by a layer on top. Sometimes, background elements are much brighter than foreground elements, making foreground colors and details difficult to perceive. As light is perceived cumulatively, brighter colors can severely bleed into darker ones. To mitigate these problems, we implemented two different approaches. Firstly, background dimming and foreground brightening can be applied to enhance the important game elements. Secondly, we added an option to mask in black, in a layer, the silhouette of the layers on top of it. This option solves the background layers with glitchy graphics and improves the final image quality, but leaves gaps when the display is not observed directly from the front (see Figure 13).

Videogames that do not use the sprite priority system, that have horizontal-scrolling gameplay with dark backgrounds and bright foregrounds, and that already implement some parallax effect between layers work best in SliceView. Ghosts'N Goblins TM is an example that has all of these characteristics (Figure 14).

Text Overlay: Subtitles

SliceView can be used as a video player with some advantages when overlaying content such as subtitles. We used VLC Media Player configured to play synched videos and subtitles in different areas of the screen, which are projected onto different layers.

In general, subtitles are rendered in white over black backgrounds, occluding the original image behind them. Sometimes, subtitles are displayed with partial opacity. Professionally made subtitles render specific phrases at different coordinates to avoid occluding important information, but most subtitles are always rendered centred at the bottom. When using SliceView, the whole video is always visible, as subtitles are rendered on a different layer (see Figure 15). The viewer can change their point of view and focal point to reveal partially occluded areas. Subtitles could also be rendered at different depths, visually representing the distance from the speaker, or superimposing different phrases when two or more speakers are arguing in the scene.

SliceViewer: Browser-Based 3D Scene Renderer

We developed a specific 3D scene renderer for SliceView. The user can choose a pre-made 3D scene or implement a new one, and SliceViewer will render the scene adapted to SliceView. After setting the number of layers, the perspective, and the occlusion mode, the scene is sliced at different depths and rendered on the monitor (see Figure 16). When a slice cuts through a mesh, we can choose not to render its interior faces so that they do not interfere with the layers below. SliceViewer was developed in ThreeJS so that it can run directly in most web browsers.

Scenes with bright foregrounds and dark backgrounds are perceived better, and orthographic cameras show a more reliable alignment between layers. Scenes that have elements at different discrete depths, without them being cut in-between slices, provide better visualization results.
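A minimal sketch of the slicing idea behind SliceViewer, assuming a simplified orthographic projection of a point cloud; the actual ThreeJS implementation additionally handles meshes, interior-face culling and the selectable occlusion modes mentioned above.

```python
import numpy as np

def slice_scene(points, colors, n_layers, z_near, z_far, layer_shape=(64, 64)):
    """Assign 3D points to discrete depth layers (simplified orthographic case).

    points -- (N, 3) array of x, y, z coordinates
    colors -- (N,) array of intensities in [0, 1]
    Returns a list of per-layer images, ordered back to front.
    """
    h, w = layer_shape
    layers = [np.zeros(layer_shape) for _ in range(n_layers)]
    # Slice index: nearest of n_layers planes spaced uniformly in [z_near, z_far].
    z = np.clip(points[:, 2], z_near, z_far)
    idx = np.round((z - z_near) / (z_far - z_near) * (n_layers - 1)).astype(int)
    # Orthographic projection of x, y into pixel coordinates (assumes unit extent).
    px = np.clip((points[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip((points[:, 1] * (h - 1)).astype(int), 0, h - 1)
    for layer_i, x, y, c in zip(idx, px, py, colors):
        layers[layer_i][y, x] = max(layers[layer_i][y, x], c)
    return layers

pts = np.random.rand(500, 3)                 # hypothetical unit-cube scene
layers = slice_scene(pts, np.ones(500), n_layers=3, z_near=0.0, z_far=1.0)
```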
Discussion

SliceView suffers from some of the same problems exhibited by Pepper's Cone [18], such as not being suitable for bright outdoor environments, and ambient light reducing contrast and the perceived image quality by highlighting dust, scratches and imperfections on the projection surfaces. However, a regular LCD monitor is bright enough to offer an acceptable experience. Dark areas behind the display provide the best viewing conditions; therefore, we suggest enclosing SliceView inside a dark cardboard box (see Figure 17). Bright, contrasted images are, in general, better suited to SliceView, whilst dark colors should only be used on backgrounds or unimportant areas. White cannot be fully blocked by other layers; therefore, full opacity on a layer is not achievable, and light emitted from aligned pixels at different depths is perceived cumulatively. A dark foreground in front of a bright background is almost invisible; one solution is masking the foreground silhouette in black on the background layer. This considerably improves the final image quality, but leaves gaps when the display is not observed directly from the front. We consider that interesting color-mixing can be achieved by exploiting this to provide an ethereal effect.

Layer thickness has a critical role in image quality (see Figures 4 and 5). When using sheets of more than 1 mm thickness, there are perceivable double reflections, and text is difficult to read when using sheets thicker than 2 mm. Double reflections also produce less sharp images. When using thinner sheets, the lack of rigidity results in wobbly artefacts and the assembly of SliceView becomes more complicated. The use of more rigid and reflective materials, such as tempered glass or silica glass, could avoid these problems, but these materials may be harder to obtain and increase the cost.

SliceView does not provide a 360° field of view; the comfortable field of view is approximately 90°. It would be possible to obtain a 360° FOV by using multiple concentric cones, i.e., mixing the Pepper's Cone [18] and SliceView concepts. However, back reflections from the opposite sections and the reduced spatial resolution would degrade image quality significantly.

Despite being easy to build, SliceView is not plug&play; some basic calibration is needed after assembling it. As shown in Figure 3, transformations and sheet-angle tuning are required to obtain optimal results. To ease this process, calibration templates are provided (a minimal sketch of these corrections is given at the end of this section).

Adding more layers to increase depth resolution reduces planar resolution, because the same screen is used to project light onto each layer. However, we reckon that with monitors increasing in resolution (1080p, 4K and so on), using more than five layers is feasible.

Increasing the size of SliceView can be limited by the size of the monitor. TV or PC monitor screens are available in larger sizes but at an increased price. Projector technologies may be better suited for a large-scale SliceView.

Like most projection-based displays, SliceView also exhibits the cardboard effect [21]. This can be slightly reduced by increasing the number of layers, at the cost of losing planar resolution. Most of our proposed applications do not avoid, but rather take advantage of, this effect.
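As a complement to the calibration point above, the following is a minimal sketch of the per-layer corrections (vertical mirroring, depth-proportional rescaling, and vertical stretch for sheets not placed at 45°). The simple pinhole and tan(θ) models used here are assumptions for illustration and do not reproduce the exact mirror/keystone setup applied through OBS and StreamFX.

```python
import numpy as np

def layer_corrections(layer_img, layer_depth, eye_distance, sheet_angle_deg=45.0):
    """Per-layer corrections sketched in the text (illustrative model only).

    layer_img       -- H x W x 3 image rendered for this slice
    layer_depth     -- extra optical distance of this layer from the viewer
    eye_distance    -- assumed viewing distance, used for depth-proportional rescaling
    sheet_angle_deg -- angle of the semi-transparent sheet over the monitor
    """
    # 1. Mirror vertically to undo the inversion introduced by the reflection.
    corrected = layer_img[::-1, :, :]

    # 2. Depth-proportional rescaling (simple pinhole assumption): farther layers
    #    must be drawn larger so that all layers subtend the same visual size.
    scale = (eye_distance + layer_depth) / eye_distance

    # 3. Vertical stretch: tan(45 deg) = 1 leaves the image untouched; narrower
    #    sheet angles shrink it vertically, wider angles enlarge it (assumed model).
    v_stretch = np.tan(np.radians(sheet_angle_deg))

    h, w = corrected.shape[:2]
    target_size = (int(round(w * scale)), int(round(h * scale * v_stretch)))
    return corrected, target_size  # actual resampling left to the rendering backend

img = np.zeros((180, 250, 3))                      # hypothetical slice image
_, size = layer_corrections(img, layer_depth=0.15, eye_distance=0.6, sheet_angle_deg=40)
print(size)   # vertically shrunk compared with a 45-degree sheet
```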
When using a low number of layers, empty spaces remain between them, in which we could place real objects to be mixed with the virtual scene in a diorama-like style, something unachievable with the LookingGlass or Pepper's Cone alternatives. Touch sensors could also be placed at or between different layers to provide some kind of direct interaction. Nevertheless, as each layer is completely solid, they could only be reached through the top or sides. This is not possible when using a black box enclosing the display to increase perceived brightness.

Conclusions

We have presented custom software that can adapt existing content to take advantage of multi-depth displays. Namely, a paint application, a retro-game emulator, a video player with subtitles and a 3D scene web browser are presented and provided with this paper. We also presented a simple and affordable multi-depth display named SliceView, and performed a comparison of this display with a commercially available light-field display regarding the perception of information at different depths. We hope that SliceView encourages people to further explore multi-view displays.

Figure 1. (a) Ray diagram of SliceView: the image emitted from a part of the monitor is reflected on the slice above it, then it refracts on the slices in front of it until it reaches the user. Primary refractions, secondary reflections and residuals are lost and reduce the final intensity. (b) Schematic diagram of a SliceView rendering a layered videogame (SNES Super Mario World TM).
Figure 2. Component diagram showing modified and employed software.
Figure 3. (a) Vertical mirroring. (b) Size mismatch when no z-based scale correction is applied. (c) Calibration templates for three layers.
Figure 4. (a) Double reflections ray diagram. (b) Double reflection effect on a 2-mm PE sheet. (c) Wobbly effect on a 0.25-mm acetate sheet.
Figure 5. Calibration template (a) captured under the same conditions, (b) from the monitor, and (c) reflected on a 2-mm sheet and reflected on a 0.25-mm sheet.
Figure 6. Transmitted and reflected beam profiles: directly from the source and through an acrylic sheet of 2 mm and 4 mm thickness placed at 45°.
Figure 7. (a) Participant using SliceView to perform several tasks. (b) Participant using LookingGlass to perform several tasks.
Figure 8. (a) 3D chart task on a SliceView device. (b) On a LookingGlass device observed from two different angles.
Figure 9. (a) Overlapped text readability task on SliceView with 316 mm of depth separation between sentences. (b) Overlapped text readability task on LookingGlass with 238 mm of virtual depth separation between sentences (physical depth of the device 98 mm).
Figure 12. User painting in SlicePainter at different layers; depending on the point of view, the layers align differently and provide different interpretations.
Figure 13. Masking the foreground silhouette in black in background layers. (a) Observed from the front. (b) Gaps are revealed when SliceView is observed from a different angle. (MegaDrive Sonic the Hedgehog TM).
Figure 14. User playing the Ghosts'N Goblins TM videogame sliced onto multiple layers.
Figure 15. (a) Traditional subtitles on a regular screen. (b) Using SliceView to render subtitles at different depths, enabling the user to change the focal point to reveal partially occluded areas.
Figure 16. User interacting with a 3D scene sliced into three layers.
Figure 17. (a) SliceView in a completely dark environment, (b) under a regular household light bulb, and (c) under an LED spotlight.
On the Analysis of Human and Automatic Summaries of Source Code Within the software engineering field, researchers have investigated whether it is possible and useful to summarize software artifacts, in order to provide developers with concise representations of the content of the original artifacts. As an initial step towards automatic summarization of source code, we conducted an empirical study where a group of Java developers provided manually written summaries for a variety of source code elements. Such summaries were analyzed and used to evaluate some summarization techniques based on Text Retrieval. This paper describes what are the main features of the summaries written by developers, what kind of information should be (ideally) included in automatically generated summaries, and the internal quality of the summaries generated by some automatic methods. Introduction As a broad concept, summarization is the process of reducing large volumes of information in entities like texts, speeches, or films, to short abstracts comprising the main points or the gist in a concise form [1]. Currently, one of the most promising applications of summarization is its use as a complement or a second level of abstraction for text retrieval tools, since they often return a large number of documents that overwhelm their users.For instance, automated summarizing tools are needed by internet users who would like to utilize summaries as an instrument for knowing the structure or content of the returned documents in advance, and eventually, be able to effectively filter out irrelevant results. Recently, software engineering researchers have begun to explore the use of summarization technologies, mainly as a potential instrument for supporting software comprehension tasks.These tasks are key developer activities during software evolution, accounting for more than half of the time spent on software maintenance [2].In the case of source code artifacts, developers are often faced with software systems with thousands or millions of lines of code, and before attempting any changes to those systems, they must find and understand some specific parts of them.We argue that offering developers descriptions of source code entities can reduce the time and effort needed to browse files, locate and understand the part of the system that they need to modify.Ideally, such summaries will be informative enough to be used for filtering irrelevant artifacts, and even, as substitutes of the detailed reading of full artifacts.Even when they do not convey enough information to replace the originals, they could be useful indicative summaries, a type of abstracts built with the purpose of indicating to the user which documents would be worth studying in more detail.In the worst scenario, developers would have to read the summary and the original artifact.Even in this case, this extra reading would be helpful, since the summary can provide a preview of the original document (e.g., its structure or an initial idea of its content). The major challenge in automatic software summarization is to handle mixed software artifacts such as source code, where information is encoded in a different way than in natural text documents.One key issue that we need to address is determining what is relevant in source code documents, and therefore, should be included in the summaries.The answer may be different for various types of source code entities (e.g., class vs. 
method) [3], and also may differ between programming languages.Some feasible ways to address this issue are (1) to study how developers create summaries of source code artifacts, (2) to analyze the summaries generated by them, and (3) to use their expertise for determining what information should be included in the summaries of source code artifacts. A second aspect of software summarization research is the evaluation of the quality and usefulness of automatically generated summaries.It is essential to know if the generated abstracts are useful supporting development or maintenance tasks and how this positive effect can be assessed.Moreover, some metrics are required to determine if the summaries convey the most relevant information in the original artifact.As a result, there have been considered two broad types of evaluation: extrinsic and intrinsic evaluation [4].The former one aims at determining whether the summaries are good instruments to support real-user's work, whereas the latter one measures internal properties of the abstracts such as semantic informativeness, coherence, and redundancy. From a research standpoint, intrinsic evaluation is important because it allows us to assess the results of a summarization system, compare the results of different approaches, and identify and understand the drawbacks of a particular summarization procedure.In this kind of evaluation, the quality of a summary can be established mainly through two approaches.In the first one, the peer summary (i.e. the summary being evaluated) is reviewed and rated by human judges, using some pre-established guidelines.The second alternative is to measure the similarity between the peer summary and some reference abstract given by experts, which is often called the gold standard summary. In this paper, we present the results of an empirical study where a group of developers (1) generated two types of manually-written summaries for various kinds of source code artifacts, and (2) answered questions about what they think should be included in a summary.Additionally, we propose the use of these humangenerated summaries, and some well-known text retrieval measures to carry out an intrinsic evaluation of several automatic summarization approaches. The remaining of the paper is organized as follows.Section 2 presents the problem and research questions we want to answer.The details of the conducted empirical study are described in Section 3. Section 4 sums up the more important results of the study, and discusses possible explanations and their implications for automatic summarization of source code.Section 5 discusses related work, and Section 6 draws some conclusions and remarks regarding extensions for this research. 
Definition and Research Questions of the Case Study The goal of this experiment was to analyze code summaries written by developers, and use these summaries as a test-bed to carry out a comparative evaluation of Text Retrieval (TR) techniques, when they are used as automatic code summarizers.The quality focus was on improving tool support for software comprehension tasks, as well as, providing a stable evaluation framework to measure whether automatic generated summaries convey the most relevant information in the original software artifacts.The perspective was of researchers who need to gain insight into (1) how developers analyze and summarize various kinds of source code entities; (2) what structural elements they consider should be included in a summary; and (3) how intrinsic summary evaluation methods can be used to evaluate automatic code summarization approaches. Therefore, within this case study the following research questions were formulated: RQ1 How long are the summaries generated by developers? RQ2 What type of structural information do developers include in their summaries? RQ3 What are the main characteristics of the terms most selected by developers?Can they be considered as gold-standard summaries for generating and evaluating automatic summarization tools? RQ4 How good are text retrieval techniques as automatic summarizers of source code artifacts?How much do these automatic summaries resemble the human-generated ones? Answers to RQ1 and RQ2 will give us valuable information about the data that should be (ideally) included in automatically generated summaries, and how they can be created.RQ3 and RQ4 aim at exploring TR techniques as code summarizers, and intrinsic evaluation methods as suitable approaches to measure the quality of their outcomes. Context of the Case Study This section begins with a description of the resources selected to perform the empirical study, i.e., the system and the participants.Then, it presents the layout of the experiment. Objects The system selected to carry out the experiment was aTunes, an open source project that manages and plays audio files.It is a small-medium sized Java system whose application domain is easy to understand and often interesting for almost any developer.Table 1 sums up the main features of the selected version. English Since we are interested in analyzing how various types of source code entities are summarized, and we think the content and structure of summaries are affected by the size and type of the summarized artifact, we selected two methods, two classes, two groups of methods (each group of methods consists of three methods, which, as a calling sequence, implement a specific feature of the system) and one package. We focused on entities dealing with the business logic (not GUI or data layer classes), and excluded too short entities, such as methods with less than 10 lines, classes with less than 3 attributes or a few number of short methods, and packages with only one class.Table 2 shows the basic features of the selected artifacts.It is important to point out that the aTunes version selected for the study contains very few heading comments; in fact, the artifacts used in the study do not contain such comments, but they do contain some inline comments. 
Subjects The subjects of this study were twelve graduate and undergraduate students from the Computer Science Department at Universidad Nacional de Colombia.Senior undergraduate students were recruited mainly from two courses: Software Engineering and Software Architecture; both courses are part of the second half of the undergraduate degree program in Computer Science.Only two of the participants were master students who were also working as professional Java developers.We excluded from the analysis the subjects who did not complete the tasks, and those who made mistakes due to the misunderstanding of experiment instructions.This way, the conducted analysis is based on the responses of only nine of the subjects.On average, the subjects had 4.6 years of programming experience.Regarding their knowledge of Java programming language, they reported an experience of 2.2 years on average, and considered their skills as satisfactory, good or very good, in all the cases.Just two of the subjects evaluated their experience in understanding and evolving systems as less than satisfactory.Since the experiment was carried out using the Eclipse IDE, we asked them about their abilities with this programming environment.Three of them considered their expertise with this particular IDE as poor or very poor.However, we think this issue did not have a major impact in the experiment's results because they used only searching and browsing commands, and they also mentioned extensive experience with NetBeans, a similar programming environment.Finally, subjects assessed their English proficiency at least as satisfactory, in all cases. Experiment layout As a preliminary experiment setup, Eclipse and aTunes were installed on each computer used in the experiment, and the following documents were created: • A description of the main functionality of aTunes that was used as an introduction to the system. • Slides for explaining the tasks to do within each stage of the experiment. • A form to collect information related to the programming experience and skills of each participant. • Forms to collect the summaries of each source code artifact. • Forms to collect feedback at the end of each session of the experiment. Participants attended three sessions; each one lasted around 2 hours.All sessions began with a brief explanation of the tasks to do.In particular, a general explanation of the whole experiment was given at the beginning of the initial session. The first session was a training session, where participants filled out an individual form about their English and programming skills.After that, a 10-minutes presentation of the software system was given, which included a demo of its most important functionality.The remaining time was spent by the subjects reading documentation of the system, and getting familiar with the organization of the source code. During the second session, subjects generated English sentence-based summaries for each artifact using the forms we prepared for this task.A sentence-based summary is an unrestricted natural language description of the software entity (abstractive summary), and within this experiment, it was used as a sanity-check instrument, i.e., a basic test to quickly evaluate that the answers of a participant did not contain elementary mistakes or impossibilities, or were not based on invalid assumptions.Each subject finished the session answering the post-experiment questionnaire regarding the tasks done and the kind of analysis performed. 
During the last session, participants summarized the artifacts in Spanish and then created term-based summaries. A term-based summary is a set of unique words or identifiers selected from the source code of the software entity (extractive summary). Participants selected relevant terms for each artifact by enclosing them within the source code files, using a set of predefined tags. Finally, each subject answered the post-experiment questionnaire regarding the tasks done and the usefulness of the various parts of the code when doing term-based summarization.

Brief description of artifacts' content

As stated in Section 3.1, four types of artifacts were used in the experiment. Although all these entities are composed of a great number of identifiers and keywords, only some of them are useful to describe the content of the artifact. In fact, within a source code unit some terms are repeated constantly in different identifiers. For example, the method M1 is formed by 231 terms, but only 70 of those are unique. The same information for the other artifacts is presented in Table 3. The length in this case refers to the number of terms in the artifact, including keywords and identifiers, after a splitting process. For instance, the method name timeInSeconds is transformed into the terms time, In, and Seconds; as another example, the variable name DEFAULT_LANGUAGE_FILE is split into DEFAULT, LANGUAGE, and FILE (a sketch of this splitting step is given below). The rationale behind splitting is that we want to know the exact vocabulary used within each artifact. At first glance, it can be observed that the ratio between unique terms and length is very low. Furthermore, this ratio decreases as the artifact increases in size. In detail, the ratio is higher for methods, possibly because they encapsulate functions over specific objects. Sequences also perform specific functions, but depending on the classes to which their methods belong, the ratio of unique identifiers can change: if the methods belong to the same class the ratio is lower, otherwise it is higher. A possible explanation is that the greater the number of classes involved in an artifact, the more themes it touches, and by extension, the more unique terms appear in its source code.

General description of developers' summaries

In this phase of the study, we focused the analysis on the summaries written in English, in order to compare and contrast them with the term-based summaries and also with the declaration of the artifacts, both of which are written in English. Once again, the length of a summary is calculated as the number of split terms it contains.
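A minimal sketch of the identifier-splitting step referred to above (camelCase and underscore decomposition); this is an illustration rather than the exact tokenizer used in the study.

```python
import re

def split_identifier(identifier: str):
    """Split an identifier into terms, e.g. 'timeInSeconds' -> ['time', 'In', 'Seconds']."""
    # Break on underscores first (DEFAULT_LANGUAGE_FILE -> DEFAULT, LANGUAGE, FILE) ...
    parts = identifier.split("_")
    terms = []
    for part in parts:
        # ... then split on camelCase boundaries and digit runs.
        terms.extend(re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+|\d+", part))
    return [t for t in terms if t]

print(split_identifier("timeInSeconds"))         # ['time', 'In', 'Seconds']
print(split_identifier("DEFAULT_LANGUAGE_FILE")) # ['DEFAULT', 'LANGUAGE', 'FILE']
```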
Regarding the sentence-based summaries, there is no significant relationship between their size and the length or type of artifact they describe (p-value for Pearson's correlation = 0.359).Even the average of unique terms in this kind of summaries is almost the same for all type of artifacts (Table 4).In the case of sequences, however, the length of the summaries is slightly greater, as well as their number of unique terms; this indicates that sequences are harder to describe or deserve more detailed descriptions.On the other hand, for term-based summaries we found direct relationships with respect to lengths (pvalue for Pearson's correlation < 0.01).Specifically, developers tend to mark more terms when they are analyzing packages, fewer terms when the artifacts are classes or sequences, and even fewer terms when they are dealing with methods, as shown in Table 4.This situation indicates that all kind of artifacts cannot be summarized with the same fixed number of terms: as granularity level increases, the amount of terms needed to describe an artifact decreases.The high standard-deviation values obtained in this case indicate that developers hardly ever mark similar number of terms. Origin of relevant terms In a post-experiment questionnaire, we asked the developers about the usefulness of structural information from source code when doing term-based summarization.To that end, the participants rated different locations of source code (e.g.class name, attribute type, attribute name, etc.) on a 1-to-4 Likert scale [5], where 1 represented totally useless and 4 represented very useful.We did not used the 5-level scale in order to avoid the tendency to mark non-committal answers (i.e., neither useful nor useless).Not surprisingly, through the questionnaire we found that when summarizing methods, classes and packages, their respective names were considered as the most useful parts of code.Other locations equally useful when describing methods were invoked methods, which together with methods names, were assessed as very useful when dealing with classes.In the case of sequences (and similar to methods), the parts of code considered as most useful were the names of the invoked methods and the variables and parameters names.We also noticed two striking facts on the questionnaire: 1. Source code comments were not considered valuable, especially in packages and sequences cases, where they were rated as totally useless information. 2. Methods and classes names were considered useful when summarizing all four types of artifacts. Furthermore, packages were recognized as more difficult to summarize than other artifacts, and only packages names, classes names and methods names were useful information, whereas the rest of the locations were marked as moderately or totally useless.This may be the cause of the tendency to mark a greater amount of terms within that type of artifacts, and additionally suggests that a multi-document approach, where each class of the package would be treated as an individual document, can be adequate for summarizing packages. 
In order to go deeper into the origin of relevant terms and contrast the questionnaire answers, we identified where every marked term belonging to term-based summaries came from.These origins were classified in the same categories used in the questionnaire (Table 5).We discovered that developers constantly marked the local variable names when summarizing all artifacts, even for packages, where they had been classified as moderately useless information.We also noticed that terms from comments were hardly used when building summaries for sequences, classes and packages; this proves the uselessness of this location reported by developers.In the case of methods, however, terms in comments were even more frequently chosen than attributes names and parameter types, although these latter ones were ranked as very useful within this kind of artifact. Additionally for methods summaries, the origin extraction confirmed the usefulness of method call names and method names given by developers to those parts of code; nevertheless, we observed that the names of their parameters were also frequently used when summarizing them.In contrast, well-ranked locations such as parameters types, classes names and attributes types, were not used at all when summarizing methods. With regard to sequences, developers considered methods names, invoked methods names and local variables names as relevant, and actually, this was confirmed by the terms marked for their summaries.It is worth mentioning that although attributes types, parameters types and methods returns types were considered as useful in the questionnaire, they were never used when summarizing sequences. In the case of classes summarization, developers rated their names as very useful information, which was confirmed by means of origins extraction.These same names were considered useful for all artifacts, but actually, they were rarely marked when summarizing methods and sequences. Concerning packages, their names together with the names of classes and methods were constantly used to summarize this type of artifact, just as mentioned by developers in the questionnaire.Nonetheless, other locations such as the names of attributes, variables and parameters, which were ranked as useless by developers, were in fact often used in packages summaries. Surprisingly, the type of variables, attributes and parameters were barely used in most of the cases, albeit they were considered useful when summarizing methods and sequences.Even so, sequences and classes kept the correlation between the scores given to parts of code by developers, and the real proportion of terms' locations in term-based summaries.This means that only such kinds of artifacts preserved a high coherence between developers' opinions about the usefulness of the locations, and their actual summarization choices. Finally, additional categories not included in the questionnaire were used by developers, such as literal data allocated in string constants or systems logs, which were marked more times than other origins previously considered.For instance, the literal texts "Exporting process done" and "Exporting songs" were marked by some developers to describe the sequence S2. 
Approximation to gold standard summaries As shown in Table 6, from the set of unique terms in a sentence-based summary around 35% are provided by the declaration of the artifact, no matter its type.Apparently, this suggests that extractive approaches are not enough to generate the summaries automatically, given the great amount of new terms within the description provided by developers. Nonetheless, the overlap between the term-based summaries (which are comprised exclusively by terms found in the declaration of the artifacts) and the terms in sentence-based summaries, reveals that relevant terms can be extracted from source code as the basis for a short description.Here, the relevancy of a term is defined by the amount of developers who chose it to be part of the summary, i.e., the agreement among subjects.Since in the experiment participated nine developers, the relevancy scale goes from 0 to 9, where 0 represents totally irrelevant and 9 means totally relevant.For the terms in the last mentioned overlap, we found that about 75% of them were chosen by five or more developers.This percentage could increase if we take into account the use of synonyms in the free-form summaries; as a case in point, the term encode found in method M1, was replaced by words such as transform and convert in the sentence-based summaries.Thus, some text retrieval techniques might be suitable for identifying the most prominent terms within source code artifacts, as was proposed by [6]. In each single artifact, the relevant terms in the intersection between term-and sentence-based summaries could be considered as a gold standard summary, i.e., a reference or an ideal description that contains the important information of the entity under analysis.Nevertheless, the results show that the terms chosen by five or more developers in the term-based summaries form, in fact, a better approximation to those standard summaries.Some of their properties are presented in Table 7.It can be noticed that the length of these summaries depends on the type of artifact they describe, and also on the length of such artifact (p-value for Pearson's correlation < 0.01).Therefore, the gold summaries of packages and classes are larger than those that describe sequences and methods. About the terms' origins, the proportions of term-based summaries remain stable for ideal summaries, with little exceptions.For example, in the case of sequences, attributes names and literal texts do not take part of gold summaries.The same situation occurs with variables names for classes and packages.The principal terms' locations for each artifact are presented in Table 7. Since they represent the core of both types of abstracts, the gold standard summaries obtained in the experimental study represent the main target of the automatic summarizer we aim to achieve.Moreover, they are suitable to assess its results through intrinsic evaluation measures [4]. 
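A minimal sketch of how the gold-standard term sets described here can be derived from the developers' marks, using the agreement threshold of five out of nine developers; variable names and the toy data are illustrative.

```python
from collections import Counter

AGREEMENT_THRESHOLD = 5   # a term is "relevant" if >= 5 of the 9 developers marked it

def gold_standard(term_based_summaries):
    """term_based_summaries -- list of sets, one set of marked terms per developer."""
    votes = Counter(term for summary in term_based_summaries for term in summary)
    return {term for term, count in votes.items() if count >= AGREEMENT_THRESHOLD}

# Hypothetical marks from nine developers for one method:
marks = [
    {"encode", "song", "file"}, {"encode", "song"}, {"encode", "file", "tag"},
    {"encode", "song", "tag"},  {"encode", "song"}, {"song", "file"},
    {"encode", "song"},         {"encode"},         {"song", "encode", "file"},
]
print(gold_standard(marks))   # {'encode', 'song'}
```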
Evaluating automatically generated summaries Usually in text processing, the quality of summaries' content is determined by comparing the peer summary (i.e., the summary to be evaluated), with an ideal summary (i.e., a gold standard summary).In the specific case of extractive summaries, the primary metrics to perform such task are precision and recall.These metrics are based on the relevant content of the summary, and are defined as following: The range of both metrics is [0, 1].A precision value equal to one means that all the terms in the peer summary are relevant, although there could be relevant terms missing.On the other hand, a recall value equal to one means that the peer summary contains all the relevant terms, though it could also contain some irrelevant terms.In general, the lower the length of the peer summary, the higher the precision; whereas, the higher this length, the higher the recall. A third metric, called F-score, measures the balance between precision and recall: The highest value reached by this harmonic mean is an indicator of the best achievable combination of the metrics it involves. Text Retrieval Techniques in Source Code Summarization In [3], three techniques from Text Retrieval were proposed to summarize source code artifacts: the Vector Space Model (VSM), Latent Semantic Indexing (LSI), and a combination of VSM and lead summarization.This latter approach is based on the hypothesis that the first sentences of a document are a good summary of it. The Vector Space Model is one of the most common algebraic models in Text Retrieval for representing text corpora.It assumes a corpus as a set of documents D, from which is extracted the set of terms T , i.e., the vocabulary.Then, it represents this corpus as a matrix M |T |×|D| , where the row i corresponds to the term t i ∈ T , and the column j corresponds to the document d j ∈ D. In this sense, the value in the cell i, j is the weight of the term v i in the document d j . The basic weighting scheme is the Boolean-based, which assigns one to the cell m i,j if the term t i occurs in the document d j , or zero otherwise.Other weighting schemes consider local and global weights, i.e., the contribution of the term t i to the document d j and to the entire set of documents D. For example, the popular scheme tf-idf determines the weight of the term t i by multiplying its frequency of occurrence in the document d j , by its inverse document frequency, as following: When summarizing source code, the documents are code artifacts such as methods or classes.The k terms with the highest weight in the vector d j are the ones conforming the summary of the document.This k value is usually called constant threshold. The Latent Semantic Indexing (LSI) is based on a dimensionally reduced version of the vector space produced by VSM, in order to recover the underlying semantic in the corpus.Therefore, LSI uses Singular Value Decomposition (SVD) to decompose the matrix M into the left and right singular matrices U and V (which represent the terms and documents, respectively), and a diagonal matrix of singular values Σ.Then, M = U ΣV * , where V * is the transpose of V .The dimensions of these matrices can be reduced by choosing the C first columns of U and V , and the highest C singular values in Σ, which leads to C , i.e., to the approximation of the matrix M . 
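For reference, the standard formulations of the quantities used in this section (precision and recall over the relevant terms of the peer summary, the F-score, the tf-idf weight, and the rank-C approximation underlying LSI) can be written as:

```latex
\begin{align*}
\text{precision} &= \frac{|\,\text{relevant terms in peer summary}\,|}{|\,\text{terms in peer summary}\,|},
\qquad
\text{recall} = \frac{|\,\text{relevant terms in peer summary}\,|}{|\,\text{terms in gold standard summary}\,|},\\[6pt]
F\text{-score} &= \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}},\\[6pt]
\mathit{tfidf}(t_i, d_j) &= \mathit{tf}(t_i, d_j)\cdot\log\frac{|D|}{|\{\,d \in D : t_i \in d\,\}|},\\[6pt]
M &\approx M_C = U_C\,\Sigma_C\,V_C^{*} \quad \text{(rank-}C\text{ approximation used by LSI)}.
\end{align*}
```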
The corpus representation produced by LSI allows to compute the similarity between terms and documents.The summary of the document d j ∈ D is formed by the k terms in T with highest cosine similarity with the vector of the document d j . According to the results of an informal evaluation, where humans assessed the output of some TR-based summarizers, in [3] it was concluded that the combination of lead summarization and VSM (from now on called lead+VSM ) produces better summaries than LSI and VSM by itself.Broadly, the lead summaries consist of the first k terms that appear in the target documents.In the case of source code artifacts, these first terms often contain the artifact type and name, which are rarely found in the VSM summaries.Thus, the combined summaries contain complementary information from both techniques. Evaluating Text Retrieval Techniques through Intrinsic Measures In order to evaluate the aforementioned techniques in software summarization, we computed the precision, recall and F-score metrics of their resulting summaries.For performing this task, we utilized the gold standard summaries described in section 4.4.Besides, we considered two baseline summarization methods, namely lead and random.This latter one generates summaries consisting of k terms randomly chosen from the target documents. In [3], there are considered methods and classes summaries of length 5 and 10.In a different fashion, we evaluated summaries that vary its length from 5 to 20, since we were interested in analyze the influence of the length in the quality content metrics, and also, because we considered other kinds of code artifacts (i.e., sequences and packages).Thus, our objects of study were summaries composed by five to twenty terms, generated by VSM, LSI, lead+VSM, lead, and random methods, of each artifact described in Table 2. For these summarization techniques, we observed that, as usual, the precision decreased as the recall and length of the summaries increased.However, in exceptional cases (artifacts M1, C2 and S2) there was an upward trend in the precision of LSI summaries, when increasing the length of the summaries.As expected, the precision of random summaries was low for all kind of artifacts and lengths, although in several cases LSI summaries had the lowest precision values, which confirms the results obtained in [3].This fact was clearly observed in the artifacts C2 and P1, where LSI summaries had lower precision than random summaries.It was noticeable that in average, lead, VSM and lead+VSM had significantly higher precision and recall than random and LSI, no matter the kind of artifact that was being summarized.This fact can be observed in Fig. 1 and Fig. 2. When analyzing precision, we found that it was low for methods summaries having more than 10 terms, and for sequences and classes summaries having more than 15 terms.Considering the ranges where precision values were high, we found that in the method case, lead+VSM technique achieved the best results.This same technique together with lead got good precision values for sequences summaries.In the case of classes and packages, lead+VSM, lead and VSM summaries had similar precision, making it difficult to determine which technique was better for these kinds of artifacts.Furthermore, such techniques had a high precision (above 0.5), even for long summaries. 
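To make the evaluated pipeline concrete, the following is a minimal sketch of tf-idf (VSM-style) and lead summarizers scored against a gold-standard term set. The corpus handling is deliberately simplified (already-split, lower-cased terms), LSI is omitted for brevity, and the lead+VSM combination scheme shown is an assumption, so this should not be read as the exact implementation evaluated in the study.

```python
import math
from collections import Counter

def tfidf_summary(doc_terms, corpus, k):
    """Top-k terms of a document by tf-idf weight (VSM-style extractive summary)."""
    df = Counter()
    for doc in corpus:
        df.update(set(doc))
    tf = Counter(doc_terms)
    weights = {t: tf[t] * math.log(len(corpus) / df[t]) for t in tf}
    return [t for t, _ in sorted(weights.items(), key=lambda kv: -kv[1])[:k]]

def lead_summary(doc_terms, k):
    """First k distinct terms of the document (lead baseline)."""
    seen, out = set(), []
    for t in doc_terms:
        if t not in seen:
            seen.add(t)
            out.append(t)
        if len(out) == k:
            break
    return out

def lead_vsm_summary(doc_terms, corpus, k):
    """Assumed combination scheme: half of the terms from lead, the rest from tf-idf."""
    lead = lead_summary(doc_terms, k // 2)
    vsm = [t for t in tfidf_summary(doc_terms, corpus, k) if t not in lead]
    return lead + vsm[: k - len(lead)]

def precision_recall_f(peer, gold):
    peer, gold = set(peer), set(gold)
    relevant = len(peer & gold)
    p = relevant / len(peer) if peer else 0.0
    r = relevant / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Toy corpus of already-split artifacts and a hypothetical gold summary for doc 0.
corpus = [["encode", "song", "file", "encode", "tag"],
          ["play", "song", "list"],
          ["export", "file", "process"]]
gold = {"encode", "song"}
for k in (2, 4):
    for name, peer in (("lead", lead_summary(corpus[0], k)),
                       ("vsm", tfidf_summary(corpus[0], corpus, k)),
                       ("lead+vsm", lead_vsm_summary(corpus[0], corpus, k))):
        print(k, name, peer, precision_recall_f(peer, gold))
```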
Regarding recall values, once again lead+VSM, lead and VSM outperformed LSI and random, and in some cases, random summaries achieved higher recall than LSI summaries (e.g., for artifacts C2 and P1).In addition, for every summarization technique, the recall values remained constant for the summaries consisting of more than 15 terms, with few exceptions, such as the lead+VSM summaries of the artifacts C1, S1 and P1, which continue increasing when k > 15. All these results suggest that in order to get the gist of source code artifacts automatically, the length of a term-based summary should be in the interval [10,20].For methods, the number of terms in the summary is nearer to the lower bound (10), while for packages, this number is nearer to the upper bound (20).This means that automatic summaries are approximate 25% longer than the gold standard summaries, which is not an issue if they capture the intent of the code and remain shorter than the artifact they describe.These results were confirmed by the F-score values, which presented acceptable and stable values in the range [10,20] for all kinds of artifacts when the summaries were generated by lead+VSM, lead and VSM techniques.The average F-score values are presented in Fig. 3. Additionally, the intrinsic evaluation also showed that LSI based on tf-idf and random are not appropriate techniques to summarize source code artifacts, which confirms the results in [3].Although in average lead+VSM outperformed lead and VSM, none of these techniques is specially suitable or unsuitable for summarizing an specific kind of artifact.In fact, the performance of these three techniques is similar in all cases. Threats to validity As in any empirical study in software engineering, we cannot generalize the outcomes.Therefore, we consider these results only as useful heuristics to guide the development of automated summarization and documentation tools.The number of participants is always an issue for this type of experiment and in our case, nine developers is clearly a small group.Moreover, although subjects reported some experience in programming and evolving systems, they cannot be considered as professional developers.We plan to work with other research groups, and perform similar but larger studies involving more experienced subjects in order to gain more confidence in the results. Equally important, we selected only two methods, two method sequences, two classes and one package from a single system.While we tried to vary their properties, they may not necessarily be the most representative of each type of artifact.Moreover, aTunes system has high quality, self-explanatory identifiers, and a very simple and clear domain.Therefore, we cannot estimate what would be the results for systems with poor identifier naming or a more complex domain. During the summarization sessions carried out, developers had to study several times the same artifacts and write three different types of summaries for each of them.In consequence, the presence of a learning effect is possible.We did not try to measure or mitigate it. 
Related Work The automatic summarization of natural language text has been widely investigated by researchers and many approaches have been proposed, which are based mostly on Text Retrieval (TR), machine learning, and natural language processing techniques [1].The summarization of software artifacts is only at the beginning, but there are promising results.For instance, abstracts of bug report discussions, generated using conversation-based classifiers, were proposed as a suitable instrument during bug report triage activities [7]; the summarization of the content of large execution traces was suggested as a tool that can help programmers to understand the main behavioral aspects of a software system [8]. Regarding automatic summarization of source code, in [9] it was proposed an abbreviated and accurate description of the effect of a software change on the run time behavior of a program, in order to help developers validating software changes and understanding modifications.High level descriptions of software concerns were designed for raising the level of abstraction and improving the productivity of developers, while working on evolution tasks [10].The text retrieval based approaches for source code summarization, first introduced in [6], were applied for summarizing whole source code artifacts, with the purpose of aiding developers in comprehension tasks [3].A form of structural summarization of source code has also been proposed in [11], which presented two techniques, i.e., the software reflection model and the lexical source model extraction for a lightweight summarization of software.These two techniques are complementary to the approaches we investigated and we envision combining them in the near future. By the same token, some TR techniques are used in [12] to cluster source code and relevant terms from each cluster are extracted to form labels.A similar approach is used in [13], where TR is used to extract the most relevant set of terms to a group of methods returned as the result of a search.These terms are treated as attributes, which are used to cluster the methods.In each case, the labels and attributes can be considered as (partial) summaries. Another related research thread is on source code tagging and annotations [14].These mechanisms could support developers to create and represent manual summaries of the code (in addition to comments). Although several alternatives have been explored to summarize various types of software artifacts, the evaluation of the generated summaries has been mostly informal.For example, [15] presents an approach to summarize methods by identifying and lexicalizing the most relevant units.The generated summaries in this case were evaluated by asking developers how much accurate, adequate and concise those descriptions were. An exception to this informal situation is [7], where bug reports summaries were evaluated by using intrinsic measures such as precision, recall, F-score and pyramid precision, to assess the informativeness, redundancy, irrelevant content and coherence.Then, these results were compared against scores assigned by human judges to the same features.However, from a practical point of view, this study is considered as text-summarization, and therefore, its evaluation mode is not really novel. 
In that sense, the term-based summaries generated by [6] from source code using information-retrieval techniques were evaluated using the Pyramid method. Also, the descriptions of source code produced in [3] underwent an intrinsic online evaluation for assessing the agreement between developers. Conclusions and Future Work The presented case study analyzed two kinds of summaries created by Java developers for several source code entities, with the purpose of studying how programmers create descriptions of source code. In addition, we asked developers to provide answers to questions about what they think should be included in a summary. When developers create natural language descriptions of source code artifacts, the length is similar for all types of entities. We obtained slightly longer summaries for the case of sequences of calling methods. This result may indicate that this kind of artifact is harder to describe or deserves more detailed explanations. On the other hand, the length of a term-based summary is correlated with the length of the artifact it summarizes. This result suggests that term-based summaries (extractive summaries) are inherently less informative than sentence-based summaries, and therefore they are not enough to fully describe source code artifacts. This is corroborated by the low percentage of words used in sentence-based summaries that correspond to terms selected within term-based summaries. Consequently, although textual information is essential, automatic code summarizers cannot exclusively rely on the identification of relevant terms contained in software entities. The precision, recall, and F-score values achieved by some TR-based techniques show that the semantic information by itself is not enough to generate high-quality code summaries. However, the outcomes of these techniques can be considered a good starting point for source code summarization, and they can be improved using structural information and natural language processing tools. The experiment also gave us clues about what should be included in a summary. For instance, local variable names can be considered useful pieces of information for describing all types of entities; method names and invoked method names are quite relevant for summarizing methods; invoked method names and variable names are relevant for explaining sequences of calling methods; the name of a class is essential for describing its purpose. The results also suggest that summarization of packages is often problematic. This could indicate that packages cannot be considered as units and, in consequence, a multi-document approach, where a package would be conceived as a group of related documents (i.e., classes), is more appropriate. Overall, the results obtained represent valuable information for building and evaluating automatic summarization tools. The gold-standard summaries characterize the main target of our envisioned summarizer, which will consider structural and textual information of artifacts. Since the text retrieval methods studied here achieved only acceptable results, we plan to investigate and apply other text retrieval techniques to code summarization, as well as multi-document summarization approaches for large artifacts such as packages. Moreover, new user studies will be conducted to assess the effect of summaries on several development and maintenance tasks.
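The overlap mentioned above (how many words of a sentence-based summary also appear among the selected terms) can be estimated with a simple word-level comparison; the sketch below only illustrates the computation, with naive tokenization and no stemming, which the original study may well have handled differently.

```python
import re

def term_overlap(sentence_summary, term_summary):
    """Fraction of distinct words in a sentence-based summary that also occur
    among the terms selected for the corresponding term-based summary."""
    words = set(re.findall(r"[a-z]+", sentence_summary.lower()))
    terms = {t.lower() for t in term_summary}
    return len(words & terms) / len(words) if words else 0.0

# Hypothetical sentence-based summary compared with a hypothetical term list.
print(term_overlap("This class manages the playlist and plays the selected track",
                   ["playlist", "track", "player", "audio"]))
```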
Figure 1: Average precision for VSM, LSI, lead+VSM, lead and random summaries. The x-axis represents the length of the summary, and the y-axis represents average precision values.
Figure 2: Average recall for VSM, LSI, lead+VSM, lead and random summaries. The x-axis represents the length of the summary, and the y-axis represents average recall values.
Figure 3: Average F-score for VSM, LSI, lead+VSM, lead and random summaries. The x-axis represents the length of the summary, and the y-axis represents average F-score values.
Table 1: Main characteristics of the aTunes system
Table 3: Identifiers and unique terms of each selected artifact, sorted by length
Table 4: Length properties of sentence- and term-based summaries by artifact
Table 5: Percentage of terms' origins marked by developers
Table 6: Average overlap between declaration, sentence- and term-based summaries by artifact
Table 7: Properties of gold-standard, term-based summaries by artifact
9,086
2012-08-01T00:00:00.000
[ "Computer Science" ]
EFD and CFD Design and Analysis of a Propeller in Decelerating Duct Ducted propellers, in decelerating duct configuration, may represent a possible solution for the designer to reduce cavitation and its side effects, that is, induced pressures and radiated noise; however, their design still presents challenges, due to the complex evaluation of the decelerating duct effects and to the limited amount of available experimental information. In the present paper, a hybrid design approach, adopting a coupled lifting line/panel method solver and a successive refinement with a panel solver and optimization techniques, is presented. In order to validate this procedure and provide information about these propulsors, experimental results at the towing tank and cavitation tunnel are compared with numerical predictions. Moreover, additional results obtained by means of a commercial RANS solver, not directly adopted in the design loop, are also presented, making it possible to highlight the relative merits and shortcomings of the different numerical approaches. Introduction Propeller design requirements are nowadays more and more stringent, demanding not only high efficiency and the avoidance of cavitation, but also low induced vibrations and radiated noise. Ducted propellers may represent a possible solution to this problem; although their main application, with accelerating ducts, is the improvement of efficiency at very high loading conditions (near or at bollard pull), decelerating ducts may result in improved cavitation behavior. Concepts and design methods related to these propulsors have been well known since the early 70s [1], and many different works have been presented over the years. Notwithstanding the rather long period of application (and study) of these propulsors, their design still presents many challenges which need to be analyzed, including the evaluation of the complex interaction between duct and propeller, of the duct cavitation behavior, and of its side effects, such as radiated noise and pressure pulses.
These problems are amplified when a decelerating, rather than accelerating, duct is considered; one of the reasons for these difficulties is the higher complexity of the calculation of the decelerated flow in the duct, which makes the application of conventional lifting line/lifting surface design approaches less practicable or at least not sufficiently accurate. Moreover, a further problem is represented by the lack of experimental data for this type of nozzle configuration with respect to the more conventional (and widely studied) accelerating ones. In order to alleviate the mentioned problems, in the present work a hybrid design approach is presented. As a first step, the initial estimation of the blade geometry is performed by applying a fully numerical coupled lifting line/panel method solver [2]. Traditional approaches, based on Lerbs approximations [3], are in fact unable to treat complex geometries, including the effect of the hub and, of course, for this kind of propeller, of the duct. A more robust approach is thus required, at least for their preliminary design, as well as improved analysis tools capable of assessing the complex viscous interactions that take place in the gap region between the propeller tip and the duct inner surface. This first-step geometry is successively refined by means of a panel method coupled with an optimization algorithm, adopting an approach which has already demonstrated successful results in the case of conventional CP propellers [4][5][6] with multiple design points. In the present case, the use of the panel code in the second design phase (geometry optimization) allows a more accurate evaluation of the cavity extension and of its influence on the propeller performances, thus leading to a better design. The theoretical basis of the design approach is reported in Section 2, while in Section 3 an application to a practical case is presented. Once the final geometry has been obtained, a thorough analysis of the propulsor functioning over a wide range of operating conditions, covering design and off-design points (in terms both of load and cavitation indices), is presented. This analysis was carried out applying the same panel method adopted in the design loop and a commercial RANS solver [7], in order to appreciate the capability of the two approaches to correctly capture the ducted propeller performances (mechanical characteristics and cavity inception/extension). While an accurate geometrical description of the duct (possible, within potential approaches, only with the employment of a panel method) is fundamental to capture the accelerating/decelerating nature of the nozzle, viscous effects at the duct trailing edge and at the blade tip can have, with respect to the free running propeller case, an even higher influence on the propeller characteristics. The load generated by the duct and the redistribution of load between the blade and the duct itself are, in fact, strongly dependent on the flow regime in the gap region. Sanchez-Caja et al.
[8] and Abdel-Maksoud and Heinke [9] successfully predicted the open water characteristics of accelerating ducted propellers with RANS solvers, providing valuable information (beyond the potential codes capabilities) on the features of the flow in the gap region; potential panel methods, in order to simulate these complex phenomena, need the adoption of empirical corrections (like the orifice equation, as in [10]), which may also include the effect of boundary layer on the wake pitch [11] or simplified approaches, like the tip leakage vortex [12].The accurate description of these phenomena, also through reliable viscous computations, could provide practical ideas for the design process in order to improve the robustness of the approach and the corrections to the potential flow computations.In order to validate the numerical results, an experimental campaign at the towing tank and at the cavitation tunnel was carried out, as presented in Section 4. The comparison of numerical and experimental results in correspondence to the various operating conditions considered allows to stress the merits and the shortcomings of the various approaches, as discussed in Section 5. Coupled Lifting Line/Panel Method Design Approach. In the case of lightly and moderately loaded free running propellers, operating in a nonuniform inflow, the fully numerical design approach is based on the original idea of Coney [2] for the definition, through a minimization problem, of the optimum radial circulation distribution.Traditional lifting-line approaches are, in fact, mainly based on the Betz criteria [3] for the minimum energy loss on the flow downstream of the propeller, and the satisfaction of this condition is realized by an optimum circulation distribution that is generally defined as a sinus series over the blade span.In the fully numerical design approach [2], instead, this continuous radial distribution of vorticity Γ(r) along each lifting line that models each of the propeller blades is discretized with a lattice of vortex elements of constant strength.The continuous trailing vortex sheet that represents the blade trailing wake is therefore replaced by a set of M horseshoe vortexes, each of intensity Γ(m) and each composed by two helical trailing vortexes, aligned with the hydrodynamic angle of attack and a bound vortex segment, on the propeller lifting line, as in Figure 1. 
With this discrete model the influence of the hub can be simply included by means of image vortexes [13], based on the well-known principle that a pair of two-dimensional vortexes of equal and opposite strength, located on the same line, induce no net radial velocity on a circle of radius r_h. The same result approximately holds in the case of three-dimensional helical vortexes, provided that their pitch is sufficiently high. As a consequence, in the case of propellers, the image helical vortexes representing the hub lay on cylinders whose radii can be calculated as r_ih = r_h^2 / r, where r is the radius of each vortex that models the blade, r_ih is the radius of its hub image, and r_h is the mean radius of the hub cylinder. This system of discrete vortex segments, bound to the lifting line and trailed in the wake, induces axial and tangential velocity components on each control point of the lifting line, defined as the mean point of each bound vortex segment, where boundary conditions are enforced. These self-induced velocities are computed applying the Biot-Savart law as the contribution, on each control point, of all the horseshoe vortexes modeling each blade, u_a(i) = Σ_m A_im Γ(m) and u_t(i) = Σ_m T_im Γ(m), where A_im and T_im are, as usual, the axial and tangential velocity influence coefficients of a unit horseshoe vortex placed at the m-th radial position on the i-th control point of the lifting line, and M and M_h are the total numbers of horseshoe vortexes representing the blades and their hub images (the sums run over all M + M_h of them). With this discrete model, the hydrodynamic thrust and torque characteristics of the propeller can be computed by adding the contribution of each discrete vortex on the line (Figure 1: blade equivalent lifting line, reference system, and velocities convention). In fact, under the assumption of pure potential and inviscid flow, the Kutta-Joukowski theorem applied along the lifting line gives (3), where V_tot(r) · cos β_i(r) is simply the total tangential velocity acting at the lifting line (relative inflow V_t + ω · r plus self-induced tangential velocity u_t), V_tot(r) · sin β_i(r) is the axial velocity (inflow V_a plus self-induced axial velocity u_a), and β_i is the hydrodynamic pitch angle. In discrete form, (3) leads to the corresponding summations over the vortex elements, referred to as (4) below. A variational approach [2] provides a general procedure to identify the set of discrete circulation values Γ(m) (i.e., the radial circulation distribution for each propeller blade described as the superposition of the strengths of its M horseshoe vortexes) such that the propeller torque (as computed in (4)) is minimized, while keeping the required propeller thrust T_R constant (within a certain tolerance), which is a constraint of the problem.
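The discrete relations just described can be written compactly in matrix form: the self-induced velocities follow from the influence coefficients, and thrust and torque from the Kutta-Joukowski theorem applied to each vortex segment. The sketch below is a minimal numerical rendering of those sums with placeholder inputs (influence coefficients, inflow and radii are not those of the paper); viscous corrections are omitted.

```python
import numpy as np

def lifting_line_loads(gamma, A, T, V_a, V_t, r, dr, omega, rho, Z):
    """Self-induced velocities and inviscid thrust/torque of a discretized
    lifting line with Z blades.
    gamma : (M,) circulation of the horseshoe vortexes
    A, T  : (M, M) axial/tangential influence coefficients (hub images folded in)
    V_a, V_t : (M,) axial/tangential inflow at the control points
    """
    u_a = A @ gamma                    # self-induced axial velocity
    u_t = T @ gamma                    # self-induced tangential velocity
    V_tan = V_t + omega * r + u_t      # V_tot * cos(beta_i)
    V_ax = V_a + u_a                   # V_tot * sin(beta_i)
    thrust = rho * Z * np.sum(gamma * V_tan * dr)
    torque = rho * Z * np.sum(gamma * V_ax * r * dr)
    return u_a, u_t, thrust, torque
```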
Introducing the additional unknown represented by the Lagrange multiplier λ, the problem can be solved in terms of an auxiliary function H(Γ(m), λ) = Q + λ(T − T_R) (5), requiring that its partial derivatives are zero. Carrying out the partial derivatives, (5) leads to a nonlinear system of equations for the vortex strengths and for the Lagrange multiplier, because the self-induced velocities depend, in turn, on the unknown vortex strengths themselves. The iterative solution of the nonlinear system is obtained by the linearization proposed by Coney [2], in order to achieve the optimal circulation distribution that minimizes torque with the prescribed thrust. This formulation can be further improved to design moderately loaded propellers and to include viscous effects. The initial horseshoe vortexes that represent the wake, frozen during the solution of (5), can be aligned with the velocities induced by the actual distribution of circulation and the solution iterated until convergence of the wake shape (or of the induced velocities themselves). A viscous thrust reduction, as a force acting in the direction parallel to the total velocity and thus as a function of the self-induced velocities themselves, can furthermore be added to the auxiliary function H, and a further iterative procedure, each time the chord distribution of the propeller has been determined, can be set up. In total, for the design of a single propeller, the devised procedure works with (i) an inner iterative approach for the determination of the optimal circulation distribution by the solution of the linearized version of (5), (ii) a second-level iterative approach to include the viscous drag on the optimal circulation distribution, by adding the viscous contribution to the auxiliary function H, and (iii) a third-level iterative approach to include the wake alignment and the moderately loaded case. This design procedure outlined for free running propellers can be easily extended to treat the case of ducted propellers. As for the hub, the influence of the nozzle on the performances of the propeller can be included in the numerical lifting line model by simply adding an appropriate set of image vortexes in place of the duct itself, in order to include its "wall" effect and the resulting loading of the blade tip region. With a formulation equivalent to that of (1), it is possible to define the radial location of the duct image vortexes, replacing the hub cylinder mean radius r_h with the duct cylinder mean radius r_d.
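The variational problem above (minimum torque at prescribed thrust) can also be illustrated numerically. The sketch below uses a generic constrained optimizer instead of Coney's linearization, and all inputs (influence coefficients, inflow, required thrust) are placeholder values, so it only demonstrates the structure of the problem, not the actual design code.

```python
import numpy as np
from scipy.optimize import minimize

# Toy discretization of one lifting line (placeholder data, not the paper's).
M, Z, rho, omega = 10, 4, 1025.0, 2.0 * np.pi * 25.0
r = np.linspace(0.3, 1.0, M)
dr = np.full(M, (1.0 - 0.3) / M)
V_a, V_t = np.full(M, 5.0), np.zeros(M)
rng = np.random.default_rng(0)
A = 0.05 * rng.standard_normal((M, M))   # placeholder influence coefficients
T = 0.05 * rng.standard_normal((M, M))
T_R = 4.0e4                              # required thrust [N], placeholder

def thrust(g):
    return rho * Z * np.sum(g * (V_t + omega * r + T @ g) * dr)

def torque(g):
    return rho * Z * np.sum(g * (V_a + A @ g) * r * dr)

# Stationary point of H = Q + lambda * (T - T_R): here found with SLSQP rather
# than with the linearized iterative scheme of the original formulation.
res = minimize(torque, x0=np.ones(M), method="SLSQP",
               constraints=[{"type": "eq", "fun": lambda g: thrust(g) - T_R}])
gamma_opt = res.x
```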
The presence of the duct, however, influences the propeller performances not only in terms of additional load at the tip.The shape of the nozzle (for accelerating or decelerating configurations) induces very different inflow distributions on the propeller plane, which cannot be taken into account by means of the simple addition of the image vortexes that model the "wall" condition.The main responsibles of the modified inflow at the propeller plane are, in fact, the effective shape and the thickness of the nozzle that are neglected by the vortical approach.Moreover the nozzle contributes to the total propulsive thrust, and, therefore, the design of a ducted propeller has to include this additional term.To overcome the limitation of the original approach based only on a distribution of vortexes, an iterative methodology has been devised, in order to couple the numerical lifting line design approach (for the determination of the optimal circulation distribution and of the resulting propeller geometry) with a panel method, suited for a more accurate computation of the inflow velocity distribution on the propeller plane and for the evaluation of the duct thrust force.The coupling strategy between the two codes is schematically presented in Figure 2.With respect to the procedure outlined in the case of free running propellers, the coupling with the panel method modifies the inner and the outer iterative loops.The interaction between the propeller lifting line and the duct is, in fact, achieved through induced velocities.Every time a new circulation distribution has to be computed, the panel method provides the input inflow velocity distribution V a and V t necessary in (4) and the definition of the hydrodynamic pitch angle β i needed for the determination of the trailing vortexes shape on the propeller wake.The duct (without the propeller), operating on the mean inflow generated by the set of lifting line vortexes computed at the previous design iteration, is solved by the panel method, and the mean axial and tangential velocities induced on the propeller plane are used as the input inflow for the next design step.Furthermore, once a propeller geometry has been defined, not only the frictional forces are computed and the propeller thrust is updated but also the duct thrust/resistance is calculated (by the panel method applied to the entire propeller/duct problem) and the required propeller thrust is adjusted in order to achieve the total (propeller plus duct) propulsive thrust. After the blade circulation and hydrodynamic pitch distribution have been defined, the design procedure proceeds to determine the blade geometry in terms of chord length, thickness, pitch, and camber distributions which ensure the requested sectional lift coefficient satisfying, at the same time, cavitation and strength constraints.For the calculation of blade stresses the method proposed by Connolly [14] has been preferred, while cavitation issues are solved in accordance with the approach developed by Grossi [15], in turn based upon an earlier work by Castagneto and Maioli [16] where minimum pressure coefficients on a given blade section with standard NACA shapes are semiempirically derived.A more detailed description of the design procedure may be found in Gaggero et al. [17,18]. Design by Optimization. 
The design of ducted propellers via lifting line approaches remains, however, problematic.Despite the lifting surface corrections that can be adopted for the definition of the blade geometry (through the empirical corrections proposed by VanOossanen [19] or by a dedicated lifting surface code), the influence of the blade and of the duct thickness, the nonlinearities linked with the cavitation, and the effects of the flow in the gap between the blade tip and the inner duct surface strongly affect the optimal propeller geometry.An alternative and successful way to improve the propeller performances is represented by optimization [4][5][6].The design of the ducted propeller can be improved, in fact, adopting an optimization strategy, namely, testing thousands of different geometries, automatically generated by a parametric definition of the main geometrical characteristics of the propeller (eventually also of the duct), and selecting only those able to improve the performances of the initial configuration (e.g., in terms of efficiency and cavity extension) together with the satisfaction of defined design constraints (thrust identity, first of all). Panel methods, with their extremely high computational efficiency (at a sufficient level of accuracy with respect to RANS solvers), are the natural choice for the analysis of thousands of geometries: with respect to lifting line/lifting surface models, panel methods allow to directly compute the influence of the hub and, especially, of the duct, both in terms of the additional load on the blade tip region and in terms of the velocity disturbance on the whole propeller, avoiding the simplified representation of the duct only by vortex rings and sources.Also cavitation (at least sheet cavitation both on the back and on the face blade sides) can be directly taken into account, by means of a better computation of the pressure distribution instead than by semiempirically derived minimum pressure coefficients on standard blade sections. 
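As detailed further below, the optimization explores geometries generated from a parametric description (B-spline radial distributions of pitch, chord and camber whose control points are the free variables) and retains only those satisfying the thrust constraint within a small tolerance. The following sketch illustrates that idea with hypothetical names: 'analyze' stands for the (not shown) panel-method evaluation of a candidate geometry.

```python
import numpy as np
from scipy.interpolate import BSpline

def radial_curve(ctrl, r, k=3):
    """B-spline radial distribution (e.g. pitch/D or chord/D over r/R) defined
    by its control points, which act as free variables of the optimization."""
    n = len(ctrl)                                  # requires n >= k + 1
    t = np.r_[np.full(k + 1, r[0]),
              np.linspace(r[0], r[-1], n - k + 1)[1:-1],
              np.full(k + 1, r[-1])]
    return BSpline(t, np.asarray(ctrl, float), k)(r)

def fitness(x, analyze, thrust_tol=0.02, penalty=1.0e3):
    """analyze(x) -> (efficiency, cavity_area, relative_thrust_error); the thrust
    identity is enforced within a +/-2% band through a penalty term."""
    eta, cavity, t_err = analyze(x)
    return eta - cavity - penalty * max(0.0, abs(t_err) - thrust_tol)

def next_generation(pop, scores, rng, sigma=0.02):
    """Keep the best quarter, then recombine and mutate to refill the population."""
    parents = pop[np.argsort(scores)[::-1][: max(2, len(pop) // 4)]]
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(len(parents), size=2)]
        w = rng.uniform(size=a.shape)
        children.append(w * a + (1.0 - w) * b + rng.normal(0.0, sigma, a.shape))
    return np.array(children)
```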
For the improvement of the ducted propeller performances a panel method developed at the University of Genoa [20,21] and specifically customized for the solution of cavitating ducted propellers with the inclusion of the tip gap flow corrections [10] has been adopted.Potential solvers are based on the solution of the Laplace equation for the perturbation potential φ [22], which is the counterpart of the continuity equation if the hypotheses of irrotationality, incompressibility, and absence of viscosity are assumed for the flow: Green's second identity allows to solve the threedimensional problem of (6) as a simpler integral problem involving only the surfaces that bound the computational domain.The solution is found as the intensity of a series of mathematical singularities (sources and dipoles) whose superposition models the inviscid, cavitating flow on and around the propeller.Boundary conditions (dynamic and kinematic both on the wetted and the cavitating surfaces, Kutta condition at the trailing edge, and cavity bubble closure at bubble trailing edge) close the solution of the linearized system of equations obtained from the discretization of the differential problem represented by ( 6) on a set of hyperboloidal panels representing the boundary surfaces (Figure 3) of the hub, the blade, the duct, and the relative trailing wakes.An inner iterative scheme solves the nonlinearities connected with the Kutta boundary condition while an outer cycle solves the nonlinearities due to the unknown cavity bubble extension.As usual forces are computed by integration of the pressure field, evaluated by the Bernoulli theorem, over the propeller surfaces, while the effect of viscosity is taken into account with a standard frictional line correction.With respect to the free running propeller case, the solution of the potential problem, when a ducted propeller is addressed, requires a special treatment of the flow on the gap region that could strongly influence the propeller tip loading and the distribution of load between the propeller and the duct itself.In present case a gap model with transpiration velocity (similar to that proposed by Hughes [10]) and the orifice equation are adopted.At first an additional strip of panels along the blade tip is introduced to close the gap between the propeller and the duct.Moreover, a wake strip of panels is added, for which International Journal of Rotating Machinery the dipole strength is determined again from the Kutta condition.The existence of a transpiration velocity through the gap is obtained with a modification of the kinematic boundary conditions (∂φ/∂n = −V ∞ • n for fully wetted panels) applied to the panels on the gap strip: where n c is the unit normal vector to the mean camber line at the gap strip on the same chordwise position of the panel, n is the unit normal of each panel on the gap strip, V ∞ is the total, local velocity vector, C Q is an empirical discharge coefficient (set equal to 0.85) to take into account the losses on the gap region, and ΔC P is the unknown pressure difference between the face and back side of the gap region.A further iterative scheme is, thus, required to force the boundary condition of ( 7) on the gap panels: as a first step, the problem is solved as if the gap was completely closed (C Q = 0), and the initial pressure difference is computed; in following steps, ( 7) is updated with the current value of pressure difference, and the potential problem is solved again until a certain convergence of the gap flow 
characteristics is achieved. For the application of the panel method (mainly an analysis, not a direct design code) into a design procedure through optimization, a robust parametric representation of the propeller geometry [4-6, 20, 21] is needed.The classical design table is, inherently, a parametric description of the geometry itself.All the main dimensions that define propeller geometry, like pitch, camber, and chord distribution along the radius, represent main parameters that can easily be fitted with B-Spline parametric curves, whose control points turn into the free variables of the optimization procedure, as in Figure 4.As regards the profile shape, instead of adopting standard NACA or Eppler types, with the same parametric approach it is possible to describe with only few control points thickness and camber distribution along the chord for a certain number of radial sections (or, more consistently, to adopt a B-Surface representation of the mean nondimensional propeller surface) and include also profiles in the optimization routine. The adopted optimization algorithm is of genetic type: from an initial population (whose each member is randomly created from the original geometry, altering the parameter values within prescribed ranges), successive generations are created via crossover and mutation.The members of the new generations arise from the best members of the previous one that satisfy all the imposed constraints (e.g., thrust identity) and grant better values for the objectives.To improve the convergence of the algorithm and speed up the entire procedure, a certain tolerance (within few percent points) has been allowed for the constraints, letting the inclusion of some more different geometries in the optimization loop.In particular, in the specific case presented in Section 3, thrust coefficient variations of ±2% have been accepted.Each member of the initial population is analyzed via the potential code.Results, in terms of thrust, torque, efficiency, and cavity area, are collected together with the values of the parameters that describe that given geometry.The optimization algorithm, through these data, identifies the "direction" to be followed in order to satisfy the constraints and improve the objectives until convergence is achieved (or Pareto convergence in the case that more than one objective are addressed). Analysis Tools. 
Panel methods can accurately take into account the thickness effect, the nonlinearities due to cavitation, and, even if in an approximate way, the interaction between the propeller and the nozzle, in terms both of increased loading at the tip and of inflow velocity distribution. For these reasons the optimization has been carried out performing all the calculations with a panel method customized for the solution of the ducted propeller problem. On the other hand, the accuracy and the efficiency of RANS solvers have increased significantly in recent years (see, e.g., [17,18] and, for ducted propellers, [8,9]), making RANS solutions, in many engineering cases, a reliable alternative to experimental measurements and an excellent tool to understand and visualize, for instance, the complex flow phenomena in the gap region. In addition to the panel method, hence, a commercial finite volume RANS solver, namely StarCCM+ [7], has also been adopted to evaluate the performances and the characterizing flow features (tip vortexes and cavitation) of ducted propellers and, thereby, to have a further set of results to be compared with the experimental measures. For the noncavitating computations, as usual, the continuity and momentum equations for an incompressible fluid are expressed in the averaged form ∇ · V = 0 and ρ ∂V/∂t + ρ ∇ · (V ⊗ V) = −∇p + ∇ · (μ∇V + T_Re) + S_M, in which V is the averaged velocity vector, p is the averaged pressure field, μ is the dynamic viscosity, S_M is the momentum sources vector, and T_Re is the tensor of Reynolds stresses, computed in agreement with the two-layer realizable k-ε turbulence model. In the case of cavitating flow an additional transport equation for the fraction α of liquid is needed: continuity and momentum equations are solved for a mixed fluid whose properties are a weighted mean between the fraction α of liquid and the fraction 1 − α of vapor. In turn, the continuity equation is also modified, in order to take into account the effect of cavitation through a source term, modeled by the Sauer and Schnerr [23] approach. The numerical solutions have been computed on appropriate meshes (e.g., Figures 3 and 5), whose reliability has been verified similarly to Bertetta et al.
[4][5][6].As it is well known, the quality of the mesh is a decisive factor for the overall reliability of the computed solution.From this point of view, automatic unstructured meshes, as those adopted in the present case, may pose some additional issues with respect to the more user adjustable structured ones.To limit the numerical errors and to grant smooth variations of the geometrical characteristics of the cells, where local refinements (adopted to increase the accuracy where the most peculiar phenomena of these kind of propellers are expected) have been adopted, special attention has been dedicated to define the appropriate growing factors that drive the transition of the cells dimensions and of the prism layers arrangement.For instance, depending on whether the computation is for the open water or for the cavitating condition, different modellings of the prism layer have been adopted, in the light of the local Reynolds number (i.e., turbulent layer thickness) of the model propeller tested (diameter of 230 mm) at the cavitation tunnel and of an estimated (panel method) sheet cavity thickness.The main parameters for both the computations are summarized in Table 1.The adopted mesh, as usual, is a compromise between accuracy and available computational time.In any case all the parameters that define the quality of the mesh fall within the thresholds (e.g., volume change greater than 1•10 −5 and maximum skewness angle lower than 85 • ) suggested for reliable solutions on unstructured polyhedral meshes [7].The dimensionless wall distance, as also presented in Figure 6 for the noncavitating propeller at the design point, is overall adequate for the application of the selected two-layer realizable k-ε turbulence model. Moving reference frames (for RANS computations) and key blade approaches (in the framework of the potential approach, as in Hsin [24]) have been finally adopted to exploit the axial symmetry of the problem and reduce the computational domain. 
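For the prism-layer settings mentioned above, a common way to choose the first-cell height is to target a given wall y+ using a flat-plate skin-friction estimate. The sketch below is such a rule-of-thumb estimate (not the procedure used by the authors), with the model-scale diameter and rate of revolution taken from the text and the remaining values (target y+, reference chord, water properties) assumed.

```python
import math

def first_cell_height(U, L, y_plus, rho=998.0, mu=1.0e-3):
    """First prism-layer cell height for a target y+, from the flat-plate
    skin-friction correlation cf = 0.026 Re^(-1/7)."""
    Re = rho * U * L / mu
    cf = 0.026 / Re ** (1.0 / 7.0)
    u_tau = math.sqrt(0.5 * cf) * U          # friction velocity
    return y_plus * mu / (rho * u_tau)

# Model propeller: D = 230 mm, n = 25 rev/s, advance coefficient close to 1.
n, D, J = 25.0, 0.23, 1.0
U = math.hypot(J * n * D, 0.7 * math.pi * n * D)   # resultant velocity at 0.7R
print(first_cell_height(U, L=0.2 * D, y_plus=30.0))  # chord length is assumed
```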
Design Activity The coupled lifting line/panel method design methodology and the design via optimization outlined in the previous sections have been employed for the definition of an optimal four-blade decelerating ducted propeller operating in moderately loaded conditions. The duct is shaped as an NACA profile, and its geometry, given for the preliminary design through the lifting line/panel method procedure, has been maintained unaltered also for the optimization. The complete details of the nozzle geometry cannot be provided for industrial reasons. A prescribed total (propeller plus duct) thrust coefficient, to be satisfied at an advance coefficient close to 1 and at a cavitation index (based on the number of revolutions) of about 1.5, has been assumed for the design of the propeller. The resulting preliminary geometry (having a pitch over diameter ratio at the 0.7 radial section of about 1.33) from the coupled lifting line/panel method design procedure is presented in Figure 7, in which both the panel method and the RANS results in terms of predicted cavity extension are shown. The agreement between the numerical results from both approaches is satisfactory, showing similar cavitation extents. In terms of predicted thrust, the effectiveness of the design is confirmed by the numerical computations. The value predicted by the RANS is very close (about 2% lower) to the required total thrust assumed for the design; this discrepancy was deemed acceptable. The numerical predictions of the thrust and of the torque obtained with the panel method for the same preliminary geometry show, instead, some differences with respect to the RANS computations: panel method results tend to slightly overpredict the required total thrust and, as a consequence, the RANS calculations. For the optimization it has been assumed that these differences, ascribed to the numerical approach, remain the same also for the newly designed propellers, and the numerical predictions by the panel method obtained for the preliminary propeller designed with the coupled lifting line approach have been taken as the reference point of the optimization procedure. Starting from this preliminary geometry, the optimization of the propeller has been carried out in order to obtain a new geometry able to maximize efficiency and to reduce back cavitation at the design cavitation index with the same numerical delivered thrust (within a range of ±2% to speed up the convergence) computed for the preliminary design.
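For reference, the cavitation index based on the rate of revolution mentioned in the design specification is usually defined as σ_n = (p0 − pv) / (0.5 ρ n² D²); the small helper below evaluates it with placeholder pressures, since the actual test conditions are not given here.

```python
def cavitation_index_n(p0, pv, n, D, rho=998.0):
    """Cavitation index based on the rate of revolution:
    sigma_n = (p0 - pv) / (0.5 * rho * n**2 * D**2)."""
    return (p0 - pv) / (0.5 * rho * n**2 * D**2)

# Placeholder static and vapour pressures for the 230 mm model at n = 25 rev/s.
print(cavitation_index_n(p0=27000.0, pv=2300.0, n=25.0, D=0.23))
```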
Also in terms of cavity extension, some limitations of the panel approach can be highlighted. Previous experience at the cavitation tunnel with similar decelerating ducted propellers (and also the viscous computations on the preliminary geometry presented in Figure 7) showed that the tip leakage vortex, whose prediction is, of course, beyond the capabilities of the cavitating panel method, is one of the dominating cavitating phenomena. While for the RANS the prediction of cavitating vortexes can be considered reliable, for the panel method it is clear that only an artificial sheet cavity bubble is computed on the outermost strip of panels at the tip. This sheet cavitation, which has been numerically evidenced at the blade tip, could however be correlated (this assumption will be partially confirmed by the experimental campaign) with the occurrence of the tip cavitating vortex, and its extension (to be, as a consequence, minimized) could be considered a measure of the risk (or strength) of this kind of cavitation. The risk of midchord bubbles at the tip is, moreover, evidenced by both methods: the RANS vapor isosurface (fraction of vapor equal to 0.5) covers the blade at its trailing edge just below the tip, while, by the panel method, a sheet cavity bubble is again predicted at midchord near the same position. In order to numerically amplify the sheet cavity bubble at the blade tip, to include a certain margin for the occurrence of bubble cavitation, and to let the optimization work at a more convenient point (for which the cavity extension is not constrained by the dimension of the few panels at the blade leading edge), the design of the new propeller via optimization has been carried out at a slightly lower cavitation index with respect to the design point (90% of it), as also mentioned in Figure 7. The optimization activity for the design of the new propeller has been carried out investigating only global parameters, that is, maintaining the profile shape adopted for the preliminary design. In particular, chord, maximum camber and pitch distributions along the radius have been considered in the optimization, taking the control points of the related curves as free variables. Maximum thickness has been constrained to the chord distribution in order to achieve the same blade strength as the initial propeller. About 20 thousand different geometries have been generated and analyzed by the panel method; the results of the optimization are reported in the Pareto diagram of Figure 8. Feasible geometries are those that satisfy all the prescribed constraints (thrust identity, for instance), while unfeasible points represent the performances, in terms of efficiency and cavity extension, of the geometries that do not satisfy the constraints.
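Selecting the non-dominated geometries from the cloud of feasible designs (efficiency to be maximized, cavity extension to be minimized) is a standard Pareto-front extraction; a minimal sketch follows, with the two objective arrays as placeholders for the values produced by the panel-method analyses.

```python
def pareto_front(efficiency, cavity_area):
    """Indices of non-dominated designs: higher efficiency and smaller cavity
    extension are both preferred."""
    points = list(zip(efficiency, cavity_area))
    front = []
    for i, (e_i, c_i) in enumerate(points):
        dominated = any((e_j >= e_i and c_j <= c_i and (e_j > e_i or c_j < c_i))
                        for j, (e_j, c_j) in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front

# Placeholder objective values for four feasible geometries.
print(pareto_front([0.60, 0.62, 0.61, 0.63], [1.0, 0.8, 0.6, 0.9]))
```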
One of the most powerful aspects of the optimization is the availability, at the end of the design process, of an entire set of geometries, all satisfying the design criteria (within the limits of the adopted flow solvers) with different compromises regarding the design objectives.Among the Pareto frontiers, thereby, it is possible to select a new geometry, as a balance between increase of efficiency and reduction of cavity extension also in the light of the designer experience.The new optimal geometry (having a pitch over diameter ratio at 0.7 radial section of about 1.32), as highlighted in Figure 8, grants a numerical reduction of the cavity extension of about 40% and an increase in efficiency slightly greater than 2% at the same working point of the preliminary geometry, and, as expected, the total thrust computed by the panel method is within the prescribed numerical tolerance of ±2%.Also for the optimal geometry a more accurate RANS computation has been carried out, whose results, in terms of predicted cavity extension, are reported in Figure 9.As wished, RANS results confirm the effectiveness of the optimization approach: the total propeller thrust of the optimized propeller is, as well as the total thrust of the preliminary design, 2% lower than the requested design thrust, with an efficiency 1.8% greater than that computed by the RANS in the case of the preliminary design.The predicted RANS cavity extension is itself in agreement with the panel method analysis (with the limits already underlined by the analysis of the cavity prediction of the preliminary design), and, at least qualitatively, it is possible to evidence a nonnegligible reduction with respect to the same kind of numerical analysis presented in Figure 7 for the preliminary geometry. Experimental Campaign Once the final geometry has been chosen, a series of model tests (open water tests and cavitation tunnel tests) has been performed in order to validate the numerical results.The model used throughout the tests (having a diameter of 230 mm) is reported in Figure 10. In particular, open water tests have been carried out at SVA towing tank, using a Kempf & Remmers propeller dynamometer H39 and a R35X balance for the measurement of duct thrust.A constant propeller rate of revolution (15 Hz) was adopted during tests. 
Cavitation tunnel tests have been carried out, instead, at the University of Genoa cavitation tunnel. The facility, represented in Figure 11, is again a Kempf & Remmers closed water circuit tunnel with a squared testing section of 0.57 m × 0.57 m and a total length of 2 m, in which conventional propeller cavitating behaviour [25] and SPP propeller characteristics [26] are usually tested. The nozzle contraction ratio is 4.6 : 1, and the maximum flow speed in the testing section is 8.5 m/s. The vertical distance between the horizontal ducts is 4.54 m, while the horizontal distance between the vertical ducts is 8.15 m. The flow speed in the testing section is measured by means of a differential venturimeter with two pressure plugs immediately upstream and downstream of the converging part. A depressurization system allows the pressure in the circuit to be lowered to near vacuum, in order to simulate the correct cavitation index for propellers and profiles. The tunnel is equipped with a Kempf & Remmers H39 dynamometer, which measures the propeller thrust, the torque, and the rate of revolution. As usual, a mobile stroboscopic system allows cavitation phenomena on the propeller blades to be visualized. Moreover, visualization of cavitation phenomena in the testing section is also performed with two Allied Vision Tech Marlin F145B2 Firewire cameras, with a resolution of 1392 × 1040 pixels and a frame rate up to 10 fps. As regards the duct forces, an in-house developed measuring device has been adopted. In particular, a cavitation tunnel window was modified to hold an aluminium alloy plate coupled by welding to an aluminium alloy hollow bar [27]. Force measurement is performed by means of strain gauges directly applied on the hollow bar. This instrumentation was successfully tested against towing tank results, as reported in Bertetta et al. [4][5][6], where further details about its development and calibration may also be found. In order to avoid vortex shedding from the hollow bar in the tunnel flow, which can affect bar integrity and dramatically decrease its fatigue life, a screening foil was adopted. The foil shape was selected in order to postpone cavitation, thus limiting, as far as possible, the additional noise due to the presence of the measuring device. In Figure 12 the final measurement setup at the cavitation tunnel is shown. All tests were carried out without longitudinal shaft inclination and in a uniform wake, consistently with the design assumptions previously described. A constant propeller rate of revolution (25 Hz) was adopted. Open Water. Model scale open water computations, compared with measurements carried out at the SVA towing tank, are shown in Figure 13. Results are reported in nondimensional form, with the normalization carried out with respect to the towing tank values at the design point. The measurements substantially confirm the design procedure. The optimized propeller, at the design point, has a slightly lower (about 3%) thrust with respect to that required during the design. The RANS computations, which were assumed as a validation of the preliminary and the optimized designs (predicting values of total thrust 2% lower than that assumed for the design, as in the previous section), are very close to the experimental point, with a discrepancy (overestimation) in terms of the total thrust of less than 1%.
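The open-water results above are reported in non-dimensional form; for completeness, a small helper computing the standard coefficients (with the duct contribution added to the propeller thrust) is sketched below, to be normalized afterwards by the design-point values as in the figures. The numerical inputs are placeholders.

```python
import math

def open_water_coefficients(T_prop, T_duct, Q, V_A, n, D, rho=998.0):
    """Standard non-dimensional coefficients for a ducted propeller:
    J = V_A/(nD), K_T = (T_prop + T_duct)/(rho n^2 D^4),
    K_Q = Q/(rho n^2 D^5), eta_0 = J K_T / (2 pi K_Q)."""
    J = V_A / (n * D)
    KT = (T_prop + T_duct) / (rho * n**2 * D**4)
    KQ = Q / (rho * n**2 * D**5)
    eta0 = J * KT / (2.0 * math.pi * KQ)
    return J, KT, KQ, eta0

# Placeholder model-scale values (D = 230 mm, n = 15 rev/s as in the towing tank tests).
print(open_water_coefficients(T_prop=320.0, T_duct=-40.0, Q=14.0,
                              V_A=3.45, n=15.0, D=0.23))
```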
Since calculations have been carried out over a rather large range of advance coefficient values, not limited to the design point, it is worth discussing them, examining in particular the existing discrepancies. Considering thrust coefficients, RANS computations can be considered sufficiently reliable also in off-design conditions. Particularly at lower values of advance, the agreement between the measured and the computed total (propeller plus nozzle) thrust is good, even if it has to be remarked that the propeller thrust is slightly overestimated while the nozzle thrust is slightly underpredicted, with their sum very close to the experiments as the two diverging errors counterbalance each other. Only at higher values of advance are thrust predictions significantly different from the measurements, confirming the very complex nature of the flow inside highly loaded decelerating nozzles. This difference may be mainly due to a progressively increasing error in the prediction of the propeller thrust alone, probably caused by a not completely satisfactory modelling of the mutual interaction between duct and propeller. Discrepancies in torque prediction (and, in turn, in efficiency) are, instead, more evident. At the design point the numerical overestimation is about 10%, and the torque curve is almost constantly shifted vertically with respect to the experimental measurements. Unfortunately, panel method predictions amplify the discrepancies already highlighted for the RANS. Even if at lower advance coefficients the total thrust values (as a counterbalance of propeller and nozzle thrust predictions) are deemed acceptable, with a 7% difference at the design point, at higher advances the differences with respect to the measurements increase, confirming the limitations of the adopted panel method when applied to solve the complex flow phenomena that occur, for instance, in the gap region. The discrepancy in the torque coefficient is comparable to that obtained with RANS at the design point, with differences in off-design conditions in line with the thrust coefficient behaviour. As already mentioned, the discrepancies between numerical results and measurements may be due to an incorrect capturing of the flow in the decelerating duct. Previous calculations with RANS and the panel method on a propeller with an accelerating duct, in fact, provided lower discrepancies (similar to those found in [9,11]) at the functioning points with lower values of advance coefficient, where an accelerating effect exists, while higher discrepancies were found when the duct functioning is reversed, that is, when a decelerating effect is present. Unfortunately, decelerating duct configurations are scarcely considered in the literature, especially as regards numerical calculations, limiting the possibility of comparisons.
Notwithstanding this problem, the quality of the present results, despite considerable discrepancies especially in terms of torque coefficient, is deemed acceptable in the context of the proposed design procedure, since it proved able to rank different propellers. In particular, three different geometries with the same decelerating duct were numerically and experimentally tested in a parallel activity. The relative trends, in terms of efficiency, were correctly captured by both panel and RANS methods, which succeeded in correctly ranking the three different propellers, with panel methods slightly amplifying the differences found with experiments and RANS slightly smoothing them. Complete results cannot be included for industrial reasons. Currently, in the context of the propeller design activity, the presented calculation parameters were therefore considered a reasonable compromise between calculation accuracy and required computational time. In order to have a better insight into the problem, a first analysis with RANS has been carried out considering the possible influence of turbulence models. In particular, k-ω and RST turbulence closure equations were adopted, keeping the mesh parameters constant. The results did not present significant modifications and are therefore omitted. Possible further analyses, which will be carried out in future activities, will consider the influence of the adoption of more refined meshes and of structured grids, with the aim of better capturing flow features in the small gap region. Regarding the panel method, a possible future improvement can be represented by the introduction of a better trailing wake model, as proposed by Baltazar et al. [11]. Cavitating Conditions. The cavity extension observed at the cavitation tunnel of the University of Genoa has been compared with the numerical computations carried out with the panel method and the RANS solver. Four different functioning points, in addition to the design one, have been considered. Two of the four points have the same design thrust coefficient and different cavitation indexes, while the other two have the same cavitation index but increased and decreased values of the thrust coefficient. All the comparisons have been carried out at the same (experimental and numerical) thrust coefficient in order to minimize the discrepancies between measurements and numerical computations highlighted for the open water case. As a first step, the cavitation extension at the design point is considered. Results are shown in Figure 14: a cavitating tip leakage vortex is the only noticeable phenomenon, which extends onto the duct.
A satisfactory agreement is found between the experimentally observed phenomena and the RANS numerical calculation, which shows that the propeller is cavitation-free, except for the tip leakage vortex, whose existence is captured. Moreover, the numerical tip vortex also shows a pitch very similar to the observed one, which is considerably lower than would be expected in the case of a conventional propeller. This feature is probably due to the local effect of the duct wake and of the flow in the gap region. Nevertheless, the numerically predicted extension of the cavity area appears larger than the experimental one, which develops only from about midchord. It is important to underline that the vortex appeared considerably unstable during the experiments, with a variable extension (the photograph presented may be considered a mean value). This instability, which is strongly influenced by local characteristics of the propeller (including tip geometry and tip gap), was particularly evident at the design point, while it decreased at lower and higher loadings of the propeller. Panel method results are also satisfactory, showing no cavitation on the propeller blade apart from a very limited number of panels at the tip, which may be considered an indication of the existence of the tip leakage vortex, which, by its nature, cannot be captured by the method. As a whole, therefore, the results confirm the reliability of the adopted design procedure, which allowed an almost completely cavitation-free propeller to be obtained, apart from the tip leakage vortex. It has to be noted, however, that, as is well known, the cavitating tip vortex may be problematic, especially if the aim of the designer is the reduction of the noise level. Future analyses will be carried out in order to compare the peculiar tip leakage vortex effect with typical conventional propeller noise levels and to analyze the factors (propeller geometry at the tip, propeller/duct clearance, effective flow) linked to the generation and behavior of this phenomenon. Off-design comparisons are shown in Figures 15 and 16, obtained by varying the cavitation number at the same design thrust coefficient, and in Figures 17 and 18, obtained with different loadings at the same design cavitation number.
Again, keeping in mind the intrinsic limitations of the numerical approaches, a satisfactory agreement with experiments can be evidenced. In particular, the panel method allows the strength of the tip leakage vortex to be ranked by means of the number of cavitating panels at the tip. A very good correlation exists, in fact, between the evaluated cavitation extension and the dimension of the tip leakage vortex; the only functioning condition in which no cavitating panel is present is the one with the lowest loading, where the tip leakage vortex is effectively very weak. This consideration allows stating that the panel method, at least to some extent, may be adopted, beyond its usual application, also as a tool to reduce (coupled with optimization techniques) the tip leakage vortex. However, this consideration has to be further investigated, since it is likely that, once the differences at the tip reduce to small details, panel methods would fail to rank different solutions. Calculations with RANS, which again show a very good general agreement with experimental observations, might be a more reliable alternative if the interest is in the tip leakage vortex development, even if again a general overestimation of the phenomenon is visible. Considering other phenomena, it can be observed that they are very limited even though the analysis has covered a rather wide range of functioning points, confirming the satisfactory result of the design process. Tip back cavitation bubbles, observed at design loading and lower cavitation number and at the higher loading condition, are correctly predicted only with the RANS method, which also captures their extension satisfactorily. In the case of the panel method, midchord cavitation starts at slightly lower values of the cavitation index, thus underlining a lower accuracy of the method from this point of view, as could be expected. Sheet cavitation at the tip (as an extension of the tip leakage vortex) is particularly evident at the highest loading condition. In this case, both methods appear to capture it correctly. Finally, root back bubbles were observed at the lowest cavitation number, while they are not present in either of the numerical calculations. This phenomenon, however, may be at least partially ascribed to the local manufacturing of the propeller model, which presents a not completely satisfactory finishing at the leading edge, which probably tends to anticipate cavitation. Moreover, the propeller geometry adopted in both calculations does not consider the effective hub/blade root fillet, resulting in a lower local profile thickness at the root, which may tend to postpone this phenomenon.
Conclusions In the present paper, a hybrid approach for ducted propeller design has been presented. An application of the method to the design of a propeller in a decelerating duct is reported, together with the results of an experimental campaign at the towing tank and at the cavitation tunnel, carried out in order to validate the procedure. The comparison of the numerical results and the measurements confirms the validity of the approach, which allows a propeller to be obtained that, in addition to satisfying the required mechanical characteristics, is almost completely cavitation-free, apart from the presence of the tip leakage vortex. Considering also off-design points, cavitating phenomena and their extensions are satisfactorily predicted by the adopted methods within their limitations, with the RANS solver able to correctly capture also bubble cavitation and the tip leakage vortex, even if slightly overestimating the latter: a correct capturing of the local phenomena at the tip is of particular importance for a propeller in a decelerating duct, whose main aim is to reduce cavitation phenomena, since it may represent an undesired source of noise. The nature of this phenomenon has to be further investigated, both numerically and experimentally, evaluating the influence of local geometrical characteristics. From this point of view, a combined application of RANS solvers, with their ability to capture the complex flow characteristics including viscous effects, and panel methods, whose low computational requirements allow a very high number of solutions to be tested within a reasonable time, seems to represent a very promising and convenient approach.
Figure 2: Flow chart for the coupled lifting line/panel method design approach.
Figure 3: Panel representation of the ducted propeller. Tip gap region in red.
Figure 4: B-Spline representation of radial distributions of chord and pitch.
Figure 5: Polyhedral mesh arrangements for the ducted propeller - open water computations.
Figure 6: Wall Y+ on propeller and duct - open water computations at the design condition.
Figure 7: Predicted cavity extension for the preliminary propeller geometry at the design advance coefficient. (a) Panel method computations at 90% of the design cavitation index. (b) RANS computations at the design cavitation index.
Figure 9: Predicted cavity extension for the optimized propeller geometry at the design advance coefficient. (a) Panel method computations at 90% of the design cavitation index. (b) RANS computations at the design cavitation index.
Figure 14: Observed and predicted (RANS and panel method) cavity extension at the design point.
Figure 15: Observed and predicted (RANS and panel method) cavity extension. Design thrust coefficient at 135% of the design cavitation index.
Figure 16: Observed and predicted (RANS and panel method) cavity extension. Design thrust coefficient at 80% of the design cavitation index.
Figure 17: Observed and predicted (RANS and panel method) cavity extension. 70% of the design thrust coefficient at the design cavitation index.
Figure 18: Observed and predicted (RANS and panel method) cavity extension. 130% of the design thrust coefficient at the design cavitation index.
Dark Matter and the elusive Z′ in a dynamical Inverse Seesaw scenario The Inverse Seesaw naturally explains the smallness of neutrino masses via an approximate B − L symmetry broken only by a correspondingly small parameter. In this work the possible dynamical generation of the Inverse Seesaw neutrino mass mechanism from the spontaneous breaking of a gauged U(1) B − L symmetry is investigated. Interestingly, the Inverse Seesaw pattern requires a chiral content such that anomaly cancellation predicts the existence of extra fermions belonging to a dark sector with large, non-trivial, charges under the U(1) B − L. We investigate the phenomenology associated to these new states and find that one of them is a viable dark matter candidate with mass around the TeV scale, whose interaction with the Standard Model is mediated by the Z′ boson associated to the gauged U(1) B − L symmetry. Given the large charges required for anomaly cancellation in the dark sector, the B − LZ′ interacts preferentially with this dark sector rather than with the Standard Model. This suppresses the rate at direct detection searches and thus alleviates the constraints on Z′-mediated dark matter relic abundance. The collider phenomenology of this elusive Z′ is also discussed. Introduction The simplest and most popular mechanism to accommodate the evidence for neutrino masses and mixings [1][2][3][4][5][6] and to naturally explain their extreme smallness, calls upon the introduction of right-handed neutrinos through the celebrated Seesaw mechanism [7][8][9][10][11][12]. Its appeal stems from the simplicity of its particle content, consisting only of the righthanded neutrinos otherwise conspicuously missing from the Standard Model (SM) ingredients. In the Seesaw mechanism, the smallness of neutrino masses is explained through the ratio of their Dirac masses and the Majorana mass term of the extra fermion singlets. Unfortunately, this very same ratio suppresses any phenomenological probe of the existence of this mechanism. Indeed, either the right-handed neutrino masses would be too large to be reached by our highest energy colliders, or the Dirac masses, and hence the Yukawa interactions that mediate the right-handed neutrino phenomenology, would be too small for even our more accurate precision probes through flavour and precision electroweak observables. However, a large hierarchy of scales is not the only possibility to naturally explain the smallness of neutrino masses. Indeed, neutrino masses are protected by the B − L (Baryon minus Lepton number) global symmetry, otherwise exact in the SM. Thus, if this symmetry is only mildly broken, neutrino masses will be necessarily suppressed by the small B − L-breaking parameters. Conversely, the production and detection of the extra right-handed neutrinos at colliders as well as their indirect effects in flavour and JHEP10(2017)169 precision electroweak observables are not protected by the B − L symmetry and therefore not necessarily suppressed, leading to a much richer and interesting phenomenology. This is the rationale behind the popular Inverse Seesaw Mechanism [13] (ISS) as well as the Linear [14,15] and Double Seesaw [13,[16][17][18] variants. In the presence of right-handed neutrinos, B − L is the only flavour-universal SM quantum number that is not anomalous, besides hypercharge. Therefore, just like the addition of right-handed neutrinos, a very natural plausible SM extension is the gauging of this symmetry. 
In this work these two elements are combined to explore a possible dynamical origin of the ISS pattern from the spontaneous breaking of the gauged B − L symmetry. Previous models in the literature have been constructed using the ISS idea or gauging B − L to explain the smallness of the neutrino masses, see e.g. [19][20][21][22][23][24]. A minimal model in which the ISS is realised dynamically and where the smallness of the Lepton Number Violating (LNV) term is generated at the two-loop level was studied in [25]. Concerning U(1) B−L extensions of the SM with an ISS generation of neutrino masses, several models have been investigated [26][27][28][29]. A common origin of both sterile neutrinos and Dark Matter (DM) has been proposed in [30,31]. An ISS model which incorporates a keV sterile neutrino as a DM candidate was constructed in e.g. [32]. Neutrino masses break B − L, if this symmetry is not gauged and dynamically broken, a massless Goldstone boson, the Majoron, appears in the spectrum. Such models have been investigated for example in [30,33]. Interestingly, since the ISS mechanism requires a chiral pattern in the neutrino sector, the gauging of B −L predicts the existence of extra fermion singlets with non-trivial charges so as to cancel the anomalies. We find that these extra states may play the role of DM candidates as thermally produced Weakly Interacting Massive Particles (WIMPs) (see for instance [34,35] for a review). Indeed, the extra states would form a dark sector, only connected to the SM via the Z gauge boson associated to the B − L symmetry and, more indirectly, through the mixing of the scalar responsible for the spontaneous symmetry breaking of B − L with the Higgs boson. For the simplest charge assignment, this dark sector would be constituted by one heavy Dirac and one massless Weyl fermion with large B − L charges. These large charges make the Z couple preferentially to the dark sector rather than to the SM, making it particularly elusive. In this work the phenomenology associated with this dark sector and the elusive Z is investigated. We find that the heavy Dirac fermion of the dark sector can be a viable DM candidate with its relic abundance mediated by the elusive Z . Conversely, the massless Weyl fermion can be probed through measurements of the relativistic degrees of freedom in the early Universe. The collider phenomenology of the elusive Z is also investigated and the LHC bounds are derived. The paper is structured as follows. In section 2 we describe the features of the model, namely its Lagrangian and particle content. In section 3 we analyse the phenomenology of the DM candidate and its viability. The collider phenomenology of the Z boson is discussed in section 4. Finally, in sections 5 and 6 we summarise our results and conclude. JHEP10(2017)169 2 The model The usual ISS model consists of the addition of a pair of right-handed SM singlet fermions (right-handed neutrinos) for each massive active neutrino [13,[36][37][38]. These extra fermion copies, say N R and N R , carry a global Lepton Number (LN) of +1 and −1, respectively, and this leads to the following mass Lagrangian where Y ν is the neutrino Yukawa coupling matrix, H = iσ 2 H * (H being the SM Higgs doublet) and L is the SM lepton doublet. Moreover, M N is a LN conserving matrix, while the mass matrix µ breaks LN explicitly by 2 units. 
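For reference, the ISS structure described above can be summarised schematically as follows; this is the textbook form of the mechanism, and the exact basis and sign conventions of the paper may differ:

-\mathcal{L}_m \;=\; Y_\nu\,\overline{L}\,\widetilde{H}\,N_R \;+\; M_N\,\overline{N_R^{c}}\,N_R' \;+\; \tfrac{1}{2}\,\mu\,\overline{N_R'^{\,c}}\,N_R' \;+\; \mathrm{h.c.},
\qquad
m_\nu \;\simeq\; m_D\,\big(M_N^{T}\big)^{-1}\,\mu\,M_N^{-1}\,m_D^{T},
\qquad m_D \;=\; \frac{Y_\nu\,v}{\sqrt{2}},

so that for m_D/M_N ~ 10^{-2} a LNV scale µ ~ keV yields light-neutrino masses at the sub-eV level.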
The right-handed neutrinos can be integrated out, leading to the Weinberg operator [39] which generates masses for the light, active neutrinos of the form: Having TeV-scale right-handed neutrinos (e.g. motivated by naturalness [40,41]) and O(1) Yukawa couplings would require µ ∼ O(keV). In the original ISS formulation [13], the smallness of this LNV parameter arises from a superstring inspired E6 scenario. Alternative explanations call upon other extensions of the SM such as Supersymmetry and Grand Unified Theories (see for instance [15,42]). Here a dynamical origin for µ will be instead explored. The µ parameter is technically natural: since it is the only parameter that breaks LN, its running is multiplicative and thus once chosen to be small, it will remain small at all energy scales. To promote the LN breaking parameter µ in the ISS scenario to a dynamical quantity, we choose to gauge the B − L number [43]. The spontaneous breaking of this symmetry will convey LN breaking, generate neutrino masses via a scalar vev, and give rise to a massive vector boson, dubbed here Z . B − L is an accidental symmetry of the SM, and it is well motivated in theories in which quarks and leptons are unified [44][45][46][47]. In unified theories, the chiral anomalies cancel within each family, provided that SM fermion singlets with charge +1 are included. In the usual ISS framework, this is not the case due to the presence of right-handed neutrinos with charges +1 and −1. The triangle anomalies that do not cancel are those involving three U(1) B−L vertices, as well as one U(1) B−L vertex and gravity. Therefore, to achieve anomaly cancellation for gauged B − L we have to include additional chiral content to the model with charges that satisfy where the first and second equation refer to the mixed gravity-U(1) B−L and U(1) 3 B−L anomalies, respectively. The index i runs through all fermions of the model. In the following subsections we will discuss the fermion and the scalar sectors of the model in more detail. The fermion sector Besides the anomaly constraint, the ISS mechanism can only work with a certain number of N R and N R fields (see, e.g., ref. [48]). We find a phenomenologically interesting and viable scenario which consists of the following copies of SM fermion singlets and their respective B −L charges: 3 N R with charge −1; 3 N R with charge +1; 1 χ R with charge +5; 1 χ L with charge +4 and 1 ω with charge +4. 1 Some of these right-handed neutrinos allow for a mass term, namely, M N N c R N R , but to lift the mass of the other sterile fermions and to generate SM neutrino masses, two extra scalars are introduced. Thus, besides the Higgs doublet H, the scalar fields φ 1 with B − L charge +1 and φ 2 with charge +2 are considered. The SM leptons have B − L charge −1, while the quarks have charge 1/3. The scalar and fermion content of the model, related to neutrino mass generation, is summarised in table 1. The most general Lagrangian in the neutrino sector is then given by 2 where the capitalised variables are to be understood as matrices (the indices were omitted). The singlet fermion spectrum splits into two parts, an ISS sector composed by ν L , N R , and N R , and a dark sector with χ L and χ R , as can be seen in the following mass matrix written in the basis (ν c L , N R , N R , χ c L , χ R ): The dynamical equivalent of the µ parameter can be identified with Y N φ * 2 . 
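The two anomaly-cancellation conditions referred to in the text are simply the vanishing of the sum of the B − L charges and of their cubes over all chiral fermions, \sum_i q_i = 0 and \sum_i q_i^3 = 0. As a quick check of the quoted charge assignment (using our own bookkeeping, not the paper's notation: the SM fermions together with the three N_R of charge −1 already cancel among themselves, and right-handed fields enter with −q while left-handed fields enter with +q), the remaining fields give

-3\,(+1) \;-\; (+5) \;+\; (+4) \;+\; (+4) \;=\; 0,
\qquad
-3\,(+1)^3 \;-\; (+5)^3 \;+\; (+4)^3 \;+\; (+4)^3 \;=\; -3 - 125 + 64 + 64 \;=\; 0,

so both conditions are indeed satisfied.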
3 After φ 1 develops a vacuum expectation value (vev) a Dirac fermion χ = (χ L , χ R ) and a massless 1 Introducing 2 NR and 3 N R as for example in [32] leads to a keV sterile neutrino as a potentially interesting warm DM candidate [49] in the spectrum due to the mismatch between the number of NR and N R . However, the relic abundance of this sterile neutrino, if thermally produced via freeze out, is an order of magnitude too large. Thus, in order to avoid its thermalisation, very small Yukawa couplings and mixings must be adopted instead. 2 Notice that a coupling φ * 1 ωYωχR, while allowed, can always be reabsorbed into φ * 1 χLYχχR through a rotation between ω and χL. 3 The analogous term YN φ2 -also dynamically generated -contributes to neutrino masses only at the one-loop level and is therefore typically sub-leading. JHEP10(2017)169 fermion ω are formed in the dark sector. Although the cosmological impact of this extra relativistic degree of freedom may seem worrisome at first, we will show later that the contribution to N eff is suppressed as this sector is well secluded from the SM. To recover a TeV-scale ISS scenario with the correct neutrino masses and O(1) Yukawa GeV is the electroweak vev) and M R ∼ TeV are needed. Moreover, the mass of the B − L gauge boson will be linked to the vevs of φ 1 and φ 2 , and hence to lift its mass above the electroweak scale will require v 1 ≡ φ 1 TeV. In particular, we will show that a triple scalar coupling ηφ 2 1 φ * 2 can induce a small v 2 even when v 1 is large, similar to what occurs in the type-II seesaw [12,[52][53][54][55]. After the spontaneous symmetry breaking, the particle spectrum would then consist of a B − L gauge boson, 3 pseudo-Dirac neutrino pairs and a Dirac dark fermion at the TeV scale, as well as a massless dark fermion. The SM neutrinos would in turn develop small masses via the ISS in the usual way. Interestingly, both dark fermions only interact with the SM via the new gauge boson Z and via the suppressed mixing of φ 1 with the Higgs. They are also stable and thus the heavy dark fermion is a natural WIMP DM candidate. Since all new fermions carry B − L charge, they all couple to the Z , but specially the ones in the dark sector which have larger B − L charge. The scalar sector The scalar potential of the model can be written as Both m 2 H and m 2 1 are negative, but m 2 2 is positive and large. Then, for suitable values of the quartic couplings, the vev of φ 2 , v 2 , is only induced by the vev of φ 1 , v 1 , through η and thus it can be made small. With the convention φ j = (v j + ϕ j + i a j )/ √ 2 and the neutral component of the complex Higgs field given by where G Z is the Goldstone associated with the Z boson mass), the minimisation of the potential yields (2.11) Clearly, when η → 0 or m 2 2 → ∞, the vev of φ 2 goes to zero. For example, to obtain v 2 ∼ O(keV), one could have m 2 ∼ 10 TeV, v 1 ∼ 10 TeV, and η ∼ 10 −5 GeV. The neutral JHEP10(2017)169 scalar mass matrix is then given by Higgs data constrain the mixing angle between Re(H 0 ) and Re(φ 0 1 ) to be below ∼ 30% [56]. Moreover, since η m 2 , v 1 , the mixing between the new scalars is also small. Thus, the masses of the physical scalars h, ϕ 1 and ϕ 2 are approximately while the mixing angles α 1 and α 2 between h − ϕ 1 and ϕ 1 − ϕ 2 , respectively, are (2.14) If v 1 ∼ TeV and the quartics λ 1 and λ 1H are O(1), the mixing α 1 is expected to be small but non-negligible. 
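As a rough consistency check of the induced vev discussed above (the exact numerical prefactor depends on the conventions chosen for the scalar potential), the trilinear coupling gives

v_2 \;\simeq\; \frac{\eta\, v_1^{2}}{\sqrt{2}\, m_2^{2}},

so that η ~ 10^{-5} GeV, v_1 ~ 10 TeV and m_2 ~ 10 TeV indeed give v_2 of a few keV, in line with the numbers quoted in the text.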
A mixing between the Higgs doublet and a scalar singlet can only diminish the Higgs couplings to SM particles. Concretely, the couplings of the Higgs to gauge bosons and fermions, relative to the SM couplings, are which is constrained to be cos α 1 > 0.92 (or equivalently sin α 1 < 0.39) [57]. Since the massless fermion does not couple to any scalar, and all other extra particles in the model are heavy, the modifications to the SM Higgs couplings are the only phenomenological impact of the model on Higgs physics. The other mixing angle, α 2 , is very small since it is proportional to the LN breaking vev and thus is related to neutrino masses. Its presence will induce a mixing between the Higgs and ϕ 2 , but for the parameters of interest here it is unobservable. Besides Higgs physics, the direct production of ϕ 1 at LHC via its mixing with the Higgs would be possible if it is light enough. Otherwise, loop effects that would change the W mass bound can also test this scenario imposing sin α 1 0.2 for m ϕ 1 = 800 GeV [56]. Apart from that, the only physical pseudoscalar degree of freedom is and its mass is degenerate with the heavy scalar mass, m A m ϕ 2 . We have built this model in SARAH 4.9 [58][59][60][61]. This Mathematica package produces the model files for SPheno 3.3.8 [62,63] and CalcHep [64] which are then used to study the DM phenomenology with Micromegas 4.3 [65]. We have used these packages to compute the results presented in the following sections. Moreover, we will present analytical estimations to further interpret the numerical results. Figure 1. DM annihilation channels χχ → ff via the Z boson and χχ → Z Z . The χχ → Z Z channel opens up when M 2 Z < m 2 χ . Since the process χχ → ϕ 1 → Z Z is velocity suppressed this diagram is typically subleading. Dark matter phenomenology As discussed in the previous section, in this dynamical realisation of the ISS mechanism we have two stable fermions. One of them is a Dirac fermion, χ = (χ L , χ R ), which acquires a mass from φ 1 , and therefore is manifest at the TeV scale. The other, ω, is massless and will contribute to the number of relativistic species in the early Universe. First we analyse if χ can yield the observed DM abundance of the Universe. Relic density In the early Universe, χ is in thermal equilibrium with the plasma due to its gauge interaction with Z . The relevant part of the Lagrangian is where and P R,L are the chirality projectors. The main annihilation channels of χ are χχ → ff via the Z boson exchange and χχ → Z Z -if kinematically allowed (see figure 1). The annihilation cross section to a fermion species f , at leading order in v, reads: see e.g. [66,67], where n c is the color factor of the final state fermion (=1 for leptons), q χ L = 4 and q χ R = 5 and q f L,R are the B − L charges of the left-and right-handed components of the DM candidate χ and of the fermion f , respectively. Moreover, the partial decay width of the Z into a pair of fermions (including the DM, for which f = χ) is given by When M 2 Z < m 2 χ , the annihilation channel χχ → Z Z is also available. The cross section for this process (lower diagrams in figure 1) is given by (to leading order in the relative velocity) [66] σv The χχ → ϕ 1 → Z Z (upper right diagram in figure 1) channel is velocity suppressed and hence typically subleading. 
Further decay channels like χχ → ϕ 1 ϕ 1 and and the additional constraint from perturbativity Y χ ≤ 1 we get only small kinematically allowed regions which play a subleading role for the relic abundance. The cross section for the annihilation channel χχ → Z h 0 is also subleading due to the mixing angle α 1 between ϕ 1 − h 0 which is small although non-negligible (cf. eq. (2.14)). The relic density of χ has been computed numerically with Micromegas obtaining also, for several points of the parameter space, the DM freeze-out temperature at which the annihilation rate becomes smaller than the Hubble rate σv n χ H. Given the freezeout temperature and the annihilation cross sections of eqs. (3.3) and (3.5), the DM relic density can thus be estimated by [68]: where g is the number of degrees of freedom in radiation at the temperature of freeze-out of the DM (T f.o. χ ), σv is its thermally averaged annihilation cross section and M Pl = 1.2 · 10 19 GeV is the Planck mass. In section 5 we will use this estimation of Ω χ h 2 together with its constraint Ω χ h 2 0.1186 ± 0.0020 [69,70] to explore the regions of the parameter space for which the correct DM relic abundance is obtained. Direct detection The same Z couplings that contribute to the relic abundance can give rise to signals in DM direct detection experiments. The DM-SM interactions in the model via the Z are either vector-vector or axial-vector interactions. Indeed, the Z -SM interactions are vectorial (with the exception of the couplings to neutrinos) while χ has different left-and righthanded charges. The axial-vector interaction does not lead to a signal in direct detection and the vector-vector interaction leads to a spin-independent cross section [71]. The cross section for coherent elastic scattering on a nucleon is where µ χN is the reduced mass of the DM-nucleon system. The strongest bounds on the spin-independent scattering cross section come from LUX [72] and XENON1T [73]. The JHEP10(2017)169 constraint on the DM-nucleon scattering cross section is σ DD χ < 10 −9 pb for m χ = 1 TeV and σ DD χ < 10 −8 pb for m χ = 10 TeV. The experimental bound on the spin-independent cross section (eq. (3.7)) allows to derive a lower bound on the vev of φ 1 : This bound pushes the DM mass to be m χ TeV. For instance, for g BL = 0.25 and m Z = 10 TeV, a DM mass m χ = 3.8 TeV is required to have σ DD χ ∼ 9 × 10 −10 pb. In turn, this bound translates into a lower limit on the vev of φ 1 : v 1 40 TeV (with Y χ 0.1). Next generation experiments such as XENON1T [74] and LZ [75] are expected to improve the current bounds by an order of magnitude and could test the parameter space of this model, as it will be discussed in section 5. Indirect detection In full generality, the annihilation of χ today could lead also to indirect detection signatures, in the form of charged cosmic rays, neutrinos and gamma rays. However, since the main annihilation channel of χ is via the Z which couples dominantly to the dark sector, the bounds from indirect detection searches turn out to be subdominant. The strongest experimental bounds come from gamma rays produced through direct emission from the annihilation of χ into τ + τ − . Both the constraints from the Fermi-LAT Space Telescope (6-year observation of gamma rays from dwarf spheroidal galaxies) [76] and H.E.S.S. (10-year observation of gamma rays from the Galactic Center) [77] are not very stringent for the range of DM masses considered here. 
Indeed, the current experimental bounds on the velocity-weighted annihilation cross section < σv > (χχ → τ + τ − ) range from 10 −25 cm 3 s −1 to 10 −22 cm 3 s −1 for DM masses between 1 and 10 TeV. These values are more than two orders of magnitude above the values obtained for the regions of the parameter space in which we obtain the correct relic abundance (notice that the branching ratio of the DM annihilation to χ into τ + τ − is only about 5%). Future experiments like CTA [78] could be suited to sensitively address DM masses in the range of interest of this model (m χ 1 TeV). Effective number of neutrino species, N eff The presence of the massless fermion ω implies a contribution to the number of relativistic degrees of freedom in the early Universe. In the following, we discuss its contribution to the effective number of neutrino species, N eff , which has been measured to be N exp eff = 3.04±0.33 [69]. Since the massless ω only interacts with the SM via the Z , its contribution to N eff will be washed out through entropy injection to the thermal bath by the number of relativistic degrees of freedom g (T ) at the time of its decoupling: ω is the freeze-out temperature of ω and T ν is the temperature of the neutrino background. The freeze-out temperature can be estimated when the Hubble expansion JHEP10(2017)169 rate of the Universe H = 1. 66 √ g T 2 /M Pl overcomes the ω interaction rate Γ =< σv > n ω leading to: . (3.10) With the typical values that satisfy the correct DM relic abundance: m Z ∼ O(10 TeV) and g BL ∼ O(0.1) ω would therefore freeze out at T f.o. ω ∼ 4 GeV, before the QCD phase transition. Thus, the SM bath will heat significantly after ω decouples and the contribution of the latter to the number of degrees of freedom in radiation will be suppressed: which is one order of magnitude smaller than the current uncertainty on N eff . For gauge boson masses between 1-50 TeV and gauge couplings between 0.01 and 0.5, ∆N eff ∈ [0.02, 0.04]. Nevertheless, this deviation from N eff matches the sensitivity expected from a EUCLID-like survey [79,80] and would be an interesting probe of the model in the future. Collider phenomenology The new gauge boson can lead to resonant signals at the LHC. Dissimilarly from the widely studied case of a sequential Z boson, where the new boson decays dominantly to dijets, the elusive Z couples more strongly to leptons than to quarks (due to the B − L number). Furthermore, it has large couplings to the SM singlets, specially χ and ω which carry large B − L charges. Thus, typical branching ratios are ∼70% invisible (i.e. into SM neutrinos and ω), ∼12% to quarks and ∼18% to charged leptons. 4 LHC Z → e + e − , µ + µ − resonant searches [81,82] can be easily recast into constraints on the elusive Z . The production cross section times branching ratio to dileptons is given by where s is the center of mass energy, Γ(Z → qq) is the partial width to qq pair given by eq. (3.4), and C qq is the qq luminosity function obtained here using the parton distribution function MSTW2008NLO [83]. . Summary plots of our results. The red region to the left is excluded by LHC constraints on the Z (see text for details), the region above g BL > 0.5 is non-perturbative due to g BL · q max ≤ √ 2π. In the blue shaded region DM is overabundant. 
The orange coloured region is already excluded by direct detection constraints from LUX [72], the short-dashed line indicates the future constraints from XENON1T [74] (projected sensitivity assuming 2t · y), the long-dashed line the future constraints from LZ [75] (projected sensitivity for 1000d of data taking). Results We now combine in figure 2 the constraints coming from DM relic abundance, DM direct detection experiments and collider searches. We can clearly see the synergy between these different observables. Since the DM candidate in our model is a thermal WIMP, the relic abundance constraint puts a lower bound on the gauge coupling, excluding the blue shaded region in the panels of figure 2. On the other hand, LHC resonant searches essentially put a lower bound on the mass of the Z (red shaded region), while the LUX direct detection JHEP10(2017)169 experiment constrains the product g BL · M Z from above (orange shaded region). For reference, we also show the prospects for future direct detection experiments, namely, XENON1T (orange short-dashed line, projected sensitivity assuming 2t · y) and LZ (orange long-dashed line, projected sensitivity for 1000d of data taking). Finally, if the gauge coupling is too large, perturbativity will be lost. To estimate this region we adopt the constraint g BL · q max ≤ √ 2π and being the largest B − L charge q max = 5, we obtain g BL > 0.5 for the non-perturbative region. The white region in these panels represents the allowed region. We present four different DM masses so as to exemplify the dependence on m χ . First, we see that for DM masses at 1 TeV (upper left panel), there is only a tiny allowed region in which the relic abundance is set via resonant χχ → Z → ff annihilation. For larger masses, the allowed region grows but some amount of enhancement is in any case needed so that the Z mass needs to be around twice the DM mass in order to obtain the correct relic abundance. For m χ above 20 TeV (lower right panel), the allowed parameter space cannot be fully probed even with generation-2 DM direct detection experiments. On top of the DM and collider phenomenology discussed here, this model allows for a rich phenomenology in other sectors. In full analogy to the standard ISS model, the dynamical ISS mechanism here considered is also capable of generating a large CP asymmetry in the lepton sector at the TeV scale, thus allowing for a possible explanation of the baryon asymmetry of the Universe via leptogenesis [84][85][86][87]. Moreover, the heavy sterile states typically introduced in ISS scenarios, namely the three pseudo-Dirac pairs from the states N R and N R can lead to new contributions to a wide array of observables [12, such as weak universality, lepton flavour violating or precision electroweak observables, which allow to constrain the mixing of the SM neutrinos with the extra heavy pseudo-Dirac pairs to the level of 10 −2 or even better for some elements [112,113]. Conclusions The simplest extension to the SM particle content so as to accommodate the experimental evidence for neutrino masses and mixings is the addition of right-handed neutrinos, making the neutrino sector more symmetric to its charged lepton and quark counterparts. 
In this context, the popular Seesaw mechanism also gives a rationale for the extreme smallness of these neutrino masses as compared to the rest of the SM fermions through a hierarchy between two different energy scales: the electroweak scale -at which Dirac neutrino masses are induced -and a much larger energy scale tantalizingly close to the Grand Unification scale at which Lepton Number is explicitly broken by the Majorana mass of the righthanded neutrinos. On the other hand, this very natural option to explain the smallness of neutrino masses automatically makes the mass of the Higgs extremely unnatural, given the hierarchy problem that is hence introduced between the electroweak scale and the heavy Seesaw scale. The ISS mechanism provides an elegant solution to this tension by lowering the Seesaw scale close to the electroweak scale, thus avoiding the Higgs hierarchy problem altogether. In the ISS the smallness of neutrino masses is thus not explained by a strong hierarchy JHEP10(2017)169 between these scales but rather by a symmetry argument. Since neutrino masses are protected by the Lepton Number symmetry, or rather B − L in its non-anomalous version, if this symmetry is only mildly broken, neutrino masses will be naturally suppressed by the small parameters breaking this symmetry. In this work, the possibility of breaking this gauged symmetry dynamically has been explored. Since the ISS mechanism requires a chiral structure of the extra right-handed neutrinos under the B − L symmetry, some extra states are predicted for this symmetry to be gauged due to anomaly cancellation. The minimal such extension requires the addition of three new fields with large non-trivial B − L charges. Upon the spontaneous breaking of the B − L symmetry, two of these extra fields become a massive heavy fermion around the TeV scale while the third remains massless. Given their large charges, the Z gauge boson mediating the B − L symmetry couples preferentially to this new dark sector and much more weakly to the SM leptons and particularly to quarks, making it rather elusive. The phenomenology of this new dark sector and the elusive Z has been investigated. We find that the heavy Dirac fermion is a viable DM candidate in some regions of the parameter space. While the elusive nature of the heavy Z makes its search rather challenging at the LHC, it would also mediate spin-independent direct detection cross sections for the DM candidate, which place very stringent constraints in the scenario. Given its preference to couple to the dark sector and its suppressed couplings to quarks, the strong tension between direct detection searches and the correct relic abundance for Z mediated DM is mildly alleviated and some parts of the parameter space, not far from the resonance, survive present constraints. Future DM searches by XENON1T and LZ will be able to constrain this possibility even further. Finally, the massless dark fermion will contribute to the amount of relativistic degrees of freedom in the early Universe. While its contribution to the effective number of neutrinos is too small to be constrained with present data, future EUCLID-like surveys could reach a sensitivity close to their expected contribution, making this alternative probe a promising complementary way to test this scenario. JHEP10(2017)169 liance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. 
The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
Exploring the effects of Clustering Algorithms on Free Text Recommendation In this paper, we provide a study on the effects of applying classical clustering algorithms, such as k-Means, to free text recommender systems. A typical recommender system may face problems when the number of items in a database grows from a few items to hundreds of items. Currently, one of the most prominent techniques to scale the database is clustering; however, clustering may have a negative impact on the accuracy of the system when applied without taking the underlying items into consideration. In this work, we build a conceptual text recommender system and use k-Means to partition its search space into different groups. We study how varying the number of clusters affects its performance in the light of two performance measurements: recommendation time and precision. We also analyze whether this clustering is affected by the text representation we use. All the techniques used in this study use word embeddings to represent the documents. One of the main findings of this work is that, using clustering, we can improve the recommendation time by up to almost 30 times without much effect on its initial accuracy. Another interesting finding is that increasing the number of clusters does not translate directly into a linear performance gain. Introduction Recommendation systems are systems specialized in retrieving items from a database that may be of interest to users (the top-n recommendation problem). A typical recommendation system collects information from user inputs and matches this information against a set of heuristics to fetch the items it believes will best suit the user's needs. Efficient recommendation systems can retrieve meaningful items in a short amount of time, i.e., the accuracy and the retrieval time of the system are widely used as metrics to evaluate recommendation systems. The structure of the data directly influences the capacity of a system to analyze item attributes and produce recommendations. However, natural text documents do not follow a fixed structural rule and can vary greatly in terms of size, contents and internal text structure. Because of that, when one wants to create a recommendation system, one needs to apply techniques that are able to generate a structure representing these free text documents. The most classical representation approaches are: bag of words (BOW), in which each document is represented as the set of its internal words, with variations that do or do not remove stopwords, and term-frequency inverse-document-frequency (tf-idf), in which the importance of each word in the document is calculated from the frequency with which it appears in the text against the frequency with which it appears in the corpus. Classical document representations often fail to capture the whole complexity that exists in natural language text, such as synonyms or polysemous words (words with several meanings). Word2Vec models were introduced in 2013 by Mikolov et al. [1] with the main objective of generating a structure to represent words that could carry their semantic meaning. In their technique, words are represented as high-dimensional vectors generated from the probabilities of their occurrence in a corpus. Building on their work, different authors have employed word embeddings to calculate the distance between documents.
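As an illustration of how word embeddings can be turned into a document-level vector that supports distance computations, the sketch below loads a pre-trained word2vec model with Gensim, tokenizes a document with NLTK and averages the word vectors. The model path, the lower-casing and the use of cosine distance are assumptions made for this example, not details taken from the paper.

import numpy as np
from nltk.tokenize import word_tokenize
from gensim.models import KeyedVectors
from scipy.spatial.distance import cosine

# Hypothetical path to Google's pre-trained 300-dimensional word2vec model.
w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

def doc_vector(text):
    """Average the embeddings of the words that exist in the vocabulary (CAR-style)."""
    vectors = [w2v[w] for w in word_tokenize(text.lower()) if w in w2v]
    return np.mean(vectors, axis=0) if vectors else np.zeros(w2v.vector_size)

# The distance between two documents is then an ordinary vector distance.
d = cosine(doc_vector("oil prices rise sharply"), doc_vector("crude futures climb"))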
When we recommend documents by finding the closest documents in the search space, the number of documents to compare directly influences the recommendation time. If we represent a document as a vector of n dimensions and we have d documents in our search space, the complexity of finding the most similar items is O(n × d). If we divide this search space into k partitions and search only one of them, the search cost is reduced to O(n × d / k). However, dividing the search space into k partitions can hurt the accuracy of the system. For instance, the clustering algorithm could generate highly heterogeneous clusters, which could spread documents of the same class across clusters and reduce the probability of them being recommended when we provide recommendations from a single cluster. In this paper, we apply a clustering technique to divide the search space of a recommendation system and measure the effect on its precision and recommendation time. We also compare the effects of clustering on different document representations; both document representations use word embeddings. The rest of the paper is organized as follows. Section 2 presents a review of the literature regarding clustering and recommendation systems. Section 3 briefly explains some of the background necessary to better understand this work. Section 4 gives a more detailed explanation of the techniques we use in this paper. In Section 5, we introduce the dataset, the experimental setup, and the set of experiments conducted. In Section 6 we discuss the results of our experiments and show that our methodology was able to improve the recommendation time by almost 4 times for our tested dataset. Finally, we conclude by highlighting the main points of our work and identifying some possible perspectives. Related Works Clustering documents to reduce search time has been studied by different authors. In Cutting et al. [2], and Hearst and Pedersen [3], clustering is used to build a table of contents of the document collection and hence facilitate browsing through the collection. Zamir and Etzioni [4] introduce a new clustering algorithm called Suffix Tree Clustering (STC), which first identifies sets of documents that share common phrases and then clusters according to these phrases. Although we do not use phrases for clustering, we do use the document's combined bag of words to generate the clusters. Many authors use content-based clustering for retrieving information from the web [5]. In [6] the author ranks the search query and then generates results based on this rank, merging these results to generate a cluster. We also use the contents of the documents to cluster them, but instead of clustering at retrieval time, we pre-cluster them based on their distances. Although all of those techniques have efficient ideas for separating documents, none of them takes into consideration that the documents are in a word-embedding form. To the best of our knowledge, we are the first authors to try to use classical clustering techniques to separate documents for a recommender system that uses word-embedding document representations. Word2Vec Word2Vec is a word-embedding technique that was introduced by Mikolov et al.
[1] and is based on the skip-gram model with sampling.They propose a simple neural network, with emphasis on efficiency at the training phase rather than precision.The neural network architecture consists of an input layer, a projection layer, and an output layer to predict nearby words.They train the vectors with the aim of maximizing the log probability of neighboring words within a specific corpus.More formally, given a sequence of words w 1 , ..., w T , it tries to maximize the average log probability.where c is the size of the training context and p(w j |w t ) is the hierarchical softmax function of the associated words w j and w t . .Since the model is usually trained over large datasets, the words representation are able to learn complex word relationships and also carry semantic meanings.The learning phase of word embeddings is unsupervised and it can be computed on different corpus based on the user interest or pre-computed in advance.In this paper we use a pre-trained version of Google's word2vec [7] as our word embedder, but other embeddings techniques are also available (Collobert and Weston, 2008 [8]; Mnih and Hinton, 2009 [9] ; Turian et al., 2010 [10]). K-Means Clustering is the unsupervised classification of patterns into groups (clusters) [11].The k-Means clustering is an iterative algorithm that initially assigns random groups to all the points, and then proceeds to its update step by computing the initial mean of the center each random assigned cluster.This update step will be run until the group's center doesn't variate much from each iteration.On termination the algorithm returns a list of each cluster it assigned to the data points. More formally, given a set of dimensional training input vectors x 1 , x 2 , ..., x n , the k-Means clustering algorithm partitions the n training set into k sets of data points or clusters S = S 1 , S 2 , ..., S n , where k ≤ n, such that the within cluster sum of squares is minimized.That is: where c i is the centroid or mean of data point in cluster S i .The algorithm tries to improve the result by updating the centroids described in Equation 2. A simple content-based recommendation system Figure 1 displays a general idea of a recommender system.In a content-based document recommendation system, when an input arrives, with respect to an input document, also called a query-document, a set of meta-data related to this query document is extracted and used by the system so it can, based on a set of internal beliefs, judge which items may be of the interest of the user.Then, the system needs to proceed to a search step in which it will look for its database for the items it intents to recommend.The recommendation can be divided into five distinct steps: • Convert the document's contents to embedded vectors.This step converts every word of a document to its embedded version to be used by the document representation models.• Convert the embedded vectors to a document representation.This step converts the set of word embeddings into a single vector.The shape of this vector depends on the representation used.• Label from clustering algorithm.This step is responsible for getting the contents of the document and generating a label based on it.This label represents the group in which the document belongs.• Documents retrieval.In this step, we select documents from the database to be recommended. 
Those documents will be used in the next ste.• Sorting.In this step we apply a sorting algorithm to rank the documents based on their similarity to the query document. Figure 2 displays the updated work-flow for our recommender system.In this section we describe each step in detail. Once we have means to compare documents, and a function that can rank documents for recommendation, the last thing that the system requires is a tool to fetch items that will be passed along the query document to the rank function and produce a recommendation.On Subsections 4.4 and 4 we will introduce the techniques we use in this work to retrieve documents from the database. Document to Word Embeddings In this step, we use a tokenizer to extract all the words of the document.Then we can use a word embedder to capture the semantic vector of each word.This step is identical for every document representation.At the end of this step, for each document, we obtain a set of keyed vector v that represent the document. Generating Representations In this step, we would like to convert the document's keyed vectors v into a single vector that contains the main meaning of the document. Formally, for a matrix M w,v containing every word w of a document in its vectorial form v. Let X be the document representation converter function that applied on M w,v converts it into a vector v x that represents the document for this representation. Since there is no straightforward mean of calculating the distance of two free-text documents, we had to generate a structure to represent documents, which provides us a tool to compare them.We use two document representation techniques, both uses word2vec to convert every document used in the system.The first technique is called Column Average Representation (CAR) and was introduced in [12] and uses the average of each column feature to represent documents.The second one is called Doc2vec and was introduced in [13] and uses concepts from word2vec to represent documents.Both techniques are explained in details on Subsections 4.2 and 4.3 respectively. Labeling a document So far, we have obtained the document representation for each document.Next we would like to cluster those documents based on their position in the search space using the methodology described in Section 4.5.We use K-Means to group all the documents contained in the database.Given the vectors of the documents of the database, we can generate a model that is able to not only separate the existing documents, but also capable of providing a cluster to new documents, in the form of a label.These labels will be used to the next step to retrieve documents from the database.Step 5 Step 2 Step 4 Step 3 Step 1 More formally, for a document d in its representation form v x , let y be the labeling algorithm that applied on v x generates a label l to be used to the retrieval step to search documents. Document Retrieval In this step, we use the model generated by the previous step to selectively retrieve documents from the dataset.Every time we need to make a search in the database, we will select only members from a determined group of documents.This reduced amount of retrieved documents will reduce the load of the sorting algorithm since it will have less items to compare. 
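A minimal sketch of the labeling and retrieval steps described above, using scikit-learn's k-Means, might look as follows. The number of clusters, the variable names and the file used to hold the training vectors are illustrative assumptions, not the authors' actual code.

import numpy as np
from sklearn.cluster import KMeans

# doc_vectors: (d, f) matrix of document representations (CAR or Doc2vec vectors)
# already computed for the training documents; loading from a file is hypothetical.
doc_vectors = np.load("train_doc_vectors.npy")

kmeans = KMeans(n_clusters=24, random_state=0).fit(doc_vectors)
labels = kmeans.labels_  # the cluster label stored alongside each training document

def retrieve_candidates(query_vector):
    """Assign the query document to a cluster and fetch only that cluster's documents."""
    cluster = kmeans.predict(query_vector.reshape(1, -1))[0]
    idx = np.where(labels == cluster)[0]
    return idx, doc_vectors[idx]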
Sorting and Ranking Documents This is the final step for recommending systems.In this step we rank the items according to their similarity with the query document.In this work, we will use a function based on distance to rank items and make recommendations.The closer a document from the recommendation space is from the query document, the higher will be its rank on the system.To calculate the distance we use the k-neighbors algorithm [14]. Column Average Document Representation To be able to compare the distance of documents, we want to have a model that is able to move document representations from the textual domain to the numeric domain in order to be able to mathematically compare them. The Column Average representation is a technique introduced in [12] that tries to grab the document meaning by extracting every word of the document, converting them to vectors, and averaging the vectors dimensions.By doing so, they represent documents as a vector and hence have a representation that allows us to directly compare documents. The workflow of this method can be seen in Figure 3.The first step is to convert the document's text to the word embedding version.We use the word embedder to do such task.More formally, if we have a document d containing a set of words W, we can convert every word of W into embedded vectors and organize them into a matrix matrix M i,j , in which each line i represents a different word from the text and each column j represents a feature of the word embedder.From this matrix we can calculate the mean of each column of M, which results into a vector v with j dimensions, in which each value j of the vector is the mean of the corresponding column j of M. With this technique we are capable of converting free text documents to vectors that can be used in our recommender system.The length of each of those vectors depends only on the number of features of the word embedder, in the sense that every document vector has the same length, and therefore they can be directly compared, feature by feature. A major advantage of using this technique when compared to other techniques is the complexity of the recommendation.A recommender system recommendation time is directly influenced by the size of the document representation, so a more compact representation becomes attractive for such system since having fast recommendations is always desired.By using the Column Average Representation, the complexity of the recommendation algorithm will be O(n) × D, in which D is the set of documents contained in the corpora. Doc2vec Doc2vec was presented in 2014 by Mikolov and Le [13] and tries to solve the problem of representing a document as a vector, regardless of the document length. The concept they used was an extension of the word2vec model.They added another vector that contains the paragraph id, and maps it to a document. They invented a model called Distributed Memory version of Paragraph Vector (PV-DM).It acts as a memory that remembers what is missing for the current context -or as the topic of the paragraph.While the word vectors represent the concept of a word, the document vector intends to represent the concept of a document. Baseline Recommendation Characteristics As stated in Subsection 4.1, to be able to perform recommendations, the system needs to fetch documents into a database.Our baseline approach will fetch the whole database when making a recommendation. 
To have an idea of the efficiency of this approach, we can calculate the complexity of the search on this set.Since we need to compare n documents, represented as vectors v, the complexity of the baseline algorithm is O(n × v).Also, we need to perform a sorting operation after the distances are calculated.We use Timsort, which has complexity O(n × log(n)) for the average and worst case.So, the total complexity of a recommendation for our baseline case is O(n × v) + O(n × log(n)). Our approach characteristics In our approach, we try to reduce the search time by clustering the results and searching on a fraction of the recommendation space. In this work, we want to apply those clustering algorithms to our content-based recommender system in order to reduce the recommendation time when performing recommendations.The main motivation behind this idea is that as we separate the documents into groups, the amount of calculations required to make a recommendation are reduced.A reduction in the amount of calculations can be directly translated into a reduction in recommendation time.In order to test our hypothesis, we separate the whole database into clusters. More formally, assume that for a query document q i , contained in the test set Q = q 1 , ..., q n the system returns a set containing a fixed number k of documents determined R = r 1 , ..., r k .For each document of R, we will apply an arbitrary function g that compares the class of the document r k and the query document q i and returns 1 in case they have the same class and 0 otherwise.Now assume that we have a search space that contains all the d documents from the train set represented as one of the vectorial techniques introduced in the previous subsections. If the documents are expressed in their vectorial form, we can represent them as a matrix Z i,j ∈ R d× f .Each line of Z contains a document d.The j-th column, x ∈ R f , represents the feature f of the document in a multi-dimensional space.Every time the system receives a query document q, it will search for recommendation on its whole database.It will have to compare all the f columns of each d document and the q.The search algorithm performance will be O(d × f ).If we partition this space S and into β sub-spaces SS = s 1 , ..., s β , β ∈ S, and perform searches on an arbitrary group s β .It is expected that the number of comparisons are reduced β times.Figure 4 illustrates how clustering can be used to prune the search space and improve recommendation time. By dividing the search space, when we perform a search using a query document, we can use a model to, based on the query document contents, would select which group fits it better.If the groups are uniformly distributed, the expected speedup for the application is directly proportional to the number of clusters.Partitioning the search space means that we are able of reducing the complexity of searching for recommendation items from O(d × c). 
to O( d×c β ).One of the main advantages of using clusters for recommender systems is that when the system searches for documents, there are multiple answers that can be viewed as correct.In our system, the evaluation is based on the class of the returned documents, which means that for the recommendation to be judged as useful, it only has to return documents from the same class of the query document.In this current of thought, the best partition would be the ones that evenly divide the documents into groups, and within each group there are only members of the same class.It is not incorrect for the system to have clusters containing members exclusively from a single class, in fact, the best case would be if the system separate the documents evenly among clusters. Dataset and Experimental Setup This section will give details of the dataset we used as well as the experimental setup we used for our experiments. Dataset In the remainder of this article we use the Reuters-22172 [15] dataset to evaluate the performance of the recommendation system.Reuters is a classic news dataset labeled by news topics (we use the 8-class version).It contains 8 classes, 5485 training documents and 2189 test documents.We will evaluate the quality of the recommendations based on whether or not the recommended documents are in the same class as the query document. Experimental Setup To test our model, we created a reference implementation of our method.We created a script that convert all the documents from a folder to the representation explained before and store them in a database.Then another script clusters those documents, and store the model that assign clusters to the file system.The information related to the cluster that each train document belongs is then stored into the database.Finally we run another script that takes the whole test set, convert each document to the representation introduced before and then calls a recommendation routine for each of them.The recommendation routine is responsible for reading the query document, and using the cluster model to assign a cluster to this document.It then retrieves all the documents from the assigned cluster from the database.It will then calculate the distance between each document and the query document, sort the results, and return the k first documents. To extract the bag-of-words of a document, we used the NLTK's sentence tokenizer and extracted all the words from sentences.For all the steps that required clustering, we used the sci-kit learn's version of K-Means.For the steps that required database for storing documents and labels, we used MongoDB. Regarding word-embeddings, we download the pre-trained one from Google's webpage.To load the model and furthermore be able to convert words to its respective embeddings, we used Gensim, an open-source tool that can be used to load word pre-trained embedding and converts words to an embedding vector. All the code necessary to run those experiments, and all the experimental results introduced on the next sections is public and available at github 1 . Our experiments were run in a Ubuntu 17.10 virtual machine hosted in a dedicated 10-core 3.2GHZ Intel i7-6900K with 16Gb of memory. Experimental Results In this section, we will present the experimental results for the system and dataset introduced in Section 5.In this recommender system, we will use both document representations introduced in Section 4, and we will analyze how the system will perform using each of them with and without clustering. 
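The recommendation routine described in the experimental setup above (assign a cluster to the query, compute distances only within that cluster, and return the k closest documents) can be sketched as follows, reusing the retrieve_candidates helper from the earlier sketch. The choice of scikit-learn's NearestNeighbors with its default Euclidean metric is an assumption for illustration.

from sklearn.neighbors import NearestNeighbors

def recommend(query_vector, k=10):
    """Return the indices of the k most similar training documents, searching one cluster only."""
    idx, candidates = retrieve_candidates(query_vector)       # cluster-restricted search space
    nn = NearestNeighbors(n_neighbors=min(k, len(idx))).fit(candidates)
    _, order = nn.kneighbors(query_vector.reshape(1, -1))     # ascending distance = rank
    return idx[order[0]]                                      # map back to database indices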
Testing and Analysis Criteria The system in this work is evaluated based on two metrics: recommendation time and precision. The recommendation time is the total time the system takes to perform all the recommendations. Precision is the ratio of relevant documents among the recommended documents. For the data described in Subsection 5.1, we recommend 10 documents for each document of the test set. This means that for our test set of 2189 documents, we make 2189 recommendations, each of them returning up to 10 documents. We then compare the classes of the test document and the recommended items to calculate the precision. The recommendation time is calculated by summing the time of each individual recommendation. Time Efficiency We start our analysis by evaluating how much clustering influences recommendation time. Figure 5 displays the sum of the individual recommendation times against the number of clusters. For our first case, the baseline in which all the documents are grouped into a single cluster, the total recommendation time is 795 seconds, which averages 0.36 seconds per recommendation. After the baseline, we increase the number of clusters from 8 to 104 (in steps of 8) and report their times. We notice that using clustering, regardless of the number of selected clusters, has a positive impact on the recommendation time, as expected. We can also see that although the total recommendation time keeps decreasing from 8 clusters up to 24 clusters, the recommendation time then starts to increase until we reach 48 clusters; after that it drops to 258 seconds and stabilizes at values near 270 seconds for the rest of the experiment. The system in general showed an overall improvement, reaching the largest gains when we had over 100 clusters. Doc2Vec was the representation that benefited the most from using clustering to separate the database, with a speedup of nearly 30 times. The Column Average Representation also had a significant improvement, the most significant being the step from using no clustering (1 cluster) to using clusters. Table 1 gives more details about the maximum variation in time for each representation. Despite the overall positive results using clustering, our maximum speedup was far from the theoretical speedup that could have been achieved. One of the reasons for this disparity between the real and the theoretical speedup may lie in the division of documents. If the clusters are evenly separated, this speedup should be linear, since the system would require 1/n of the calculations to achieve a result. In order to better understand the effects of the partition configuration on the time, we take a closer look at the configuration of the partitions.
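For concreteness, the two evaluation metrics described above can be computed along the following lines, reusing the recommend sketch given earlier. The test_vectors, test_labels and train_labels names are assumptions for this illustrative sketch, not the authors' code.

import time

def evaluate(test_vectors, test_labels, train_labels, k=10):
    """Total recommendation time and mean precision over the whole test set."""
    total_time, precisions = 0.0, []
    for vec, true_label in zip(test_vectors, test_labels):
        start = time.perf_counter()
        recommended = recommend(vec, k=k)            # indices into the training set
        total_time += time.perf_counter() - start
        hits = sum(1 for i in recommended if train_labels[i] == true_label)
        precisions.append(hits / len(recommended))
    return total_time, sum(precisions) / len(precisions)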
Assessing the Cluster Partitions
Figure 4 shows the distribution of members among the clusters for the following cluster sizes: 8, 24, 40, and 104 clusters. The ideal speedup is obtained when we have the same number of documents inside each cluster; however, as Figure 4 shows, the distribution of elements among the clusters is not even, which causes an imbalance in the recommendation time. Small clusters yield shorter execution times, while larger clusters have recommendation times closer to the baseline. In the end, the overall recommendation time is heavily influenced by the size of the clusters selected for the search stage. To reason about the difference in time between a small and a large cluster, we analyze the recommendation time of each cluster size for two of the four aforementioned cases. Figures 7 and 8 display the recommendation time separated by cluster size for each k-Means version. The recommendation time grows as the number of elements inside a cluster grows. For 8 clusters, we can see that the biggest cluster is 2 times the size of the smallest one, but its recommendation time can be up to 4 times longer. For 24 clusters, this difference cannot be perceived as easily because the clusters have more similar sizes, and consequently more similar recommendation times.

Effects on Precision
We compute the precision by calculating the number of correct guesses inside the set of recommended items. If we recommend 10 items and 8 are from the same label as the query document, we say that the recommendation had a precision of 80%.

Since we artificially separated the data, we expected the precision to be negatively impacted. However, the impact on the precision was minimal. The results for the baseline alongside all the cluster sizes are displayed in Figure 9. Regardless of the cluster size, the precision is not strongly affected. From the table we can draw two important results. The first one concerns the fact that the precision did not change significantly as the number of clusters changed, independently of the representation used. This result means that k-Means was able to take advantage of the document representation and generate highly accurate partitions. The second observation regards the document representations: in the baseline case without clustering, we had 90% precision for the CAR representation and around 55% for the Doc2vec representation. From these results we can conclude that both document representations chosen for this experiment were sufficiently accurate to capture the distinction that separates documents in this dataset, with CAR being more precise than Doc2vec. A more elaborate discussion of those two and other document representations can be found in [12].

Conclusion and Future Works
In this paper, we have studied the effects of adding clustering when doing recommendations of text files. Our main goal was to reduce the search time without greatly affecting the precision. We investigated how different numbers of clusters for k-Means can affect the performance in terms of time and accuracy. We empirically tested our hypothesis using a real-world dataset and assessed that the time can be improved by up to almost 30 times when using a correct configuration of clustering. We also verified that, regardless of the number of clusters or document representation, using k-Means to partition the documents usually yields improvements in terms of time without affecting much of the precision.

Figure 1 .
Figure 1.A simple recommendation: The system combines the query document and the database documents into a rank function to produce recommendations. Figure 2 . Figure 2. Recommendation Workflow:The first step converts the query document to an embedded version of it, the second step converts it to the representation, the third step searches for documents in the database and then, rank and sort them, finally the last step return the items sorted by rank. Figure 3 . Figure 3. Document Representation: First we convert the document to a matrix of its bag-of-words, then we convert it to a single vector. Figure 4 . Figure 4. (a) Non-clustered corpora with search time of O(d × c).(b) Clustered corpora reducing the search time by a fraction β. Figure 5 . Figure 5. Reduction of the size of the document representation by means of using our model.The model is able to reduce the size of the corpora representation from a 3D matrix to a 2D one. Figure 6 . Figure 6.Cluster individuals distribution for every cluster using k-Means with 8(a), 24(b), 40(c) and 104(d) clusters configuration.The higher the number of clusters, the more distributed the elements should be among the clusters. Figure 7 . Figure 7. Recommendation time for every cluster size generated using k-Means with 8 clusters.We can see a pattern that the recommendation time increases as function of the cluster size. Figure 8 . Figure 8. Recommendation time for every cluster size generated using k-Means with 24 clusters.We can see a pattern that the recommendation time increases as function of the cluster size. Table 1 . Maximum difference in time and precision for each document representation.
7,099.4
2018-11-07T00:00:00.000
[ "Computer Science" ]
Correction of the significance level when attempting multiple transformations of an explanatory variable in generalized linear models

Background In statistical modeling, finding the most favorable coding for an explanatory quantitative variable involves many tests. This process involves multiple testing problems and requires the correction of the significance level. Methods For each coding, a test on the nullity of the coefficient associated with the new coded variable is computed. The selected coding corresponds to that associated with the largest test statistic (or, equivalently, the smallest p value). In the context of the Generalized Linear Model, Liquet and Commenges (Stat Probability Lett, 71:33–38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, has been developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results The simulations we ran in this study showed good performance of the proposed methods. These methods were illustrated using data from a study of the relationship between cholesterol and dementia. Conclusion The algorithms were implemented using R, and the associated CPMCGLM R package is available on the CRAN.

Background
In applied studies, the relationship between an explanatory and a dependent variable is routinely measured using a statistical model. For instance, in epidemiology it is quite common that a study focuses on one particular risk factor. The scientific problem is to analyze whether this risk factor has an influence on the risk of occurrence of a disease, a biological trait, or another outcome. To answer this question, the statistician must choose how the risk factor is coded in the model.

Binary coding is often used in epidemiology, either to make interpretation easier or because a threshold effect is suspected. In a regression model with multiple explanatory variables, the interpretation of the regression coefficient for a binary variable may be easier to understand than a change in one unit of the continuous variable. Dichotomous transformations of a variable X are defined by splitting X at a chosen cutpoint. Other transformations are also used, in particular Box-Cox transformations, but the choice of the transformation is often subjective. The arbitrariness of the choice of cutpoints may lead to the idea of trying more than one set of values. Hence, to analyze data, the statistician may have to use several transformations, and for each the statistician applies a test for "β = 0" (where β is the coefficient representing the effect of the risk factor of interest). The most favorable transformation is then chosen. The cutpoint giving the minimum p value is often termed "optimal" [2,3]. When testing several codings of a variable, there is a problem with the multiplicity of tests performed, leading to an incorrect p value and possible overestimation of effects [4]. Generally, researchers fail to consider this problem and do not correct the significance level in relation to the number of tests performed [3], which can lead to an increase in the Type-I error [5]. The p value should thus be corrected to take into account the multiplicity of tests.
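For concreteness, the dichotomous and Box-Cox transformations referred to above can be written in their usual textbook forms (these are the standard definitions; the exact notation may differ from the original article):

```latex
% Dichotomous coding of X_i at a cutpoint c_k (standard form):
X_i(k) = \begin{cases} 1 & \text{if } X_i > c_k,\\[2pt] 0 & \text{otherwise;} \end{cases}
\qquad
% Box--Cox transformation with parameter \lambda_k (standard form):
X_i(k) = \begin{cases} \dfrac{X_i^{\lambda_k} - 1}{\lambda_k} & \text{if } \lambda_k \neq 0,\\[6pt] \log X_i & \text{if } \lambda_k = 0. \end{cases}
```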
In many cases, it is now widely recognized that categorization of a continuous variable can introduce major problems into the analysis and interpretation of the associated model [1,3]. It is important to note that the aim of this paper is not to defend this practice, but to improve, in terms of multiple testing, a practice commonly used by epidemiologists. Furthermore, despite the known loss of power following dichotomization in the univariate case, Westfall [6] showed that dichotomizing continuous data can greatly improve the power when multiple comparisons are performed.

Many methods of correction exist, the simplest and best known being the Bonferroni rule. Several authors have improved this method to make it more powerful; however, most do not take into account the correlation between the tests [7][8][9][10][11]. If the tests are independent, or moderately dependent, then they provide an upper bound which may be satisfactory. Efron [12] proposed a correction that accounts for the correlation between two consecutive tests when there is a natural order between the tests, with high correlation between adjacent tests. Liquet and Commenges [13,14] and Hashemi and Commenges [15] proposed a more exact correction, accounting for the whole correlation matrix, for score tests obtained in logistic regression, generalized linear models, and proportional hazards models. Here, we propose extending these studies to a categorical transformation (with m > 2 categories) of the continuous variable, which involves more than one parameter in the model; m − 1 dummy variables are introduced in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect. In this context, we propose a method and an R program based on resampling approaches to determine the significance level for a series of several transformations (including dichotomous, Box-Cox, and categorical transformations) of an explanatory variable in a Generalized Linear Model. The problem of correcting the estimation of the effect will not be examined here.

First, we revisit the example proposed by Liquet and Commenges [14] on the relationship between cholesterol and dementia [16] to provide a framework for our discussion. In the section 'Methods: Statistical context', we present the statistical context relating to multiple testing; the model, the maximum test and minimum p value procedures, and finally the score tests are presented. The section 'Methods: Significance level correction' presents the different methods of correction of the Type-I error. A simulation study for the different strategies of coding, and the application of the model to the initial example, are presented in the section 'Results'. Concluding remarks are given in the last two sections.

Example: revisiting the PAQUID cohort example
We revisited the example presented in the article by Liquet and Commenges [13] for the coding of a binary variable in a logistic regression. This example is based on the work of Bonarek et al. [16], who studied the relationship between serum cholesterol levels and dementia. The data came from a nested case-control study of 334 elderly French subjects aged 73 and over who participated in the PAQUID cohort (37 subjects with dementia and 297 controls). The variables age, sex, level of education, and wine consumption were considered as adjustment variables. The analysis focused on the influence of HDL-cholesterol (high-density lipoprotein) on the risk of dementia. Bonarek et al.
[16] first considered HDL-cholesterol as a continuous variable; then, to ease clinical interpretation, they chose to transform HDL-cholesterol into a categorical variable with four classes. Finally, as there was no significant difference between the first three quartiles, HDL-cholesterol was split into two categories with a cutpoint at the last quartile. The best p value, 0.007, was obtained in the latter analysis and was selected for interpretation. However, this p value did not take into account the numerous transformations performed to determine the best representation of the variable of interest. Legitimate questions arising from this include the following: What is the real association between dementia and HDL-cholesterol, with a correction of the Type-I error? Is it really significant? Liquet and Commenges [14] proposed correcting the p value associated with multiple transformations, including dichotomous and Box-Cox transformations; however, their method cannot be used with categorical transformations.

Statistical context
Model
Let us consider a Generalized Linear Model with p explanatory variables [17], where the Y_i (1 ≤ i ≤ n) are independently distributed with a probability density function in the exponential family, where a(·), b(·), and c(·) are known and differentiable functions. b(·) is three times differentiable, and its first derivative b'(·) is invertible. The parameters (θ_i, φ) belong to a subset of R^2, where θ_i is the canonical parameter and φ is the dispersion parameter. In this context, we wished to test the association between the outcome Y_i and the explanatory variable of interest X_i, adjusted for a vector of explanatory variables Z_i. The form of the effect of X_i is unknown, so we may consider K transformations of this variable, X_i(k) = g_k(X_i), with k = 1, . . . , K. For example, if we transform the continuous variable into m_k classes, m_k − 1 dummy variables are defined from the function g_k(·). Different numbers of levels m_k of the categorical transformation are possible. The model for a transformation k can then be obtained by modeling the canonical parameter θ_i as θ_i(k) = γ^T Z_i + β_k^T X_i(k), where γ = (γ_0, . . . , γ_{p−1})^T is the vector of coefficients of the adjustment variables and β_k is the (m_k − 1)-vector of coefficients associated with a categorical transformation k of the variable X_i. For dichotomous or Box-Cox transformations, β_k reduces to a scalar (β_k ∈ R). The hypothesis of the test for the transformation k is H_0: "β_k = 0_{m_k − 1}". Thus all the null hypotheses are the same, and we denote them by H_0.

Maximum test and minimum P-value procedures
For each coding k of the variable X_i, a test statistic T_k is computed for the nullity of the vector β_k. We then have a vector of test statistics T = (T_1, . . . , T_K) for the same null hypothesis (no effect of the risk factor of interest). In the context of dichotomous and Box-Cox transformations, each test statistic T_k has, asymptotically, a standard normal distribution. Thus, rejecting the null hypothesis if one of the absolute values of the tests T_k is larger than a critical value c_α is equivalent to rejecting the null hypothesis if T_max = max(|T_1|, . . . , |T_K|) is larger than c_α. To cope with the multiplicity problem, Liquet and Commenges [13,14] proposed that the probability of Type-I error for the statistic T_max under the null hypothesis be computed as P(T_max > t_max | H_0) (equation (2)), where t_max is the realization of T_max.
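For reference, the exponential-family density invoked in the Model subsection above has the usual GLM form, consistent with the functions a(·), b(·), and c(·) named there (standard form; the article's exact notation may differ):

```latex
f(y_i;\,\theta_i,\phi) \;=\; \exp\!\left\{ \frac{y_i\,\theta_i - b(\theta_i)}{a(\phi)} \;+\; c(y_i,\phi) \right\},
\qquad \text{with } \mathbb{E}(Y_i) = b'(\theta_i).
```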
An equivalent approach is to use a procedure based on the individual p value of each test T_k, noted P_k = P(|T_k| > |t_k|) (where t_k is the realization of T_k). The minimum of the K realized p values corresponds to the test k which obtains the highest realization in absolute value (i.e., the k such that t_max = |t_k|). Then, we have P(P_min ≤ p_min | H_0) (equation (3)), where P_min = min(P_1, . . . , P_K) and p_min is the realization of P_min. The advantage of using a procedure based on the p value is the possibility of combining statistical tests which do not follow the same distribution. In the current context, we will combine dichotomous, Box-Cox, and categorical transformations with more than two levels.

Score test
We briefly present the score test used for all of the K transformations, for which the same null hypothesis is tested (i.e., H_0: "β_k = 0_{m_k − 1}", given by θ_i(k) = γ^T Z_i, with different alternatives). We present the main results obtained by Liquet and Commenges [14] for the Generalized Linear Model in the context of dichotomous and Box-Cox transformations, and then consider the score test for categorical transformations.

Dichotomous and Box-Cox transformations
In the context of dichotomous and Box-Cox transformations, the score test used for testing the effect of the transformed variable (β_k = 0 with β_k ∈ R) follows asymptotically a standard normal distribution. The correlation between the different tests has been defined by Liquet and Commenges [14]. Asymptotically, the joint distribution of T_1, . . . , T_K is a multivariate normal distribution with zero mean and a certain covariance matrix. Thus, Liquet and Commenges [14] propose calculating the p value (associated with the test T_max) defined in (2) using numerical integration [18]. They called their method the "exact method".

Categorical transformations
In the context of a categorical transformation into m_k classes, the score test testing H_0: "β_k = 0_{m_k − 1}" (with β_k ∈ R^{m_k − 1}) follows asymptotically a χ² distribution with m_k − 1 degrees of freedom and is defined from U_k and I_k, which are respectively the score function and the Fisher information matrix under the null hypothesis [19]. To compute the p value defined in (3), it is necessary to know the joint distribution of T = (T_1, . . . , T_K). Some studies have defined the distribution of the multivariate χ² [20,21]. However, even though the correlation between the different tests could easily be estimated, it has not been possible, as far as we know, to obtain the joint distribution of T = (T_1, . . . , T_K). To overcome this problem, we propose approximating the p value (defined in (3) for the minimum p value procedure) using a resampling method (defined in the next section), which also accounts for the correlation between the test statistics.

Significance level correction
Bonferroni method
One of the most common corrections in multiple testing is the Bonferroni method. It has been described by several authors in various applications [7,11,22]. It allows an upper bound of the significance level of the minimum p value procedure to be computed from K, the number of tests. This method is very simple and does not require any assumption about the correlation between the different tests. It can therefore be applied directly to the different possible codings of an explanatory variable.
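Two of the expressions referenced in this section have well-known standard forms, reproduced here under the usual conventions (these are the textbook versions; the article's exact expressions may differ). The first is the score statistic for the categorical case; the second is the Bonferroni upper bound on the significance level of the minimum p value procedure.

```latex
% Score statistic for testing H_0: \beta_k = 0 (categorical case, m_k - 1 d.f.):
T_k \;=\; U_k^{\mathsf T}\, I_k^{-1}\, U_k \;\xrightarrow{\;d\;}\; \chi^2_{\,m_k - 1} \quad\text{under } H_0 .

% Bonferroni upper bound for the minimum p value procedure:
\tilde{p} \;=\; \min\bigl(1,\; K \cdot p_{\min}\bigr) \;\ge\; P\bigl(P_{\min} \le p_{\min} \mid H_0\bigr).
```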
However, this only provides an upper bound on the p value, which may be very conservative if the correlations between tests are high and the number of transformations is large.

Resampling-based methods
We propose the use of resampling-based methods [23,24] with the aim of building a reference distribution for the test statistics. These procedures have the advantage of taking into account the dependence of the test statistics when evaluating the correct significance level of the minimum p value procedure (or the maximum test procedure). The principle of resampling procedures is to define new samples from the probability measure defined under H_0: "β_k = 0_{m_k − 1}".

Permutation test procedure
Permutation methods can be used to construct tests which control the Type-I error rate [25]. In our context, the algorithm of the permutation procedure is defined as follows:
1. Apply the minimum p value procedure to the original data for the K transformations considered. We denote by p_min the realization of the minimum p value;
2. Since, under H_0, the variable X_i has no effect on the response variable, a new dataset is generated by permuting the X_i variable in the initial dataset;
3. Generate B new datasets s*_b, b = 1, . . . , B, by repeating step 2 B times;
4. For each new dataset, apply the minimum p value procedure for the transformations under consideration. We denote by p*_min,b the smallest p value for each new dataset;
5. The p value defined in (3) is then approximated by (1/B) Σ_{b=1..B} I{p*_min,b ≤ p_min}, where I{·} is an indicator function.
However, it is important to note that exchangeability needs to be satisfied [25][26][27][28][29][30]. This condition is much more restrictive than it appears at first sight. In fact, Commenges [29] and Commenges and Liquet [25] showed that the permutation test approach for the score test is robust if the model has only one intercept under the null hypothesis, or if the X_i are independent of the Z_i for all i, in the context of a linear model and the proportional hazards model. This issue applies in our context. Thus, we investigated the robustness of the permutation method when the exchangeability assumption is violated.

Parametric bootstrap procedure
In 2000, Good [31] explained: "Permutations test hypotheses concerning distributions; bootstraps test hypotheses concerning parameters. As a result, the bootstrap implies less stringent assumptions". Therefore, an alternative is to use a resampling method based on the bootstrap [32], which gives us an asymptotic reference distribution. This procedure can be defined by the following algorithm:
1. Apply the minimum p value procedure to the original data for the K transformations being considered. We denote by p_min the realization of the minimum p value;
2. Fit the model under the null hypothesis, using the observed data, and obtain γ̂, the maximum likelihood estimate (MLE) of γ;
3. Generate a new outcome Y*_i for each subject from the probability measure defined under H_0 (for a logistic model, for example, by drawing Y*_i from a Bernoulli distribution with success probability given by the fitted null model). Repeat this for all the subjects to obtain a new sample;
4. Generate B new datasets s*_b, b = 1, . . . , B, by repeating step 3 B times;
5. For each new dataset, apply the minimum p value procedure for the transformations considered. We denote by p*_min,b the smallest p value for each new dataset;
6. The p value defined in (3) is then approximated by (1/B) Σ_{b=1..B} I{p*_min,b ≤ p_min}.

Simulation study
The aim of this simulation study was to assess the performance of the two resampling methods in correcting the significance level.
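Before the simulation scenarios, the permutation procedure just described can be sketched in a few lines of Python. This is a minimal, hypothetical illustration only: the helper names are invented, a Wald test on the coded terms stands in for the score test used in the paper, and the published CPMCGLM R package remains the reference implementation.

```python
import numpy as np
import statsmodels.api as sm

def min_pvalue(y, Z, codings):
    """Smallest p value over candidate codings of X for a logistic model.
    Each element of `codings` is an (n, m_k - 1) array of dummy variables
    (a dichotomous coding is a single column)."""
    pvals = []
    for Xk in codings:
        design = sm.add_constant(np.column_stack([Z, Xk]))
        fit = sm.GLM(y, design, family=sm.families.Binomial()).fit()
        # Joint Wald test that every coefficient of the coded variable is zero.
        q = Xk.shape[1]
        constraint = np.zeros((q, design.shape[1]))
        constraint[:, -q:] = np.eye(q)
        pvals.append(float(fit.wald_test(constraint).pvalue))
    return min(pvals)

def permutation_correction(y, Z, X, make_codings, B=1000, seed=None):
    """Permutation estimate of the corrected p value for the minimum p value procedure.
    `make_codings` is a user-supplied function returning the list of candidate codings."""
    rng = np.random.default_rng(seed)
    p_min = min_pvalue(y, Z, make_codings(X))          # step 1
    exceed = sum(                                       # steps 2-5
        min_pvalue(y, Z, make_codings(rng.permutation(X))) <= p_min
        for _ in range(B)
    )
    return exceed / B
```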
Three different scenarios of transformations were investigated: dichotomous transformations, categorical transformations with three classes, and categorical transformations with different numbers of classes. To shorten the simulation study section we have not presented the results for the Box-Cox transformations. For each simulation case, the control of the Type-I error and the power of the developed methods were evaluated. For all simulations, the data come from a logistic model (where a(φ) = 1, b(θ i ) = log(1 + e θ i ), and μ i = E(Y i ) = e θ i /(1 + e θ i )) consisting of two explanatory variables: Z, an adjustment variable, and X, the variable of interest. We considered the following models: where Z i and X i are independent and were generated according to a standard normal distribution and the vector X i (k) was a transformation of a continuous variable X i . The sample size was set to be 100. We used 1000 replications for each simulation and 1000 samples for the resampling methods. Dichotomous transformations We only considered dichotomous transformations to explore a shape effect of the variable of interest. To obtain the best transformation, several cutpoints c k may be tested. When epidemiological references are not available, a strategy based on the quantile of the continuous variable is most commonly applied. In this simulation we used the median for one dichotomous transformation. For two dichotomous transformations we used the first tercile as the first cutpoint, and the second tercile as the second cutpoint, and so on. This strategy is summarized in Table 1. Firstly, we investigated the Type-I error rate. For a replication, the rejection criterion of the null hypothesis (β k = 0) was a p value less than 0.05. Thus, for a simulation of 1 000 replications, the empirical Type-I error rate was the proportion of tests where the p value was less than 0.05. Figure 1(a) shows the evolution of the Type-I error rate for dichotomous transformations. The naive method, without correction of the multiple testing, increases the Type-I error rate with the number of codings tried. For ten codings this error rate reached 0.27. The error rate calculated by the Bonferroni method decreased with the number of cutpoints. This correction was therefore too conservative whereas the exact method and resampling methods gave a Type-I error rate close to the nominal 0.05 value. When information on the shape of the effect of the explanatory variable was unknown we investigated the power of the methods applied above. We studied the power for a threshold effect model with a cutpoint value at the first tercile. Figure 1(b) gives the power as a function of the number of cutpoints tried. The power of the exact and resampling methods are quite similar to one another, and higher than the Bonferroni method. The difference between these methods and Bonferroni method increases with the number of cutpoints. We also observed that the power was highest at two cutpoints (two transformations). This result, was in fact, expected since we used the first and second terciles respectively as cutpoints for each dichotomous transformation. Power increased again when trying five and eight codings due to the fact that one of these codings corresponded to the first tercile. To conclude, the simulation study with dichotomous transformations showed that the resampling methods provide similar results for the Type-I error rate control and the power as those seen with the exact method. 
Categorical transformations with same number of classes We considered here only categorical transformations with three classes. In this situation, the choices of the two cutpoints (noted c 1 k and c 2 k ) defining the categorical variables into three classes are also subjective. For this simulation study, our strategy was to attempt to find the most favorable transformation into three classes. This consisted of using the tercile of the variable for one transformation with two cutpoints (c 1 1 = q 1/3 and c 2 1 = q 2/3 ); for two transformations we add to the previous choice a transformation with the first quartile and the third quartile for the two cutpoints (c 1 2 = q 1/4 and c 2 2 = q 3/4 ). The global strategy until we obtain 10 transformations in three classes is presented in Table 2. We investigated the Type-I error rate. Figure 2(a) shows the evolution of the Type-I error rate for categorical transformations in three classes. The results are similar to those we observed for dichotomous transformations. The Bonferroni correction was still too conservative, while resampling methods gave a Type-I error rate close to the nominal 0.05 value. Next we considered the power of the different methods when the simulated model was specified with a categorical transformation of the continuous variable in three classes defined by cutpoints at the first and third quartile. The two resampling methods gave similar results with a higher power than the Bonferroni method (see Figure 2(b)). The power was highest for two transformations. This result was also expected because, with the strategy presented in Table 2, the transformation into three classes with cutpoints at the first and third quartile is used. Various categorical transformations In this last simulation, we presented a more realistic situation where different kinds of transformations were used to investigate the effect of the variable of interest. We proposed trying different categorical transformations and varying the number of classes. The most natural method is to use a dichotomous transformation at the median for one transformation. For two transformations, we added the previous coding and a categorical transformation in Table 2 three classes based on the tercile. For three transformations, we added the two previous codings and a categorical transformation in four classes based on the quartile, and so on. The strategy proposed in this simulation is presented in Table 3. The results for the Type-I error rate were similar to the previous simulation case (not shown here). We then studied the power of the different methods when the simulated model is specified with a categorical transformation of the continuous variable in five classes defined by cutpoints at the quintile. We can see in Figure 3, that, in this situation, the parametric bootstrap method seems slightly more powerful than the permutation method. The resampling methods were also more powerful than the Bonferroni method. Finally, as expected, we can see that the power was highest for four transformations, where one of the transformations used corresponded to a categorical transformation with quintiles as cutpoints. Robustness of resampling methods We investigated the robustness of the resampling methods when the exchangeability assumption is violated. The data came from the model defined in (4) with two dependent variables X i (k) and Z i . 
The dependency between X_i(k) and Z_i (formalized by the correlation ratio η²) was specified through a model in which Z_i depends linearly, with coefficient β*, on X_i(k), where X_i(k) is the binary coding of the X_i variable with a cutpoint at the median. The coefficient β* was computed according to η² and the variance of the X_i(k) variable. We tested three different binary codings, with cutpoints at the first, the second, and the third tercile. This strategy was applied for various values of the correlation ratio (η²) from 0 to 0.6. The robustness of the permutation method when the exchangeability assumption is violated was evaluated with respect to the results of the exact method. For different correlation ratios (η²) we evaluated the control of the Type-I error, the power, the Mean Square Error (MSE) of the estimated p value (the p value from the exact method was used as a reference), and the rate of good decisions (same decision as for the exact method). These results are presented in Figure 4 and show the good behavior of the permutation method, since the Type-I error is controlled at the 0.05 level, the power is the same for all the methods, the rate of good decisions is always greater than 0.97, and the MSE is very low. Moreover, the distributions of the estimated p values are quite similar for the different methods (not shown).

Example: revisiting the PAQUID cohort example
In order to find the real association between the two variables of interest in the example described at the end of the Background section, we applied our newly developed approach, which combines different kinds of transformations. Liquet and Commenges [14] had proposed seven dichotomous and five Box-Cox transformations. However, their method did not allow for categorical transformations. For this application, we proposed adding to the seven dichotomous and five Box-Cox transformations four codings in three classes and four codings in four classes. The best transformation appeared to be the dichotomous transformation of HDL-cholesterol with a cutpoint at the third quartile, as already found by Bonarek et al. [16]. The Bonferroni correction gave a p value equal to 0.140, thus not significant at an α level of 0.05. The p value given by both resampling-based methods is 0.038. To conclude, it is important to choose a powerful method of correction, because in this context the p value with no correction given by Bonarek et al. [16] was very optimistic (0.007), and the Bonferroni correction was very conservative, yielding an incorrect conclusion. The proposed approach based on the resampling procedure gave a result which was still significant and more realistic than the uncorrected p value.

Discussion
In this paper, we have considered the problem of correcting the significance level for a series of several codings of an explanatory variable in a Generalized Linear Model with several adjustment variables. The methods developed, based on resampling, enable us to consider categorical transformations, which are more flexible for exploring the unknown shape of the effect between an explanatory and a dependent variable. The simulation studies presented above show, firstly, that the resampling methods provide results for Type-I error rate control and power similar to those found with the exact method proposed by Liquet and Commenges [14] for dichotomous and Box-Cox transformations. Secondly, in the situation of categorical transformations, these simulations demonstrate the good performance of our proposed approaches.
Finally, we observed the robustness of the estimation of the p value by the resampling methods. These methods can easily be generalized to other models, such as the proportional hazards model, potentially extending the work of Hashemi and Commenges [15] in the same context.

Conclusion
To conclude, the methods developed, based on resampling, demonstrate good performance, and we have implemented the different methods and different strategies of coding in an R package called CPMCGLM (for Correction of the P value after Multiple Coding in a Generalized Linear Model).
6,561.8
2013-06-08T00:00:00.000
[ "Mathematics" ]
A Comprehensive Literature Search of Digital Health Technology Use in Neurological Conditions: Review of Digital Tools to Promote Self-management and Support Background The use of digital health technology to promote and deliver postdiagnostic care in neurological conditions is becoming increasingly common. However, the range of digital tools available across different neurological conditions and how they facilitate self-management are unclear. Objective This review aims to identify digital tools that promote self-management in neurological conditions and to investigate their underlying functionality and salient clinical outcomes. Methods We conducted a search of 6 databases (ie, CINAHL, EMBASE, MEDLINE, PsycINFO, Web of Science, and the Cochrane Review) using free text and equivalent database-controlled vocabulary terms. Results We identified 27 published articles reporting 17 self-management digital tools. Multiple sclerosis (MS) had the highest number of digital tools followed by epilepsy, stroke, and headache and migraine with a similar number, and then pain. The majority were aimed at patients with a minority for carers. There were 5 broad categories of functionality promoting self-management: (1) knowledge and understanding; (2) behavior modification; (3) self-management support; (4) facilitating communication; and (5) recording condition characteristics. Salient clinical outcomes included improvements in self-management, self-efficacy, coping, depression, and fatigue. Conclusions There now exist numerous digital tools to support user self-management, yet relatively few are described in the literature. More research is needed to investigate their use, effectiveness, and sustainability, as well as how this interacts with increasing disability, and their integration within formal neurological care environments. Background Neurological conditions present a human and economic challenge worldwide. How best to manage them remains a perennial issue. Digital health technology offers a potential solution. It would seem plausible that digital technology could play some role in supporting patients in self-management or health care professionals in the delivery of care. However, the digital health market contains a bewildering variety of websites, online platforms, and apps, some with empirical support, making it difficult to make sense of what is available, and their potential benefits. The objective of this paper was to conduct a literature search of the research on digital health technology in the self-management of neurological conditions, and to investigate what functions the technology provides and what benefits to users have been reported. Neurological Conditions Neurological conditions refer to a group of medical disorders often resulting from disease or physical damage that affect the brain, or central or peripheral nervous systems. They can negatively impact patient mental health [1][2][3][4], psychological well-being [3], life satisfaction [3], health-related quality of life [5][6][7][8], cognitive functioning [3], and social support [9]. Worldwide, they are identified as significant predictors of disability and death [10,11]. They can also be detrimental to caregivers in terms of their mental health, quality of life, and caregiver burden [12][13][14]. As well as a human burden there is also an economic one. In the United Kingdom, statistics from the Neurological Alliance [15] indicate 16.5 million people in England have a neurological disorder. 
This statistic is equivalent to 1 in 6 of the population, and a prevalence believed to be increasing [15]. It is estimated that the National Health Service (NHS) cost of addressing neurological disorders is around £4.4 (US $3.0) billion [15], and may account for up to 14% of social care spending [5]. Many neurological conditions will be long term and incurable, and have symptoms that produce persistent or sporadic difficulties. Their onset may be sudden or gradual and their trajectories are marked by variance in stability or progression. Treatment and management may vary in complexity and include a combination of medication, rehabilitation, information, and support, and the involvement of a range of health care, allied health, and social care professionals [5]. Neurological conditions are generally managed in the community and there is increasing recognition of the importance of individuals self-managing their conditions [5,16]. Recent qualitative research by Kilinc et al [16] demonstrated the complex psychological and behavioral processes underlying self-management in neurological patients. The involvement of technology is one approach to supporting such efforts [5], while research by Gandy et al [3] indicated that there is interest among patients in web-based platforms to promote self-management. Digital Health Technology Digital health technology, including terms such as eHealth, mobile health (mHealth), and digital tools, refers to the utilization, or application, of internet and smart-based technology to the promotion of health or health care [17]. Innovative technologies such as wearable devices, smartphone apps, internet-based self-help platforms, and health record databases have the ability to record, store, or present health-related data. This information can then be used to enhance the understanding, management, or monitoring of medical conditions by patients, carers, or health care professionals. A range of digital technologies have already been applied to several individual neurological conditions such as epilepsy [18], MS [19], headache and migraine [20], Parkinson disease [21], and acquired brain injury [22]. There appears use and interest, at least in the short-term, and some evidence, with regard to web-based platforms, of a potential beneficial influence on mental health and quality of life [23]. However, it remains unclear how digital technologies become normalized within health behaviors and systems of care delivery in the medium-to-longer term. Furthermore, there may be significant patient and care-provider barriers that need to be considered [18,24,25]. A limitation of the present literature is that recent reviews and commentaries have tended to focus on individual neurological conditions (eg, [18][19][20][21][26][27][28][29][30]). There is an absence of reviews presenting digital tools across conditions that makes it difficult for clinicians and researchers, especially those new to digital health, to make comparisons, evaluations, and recommendations. It would be advantageous to know what digital tools are available to different patient groups, the underlying functionalities that support or promote self-management, and any salient psychosocial or clinical benefits for users identified. Aims The literature search had several interrelated aims. First, we aimed to obtain an overview of the research on the use of digital health technology in the self-management of neurological conditions. 
Second, we aimed to identify the different types of digital health tools used by patients, carers, and health care professionals. Third, we aimed to develop an understanding of the underlying functionalities that allow digital health tools to support or promote self-management. Finally, we aimed to identify any salient outcomes, in terms of psychosocial or clinical benefits for users, associated with digital health technology use. Literature Search Databases and Search Terms We conducted a search of 6 databases: CINAHL, EMBASE, MEDLINE, PsycINFO, Web of Science, and the Cochrane Review. The searches were conducted using free text and equivalent database-controlled vocabulary terms. Search terms used were iteratively generated and informed by our interest in investigating digital health technology use in neurological conditions and neurodegenerative diseases. Multimedia Appendix 1 provides an example of the search terms. Within each database, search terms were grouped into 2 categories: condition terms (ie, neurological conditions and neurodegenerative diseases) and digital technology terms. Search terms were combined using standard AND/OR commands. Where possible, filters were applied within databases to restrict searches to human participants, adults, and beginning from January 2000 onward. Searches spanned from January 2000 to February 2020, and were rerun in January 2021. Inclusion Criteria The inclusion criteria for articles were as follows: Research conducted with human participants and published in English. Studies that had a focus on the use of digital health technology to help support self-management in patients or caregivers living with a neurological condition or neurodegenerative disease. Self-management was understood to refer to activities used to control a medical condition or maintain optimal health [31]. The self-management health component had to be delivered digitally, for example, via a computer, mobile/tablet app, or over the internet. Exclusion Criteria Articles were excluded if they were not conducted with human participants or if they focused on artificial intelligence, biochemistry, computational modeling, diagnosis/assessment, cognitive stimulation/training, epidemiology, genetics, neuroimaging, neuropathology, physiotherapy, rehabilitation, scale development/validation, sensor technology, treatment, or interventions delivered by telephone. These areas were excluded to help narrow down the focus of digital health technology involved in self-management. Literature reviews, book chapters, study protocols, conference presentations, poster presentations, and unpublished theses were excluded. Search Methodology Multimedia Appendix 2 shows that the overall search resulted in 26,572 articles being identified. Articles were downloaded into an Endnote library and duplicates were removed. The remaining articles were then exported to Rayyan reference management software, which allowed for the collaborative screening of articles by 2 reviewers. Articles were screened by reading the title and abstract of each article and applying the inclusion and exclusion criteria. Any articles where reviewers had conflicting opinions were discussed at the end of this process until consensus was met on inclusion or exclusion for full-text screening. Following title and abstract screening, 96 articles moved forward to full-text screening. 
Microsoft Excel was used to list the 96 articles and extract salient information related to each article's aims, methodology, results, and use of digital health technology. Full-text screening resulted in 45 articles being excluded for not meeting the inclusion criteria. Rerunning the database searches and using keyword searches in Google Scholar resulted in 2 further articles being included. This resulted in a final total of 53 articles. A total of 27 articles focused on digital health technology use in neurological conditions and 26 on digital health technology use in dementia. The present paper only discusses the neurological condition articles. Study Methodological Quality and Value of Findings We used the Critical Appraisal Skills Programme (CASP) Appraisal Tool to evaluate the methodological quality and value of the findings reported for each of the 27 included articles. All of the articles were considered to be of satisfactory methodological quality and produced findings of value. No cut-off scores were used and no articles were excluded as a result of using the tool. Analysis The analysis is reported in 4 parts. First, we describe the contextual background of the articles. Using a data extraction table, we extracted from each article information about its nationality, the type of neurological condition studied, and methodological details (eg, participants, designs, outcome measures). Second, we describe the digital health tools identified. From each article, we extracted information about the digital tool reported, including its name, the neurological condition it addressed, the format of the technology, its users, and its broad aims. Third, we describe the underlying functionalities of the digital health tools that appeared to promote or support self-management. This information was obtained by extracting from each article the description of how each digital tool functioned. By iteratively reading through descriptions several different categories of function could be identified across the articles. These categories were then grouped together based on the similarity of functions to create 5 overarching categories that represented the main functionalities provided by the digital tools. Finally, we describe salient psychosocial and clinical benefits associated with the digital health tools. This information was obtained by extracting the main outcomes reported that reflected psychosocial or clinical benefits to users. Preliminary data analysis, findings, and interpretations from the review were presented at internal research group meetings for sense checking and feedback. Contextual Background The search identified 27 articles. These articles came from 9 different countries. The majority of articles were from the United States with 15. This was followed by 4 articles from Holland and 2 from Australia. There was 1 article each from Belgium, Germany, New Zealand, and Turkey. One additional article reported on a sample including participants from the UK and Canada, and 1 with participants from the UK and New Zealand. A total of 10 articles focused on MS, 6 on epilepsy, 6 on stroke, 4 on headache or migraine, and 1 on pain. Two articles with a focus on MS also included participants with Parkinson disease and postpolio syndrome. The majority of articles centered on patients (n=21), with a minority on carers (n=4). One article included patients and carers, and 1 patients and health care professionals. 
The majority of articles reported studies using quantitative or mixed quantitative-qualitative designs. Only 2 articles reported qualitative studies. Across the articles a range of measurements were employed, including widely used questionnaire instruments (eg, on mental health, fatigue), process evaluation metrics (eg, usability, satisfaction), digital technology system metrics or stored data (eg, recorded usage of a digital tool), and open-ended questions (eg, on subjective experience). We identified approximately 100 questionnaire instruments, including instruments used more than once. When these instruments were broadly grouped together based on the similarity of construct being measured, 16 measurement domains could be identified (Table 1). Among the most prevalent areas measured were mental health, quality of life, fatigue/physical activity, disability, and self-efficacy. Table 2 shows that 17 different digital tools were reported across the articles. A number of them, for example, PatientsLikeMe, WebEase, Mymigraine, and Caring-Web, were reported by more than 1 article. The majority of digital tools were website/web-based platforms and a minority were smartphone apps. Digital Tools and Aims MS had the highest number of reported digital tools with 8, and this was followed by epilepsy and stroke both with 3, and headache and migraine with 2. The platform painACTION was reported in 2 different conditions-headache and migraine, and pain. The majority of digital tools focused on patients, while only 2 platforms, both related to stroke, focused on carers. In the MS group, there were tools that specifically targeted fatigue and depression as well as personal health record management. In epilepsy, there were tools that involved collaborative self-management with a health care professional and information sharing within a health-related social network. For stroke, provision of stroke-related education was offered to carers and patients. In headache and migraine, tools provided training to promote self-management potential, and in pain there was a digital tool that addressed cognitive and emotional aspects of pain self-management. Knowledge and Understanding Around two-thirds of the digital tools had functionality involving increasing neurological condition knowledge and understanding. This category included tools providing psychoeducational/ self-help information and cognitive behavior therapy guidance. Users could engage with learning-orientated "modules" or "lessons," often presented using interactive multimedia formats, and in some cases the completion of "homework" activities [32][33][34][35][36][37][38]. Around half of the digital tools provided some form of psychoeducational/self-help information. This support could include information on medical or psychosocial issues, coping and managing, or healthy living, and in some cases internet links to related resources [32,33,35,[38][39][40][41]. In the case of stroke carers, there was comprehensive information on caring for a patient with stroke at home [41,42]. Approximately one-third of digital tools drew on or included a cognitive behavior therapy component. This function involved engagement with learning activities that encouraged users to address challenging condition-related cognitions, behaviors, lifestyles, or expectations; increase self-awareness or self-understanding; and learn new skills and their application [23,[32][33][34]36,38,43,44]. 
Behavior Modification Around one-third of digital tools aimed to prompt behavior modification and included a focus on stimulating behavior change and providing coaching or motivation. A small number of tools addressed behavior change using activities such as assessment and evaluation of behavior, establishing behavior objectives, and utilizing "action plans" [39,[45][46][47]. Selected digital tools also had the ability to provide user feedback, "motivational" messaging, advice, reminders, or encouragement [35,39,[48][49][50]. Self-management Support Overlapping with behavior modification were digital tools with the function of facilitating users in psychological or tangible self-management. This function assisted users in contemplating their own or preferable self-management, in some cases bolstered by feedback, and encouraged consideration of processes or targets to aid enhancement [39,[45][46][47]49]. Tangible self-management was offered by the PatientSite platform that permitted users to access aspects of their own health record including their medical record, test results, health care appointments, and medication prescriptions [40]. Facilitating Communication Approximately half of the digital tools facilitated communication either between users and health care professionals or peer-to-peer. Communication was often asynchronous, could be condition or intervention related, and used various formats, for example, email or discussion groups [38,39,44,46]. User communication with health care professionals could involve sharing health information, making requests, or asking questions [40,41,51], while health care professional communication could take the form of replies to users, supportive messages, reminders, or feedback [35,38,44]. Peer-to-peer communication could involve sharing experiences or advice [7,35,39]. Recording Condition Characteristics Around one-third of digital tools included a function for recording condition-related information that could then be "tracked," "monitored," or "shared" to enhance management or understanding [7,34,39,45,51,52]. Finally, there was a digital tool, Caring Web, that included an entertainment function, whereby users had access to amusements (eg, "jokes" and "games") and topical news features [41]. Digital Tools and Outcomes For the majority of digital tools some form of acceptability (eg, effectiveness, feasibility) was reported. This could be in the context of user responses, as a method of data collection, or in producing certain outcomes. Self-management per se was seldom measured but instead proxies were used such as self-efficacy or coping. Where condition self-management could be directly measured as in epilepsy, digital tools such as WebEase and PatientsLikeMe were associated with enhanced self-management [39,52]. Across the conditions migraine, epilepsy, and a sample including MS, Parkinson disease, and postpolio syndrome, the digital tools painACTION, WebEase, Fatigue Self-Management, PatientsLikeMe, and Mymigraine were associated with improved condition-related self-efficacy [33,35,36,39,44,46,52]. Across the conditions migraine, pain, and stroke, the digital tools Mymigraine, painACTION, and Post-Discharge Support, respectively, were associated with either increased coping or use of positive coping strategies [33,34,37,50]. Depression was a frequently measured outcome and produced mixed findings. 
Scales used to measure depression included the Beck Depression Inventory [53]; Depression, Anxiety, and Stress Scale [54]; Hospital Anxiety and Depression Scale [55]; and the Centre for Epidemiological Studies Depression scale [56]. Across the conditions MS, migraine, and pain, digital tools such as Problem Solving Therapy, painACTION, Deprexis, and Fatigue Self-Management were associated with lower depression [23,[33][34][35]43]. However, across the conditions MS and stroke, digital tools such as MS TeleCoach, Fatigue Self-Management, MSInvigor8, and Caring-Web showed no association with depression [38,48,57,58]. An outcome frequently measured in MS articles was fatigue and robust findings were identified. Measures of fatigue included the Fatigue Scale for Motor and Cognitive Functions [59] and a version of the Fatigue Impact Scale [60]. Digital tools such as MS TeleCoach, Deprexis, Fatigue Self-Management, and MSInvigor8 were associated with better fatigue scores [23,35,38,48,57]. Although quality of life was frequently measured, only the digital tool Deprexis appeared to show a positive influence [23]. Principal Findings This review provides an overview of self-management digital tools across a number of neurological conditions. The findings offer a complementary perspective to the literature on digital tool development and implementation by focusing on functionality and beneficial outcomes. Five broad categories of interrelated functions can be discerned that allow digital tools to promote self-management. Among these functions are the provision of information to increase knowledge and understanding; encouragement of positive behavior change; support in psychological and tangible self-management; facilitating communication between users and health care professionals or users in a similar situation; and the ability to record, monitor, and share condition information. The digital tools appeared modestly associated with psychosocial or clinical benefits to users. Depression was frequently measured and yet while some digital tools indicated potential for reducing depression, for others there was no association. By contrast, a number of MS digital tools demonstrated some potential in managing fatigue. Interestingly, self-management in itself was seldom measured outside of epilepsy; however, certain digital tools were associated with increased self-efficacy and use of positive coping strategies. Across the literature we found little discussion about health service adoption or endorsement of digital tools or how they fit with the formal neurological care individuals receive [32,35]. For health service adoption, functionalities and user outcomes should be compatible with existing models of care. Functionalities such as promoting knowledge and understanding, facilitating communication with health care professionals, and recording condition information may lend themselves well to health service adoption. However, the evidence of user benefits may still be too limited. Indeed, future research should test digital tools by embedding and evaluating them within clinical care pathways. As such, the digital tools reviewed may best be considered as supplementary resources to any formal neurological care being received. There was also little discussion across the literature about uptake and continued use of digital tools beyond a research context [38,39]. 
As part of analyzing articles, using the internet to conduct searches, we found it difficult to identify whether some digital tools were still in use or not. Indeed, future research could attempt to establish how many of the digital tools reported are still in use and how many have been abandoned and why (eg, changes in technology, low user uptake, cost). There are a number of methodological limitations that should be considered. We excluded articles focused on assessment, cognitive training, physiotherapy, and sensor technology and this could have influenced the findings. These articles were excluded as at an early stage of screening it was judged that these areas contributed more to diagnosis, rehabilitation, and assistive technology than self-management. We did not identify as many self-management apps as we had expected; this may have been caused by not including within our searches the brand names of any apps or app marketplaces; however, more likely, many apps exist that are simply not reported in the scientific literature. Furthermore, we did not search the gray literature for self-management apps. Future research should try to establish user preferences toward identifying the functions used most frequently, considered most useful, and that produce clinical benefits. Research should also consider whether user needs and preferences are being addressed. Prospective research could investigate the effect of medium-to-longer-term usage on user outcomes, and the effect on formal neurological care usage. Understanding the effect of integrating data from digital tools into formal clinical records, and the impact of utilizing multiple different tools simultaneously would also be worthwhile. Conclusions Digital health technology has been applied to a number of neurological conditions, yet there is a relatively limited literature on its use and usefulness in the context of self-management. It is likely that numerous other apps and websites have yet to enter the research literature. Detailed analysis and description of the self-management process is lacking as are condition-specific self-management scales, comparison of digital tools, and consideration of comparative outcomes. There appear to be modest associations with psychosocial or clinical outcomes but evaluation is needed of whether certain functionalities predict certain outcomes.
5,159
2021-07-11T00:00:00.000
[ "Medicine", "Computer Science" ]
Crystal structure of aqua(nitrato-κO)dioxido{2-[3-(pyridin-2-yl-κN)-1H-1,2,4-triazol-5-yl-κN4]phenolato-κO}uranium(VI) acetonitrile monosolvate monohydrate The UVI atom exhibits a pentagonal-bipyramidal N2O5 coordination environment. In the complex, the 1,2,4-triazole ligand is coordinated in a tridentate manner. In the title compound, [U(C13H9N4O)(NO3)O2(H2O)]·CH3CN·H2O, the UVI atom is seven-coordinated in a distorted pentagonal-bipyramidal N2O5 manner by one tridentate triazole ligand, one monodentate nitrate anion and one water molecule in the equatorial plane and by two uranyl(VI) O atoms in the axial positions. In the crystal, the UVI complex molecule is linked to the water and acetonitrile solvent molecules through N-H···N, O-H···O and O-H···N hydrogen bonds, forming a sheet structure parallel to the bc plane. The sheets are further linked by an additional O-H···O hydrogen bond, forming a three-dimensional network. Chemical context The synthesis of coordination compounds with N-donor heterocyclic ligands is one of the fastest growing areas of coordination chemistry. 1,2,4-Triazoles and their derivatives belong to this class of ligands. The presence of the 1,2,4-triazole ring in the organic ligand provides an additional site for coordination (Aromí et al., 2011). The presence of additional donor groups in the 3- and 5-positions of the triazole moiety provides a greater number of possibilities for chelation of metal ions, involving tridentate bis-chelate functions. It should be noted that UO2(2+) complexes with such types of ligands have rarely been investigated. Thus, only three uranyl complexes with 1,2,4-triazole derivatives have been characterized (Daro et al., 2001; Weng et al., 2012; Raspertova et al., 2012). As part of our continuing study of uranium coordination compounds with nitrogen-donor ligands (Raspertova et al., 2012), we report here the structure of the title compound. Structural commentary The coordination polyhedron of the UVI atom in the title complex is a distorted pentagonal bipyramid. It is coordinated in a tridentate manner by the 1,2,4-triazole ligand together with the water molecule and the monodentate nitrate anion in the equatorial plane. Two oxido ligands are placed in the axial positions (Fig. 1). The U1-O1 bond length [2.206 (3) Å] is comparable with those reported for related six-membered chelate fragments involving phenolate and N-atom donors (Sopo et al., 2008; Ahmadi et al., 2012). The U-N bond lengths [2.489 (4) and 2.658 (4) Å] are consistent with those in other pyridine-bonded uranium complexes (Amoroso et al., 1996; Gatto et al., 2004). The uranyl group is not exactly linear [O2=U1=O3 = 175.36 (14)°]. Non-linear O=U=O groups are generally found in uranyl complexes with five non-symmetrically bonding equatorial ligands. All non-hydrogen atoms of the organic ligand are coplanar within 0.01 Å. The N1-C7 and C7-N2 bond lengths of the triazole ring are equalized [1.336 (5) Å for both]. This value is longer than a Csp2=N double bond (1.276 Å) and shorter than a Csp2-N single bond (1.347 Å) (Orpen et al., 1994). It can be assumed that the structure of the triazole ring is a superposition of the two possible resonance structures shown in Fig. 2. Figure 2. Scheme showing the two possible resonance structures of the triazole ligand. Figure 3. Packing diagram of the title compound, viewed along the b axis; intermolecular hydrogen bonds are shown as dashed lines.
777
2016-01-06T00:00:00.000
[ "Chemistry" ]
A new unified stabilized mixed finite element method of the Stokes-Darcy coupled problem: Isotropic discretization In this paper we develop an a priori error analysis of a new unified mixed finite element method for the coupling of fluid flow with porous media flow in $\mathbb{R}^N$, $N\in\{2,3\}$ on isotropic meshes. Flows are governed by the Stokes and Darcy equations, respectively, and the corresponding transmission conditions are given by mass conservation, balance of normal forces, and the Beavers-Joseph-Saffman law. The approach utilizes a modification of the Darcy problem which allows us to apply a variant nonconforming Crouzeix-Raviart finite element to the whole coupled Stokes-Darcy problem. The well-posedness of the finite element scheme and its convergence analysis are derived. Finally, numerical experiments are presented, which confirm the excellent stability and accuracy of our method. into the water supply. This coupling is also important in technological applications involving filtration. We refer to the nice overview [9] and the references therein for its physical background, modeling, and standard numerical methods. One important issue in the modeling of the coupled Darcy-Stokes flow is the treatment of the interface condition, where the Stokes fluid meets the porous medium. In this paper, we only consider the so-called Beavers-Joseph-Saffman condition, which was experimentally derived by Beavers and Joseph in [4], modified by Saffman in [31], and later mathematically justified in [20,21,23,28]. It is well known that the discretization of the velocity and the pressure, for the Stokes problem, the Darcy problem and their coupling, has to be made in a compatible way in order to avoid instabilities. Since, usually, stable elements for the free fluid flow cannot be successfully applied to the porous medium flow, most of the finite element formulations developed for the Stokes-Darcy coupled problem are based on appropriate combinations of stable elements for the Stokes equations with stable elements for the Darcy equations. In [1-3, 6, 12, 13, 15, 18-20, 22-27, 29, 30, 32-34], and in the references therein, we can find a large list of contributions devoted to the numerical approximation of the solution of this interaction problem, including conforming and nonconforming methods. Many papers consider different finite element spaces in each flow region (see, for example, [8,13,14] and the references therein). In contrast to this, other articles use the same finite element spaces in both regions by, in general, introducing some penalizing terms (see, for example, [2,27,30] and the references therein). In [2], a conforming unified finite element has been proposed for the modified coupled Stokes-Darcy problem in a plane domain, which admits a simple and straightforward implementation. The authors apply the classical Mini-element to the whole coupled Stokes-Darcy problem. An a priori error analysis is performed, with some numerical tests confirming the convergence rates. In this article, we propose a modification of the Darcy problem which allows us to apply a variant nonconforming finite element to the whole coupled Stokes-Darcy problem. We use a variant of the nonconforming Crouzeix-Raviart finite element method, which has several advantages, for the velocities, and piecewise constants for the pressures, in both the Stokes and Darcy regions, and apply a stabilization term penalizing the jumps over the element edges of the piecewise continuous velocities.
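The explicit formula for this jump-penalization term appears later in the paper, but its displayed form does not survive in this text. As a rough orientation only, for Crouzeix-Raviart velocities such a stabilization typically takes the edge/face-wise form

\[
J(\mathbf{u}_h,\mathbf{v}_h)\;=\;\sum_{E\in\mathcal{E}_h}\frac{1}{h_E}\int_E [\mathbf{u}_h]_E\cdot[\mathbf{v}_h]_E\,\mathrm{d}s,
\]

where [·]_E denotes the jump across the edge/face E and h_E its length or diameter; the precise weighting and the subset of edges actually penalized by the author may differ from this sketch.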
We prove that the formulation satisfies the discrete inf-sup conditions, obtaining as a result optimal accuracy with respect to solution regularity. Numerical experiments are also presented, which confirm the excellent stability and optimal performance of our method. The difference between our paper and the reference [2] is that our discretization is nonconforming in both the Stokes domain and the Darcy domain (in Ω ⊂ R^N, N = 2 or 3). As a result, additional terms are included in the a priori error analysis that measure the non-conformity of the method. One essential difficulty in choosing the unified discretization is that the Stokes-side velocity is in H^1 while the Darcy-side velocity is only in H(div). Thus, we introduce a variant of the nonconforming Crouzeix-Raviart piecewise linear finite element space (larger than the space H_h used in [30]). The choice of H_h [see (28)] is more natural than the one introduced in [30] since the space H_h approximates only H(div, Ω_d) and not [H^1(Ω_d)]^N, while our a priori error analysis is only valid in this larger space. The rest of the paper is organized as follows. In Section 2 we present the modified coupled Stokes-Darcy problem in Ω ⊂ R^N, N = 2 or 3, notation and the weak formulation. Section 3 is devoted to the finite element discretization and the error estimation. Finally, in Section 4, we present the results of numerical experiments to verify the predicted rates of convergence. Each interface and boundary is assumed to be polygonal (N = 2) or polyhedral (N = 3). We denote by n_s (resp. n_d) the unit outward normal vector along ∂Ω_s (resp. ∂Ω_d). Note that on the interface Γ_I, we have n_s = −n_d. Figures 1 and 2 give a schematic representation of the geometry. For any function v defined in Ω, since its restrictions to Ω_s and to Ω_d could play different mathematical roles (for instance their traces on Γ_I), we will set v_s = v|_{Ω_s} and v_d = v|_{Ω_d}. In Ω, we denote by u the fluid velocity and by p the pressure. The motion of the fluid in Ω_s is described by the Stokes equations while in the porous medium Ω_d, by Darcy's law. Here, µ > 0 is the fluid viscosity, D the deformation rate tensor, and K a symmetric and uniformly positive definite tensor representing the rock permeability, satisfying uniform bounds with constants 0 < K_* ≤ K^* < +∞; f ∈ [L^2(Ω)]^N is a term related to body forces and g ∈ L^2(Ω) a source or sink term satisfying the compatibility condition ∫_Ω g(x) dx = 0. Finally, we consider interface conditions (3)-(5) on Γ_I: Eq. (3) represents mass conservation, Eq. (4) the balance of normal forces, and Eq. (5) the Beavers-Joseph-Saffman condition. Moreover, {τ_j}_{j=1,...,N−1} denotes an orthonormal system of tangent vectors on Γ_I, κ_j = τ_j · K · τ_j, and α_1 is a parameter determined by experimental evidence. Eqs. (1) to (5) constitute the model of the coupled Stokes and Darcy flow problem that we will study below. 2.2. New weak formulation. We begin this subsection by introducing some useful notation. If W is a bounded domain of R^N and m is a non-negative integer, the Sobolev space H^m(W) = W^{m,2}(W) is defined in the usual way with the usual norm ‖·‖_{m,W} and semi-norm |·|_{m,W}. In particular, H^0(W) = L^2(W) and we write ‖·‖_W for ‖·‖_{0,W}. Similarly, we denote by (·, ·)_W the L^2(W), [L^2(W)]^N or [L^2(W)]^{N×N} inner product. For shortness, if W is equal to Ω, we will drop the index Ω, while for any m ≥ 0, ‖·‖_{m,l} = ‖·‖_{m,Ω_l}, |·|_{m,l} = |·|_{m,Ω_l} and (·, ·)_l = (·, ·)_{Ω_l}, for l = s, d.
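The displayed equations (1)-(5) referred to above are not reproduced in this text. For orientation only, a standard form of the coupled system consistent with the surrounding description is sketched below, with D(u) := (∇u + ∇uᵀ)/2; the exact scaling of the Beavers-Joseph-Saffman condition (in particular how α_1 and κ_j enter) is an assumption and should be checked against the original paper.

\[
\text{Stokes in } \Omega_s:\quad -\nabla\cdot\bigl(2\mu\,\mathrm{D}(\mathbf{u})\bigr)+\nabla p=\mathbf{f},\qquad \nabla\cdot\mathbf{u}=g,
\]
\[
\text{Darcy in } \Omega_d:\quad \mu\,\mathbf{K}^{-1}\mathbf{u}+\nabla p=\mathbf{f},\qquad \nabla\cdot\mathbf{u}=g,
\]
\[
\text{on } \Gamma_I:\quad \mathbf{u}_s\cdot\mathbf{n}_s+\mathbf{u}_d\cdot\mathbf{n}_d=0,\qquad
p_d=-\,\mathbf{n}_s\cdot\bigl(2\mu\,\mathrm{D}(\mathbf{u}_s)-p_s\mathbf{I}\bigr)\,\mathbf{n}_s,
\]
\[
-\,\mathbf{n}_s\cdot\bigl(2\mu\,\mathrm{D}(\mathbf{u}_s)\bigr)\,\boldsymbol{\tau}_j=\frac{\alpha_1\,\mu}{\sqrt{\kappa_j}}\,\mathbf{u}_s\cdot\boldsymbol{\tau}_j,\qquad j=1,\dots,N-1.
\]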
For a connected open subset of the boundary Γ ⊂ ∂Ω_s ∪ ∂Ω_d, we write ⟨·, ·⟩_Γ for the L^2(Γ) inner product (or duality pairing); that is, for scalar-valued functions λ, η one defines ⟨λ, η⟩_Γ = ∫_Γ λη ds. We also define the special vector-valued function space H. To give the variational formulation of our coupled problem we define the following two spaces for the velocity and the pressure: H and Q = L^2_0(Ω) := {q ∈ L^2(Ω) : ∫_Ω q dx = 0}. Multiplying the first equation of (1) by a test function v ∈ H and the second one by q ∈ Q, and integrating by parts over Ω_s the terms involving div D(u) and ∇p, yields the variational form of the Stokes equations. Using the interface conditions (4) and (5) in (11), we obtain the corresponding interface terms. We apply a similar treatment to the Darcy equations by testing the first equation of (2) with a smooth function v ∈ H and the second one by q ∈ Q; integrating by parts over Ω_d the terms involving ∇p_d yields the variational form of the Darcy equations. Now, incorporating the first interface condition (3) and taking into account that the vector-valued functions in H have (weakly) continuous normal components on Γ_I (see [16, Theorem 2.5]), the mixed variational formulation of the coupled problem (1)-(5) can be stated as follows [27]: Find (u, p) ∈ H × Q that satisfies (17), where the bilinear forms a(·, ·) and b(·, ·) are defined on H × H and H × Q, respectively, and the linear forms L and G are defined accordingly. It is easy to prove that a and b are continuous, that b satisfies the continuous inf-sup condition, and that a is coercive on the null space of b. It is also clear that L and G are continuous and bounded. Then, using the classical theory of mixed methods (see, e.g., [16, Theorem and Corollary 4.1 in Chapter I]), the well-posedness of the continuous formulation (17) follows and so the following theorem holds [27]: problem (17) has a unique solution (u, p) ∈ H × Q; the differential equations and the interface condition (3) hold (the differential equations being understood in the distributional sense), while the interface conditions (4) and (5) are imposed in a weak sense. Also, we observe that the mixed variational formulation of the coupled problem (1)-(5) is equivalent to the weak formulation (2.4) (and also (2.5)) of [33], with the particularity that, in our case, for any v ∈ H, we have that ⟨(v_s − v_d) · n_s, p_s⟩_{Γ_I} = 0. Now we introduce a modification of the Darcy equation, with the aim of developing a unified discretization of the coupled problem, that is, one in which the Stokes and Darcy parts are discretized using the same finite element spaces. The modification that we apply to the Darcy equation follows the idea (same argument) given in [2]. Indeed, we observe that, taking the second equation of Darcy's problem (2), we can write an additional identity valid for any v ∈ H. Then, by adding this equation to the first equation of the variational form in (15), we get the modified form. From now on, we work with this modified variational form of the Darcy equations. In the same way as before, incorporating the interface condition (3) and remembering that, since v ∈ H, it has (weakly) continuous normal components on Γ_I, the variational form of the modified Stokes-Darcy problem can be written as (21), where the bilinear forms ã(·, ·) and b(·, ·) are defined on H × H and H × Q, respectively, and the linear forms L̃ and G are defined accordingly. Then, applying the classical theory of mixed methods, the well-posedness of the continuous formulation (21) follows: there exists a unique (u, p) ∈ H × Q solving the modified formulation (21).
In addition, there exists a positive constant C, depending on the continuous inf-sup condition constant for b, the coercivity constant for ã and the boundedness constants for ã and b, such that the usual stability estimate holds. We end this section with some notation. In 2D, the curl of a scalar function w is given as usual by curl w := (∂w/∂x_2, −∂w/∂x_1), while in 3D, the curl of a vector function w is given as usual by curl w := ∇ × w. Finally, let P_k be the space of polynomials of total degree not larger than k. In order to avoid excessive use of constants, the abbreviations x ≲ y and x ∼ y stand for x ≤ c y and c_1 x ≤ y ≤ c_2 x, respectively, with positive constants independent of x, y or T_h. A priori error analysis 3.1. Finite element discretization. In this subsection, we will use a variant of the nonconforming Crouzeix-Raviart piecewise linear finite element approximation for the velocity and a piecewise constant approximation for the pressure. Let {T_h}_{h>0} be a family of triangulations of Ω with nondegenerate elements (i.e. triangles for N = 2 and tetrahedra for N = 3). For any T ∈ T_h, we denote by h_T the diameter of T and by ρ_T the diameter of the largest ball inscribed into T, and set the mesh regularity ratio σ_h accordingly. We assume that the family of triangulations is regular, in the sense that there exists σ_0 > 0 such that σ_h ≤ σ_0 for all h > 0. We also assume that the triangulation is conforming with respect to the partition of Ω into Ω_s and Ω_d, namely each T ∈ T_h is either in Ω_s or in Ω_d (see Fig. 3). With every edge E ∈ E_h, we associate a unit vector n_E such that n_E is orthogonal to E and equals the unit exterior normal vector to ∂Ω if E ⊂ ∂Ω. For any E ∈ E_h and any piecewise continuous function ϕ, we denote by [ϕ]_E its jump across E in the direction of n_E. For i ∈ {0, · · · , N}, we set the associated quantities, where, for each i ∈ {0, · · · , N}, λ_i(T) denotes the i-th barycentric coordinate of T ∈ T_h. In the classical reference element T, the basis functions are given accordingly. Based on the above notation, we introduce a variant of the nonconforming Crouzeix-Raviart piecewise linear finite element space H_h (larger than the space H_h used in [30]) and the piecewise constant function space Q_h, where P_m(T) is the space of the restrictions to T of all polynomials of degree less than or equal to m. The space Q_h is equipped with the norm ‖·‖ while the norm on H_h will be specified later on. The choice of H_h is more natural than the one introduced in [30] since the space H_h approximates only H(div, Ω_d) and not [H^1(Ω_d)]^N, while our a priori error analysis is only valid in this larger space. Let us introduce the discrete divergence operator div_h ∈ L(H_h; Q_h) ∩ L(H; Q). Then, we can introduce two bilinear forms, and the finite element discretization of (21) is to find (u_h, p_h) ∈ H_h × Q_h satisfying (31). This is the natural discretization of the modified weak formulation (21) except that the penalizing term J(u_h, v_h) is added. This bilinear form J(·, ·) is defined by following the decomposition (32) of E_h, where h_E is the length (N = 2) or diameter (N = 3) of E. Note that each edge of E_h contributes only one jump term to J(u, v). Remark 3.1. Eq. (31) has the matrix representation in which U (resp. P) denotes the coefficients of u_h (resp. p_h) expanded with respect to a basis for H_h (resp. Q_h). We are now able to define the norm on H_h (see [30]). In the sequel, we will denote by α, β and C_i various constants independent of h. For the sake of convenience, we will define the bilinear form A_h(·, ·). From Hölder's inequality, we derive the boundedness of A_h(·, ·) and b_h(·, ·): Lemma 3.1.
(Continuity of forms) There holds: Theorem 3.1. (Coercivity of A_h) There is an α > 0 such that: and for ψ ∈ [H^1(T)]^N, we define with the semi-norm Using Young's inequality and Green's formula, we have: We have, by the Cauchy-Schwarz inequality: Also, we have: Then, Hence we deduce • Now we estimate the term By Cauchy-Schwarz, we obtain: Thus we deduce the estimate: Then, we apply Korn's discrete inequality [5] and we get: Thus Hence, We have, The estimates (44), (45), (46) and (47) lead to (37). The proof is complete. In order to verify the discrete inf-sup condition, we define the space: We also define the Crouzeix-Raviart interpolation operator r_h : W → H_h by: Lemma 3.2. The operator r_h is bounded: there is a constant C_5 > 0 depending on σ, µ and N such that Proof. The proof is similar to [30]. Then, we have the following result (Theorem 3.2). Proof. We use a Fortin argument. Since [H^1_0(Ω)]^N ⊂ W, we have v ∈ W. We take v_h = r_h v ∈ H_h and we have: = 0 (from the identities (49) and (50)). Thus, we obtain Using the system (53), we have: From (54) and (55), we deduce: The inf-sup condition holds and the proof is complete. From Theorem 3.1 and Theorem 3.2 we have the following result: There exists a unique solution (u_h, p_h) ∈ H_h × Q_h to the discrete problem (31). A convergence analysis. We now present an a priori analysis of the approximation error. The use of nonconforming finite elements leads to H_h ⊄ H, so the approximation error contains some extra consistency error terms. In fact, the abstract error estimates give the following result: Let (u, p) be the solution of (21) and (u_h, p_h) ∈ H_h × Q_h be the solution of the discrete problem (31). Then we have the estimate, where E_1h and E_2h are the consistency error terms defined by: For estimating the approximation error, we assume that the solution (u, p) of problem (21) satisfies the smoothness assumptions: We begin with the estimates for the infimum terms. Finally, let us consider the consistency term. Thus, we have the corresponding bound. In order to evaluate the four face integrals, let us introduce two projection operators in the following. For any T ∈ T_h and E ∈ E(T), denote by P_0(E) the space of constant functions on E and by π_E the projection operator from L^2(E) onto P_0(E) such that The operator π_E has the property [7]: For any v ∈ [L^2(E)]^N, we let Π_E v be the function in [P_0(E)]^N such that Using inequality (64), we obtain Then we have the following lemma: Lemma 3.5. (Estimation of the four face integrals) There holds: Proof. (1) Estimate (66): We begin with an estimate for the first term R_1(v_h). For any face E ∈ E_h(Ω_s^+), there exists at least one element T ∈ T_h^s such that E ∈ E(T). Then, from condition (63), Hölder's inequality and inequality (65), it follows that (2) Estimate (67): Thus, Furthermore, summing over the faces E ∈ E_h(Ω_s^+), we obtain the estimate: (3) For the terms R_3(v_h) and R_4(v_h), we use the same techniques as in the proof of the bounds for R_i(v_h), i ∈ {1, 2}, and we obtain: The proof is complete. From Lemma 3.3, Lemma 3.4 and Lemma 3.5, we now derive the following convergence theorem: Theorem 3.4. Let the solution (u, p) of problem (21) satisfy the smoothness assumption (Assumption 3.1). Let (u_h, p_h) be the solution of the discrete problem (31). Then there exists a positive constant C depending on N, µ, K_*, K^*, α_1 and σ such that: (72) Numerical experiments In this section we present one test case to verify the predicted rates of convergence.
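As a side note on how predicted convergence rates are checked in practice: the observed order is estimated from the errors on two successively refined meshes via e(h) ≈ C h^p. A minimal Python sketch (the error values below are placeholders, not results from the paper):

```python
import math

def observed_rate(err_coarse: float, err_fine: float,
                  h_coarse: float, h_fine: float) -> float:
    """Estimate the order p from e(h) ~ C * h**p using two mesh sizes."""
    return math.log(err_coarse / err_fine) / math.log(h_coarse / h_fine)

# Placeholder (h, error) pairs for a sequence of refined meshes.
errors = [(1 / 8, 2.1e-1), (1 / 16, 1.0e-1), (1 / 32, 5.1e-2)]
for (h1, e1), (h2, e2) in zip(errors, errors[1:]):
    print(f"h = {h1:.4f} -> {h2:.4f}: observed rate {observed_rate(e1, e2, h1, h2):.2f}")
```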
The numerical simulations have been performed with the finite element code FreeFem++ [11,17] on the isotropic coupled mesh of Fig. 13. The solutions have been visualized with the Mathematica software. For simplicity we choose each domain Ω_l, l ∈ {s, d}, as the unit square, α_1 = µ = 1, and the permeability tensor K is taken to be the identity. The interface Γ_I is the line x = 1, i.e. the x-range of Ω decomposes as [0, 1[ ∪ {1} ∪ ]1, 2], as shown in Figure 10. We consider the map φ. In Ω, we define u = (u_1, u_2) = curl φ = (−∂φ/∂y, ∂φ/∂x) and we obtain the exact velocity. We choose a quadratic pressure p ∈ L^2(Ω); thus ∫_Ω p(x, y) dx dy = 0 and ∇p = (2x − 2y, −2x + y). Conclusion In this contribution, we investigated a new mixed finite element method to solve the Stokes-Darcy fluid flow model without introducing any Lagrange multiplier. We proposed a modification of the Darcy problem which allows us to apply a slight variant of the nonconforming Crouzeix-Raviart element to the whole coupled Stokes-Darcy problem. The proposed method is probably one of the cheapest methods for Discontinuous Galerkin (DG) approximation of the coupled system, has optimal accuracy with respect to solution regularity, and admits a simple and straightforward implementation. Numerical experiments have also been presented, which confirm the excellent stability and accuracy of our method. Acknowledgment The author thanks Professor Emmanuel Creusé (University of Lille 1, France) for having sent us useful documents and for fruitful discussions concerning the numerical tests.
5,042.8
2019-08-05T00:00:00.000
[ "Engineering", "Mathematics", "Physics" ]
Search for lepton-flavour violation in high-mass dilepton final states using 139 $\mathrm{fb}^{-1}$ of $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector A search is performed for a heavy particle decaying into different-flavour, dilepton final states, using 139 $\mathrm{fb}^{-1}$ of proton-proton collision data at $\sqrt{s}=13$ TeV collected in 2015-2018 by the ATLAS detector at the Large Hadron Collider. Final states with electrons, muons and hadronically decaying tau leptons are considered ($e\mu$, $e\tau$ or $\mu\tau$). No significant excess over the Standard Model predictions is observed. Upper limits on the production cross-section are set as a function of the mass of a Z' boson, a supersymmetric $\tau$-sneutrino, and a quantum black-hole. The observed 95% CL lower mass limits obtained on a typical benchmark model Z' boson are 5.0 TeV (e$\mu$), 4.0 TeV (e$\tau$), and 3.9 TeV ($\mu\tau$), respectively. Introduction Lepton-flavour violation (LFV) is forbidden in the Standard Model (SM) theory of particle physics. However, the observation of flavour oscillations among neutrinos has shown that lepton flavour is not a conserved symmetry of nature [1]. The search for charged LFV (CLFV) remains an area of active interest, motivating many experimental searches for CLFV decays, such as the MEG experiment for μ → eγ [2], μ → 3e with the SINDRUM spectrometer [3], and also many searches for LFV τ-lepton decays [4]. So far, there is no experimental evidence that LFV also occurs in interactions between charged leptons, and any observation of LFV would be a clear signature of new physics. Many extensions of the SM predict LFV couplings, such as models with Z' bosons [5], scalar neutrinos in R-parity-violating (RPV) [6,7] supersymmetry (SUSY) and quantum black holes (QBH) in low-scale gravity [8]. These dilepton LFV processes would typically appear at the TeV energy scale, with a clear detector signature of a prompt, different-flavour lepton pair, which reduces the contribution from SM background processes. A common extension of the SM is the addition of an extra U(1) gauge symmetry resulting in a massive neutral vector boson known as a Z' boson. The Sequential Standard Model (SSM) [5] is used as a benchmark in this paper, where the Z' boson is assumed to have the same quark couplings and chiral structure as the SM Z boson, but allowing for lepton-flavour-violating couplings. Additional LFV coupling parameters are assigned for Z' → ℓ_i ℓ_j processes, where i, j = 1 . . . 3 represent the three lepton generations. It is assumed that the parameters are equal to the SM Z boson coupling for i = j. The latest ATLAS Collaboration study placed lower limits of 4.5, 3.7, and 3.5 TeV on the mass of a Z' boson decaying into eμ, eτ, and μτ pairs, respectively, using 36.1 fb−1 of the 13 TeV data sample [9]. The CMS Collaboration has placed limits up to 5.0 TeV on a Z' boson decaying into an eμ final state using 138 fb−1 [10]. Following the same methodology as in Ref. [11], the polarisation of τ-leptons is not included in the model, but its impact on the sensitivity to a possible signature is found to be negligible.
In the RPV SUSY model, the Lagrangian terms allowing LFV can be expressed as ē + ′ d , where and are the (2) doublet superfields of leptons and quarks, and are the (2) singlet superfields of charged leptons and down-type quarks, and ′ are Yukawa couplings, and the indices , , and denote generations.A -sneutrino ( ν ) may be produced in proton-proton ( ) collisions by d annihilation and later decay into , , or .Although only ν is considered in this paper, results apply to any sneutrino flavour.For the theoretical prediction of the cross-section times branching ratio, the ν coupling to first-generation quarks ( ′ 311 ) is assumed to be 0.11 for all channels.As in the ′ model, each lepton-flavour-violating final state is considered separately.It is assumed that 312 = 321 = 0.07 for the final state, 313 = 0.07 for the final state, and 323 = 0.07 for the final state, while 331 and 332 are set to be zero, due to the gauge invariance for these channels, resulting in the ν cross-section times branching ratio in the channel being up to approximately twice as large as in the or channel.These values are chosen for easy comparisons with previous ATLAS and CMS searches [11][12][13].However in the previous ATLAS search [9] and the latest CMS publication [10], 331 and 332 were assumed to be the same as 313 and 323 respectively, so the results of RPV model in this paper are not directly comparable to those in the previous publications for and channels.The latest ATLAS results with 36.1 fb −1 using the 13 TeV data [9] have excluded RPV SUSY models below the ν masses 3.4 TeV, 2.6 TeV and below 2.3 TeV for , and channels respectively with exactly the same parameters mentioned above.The CMS Collaboration has recently excluded RPV SUSY models below 2.2 TeV for 312 = 321 = ′ 311 = 0.01 [10].Various models introduce extra spatial dimensions to reduce the value of the Planck mass and resolve the hierarchy problem.The search described in this paper presents interpretations based on two models: the Arkani-Hamed-Dimopoulos-Dvali (ADD) model [14], assuming = 6, where is the number of extra dimensions, and on the Randall-Sundrum (RS) model [15] with one extra dimension.Due to the increased strength of gravity at short distances, in these models collisions at the LHC could produce states exceeding the threshold mass ( th ) to form black holes.In this paper, th is assumed to be equivalent to the extra-dimensional Planck scale.The quantum gravity regime [16][17][18] is applied only when considering the mass region below 3-5 th , since for masses beyond this region it is expected that thermal black holes would be produced.The non-thermal (or quantum) black holes could decay into two-particle final states, producing the topology investigated in this paper.QBHs would have a continuum mass distribution from th up to the beginning of the thermal regime which for the models considered in this paper is assumed to start at 3 th .This approach is consistent with the previous ATLAS analyses, such as Ref. 
[9].The decay of the QBH would be governed by a yet unknown theory of quantum gravity.The two main assumptions of the extra-dimension models considered in this paper [8] are that (a) gravity couples with equal strength to all SM particle degrees of freedom and (b) gravity conserves local symmetries (colour and electric charge), but can violate global symmetries such as lepton-flavour and baryon-number conservation.Following these assumptions, the branching ratio to each final state can be calculated.QBHs decaying into different-flavour, opposite-charge lepton pairs are created via q () with a branching ratio to ℓℓ ′ of 0.87% (0.34%) [8].As in the ′ model, each lepton-flavour-violating final state is considered separately.These models were used in previous ATLAS and CMS searches for QBH in dijet [19][20][21], lepton+jet [22], photon+jet [23], [10], and same-flavour dilepton [24] final states.The latest ATLAS Collaboration study placed lower limits of 5.5 (3.4), 4.9 (2.9), and 4.5 (2.6) TeV on th of QBH with ADD (RS) model decaying into , , and pairs, respectively, using 36.1 fb −1 of the 13 TeV data sample [9].This paper describes a search for new phenomena in final states with two leptons of different flavour using 139 fb −1 of data from collisions at √ = 13 TeV at the Large Hadron Collider (LHC).The dilepton signal final states consisting of , , or pairs are considered, where the -lepton decays hadronically.This analysis is looking for a localised excess in the distribution of dilepton invariant mass in TeV range.There are three signal regions defined, one for each decay mode.Corresponding control regions are also designed to extract a normalisation of the most prominent SM backgrounds: production of top quarks and dibosons.The contributions from fake leptons are calculated from the data.The final simultaneous fit, with the signal and control regions, is performed separately in each decay mode.Four benchmark models ( ′ , RPV SUSY ν and QBH: ADD n=6; RS n=1) are used to interpret the results.Compared with the previous ATLAS search with 36 fb −1 of collision data at √ = 13 TeV [9], this analysis benefits from a factor of four increase in integrated luminosity, improvements in object reconstruction (such as electron and tau lepton identification) the use of a more robust background estimation method (e.g. the 4 × 4 matrix method in channel), and the application of a simultaneous fit with both signal region and control regions to constrain systematic uncertainties.In this analysis, the b-jet veto is also included as a part of the baseline selection in every channel, which highly suppresses the top quark related backgrounds. The ATLAS detector The ATLAS detector [25] is a general-purpose particle detector with approximately forward-backward symmetric cylindrical geometry.It covers nearly the entire solid angle around the collision point. 1 It consists of an inner tracking detector surrounded by a thin superconducting solenoid, electromagnetic and hadron calorimeters, and a muon spectrometer incorporating three large superconducting air-core toroidal magnets. 
The inner-detector system (ID) is immersed in a 2 T axial magnetic field and provides charged-particle tracking in the range of || < 2.5.The high-granularity silicon pixel detector covers the vertex region and typically provides four measurements per track, the first hit normally being in the insertable B-layer (IBL) installed before Run 2 [26,27].It is followed by the silicon microstrip tracker (SCT), which usually provides eight measurements per track.These silicon detectors are complemented by the transition radiation tracker (TRT), which enables radially extended track reconstruction up to || = 2.0.The TRT also provides electron identification information based on the fraction of hits (typically 30 in total) above a higher energy-deposit threshold corresponding to transition radiation. The calorimeter system covers the pseudorapidity range || < 4.9.Within the region of || < 3.2, electromagnetic calorimetry is provided by barrel and endcap high-granularity lead/liquid-argon (LAr) calorimeters, with an additional thin LAr presampler covering || < 1.8 to correct for energy loss in material upstream of the calorimeters.Hadron calorimetry is provided by the steel/scintillator-tile calorimeter, segmented into three barrel structures within || < 1.7, and two copper/LAr hadron endcap calorimeters.The solid angle coverage is completed with forward copper/LAr and tungsten/LAr calorimeter modules optimised for electromagnetic and hadronic energy measurements respectively. The muon spectrometer (MS) comprises separate trigger and high-precision tracking chambers measuring the deflection of muons in a magnetic field generated by the superconducting air-core toroidal magnets.The field integral of the toroids ranges between 2.0 and 6.0 T m across most of the detector.Three layers of precision chambers, each consisting of layers of monitored drift tubes, cover the region || < 2.7, complemented by cathode-strip chambers in the forward region, where the background is highest.The muon trigger system covers the range || < 2.4 with resistive-plate chambers in the barrel, and thin-gap chambers in the endcap regions. Interesting events are selected by the first-level trigger system implemented in custom hardware, followed by selections made by algorithms implemented in software in the high-level trigger [28].The first-level trigger accepts events from the 40 MHz bunch crossings at a rate below 100 kHz, which the high-level trigger further reduces in order to record events to disk at about 1 kHz. An extensive software suite [29] is used in data simulation, in the reconstruction and analysis of real and simulated data, in detector operations and in the trigger and data acquisition systems of the experiment. 1 ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the -axis along the beam pipe.The -axis points from the IP to the centre of the LHC ring, and the -axis points upwards.Cylindrical coordinates (, ) are used in the transverse plane, being the azimuthal angle around the -axis. Data and simulated event samples The data sample used for this analysis was collected during 2015 to 2018 from collisions at a centre-ofmass energy of 13 TeV.After selecting periods with stable beams and applying data-quality requirements, the total integrated luminosity is 139 fb −1 with an uncertainty of 1.7% [30,31]. 
The → ′ → ℓℓ ′ Monte Carlo (MC) simulated samples were generated at leading order (LO) using the generator Pythia 8.186 [32] with the NNPDF2.3LO[33] parton distribution function (PDF) set and the A14 [34] set of tuned parameters.The signal samples were generated for the SSM, including fifteen mass points ranging from 1 TeV to 8 TeV in steps of 500 GeV.The cross-section of the ′ signal was corrected from LO to next-to-next-to-leading order (NNLO) in the strong coupling constant with a rescaling, which was computed with VRAP 0.9 [35] and the CT14NNLO PDF [36] set.The NNLO QCD correction was applied as a multiplicative factor from 1.44 to 0.29 for different ′ masses on total cross-section.No mixing of the ′ boson with the and * bosons is included. The d → ν → ℓℓ ′ samples were generated at LO with MadGraph5_aMC@NLO v2.3.3 [37] interfaced to the Pythia 8.186 parton shower model with the NNPDF2.3LOPDF set and the A14 tune.The signal samples were generated with the same mass values as for the ′ model described above.The cross-section was calculated at LO with the same generator used for simulation and corrected to next-to-leading-order (NLO) using LoopTools v2.2 [38].The NLO correction factor ranges from 1.1 to 1.4, depending on different ν masses. The → QBH → ℓℓ ′ samples were generated with the program QBH 3.00 [39] using the CTEQ6L1 [40] PDF set and the A14 tune, for which Pythia 8.205 [41] provides showering and hadronisation.For each extra-dimensional model, eleven th points in 500 GeV steps were produced: from 4 TeV to 9 TeV for the ADD = 6 model and from 2 TeV to 7 TeV for the RS = 1 model.These two models differ in the number and nature of the additional extra dimensions (large extra dimensions for ADD and one highly warped extra dimension for RS).In particular, the ADD model predicts black holes with a larger gravitational radius and hence the parton-parton cross-section for this model is larger than for the RS model.Therefore, the th range of the generated samples differs for the two models. The SM background in the LFV dilepton search is due to several processes which produce a final state with two different-flavour leptons.For the mode, the dominant background contributions originate from t and single-top production with the subsequent decay of both of the bosons (including those from top quark decays) in the event into leptons, and diboson (, , and ) production.Other backgrounds originate from -lepton pair production ( q → / * → ).Both diboson and -lepton pair production produce different-flavour final states, through the leptonic decays of the and bosons or the -leptons.For the and modes, multijet and +jets processes are the dominant backgrounds.They contribute due to the misidentification of jets as leptons. 
Backgrounds from top quark production include t and production.The production of t events was modelled using the Powheg-Box v2 [42][43][44][45] generator at NLO with the NNPDF3.0NLOPDF set [46] and the ℎ damp parameter2 set to 1.5 top [47].The events were interfaced to Pythia 8.230 [41] to model the parton shower, hadronisation, and underlying event, with parameters set according to the A14 tune and using the NNPDF2.3LOset of PDFs.The decays of bottom and charm hadrons were performed by EvtGen 1.6.0[48].The backgrounds were modelled by the Powheg-Box v2 generator at NLO in QCD using the five-flavour scheme and the NNPDF3.0NLOset of PDFs.The diagram removal scheme [49] was used to remove interference and overlap with t production.The events were interfaced to Pythia 8.230 using the A14 tune and the NNPDF2.3LOset of PDFs. For diboson samples, final states were simulated with Sherpa 2.2.12 [50,51], including off-shell effects and Higgs boson contributions where appropriate.Fully leptonic final states and semileptonic final states, where one boson decays leptonically and the other hadronically, were simulated using matrix elements (ME) at NLO accuracy in QCD for up to one additional parton and at LO accuracy for up to three additional parton emissions. The SM Drell-Yan process was generated at NLO using Powheg-Box v1 MC generator which is used for the simulation at NLO accuracy of the hard-scattering processes of -boson production and decay in the , , and -lepton final states.It was interfaced to Pythia 8.186 with parameters set according to the AZNLO [52] tune.The CT10 PDF set [53] was used for the hard-scattering processes, whereas the CTEQ6L1 PDF set was used for the parton shower. Processes such as +jets and multijet production with jets that are misidentified as leptons were estimated through a combination of data-driven methods and simulation, detailed in Section 5.The +jets contribution was estimated with the aid of the Sherpa 2.2.2 simulated samples using NLO MEs for up to two partons, and LO matrix elements for up to four partons calculated with the Comix [54] and OpenLoops libraries.They are matched with the Sherpa parton shower using the MEPS@NLO [55] prescription using the set of tuned parameters developed by the Sherpa authors [50]. For all samples used in this analysis, the effects of multiple proton-proton interactions per bunch crossing (pileup) were included by overlaying minimum-bias events simulated with Pythia 8.186 and reweighting the simulated events to reproduce the distribution of the number of interactions per bunch crossing observed in the data.The generated events were processed with the ATLAS simulation infrastructure [56], based on Geant4 [57], and passed through the trigger simulation and the same reconstruction software used for the data. 
Event reconstruction and selection The search for new phenomena presented here is aimed at the high mass range, thus event selections are optimised accordingly.Events are kept only if they can satisfy a single-muon or single-electron trigger.The trigger T threshold is different for different data-taking periods.For 2015 data runs, a T threshold of 20 or 50 GeV for muons is applied, and the value for electrons is 24, 60 or 120 GeV.For 2016 -2018 data runs, the T threshold is 26 or 50 GeV for muons, and 26, 60 or 140 GeV for electrons.This analysis relies on triggers with T threshold above 50 (60) GeV for muons (electrons).The single-electron trigger with higher T threshold has a looser identification requirement, resulting in an increased trigger efficiency in the high T region which is especially important for a high mass resonance search.In addition to the trigger selection, events are required to have at least one offline reconstructed signal lepton matched to the object that fired the trigger. Electron candidates are formed by associating the energy in clusters of cells in the electromagnetic calorimeter with a track in the ID [58].Candidate electrons are identified using a likelihood-based method.The likelihood discriminant utilises lateral and longitudinal calorimeter shower shapes together with tracking and cluster-track matching quantities.The discriminant criterion is a function of the T and || of the electron candidate.Two operating points are used in this analysis, as defined in Ref. [59,60]: Medium and Tight, which correspond to an efficiency of 88% and 80% for a prompt electron with T = 40 GeV respectively.The Tight working point is required for electrons in signal events while the Medium working point is required in order to define an electron used for the reducible background estimates.Electron candidates must have T > 65 GeV and || < 2.47, excluding the region 1.37 < || < 1.52, where the electron performance is degraded due to the presence of extra inactive material.Further requirements are made on their tracks: the transverse and longitudinal impact parameters relative to the primary vertex of the event ( 0 and Δ 0 ) must satisfy | 0 / 0 | < 5 and |Δ 0 sin | < 0.5 mm.Candidates are required to satisfy relative track-based and calorimeter-based isolation requirements with an efficiency of 99% to suppress background from non-prompt electrons [61]. Candidate muon tracks are reconstructed independently in the ID and the MS which are then used in a combined fit.To ensure optimal muon momentum resolution to very high T region (∼10% at 1 TeV), the High- T operating point [62] is used in this analysis.Only tracks with hits in each of the three stations of the muon spectrometer are considered.Moreover, muon candidates are required to be within a pseudorapidity of || < 2.5, satisfy | 0 / 0 | < 3 and |Δ 0 sin | < 0.5 mm, T > 65 GeV, and satisfy a track-based isolation criterion with an efficiency of 99% to further reduce contamination from non-prompt muons.The track isolation is similar to the one defined for electrons.Muon candidates fulfilling all selection except the isolation criterion are called "Loose muons" which are used in the reducible background estimates.An additional upper cut on T at 6 TeV is applied to remove poorly calibrated muons.This only affects the very high-mass signals, and no events fail this requirement in either the data or the background MC. 
Jets are reconstructed using the anti- algorithm [63] with a radius parameter of 0.4.The inputs to the jet clustering are built by combining the information from both the calorimeters and the ID using a particle-flow algorithm [64].The cluster energies are calibrated according to in situ measurements of the jet energy scale [65].Jets with T < 60 GeV and with || < 2.4 are further required to satisfy the jet vertex tagger [66], which is a likelihood discriminant that uses a combination of track and vertex information to suppress jets originating from pile-up activity. Hadronic decays of -leptons are composed of a neutrino and a set of visible decay products ( had ), typically one or three charged pions and up to two neutral pions.The reconstruction of had starts with jets reconstructed from topological clusters [67,68].The had candidates must have || < 2.5 with the transition region (1.37 < || < 1.52) excluded, a transverse momentum T > 65 GeV, one or three associated tracks with ±1 total electric charge.To discriminate against jets, had candidates are identified with a multivariate algorithm that employs a Recurrent Neural Network (RNN) using shower shape and tracking information [68].All had candidates are required to satisfy the Medium operating point which corresponds to an efficiency of 75% (60%) for 1-prong (3-prong) -leptons, and a jet background rejection of 35 (240) for 1-prong (3-prong) -lepton candidates.Furthermore, a dedicated Boosted Decision Tree-based veto is applied to reduce the number of electrons misidentified as had candidates.Jets with || < 2.5 containing a -hadrons are identified with a -tagging algorithm based on a deep-learning neural network [69].The information used includes distinctive features of -hadron decays in terms of the impact parameters of the tracks and the displaced vertices reconstructed in the ID.The chosen operating point has an efficiency of 85%.In this study, a veto on -tagged jets is applied to reject contributions from events containing top quarks. The missing transverse momentum vector ì miss T , with magnitude miss T , is defined as the negative vector sum of the transverse momenta of all identified physics objects and an additional soft term.The soft term is constructed from all tracks that are associated with the primary vertex, but not associated with any selected physics object [70]. Candidate signal events must have a reconstructed primary vertex with at least two associated tracks, defined as the vertex whose constituent tracks have the highest sum of 2 T .There should be exactly two different-flavour, opposite-sign leptons satisfying the previously mentioned criteria: , had or had .Events with an additional electron, muon or had fulfilling all the selections are vetoed.For the channel only, events with an extra electron or muon fulfilling the "loose" criteria are also vetoed.For all three channels, the lepton candidates must be back-to-back in the transverse plane with Δ(ℓ, ℓ ′ ) > 2.7.The invariant mass of the dilepton pair larger than 600 GeV is used as the discriminant.To account for differences between data and simulation, corrections are applied to the lepton trigger, reconstruction, identification, and isolation efficiencies as well as to the lepton energy/momentum resolution and scale [58,62,67]. 
To avoid double counting among objects, a lepton-lepton and lepton-jet overlap removal is applied based on the ΔR distance metric. The p_T threshold of muons considered in the overlap removal with τ_had candidates is lowered to 2 GeV. Further details can be found in Ref. [71]. After the event selection mentioned above, for a Z' boson with a mass of 1.5 TeV, the fractions of signal events that satisfy all of the selection requirements are approximately 45%, 20%, and 15% for the eμ, eτ, and μτ final states, respectively. Because of the presence of a neutrino in the hadronic decay of the τ-lepton, for the eτ and μτ channels the dilepton invariant mass cannot be fully reconstructed, so an approximation is used. The mass of the BSM particle from the signals considered is much larger than the masses of the SM leptons, thus producing very high-energy τ-leptons when it decays. The hadronic cascade decay of the high-energy τ-lepton results in the neutrino and the resultant jet being nearly collinear. The neutrino four-momentum is therefore reconstructed from the magnitude of the missing transverse momentum and the direction of the τ_had candidate. This technique improves the dilepton mass resolution and search sensitivity [12]. For a simulated Z' boson with a mass of 2 TeV, the mass resolution improves from 8% (17%) to 4% (12%) in the eτ (μτ) channel.
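As an illustration of the collinear approximation described above, the neutrino can be treated as massless and collinear with the visible τ_had decay products, with transverse momentum set to the missing transverse momentum, and added back to the visible system before the invariant mass is formed. The sketch below is schematic only (plain four-vector arithmetic with hypothetical input values), not the ATLAS implementation:

```python
import math

def p4(pt, eta, phi, m=0.0):
    """Four-vector (E, px, py, pz) from pt, eta, phi and mass."""
    px, py, pz = pt * math.cos(phi), pt * math.sin(phi), pt * math.sinh(eta)
    return (math.sqrt(px * px + py * py + pz * pz + m * m), px, py, pz)

def inv_mass(*vectors):
    e, px, py, pz = (sum(c) for c in zip(*vectors))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def collinear_mass(lep, tau_vis, met):
    """m(l, tau) with a massless neutrino of pT = MET along the tau_had direction."""
    lep_p4 = p4(*lep)
    tau_p4 = p4(*tau_vis)
    nu_p4 = p4(met, tau_vis[1], tau_vis[2])   # collinear with the visible tau
    return inv_mass(lep_p4, tau_p4, nu_p4)

# Hypothetical (pt [GeV], eta, phi) values for an e-tau pair with 200 GeV of MET.
print(collinear_mass((500.0, 0.5, 0.1), (450.0, -0.4, 3.0), 200.0))
```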
Irreducible backgrounds Among the irreducible backgrounds, t and are the dominant processes in all channels.Their contributions are evaluted with MC simulation, which are corrected by factors derived in the relevant control regions (CR).For the t process, which is the dominant contribution in "Top Quarks" backgrounds, a CR is built with the same selection as signal events but reversing the -jet veto cut in each of , For the and channels, CRs are not defined due to non-negligible contributions from fake backgrounds.However, due to lepton flavour universality, the correction factor ratios between the t and processes are assumed to be the same in the and -lepton channels: where ( ) is the correction factor for the process in the or channel, is the correction factor for the process in the channel.The ( ) t is the correction factor for the t process in the or channel, and t is the correction factor for the t process in the channel.The correction factor for the process in the channel can be extrapolated to the or channel with a factor equal to the ratio of the t correction factors between the channel and channels: From the simultaneous fit in all CRs for the channel, the fitted uncertainty of is 12% and it is highly correlated with t .The same uncertainty is assigned to ( ) .Conservatively, it is treated as uncorrelated to t , thus leading to an over-estimate in the final fit uncertainties.The 12% uncertainty on the correction is also much larger than the correction effect itself in the and channels, which is less than 5% on the diboson background. The t and correction factors are used to correct the total yields of "Top Quarks" and "Diboson" backgrounds respectively.Other irreducible background processes, such as → ℓℓ ′ are evaluted directly with MC simulation. Reducible backgrounds The dominant reducible backgrounds, +jets and multijet production, are estimated by using data-driven techniques.In the channel, the matrix method [24] is used, while in the and channels, the fake-lepton background is extrapolated from dedicated CRs to SR with the method described in Ref. [12]. Fake backgrounds in 𝒆 𝝁 channel The goal of the matrix method is to estimate the fraction of events in the data sample with a "real" electron and a "real" muon ( RR ), events with a jet faking an electron ("fake" electron) and a "fake" muon ( FF ), and events with one "real" lepton and one "fake" lepton ( RF and FR ). Two selection criteria are defined: "Tight" and "Loose" for muons (electrons) based on their lepton quality (identification and isolation) respectively.As the selection efficiency is different between a "real" lepton and a "fake" lepton, the contribution of "fake" leptons is estimated by the number of data events passing each selection criteria. Four data samples are constructed based on the lepton quality: • a sample where both the electrons and muons pass the "Tight" quality cut, with number of events TT ; • a sample where the electrons pass the "Loose" but failed the "Tight" selection, and the muons pass the "Tight" selection, with event number LT ; • a sample where the muons pass the "Loose" but failed the "Tight" selection, and the electrons pass the "Tight" selection, with event number TL ; • a sample where both the electrons and muons pass the "Loose" but fail the "Tight" selection, with event number LL .By solving a system of linear equations involving the numbers of events for these four data samples, the reducible background contributions can be evaluated. 
(3) Among these equations, "" is the probability of a "Loose" lepton matched to a true lepton to satisfy the "Tight" quality selection, defined as the "real efficiency."It is evaluated from → ℓℓ ′ simulated events.The " " refers to the probability that a jet is misidentified as a "Tight" quality lepton, so called "fake rate" which is determined in a multijet-enriched data sample.The creation of this sample starts with the selection of a "Loose" lepton back-to-back with a jet.To suppress the +jets contamination to the multijet-enriched sample, the events are required to have miss T < 25 GeV and the transverse mass T < 50 GeV.Transverse mass is defined as , where ℓ ± T and miss T denote the transverse energies of the final state charged lepton and neutrino with Δ as the azimuthal angle difference.Since most of the "fake" muons are from heavy flavour jets decays in the muon fake rate measurement, the requirements on both of the impact parameters 0 and 0 are reversed in order to further suppress the +jets contamination.Remaining contamination are subtracted relying on simulations from +jets and other SM background processes (top, diboson and → ℓℓ).Both the real efficiency and the fake rate are determined as a function of T . The uncertainties associated with the matrix method are evaluated by considering systematic effects on the lepton fake rate measurement and uncertainties in the real lepton efficiency.The latter is considered to be the difference in the lepton distribution between SR and → ℓℓ MC [71].It is assigned by comparing the real lepton efficiency measured with → ℓℓ plus zero jet events and inclusive → ℓℓ events.The systematic uncertainties in the fake rate include: • the choice of multijet-enriched region; • the impact of reversing 0 and 0 on muons; • the difference in the fake rates between the SR and multijet-enriched region due to the difference in the jet flavour components. The overall uncertainty on the reducible background is about 30% in the SR.Given that in the channel SR, this contribution is about 10% of the total background over the invariant mass range considered, the uncertainties in the fake-lepton background estimates have little impact on the results. 
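For illustration, the linear system behind this matrix method can be written down and solved directly: the yields observed in the four tight/loose categories are expressed in terms of the true real/fake composition through the real efficiency r and fake rate f of each lepton, and the fake contribution to the tight-tight selection is then recovered from the non-real-real components. The sketch below uses placeholder efficiencies and yields (in the analysis r and f are p_T-dependent), not numbers from the paper:

```python
import numpy as np

r_e, f_e = 0.90, 0.15   # electron real efficiency and fake rate (placeholders)
r_m, f_m = 0.95, 0.10   # muon real efficiency and fake rate (placeholders)

def column(a, b):
    """P(TT), P(LT), P(TL), P(LL) for electron pass-probability a and muon
    pass-probability b, where T = tight and L = loose-but-not-tight."""
    return [a * b, (1 - a) * b, a * (1 - b), (1 - a) * (1 - b)]

# Columns: true real-real, real e + fake mu, fake e + real mu, fake-fake.
M = np.column_stack([column(r_e, r_m), column(r_e, f_m),
                     column(f_e, r_m), column(f_e, f_m)])

# Observed yields N_TT, N_LT, N_TL, N_LL (placeholders).
observed = np.array([1000.0, 80.0, 150.0, 40.0])

n_rr, n_rf, n_fr, n_ff = np.linalg.solve(M, observed)

# Fake-lepton contribution inside the tight-tight signal selection.
fakes_tt = r_e * f_m * n_rf + f_e * r_m * n_fr + f_e * f_m * n_ff
print(f"estimated fake background in the TT sample: {fakes_tt:.1f} events")
```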
Fake backgrounds in the eτ and μτ channels

One of the major backgrounds for the eτ and μτ channels is the W+jets process, where a jet is misidentified as a τ-lepton candidate. As it is difficult to model the misidentification of jets as τ-leptons, particularly in the high-p_T region, the W+jets backgrounds are determined from dedicated CRs. They are built from events selected with the same criteria as used in the signal selection, except for requiring E_T^miss > 30 GeV to enhance the W+jets contribution and suppress the Z → ℓℓ′ contamination. The dilepton mass requirement is also lowered to 130 GeV to reduce the statistical uncertainty in these control regions. Furthermore, the Δφ_ℓℓ′ requirement is reversed (Δφ_ℓℓ′ < 2.7) to ensure orthogonality with the SR. Simulation studies indicate that the multijet background is negligible in this CR. Contributions from other SM background processes are estimated using MC simulation. The W+jets MC samples are used to calculate the ratio of events in the CR to events in the SR, and the total W+jets background yield is then extrapolated from the CR to the SR with this ratio. Kinematic distributions of the W+jets background are taken from MC simulation. The uncertainties considered in this method include
• the construction of the W+jets CR,
• uncertainties in the estimates of other SM background processes in the W+jets CR,
• and the W+jets MC shape uncertainty, which is assigned based on the data versus prediction agreement in the W+jets CR.
The overall uncertainty of the W+jets background in the eτ and μτ channels is approximately 20%.

The contribution from multijet production is also estimated from a CR with the same selection as the SR, except that the leptons are required to have the same electric charge, assuming that the probability of misidentifying a jet as a lepton is independent of the charge. The dilepton mass requirement is also lowered to 130 GeV to estimate the low-mass multijet contribution used in the high-mass extrapolation. The contamination from processes with prompt leptons is subtracted from the data in the same-sign CR using MC simulation, while the contamination from W+jets is subtracted using the previously mentioned procedure.

The assumption of charge independence of the jet misidentification rates is tested in a multijet-enriched region with two non-isolated leptons. After subtraction of other SM backgrounds, the ratio of opposite-sign to same-sign events is found to be consistent with this assumption within 10% to 20% up to the TeV range of dilepton invariant mass. The remaining lack of closure is taken as an uncertainty. Besides this charge-independence uncertainty, the uncertainties in the multijet estimate mainly come from uncertainties in the W+jets subtraction and in the subtraction of the irreducible MC backgrounds. As the uncertainty in the W+jets background is sizeable, it is also propagated to the multijet estimate through the W+jets subtraction. In this way, the yield of the multijet background is highly anti-correlated with the W+jets background, and the two are combined into a single "fake" background. The overall uncertainty in the "fake" background is approximately 25% in the eτ and μτ channels.
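The transfer-factor extrapolation described above can be summarised in a few lines. The sketch below uses invented placeholder yields; the actual analysis derives these quantities per bin of the dilepton invariant mass.

```python
# Illustrative CR-to-SR extrapolation of the W+jets background.
data_CR      = 5200.0   # observed events in the W+jets CR (hypothetical)
other_SM_CR  = 1800.0   # MC estimate of non-W+jets processes in the CR (hypothetical)
wjets_MC_CR  = 3300.0   # W+jets MC yield in the CR (hypothetical)
wjets_MC_SR  = 260.0    # W+jets MC yield in the SR (hypothetical)

transfer_factor   = wjets_MC_SR / wjets_MC_CR        # SR/CR ratio taken from MC
wjets_data_CR     = data_CR - other_SM_CR             # data-driven W+jets yield in the CR
wjets_SR_estimate = wjets_data_CR * transfer_factor   # extrapolated SR yield

print(f"transfer factor : {transfer_factor:.3f}")
print(f"W+jets in the SR: {wjets_SR_estimate:.1f} events")
```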
Considering the small number of "fake" background events at high dilepton invariant mass in the signal region, a smoothly falling three-parameter function of the dilepton invariant mass, involving a logarithmic term and with free parameters a, b and c, is fitted to the dilepton invariant mass distribution in the range 340 < m_ℓℓ′ < 3000 GeV. This function is used to extrapolate the fake-background estimate over the whole signal region (m_ℓℓ′ > 600 GeV). The statistical uncertainty of the fitted function is determined by propagating the statistical uncertainties of the fitted parameters. The uncertainty due to the extrapolation is dominated by the choice of fit function, and it is evaluated by comparing the nominal estimate with the one obtained with an alternative two-parameter function with free parameters a and b. The difference between these two functions is more than 100% above 2 TeV. The same functions have been used previously to model jet backgrounds in other analyses [72]. Both the data-driven uncertainties and the fit-related uncertainties are taken into account as independent sources of the fake-estimation uncertainty.

Systematic uncertainties

Systematic uncertainties affect the event yields and the shape of the invariant mass distribution in the signal and control regions. They are grouped into two types: theoretical and experimental uncertainties. The systematic uncertainties related to the estimates of fake backgrounds are discussed in Section 5.

Experimental uncertainties considered include those from the trigger, reconstruction, identification and isolation requirements of the final-state particles, such as electrons [58], muons [62], τ-leptons [67], jets [65], E_T^miss [73], and b-tagging [74]. Their energy scale and resolution uncertainties are also taken into account. An additional uncertainty (1.7%) from the measurement of the integrated luminosity is included. These sources of uncertainty are considered for both the simulated background and signal processes. Experimental uncertainties affect the event yields of the background and the signal cross-section through their effects on the acceptance and the event migration between different analysis regions.

Theoretical uncertainties are considered for the renormalisation (μ_R) and factorisation (μ_F) scales, the choice of PDF, the choice of the value of the strong coupling constant α_s, and the modelling of the tt̄ and diboson backgrounds. The strategy used in the same analysis with the partial data sample at √s = 13 TeV [9] is followed in this search.

Variations of the renormalisation and factorisation scales are used to estimate the uncertainty due to missing higher-order corrections. Pairwise variations of μ_R and μ_F are considered to find the maximum and minimum variations. The PDF uncertainty consists of the contribution from the PDF set used in the matrix-element calculation. It is estimated using different PDF sets and eigenvector variations within a particular PDF set for the top-quark, diboson and W+jets backgrounds. The α_s uncertainty is evaluated by using the same PDF set evaluated with two different α_s values. The uncertainty in the modelling of the tt̄ background is assessed by comparing two MC samples generated with Powheg-Box+Pythia8 and aMC@NLO+Pythia8. The uncertainty in the modelling of the diboson background is assessed by comparing two MC samples generated with Sherpa 2.2.12 and Powheg-Box+Pythia8.
Experimental systematic uncertainties common to the signal and background processes are assumed to be correlated. For the signal processes, only experimental systematic uncertainties and uncertainties due to the limited statistical precision of the simulated samples are considered. All uncertainties are evaluated as a function of m_ℓℓ′.

For illustration, Table 2 lists the post-fit impact of the uncertainties in the RPV SUSY ν̃_τ model for each measurement, grouped by their respective sources. The Z′ and QBH models show similar results. In the eμ channel, the largest systematic uncertainties are the statistical uncertainties associated with the background estimate and the background modelling uncertainties. In the τ-lepton channels, the uncertainties are dominated by the statistical precision of the background estimate and by experimental uncertainties related to τ-leptons and to the estimate of fake backgrounds. The total uncertainties are dominated by the statistical precision in all three channels.

Statistical analysis

The dilepton invariant mass distributions in the SR and CRs are fitted simultaneously to test for the presence of a signal. For hypothesis testing, binned profile-likelihood fits are performed, following a modified frequentist method [75] implemented in RooStats [76]. The fits are performed for each decay channel separately. For the eμ channel, the diboson and low-Δφ_ℓℓ′ tt̄ CRs are included in the fit to extract the overall normalisation of the diboson background, while keeping the normalisation of the top-quark background uncorrelated between the high- and low-Δφ_ℓℓ′ regions. Separate normalisation factors for the top-quark background are used for the high- and low-Δφ_ℓℓ′ regions in the simultaneous fit, by which their correlation is determined. For the eτ and μτ channels, only the tt̄ CRs are included in the fit, and the diboson correction factors are calculated from the CR-only fit results and entered into the final fit as fixed normalisation factors.

The binned likelihood function L(μ, θ) is constructed as a product of Poisson probability terms over all bins considered in the search. The likelihood function depends on the parameter of interest (POI), the signal-strength parameter μ, which is a factor multiplying the theoretical signal production cross-section, and on a set of nuisance parameters θ that encode the effect of systematic uncertainties in the signal and background expectations. All nuisance parameters are implemented in the likelihood function as Gaussian constraints.

Unconstrained normalisation factors are applied to the top-quark and diboson background components in the eμ channel, and to the top-quark background in the τ-lepton channels. They are controlled by the previously mentioned CRs in the respective channels. The expected event yield in a bin depends on the normalisation factors and on the nuisance parameters. The nuisance parameters adjust the expected event yields for signal and background according to the best fit to the data.
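As an illustration of the structure of such a fit, the toy below builds a binned Poisson likelihood with a signal-strength POI and a single Gaussian-constrained background normalisation nuisance parameter, and evaluates a profile-likelihood-ratio test statistic. The yields, the single nuisance parameter and the use of scipy are illustrative assumptions; the analysis itself uses the RooStats implementation cited above.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, norm

signal   = np.array([12.0, 6.0, 2.5, 0.8])   # expected signal per bin at mu = 1 (hypothetical)
bkg      = np.array([80.0, 30.0, 9.0, 2.0])  # nominal background per bin (hypothetical)
bkg_unc  = 0.10                               # 10% background normalisation uncertainty
observed = np.array([85, 33, 10, 2])          # hypothetical observed counts

def nll(params):
    mu, theta = params
    expected = mu * signal + bkg * (1.0 + bkg_unc * theta)
    expected = np.clip(expected, 1e-9, None)
    # Poisson term per bin plus a unit-Gaussian constraint on the nuisance parameter
    return -(poisson.logpmf(observed, expected).sum() + norm.logpdf(theta))

# Unconditional fit: mu and theta both free
free = minimize(nll, x0=[0.5, 0.0], method="Nelder-Mead")

# Conditional fit: mu fixed to the tested value (background-only hypothesis, mu = 0)
cond = minimize(lambda th: nll([0.0, th[0]]), x0=[0.0], method="Nelder-Mead")

# Profile-likelihood-ratio test statistic; in the full treatment q_0 is set to
# zero when the best-fit mu is negative.
q0 = 2.0 * (cond.fun - free.fun)
print(f"best-fit mu = {free.x[0]:.2f}, q_0 = {q0:.2f}")
```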
The likelihood function is fitted to the data to test for the presence of a signal. The test statistic is defined as the profile likelihood ratio, q_μ = −2 ln( L(μ, θ̂_μ) / L(μ̂, θ̂) ), where μ̂ and θ̂ are the values of the parameters that simultaneously maximise the likelihood function, and θ̂_μ are the values of the nuisance parameters that maximise the likelihood function for the fixed, tested value of μ. The compatibility of the observed data with the background-only hypothesis is tested by setting μ = 0 in the test statistic, q_0. Upper limits on the signal production cross-section for each considered signal scenario are computed using q_μ in the CLs method [75] with the asymptotic approximation [77]. A given signal scenario is considered to be excluded at the 95% confidence level (CL) if the value of the signal production cross-section (parameterised by μ) yields a CLs value less than 0.05.

Results

Tables 3, 4 and 5 show the observed data event yields and the expected background event yields in the CRs and SRs for the eμ, eτ and μτ channels. The background is dominated by tt̄ and diboson events in the eμ channel, while W+jets events are dominant for the eτ and μτ final states. The W+jets background in the eτ and μτ channels is strongly suppressed by the RNN τ-identification compared with the previous analysis.

Binned profile-likelihood fits are performed on the dilepton invariant mass spectra for each signal scenario in the eμ, eτ and μτ channels. Table 6 shows the fitted correction factors for the "Top Quarks" and "Diboson" backgrounds applied to the SR. Figures 1 and 2 show the post-fit dilepton invariant mass distributions of the background-only fit in the CRs for the eμ, eτ and μτ channels. Figure 3 shows the post-fit dilepton invariant mass distributions of the background-only fit in the SR for the eμ, eτ and μτ channels. The SM expectations agree well with the data in the SR of the eμ channel. Mild tension with the SM background estimates is observed between 2.0 TeV and 2.3 TeV in the SRs of the τ-lepton channels. Nevertheless, the SM expectations are still statistically consistent with the data within 2σ.

Figures 4-6 show the observed and expected 95% CL upper limits on the production cross-section times branching ratio of the Z′, RPV SUSY ν̃_τ and QBH models for each of the final states considered. The limits for signal masses above 3.0 TeV are dominated by the last bin in the dilepton invariant mass spectrum, thus the limit curves tend to be flat in the high-mass region.

The extracted limits are not as strong for signal masses above about 2.5 TeV due to a decrease in acceptance at very high transverse momenta and, specifically for the LFV Z′ model, due to low-mass off-shell signal production caused by PDF suppression at high masses. In the eμ channel, the observed limits are slightly more stringent than the expected limits above 2 TeV because of the small deficit observed in the data. In the τ-lepton channels, the observed limits are less stringent than the expected limits above 2 TeV because of the mild excess above the background estimates between 2.0 TeV and 2.3 TeV. The acceptance times efficiency of the ADD and RS QBH models agree with each other within 1%, and the same prediction is used for the limit extraction. The results are summarised in Table 7.
Table 3: Observed data event yields and the expected background event yields with their total uncertainties in the CRs in the eμ channel after applying all selection criteria and the background-only fit. The "Fake" background refers to W+jets and multijet events with fake leptons. The "Top Quarks" background contains single-top and tt̄ events.

Conclusions

A search for a heavy particle decaying into eμ, eτ or μτ final states is conducted using 139 fb−1 of proton-proton collision data at √s = 13 TeV recorded by the ATLAS detector at the Large Hadron Collider. The data are found to be consistent with the Standard Model expectation in the SR (m_ℓℓ′ > 600 GeV). With no evidence for new physics, profile-likelihood fits are used to set 95% CL lower limits on the mass of a Z′ vector boson with lepton-flavour-violating couplings at 5.0, 4.0 and 3.9 TeV for the eμ, eτ and μτ final states, respectively; on the mass of a supersymmetric τ-sneutrino with R-parity-violating couplings at 3.9, 2.8 and 2.7 TeV; and on the threshold mass for quantum black-hole production in the context of the ADD n = 6 (RS n = 1) model at 5.9 (3.8), 5.2 (3.0) and 5.1 (3.0) TeV, respectively.
For the mass limits of the Z′ signal model, as an illustration, the observed results based on the full data sample recorded by the ATLAS experiment at √s = 13 TeV are more stringent by 0.6, 0.3 and 0.4 TeV than the corresponding limits based on the 36.1 fb−1 data sample at √s = 13 TeV [9]. The expected sensitivity is improved by 0.5, 0.6 and 0.7 TeV for the eμ, eτ and μτ channels respectively. The increases in the observed limits in the eτ and μτ channels are not as large as the expected ones due to the slight excess observed in the data. The major improvement comes from the four times larger data sample. The increase in sensitivity is also due to the application of the b-jet veto, more accurate data-driven background estimates, as well as better particle reconstruction and identification.

Figure 1: The background-only post-fit invariant mass distributions of the (a) low-Δφ_ℓℓ′ tt̄ CR, (b) tt̄ CR and (c) diboson CR for data and the SM background predictions in the eμ channel. The error bars show the statistical uncertainty of the observed yields, while the hashed band includes the post-fit total uncertainties taking into account all the correlations. The ratio of data to the best-fit prediction is shown in the bottom panels of the plots. The last bin contains the overflow events. The binning of the control regions was limited by the background statistics.

Figure 2: The background-only post-fit invariant mass distributions of (a) eτ and (b) μτ pairs for data and the SM background predictions in the tt̄ CR. The error bars show the statistical uncertainty of the observed yields, while the hashed band includes the post-fit total uncertainties taking into account all the correlations. The ratio of data to the best-fit prediction is shown in the bottom panels of the plots. The last bin contains the overflow events.

Figure 3: The background-only post-fit invariant mass distributions of (a) eμ, (b) eτ and (c) μτ pairs for data and the SM background predictions in the SR. The error bars show the statistical uncertainty of the observed yields, while the hashed band includes the post-fit total uncertainties taking into account all the correlations. The dashed line shows a typical pre-fit signal mass distribution (RPV ν̃_τ at 3 TeV). The ratio of data to the best-fit prediction is shown in the bottom panels of the plots. Arrows indicate entries where the difference between data and MC exceeds the ratio range. The last bin contains the overflow events.

Figure 4: The observed and expected 95% CL upper limits on the (a) Z′ boson, (b) RPV τ-sneutrino (ν̃_τ) and (c) QBH ADD and RS production cross-section times branching ratio for decays into an eμ final state. The signal theoretical cross-section times branching ratio lines for the Z′ model, the QBH ADD model assuming six extra dimensions, and the RS model with one extra dimension are obtained from the simulation of each process, while the Z′ is corrected to NNLO and the RPV ν̃_τ is corrected to NLO. The theoretical uncertainties are not considered in the mass limit calculation. The acceptance times efficiency of the ADD and RS QBH models agree to within 1% and the same curve is used for limit extraction. The expected limits are shown with the ±1 and ±2 standard deviation uncertainty bands.
Figure 5: The observed and expected 95% CL upper limits on the (a) Z′ boson, (b) RPV τ-sneutrino (ν̃_τ) and (c) QBH ADD and RS production cross-section times branching ratio for decays into an eτ final state. The signal theoretical cross-section times branching ratio lines for the Z′ model, the QBH ADD model assuming six extra dimensions, and the RS model with one extra dimension are obtained from the simulation of each process, while the Z′ is corrected to NNLO and the RPV ν̃_τ is corrected to NLO. The theoretical uncertainties are not considered in the mass limit calculation. The acceptance times efficiency of the ADD and RS QBH models agree to within 1% and the same curve is used for limit extraction. The expected limits are shown with the ±1 and ±2 standard deviation uncertainty bands.

Figure 6: The observed and expected 95% CL upper limits on the (a) Z′ boson, (b) RPV τ-sneutrino (ν̃_τ) and (c) QBH ADD and RS production cross-section times branching ratio for decays into a μτ final state. The signal theoretical cross-section times branching ratio lines for the Z′ model, the QBH ADD model assuming six extra dimensions, and the RS model with one extra dimension are obtained from the simulation of each process, while the Z′ is corrected to NNLO and the RPV ν̃_τ is corrected to NLO. The theoretical uncertainties are not considered in the mass limit calculation. The acceptance times efficiency of the ADD and RS QBH models agree to within 1% and the same curve is used for limit extraction. The expected limits are shown with the ±1 and ±2 standard deviation uncertainty bands.

Table 2: Summary of the different sources of relative uncertainty (post-fit), in percent, on the observed signal strength of the RPV SUSY ν̃_τ model with a mass of 3 TeV in the eμ, eτ and μτ channels. "Other" refers to luminosity, jet-vertex-tagger (JVT) and pile-up weight uncertainties. "NA" stands for not applicable.

Table 4: Observed data event yields and the expected background event yields with their total uncertainties in the CRs in the eτ and μτ channels after applying all selection criteria and the background-only fit. The "Fake" background is estimated directly from W+jets MC, and the multijet contribution is not considered in the tt̄ CRs. The "Top Quarks" background contains single-top and tt̄ events.

Table 5: Observed data event yields and the expected background event yields with their total uncertainties in the SR (m_ℓℓ′ > 600 GeV) in the eμ, eτ and μτ channels after applying all selection criteria and the background-only fit. The "Fake" background refers to W+jets and multijet events with fake leptons. The "Top Quarks" background contains single-top and tt̄ events.

Table 6: The fitted correction factors of the "Top Quarks" and "Diboson" backgrounds which are applied in the SR for the background-only fit in the eμ, eτ and μτ channels. For the eτ and μτ channels, the diboson correction factors are calculated from the CR-only fit results and entered into the final fit as fixed normalisation factors.

Table 7: Expected and observed 95% CL lower limits on the mass of a Z′ boson with lepton-flavour-violating couplings, a supersymmetric τ-sneutrino (ν̃_τ) with R-parity-violating couplings, and the threshold mass for quantum black-hole production for the ADD n = 6 and RS n = 1 models.
Modeling rational decisions in ambiguous situations: a multi-valued logic approach If a decision context is completely precise, making good decisions is relatively easy. In the presence of ambiguity, rational decision-making is incom-parably more challenging. We understand ambiguous situations as cases, where the decision-maker has imprecise (uncertain or vague) knowledge that is acquired from incomplete information (without limiting it to probability judgements as in common terminology). From that, we assume that imprecisions in knowledge can affect all elements of the decision field as well as the objective function. For the modeling of such decision situations, classical logics are no longer considered as means of choice, so that we suggest using approaches from the field of multi-valued logic. In the present work, we take suitable calculi from the so-called intuitionistic fuzzy logic into account. On that basis, we propose a model for the formulation and solving of decision problems under ambiguity (in the general sense). Particularly, we address decision situations, in which a decision-maker has sufficient information to specify point probability values, but insufficient information to express point utility values. Our approach is also applicable for modeling cases in which the probability judgments or both, probability and utility judgements are imprecise. Our model is novel in that we combine core elements of established approaches for the formal handling of uncertainty (maxmin and a -maxmin expected utility models) with the mathematical foundation of intuitionistic fuzzy theory. Introduction Theories of rational decision-making behavior under uncertainty have always been central subjects in prescriptive decision theory. Bernoulli's work (1738) 1 with its later axiomatization by von Neumann and Morgenstern (1947) forms the theoretical basis of rational behavior in decisions under risk, the expected utility theory (EU). It proposes that if a decision-maker's preferences concerning risky alternatives fulfill a set of well-defined axioms, a utility function can be derived. This function assigns a real number to the consequences of each alternative in every state of nature. It reflects the decision-maker's attitude towards the consequence values as well as his or her attitude towards risk. The sum of the probability weighted single utilities for each alternative determines their respective expected utility values. According to the theory, a rational decision-maker maximizes his or her utility by choosing the alternative with the highest expected utility value. EU and the underlying axioms do not focus on the question of how the decision-relevant state probabilities are determined. This aspect is much more a subject of subjective expected utility theory (SEU). It postulates conditions, under which probabilities can be derived from preference statements. Its axiomatic foundation is attributed to Savage (1954) and is accounted as one of the most important approaches for rational decision-making under risk. Equally as important are the innumerable studies concerning behavioral violations of corresponding axioms; primarily, because they reveal the limits of rationality-forming theories and thus claim to provide proof of irrational behavior by decision-makers, who act inconsistently to them. 
The latter described efforts are often to be found in the literature, especially when it comes to violations of the rationality postulates of SEU in decision situations that are (at least partially) ambiguous. The concept of ambiguity has different interpretations in the literature, whereas the most common definitions and types can be ascribed to incomplete information on probabilities (see e.g., Franke 1978;Curley et al. 1986;Frisch and Baron 1988;Camerer and Weber 1992;Fox and Tversky 1995;Ghirardato et al. 2004). This terminology receives an increased scientific attention due to the work of Ellsberg (1961), who provides evidence for violations of Savage's axioms in decision situations under ambiguity. This kind of decision problems, which Ellsberg characterizes as situations between ''complete ignorance'' and ''risk'', attracts many researchers and results in a tremendous follow-up research. It mainly focusses on the description of behavioral inconsistencies regarding both preference-building mechanisms and probabilistic requirements of SEU (see e.g., Slovic and Tversky 1974;Einhorn and Hogarth 1986;Kahn and Sarin 1988;Curley and Yates 1989;Kunreuther et al. 1995). The ever-increasing amount of empirical results supporting Ellsberg's findings gives rise to another research stream, which develops a more critical view to most of these insights. While some of the corresponding works solely question the necessity and sufficiency of common rationality axioms as foundation for rational behavior, some other try to present approaches for a formal handling of inconsistencies with the rationality axioms in decision situations with ambiguous probability assessments. Particularly, the application of non-additive measures for the modeling of ambiguity settings has achieved great recognition in decision theory since the corresponding contributions by Schmeidler (1989) and Gilboa and Schmeidler (1989). By modifying the axioms of SEU, Schmeidler (1989) elaborates a subjective, non-additive measure based approach that can be applied to define ambiguity attitudes and formally handle inconsistencies with selected axioms of rational behavior. He uses Choquet's (1954) theoretical basis of non-additive capacities. Therefore, the corresponding theory is called Choquet expected utility theory (CEU). While in CEU the decision-maker's beliefs regarding the occurrence of states are expressed by non-additive probability substitutes (unique priors), in Gilboa's and Schmeidler's (1989) maxmin expected utility model (MEU) decisionmaker's beliefs are represented by a set of probabilities (multiple priors). Under consideration of Wald's (1949) maxmin rule, it is a pessimistic approach, which suggests selecting the alternative with the highest minimum expected utility value. By supplementing aspects of the Hurwicz criterion (1951), Ghirardato et al. (2004) have established the a-maxmin expected utility model (a-MEU). In accordance to MEU, a-MEU assumes that decision-maker's beliefs are represented by a set of probabilities. For decision-making, the overall expected utility is calculated as the weighted average of maximum and minimum expected utility for each alternative. Within this approach, the weights are understood as expressions of the decisionmaker's attitude towards ambiguity. 2 Al-Najjar and Weinstein (2009) provide a critical review of related approaches that were elaborated during the following two decades after Gilboa's and Schmeidler's initial work. 
A broader overview of related research contributions is given by Gilboa and Marinacci (2016). The work mentioned previously, primarily deals with the relaxation of axiomatic demands on decision-makers, regarding the formation of their preferences and probability judgments. Other than that, there are approaches, which rather deal with the formal structure of imprecise knowledge, and its handling within corresponding decision problems. Significant theories to mention in this context are theories of fuzzy measures and fuzzy sets, initially introduced by Zadeh (1965Zadeh ( , 1978. While fuzzy measure theory is primarily concerned with the analysis of alternative measures to the stringently axiomatized probability measure, the fuzzy set theory mainly provides tools for the mathematical modeling of imprecisions with respect to all possible components of the decision field (for discussion, see Metzger and Spengler 2017). The latter mentioned theories have great potential for the formulation of decision problems in which the decision-maker is faced with vague or incomplete information; in particular, because vague or incomplete information has a major impact on corresponding rationality considerations. We suggest not only reducing these to inconsistencies concerning probability judgements and preference statements, but rather focus on potential behavioral effects resulting from imperfect information. In this context, we want to refer to the following statement given by Gilboa and Marinacci (2016): […] (T)he (traditional) axiomatic foundations […] are not as compelling as they seem, and […] it may be irrational to follow this approach. […] (It) is limited because of its inability to express ignorance: it requires that the agent express beliefs whenever asked, without being allowed to say ''I don't know''. Such an agent may provide arbitrary answers, which are likely to violate the axioms, or adopt a single probability and provide answers based on it. But such a choice would be arbitrary, and therefore a poor candidate for a rational mode of behavior. We support this statement to the fullest and are strongly convinced, that this problem also appears, when a decision-maker is asked to determine (point) utility values. Subsequently, the question arises, whether the previously presented models can actually handle these limitations of (S)EU when it comes to determine rational decision behavior. From our previous discussion it follows that they are able to handle limitations of (S)EU, but only those associated with vague probability statements. Vagueness, that affects other components of the decision field, is not treated by these approaches. Additionally, all of them generate other requirements the decision-maker has to fulfill in order to apply these models in respective decision-making contexts. In this regard, we want to extend the theoretical analysis of (ir-) rational decision-making under incomplete information. Therefore, we specify our understanding of rationality and apart from that generalize the definition of ambiguity compared to the narrow one manifested by Ellsberg (1961). On that basis, we propose an approach for the formal handling of ambiguity in the general sense, including instruments of intuitionistic fuzzy theory. The remainder of this paper is structured as follows. In Sect. 2, we first describe our comprehension of the rationality and ambiguity terms in relation to our approach. In Sect. 
3, we provide theoretical and terminological basics for the method that underlies to our approach. In Sect. 4, we introduce our model and illustrate it with a numerical example. In Sect. 5, we conclude with a discussion on our results and implications for future research. 2 Understanding of ambiguity and rationality within our approach Considering its etymology, rational behavior is reasonable and thoughtful behavior, while emotional behavior is one arising from intense and temporary mind movements. As long as people (and not machines) make decisions, they are always more or less emotional. Emotions thus accompany rational decisions, so that the interpretation as opposing concepts does not hit the core. In rationality concepts that are constructed bipolar, ''irrational'' is the opposite of ''rational'' et vice versa, and ''rational'' is the opposite of ''emotional'' et vice versa. In contrast to bipolar constructs, we assume here the possibility of complete independence (orthogonality) of rationality, irrationality, and emotionality, which may-but do not have to-be present within actions of an individual. Thus, it is possible for the decision-maker to show rational behavior for some components of the decision, and irrational as well as emotional behavior for others. For illustration, imagine a decision-maker that has to conduct calculations in order to obtain a reasonable solution for a decision problem. This individual accounts calculations as satisfying and generally enjoys it. During this calculation procedure, he or she makes an unconscious mistake and on that basis takes the wrong decision. In this case, the procedure itself would be rational and also emotional to some degree. Due to the mistake in the calculations, the result would be also irrational at the same time. Constructing all three dimensions orthogonally, for which we want to plead here, the overall interrelation can be illustrated graphically in the form of a cube (Fig. 1). The notion of ambiguity is mainly of Latin (later also French) origin and generally means equivocation (see e.g., Ries 1994). In decision-logic contexts, which we are essentially concerned with here, this addresses the equivocation of elements of the decision field and the objective function. This in turn can refer to alternatives, consequences, environmental states and probability judgements on the one hand and (above all) to the preference function on the other hand. Therefore, we propose to understand ambiguous situations as general cases, where the decisionmaker has imprecise (uncertain or vague) knowledge that is acquired from incomplete information (without limiting it to probability judgements). From that, we assume that imprecisions in knowledge can affect all elements of the decision field. This understanding of ambiguity goes beyond the terminology and conceptualization as introduced by Ellsberg (1961). Extensive discourses on ambiguity in the broader sense are provided by, e.g., Furnham and Ribchester (1995), Furnham and Marks (2013), McLain, Kefallonitis, and Armani (2015) and Lauriola et al. (2016). How an individual deals with ambiguity depends on his or her ambiguity attitudes (see e.g., Budner 1962;McLain 1993). We will come back later to the particular impact of ambiguity attitudes within decision situations. If the decision context is completely precise, making good decisions is relatively easy. 
In the case of ambiguity, rational decision-making is incomparably more difficult, irrespective of the degree of irrationality and emotionality. Classical logics are then no longer considered as means of choice, so that one is well advised to use approaches from the field of multi-valued logic. The term 'multi-valued logic' describes all logical concepts that do not satisfy the bivalence principle and therefore have more than two truth values, in contrast to two-valued logic, which allows something only being true (= 1) or false (= 0) (see e.g., Dubois and Prade 1980; Gottwald 2006). In the present work, we take suitable calculi from the so-called intuitionistic fuzzy logic into account.

Fuzzy theory and intuitionistic fuzzy theory: terminology basics

The foundation of our approach is Atanassov's (1986) intuitionistic fuzzy set theory (or i-fuzzy theory, for short), which in the past decades has received increasing scientific attention as an extension of Zadeh's (1965) fuzzy set theory. The starting point of our model is the construct of a fuzzy set (in the following we call it a traditional fuzzy set) as introduced by Zadeh (1965). Let X be a finite classical set with its elements x. A corresponding fuzzy set Ã is determined by assigning to each x ∈ X a value μ_Ã(x) ∈ [0, 1] that expresses the membership degree of the element x to this fuzzy set Ã. The higher the membership degree μ_Ã(x), the more element x belongs to Ã. Structurally we get a set containing ordered pairs, where μ_Ã represents a set function with μ_Ã : X → [0, 1]. We want to illustrate this approach by the following example (Spengler 2015): A manager wants to assess his or her satisfaction with potential annual profit levels. Applying the traditional fuzzy set approach, (s)he first has to formulate a classical set X with realizable profit values x. This example set may contain the following elements (in thousand €): X = {100, 200, 300, 400, 500, 600}. Subsequently (s)he has to assess to what extent each potential annual profit level x ∈ X satisfies him or her. In the sense of traditional fuzzy set theory, the manager assesses to which degree μ_Ã(x) the annual profit values belong to the fuzzy set Ã of satisfactory profits. Here, Ã formally represents the fuzzy statement "x is a satisfactory annual profit level". While traditional fuzzy set theory does not specify how to interpret the inverse membership degree 1 − μ_Ã(x), Atanassov (1986) makes this aspect a core research subject within his i-fuzzy set theory. He proposes a further differentiation of 1 − μ_Ã(x) by introducing a degree of non-membership and a degree of indeterminacy, which enable a decision-maker to undertake a much stronger content-related and formal information differentiation. Furthermore, this approach provides a sophisticated basis for the representation of ambiguous knowledge, which allows us to describe real decision problems in a more appropriate way. But what is intuitionistic about Atanassov's i-fuzzy sets? The concept of intuition is essentially based on the Latin noun intuitio (= the immediate contemplation). Intuitive assessments are based more on afflatus or anticipated grasp ("from the gut") and less on scientifically discursive justifications (see e.g., Dorsch et al. 1994). While classical logics are based on the bivalence principle, according to which a statement is either clearly true or clearly false, more than two (truth) values are allowed in non-classical (multi-valued) logics.
The latter include intuitionistic logic (Brouwer 1913). This logic is not about truth functionality, but about the question of whether A ∨ ¬A can be proved. Consequently, the law of excluded middle does not apply in it, just as it does not apply in fuzzy logic (see e.g., Dubois and Prade 1985). An extension of intuitionistic logic is intuitionistic fuzzy logic (see e.g., Takeuti and Titani 1984; Atanassov 1999). In the present work, we want to use i-fuzzy sets in Atanassov's sense, so that the interesting terminological discourse between Atanassov and Dubois et al. (2005) is only marginally mentioned here.

In contrast to the notation used in traditional fuzzy set theory, we denote an intuitionistic fuzzy set by Â. Having a finite set X with its elements x, we can now assign to each element x a membership degree μ_Â(x) ∈ [0, 1], a non-membership degree ν_Â(x) ∈ [0, 1], and a degree π_Â(x), where π_Â(x) = 1 − μ_Â(x) − ν_Â(x). π_Â(x) represents the degree of indeterminacy regarding the (non-)membership of the element x to the i-fuzzy set Â. These structurally form a set of ordered triplets with the following definition: Â = {(x, μ_Â(x), ν_Â(x)) | x ∈ X}. In this standard notation, the degree of indeterminacy is not explicitly noted; it implicitly results from the subtraction mentioned above. From this notation it can also be derived that if π_Â(x) = 0, then ν_Â(x) = 1 − μ_Â(x). In this case, we again have a traditional fuzzy set definition. It follows that traditional fuzzy sets are special cases of i-fuzzy sets. Considering the intuitionistic fuzzy approach within our previous example, in addition to assessing his or her satisfaction with the profit levels x ∈ X, the manager may indicate to what extent (s)he does not account them as satisfying. For this, (s)he has to determine the degree ν_Â(x) to which (s)he is dissatisfied with the single profit values. If (to a certain degree) (s)he is not sure how (dis-)satisfying the profit levels are, (s)he can also specify a degree π_Â(x). The corresponding i-fuzzy set exemplarily can appear as follows: Â = {(100, 0.2, 0.8), (200, 0.3, 0.5), (300, 0.5, 0.4), (400, 0.7, 0.2), (500, 0.8, 0.1), (600, 1, 0)}.

In this paper we want to focus on constructs called intuitionistic fuzzy values (or i-fuzzy values, for short), which are strongly interrelated with the i-fuzzy set concept. Based on the above defined i-fuzzy sets, α(x) = (μ_α(x), ν_α(x)) is called an i-fuzzy value, where μ_α(x) ∈ [0, 1], ν_α(x) ∈ [0, 1] and μ_α(x) + ν_α(x) ≤ 1. The degree π_α(x), with π_α(x) = 1 − μ_α(x) − ν_α(x), maps the indeterminacy of the decision-maker when evaluating an element x with respect to a defined attribute. In the following, we use the triple notation of an i-fuzzy value in the form α(x) = (μ_α(x), ν_α(x), π_α(x)) (see e.g., Xu and Yager 2009). To illustrate possible geometrical representations of i-fuzzy values, we go back to the example of the manager who wants to assess his or her satisfaction with potential annual profit levels. In this context, we "translate" the previously deduced elements of the i-fuzzy set into i-fuzzy values. From that we get six i-fuzzy values α(100) = (0.2, 0.8, 0), α(200) = (0.3, 0.5, 0.2), α(300) = (0.5, 0.4, 0.1), α(400) = (0.7, 0.2, 0.1), α(500) = (0.8, 0.1, 0.1) and α(600) = (1, 0, 0), which can be geometrically represented in an MNO-triangle (Fig. 2) as suggested by Szmidt and Kacprzyk (2010).
M, N and O are the corner points of the triangle, where, respectively, one of the elements μ_α(x), ν_α(x) or π_α(x) equals 1 and the other two elements are equal to zero. Point M(1, 0, 0), where μ_α(x) equals 1, represents the ideal-positive element. For our example α(600) is such an ideal point, because the corresponding annual profit level satisfies the manager to the fullest. Point N(0, 1, 0), where ν_α(x) equals 1, is called the ideal-negative element. It is insofar "ideal" because one can argue (on the basis of our example) that for the manager perfectly knowing what completely dissatisfies him is as good as perfectly knowing what satisfies him to the fullest. Point O(0, 0, 1), where π_α(x) equals 1, expresses total ignorance concerning the positivity or negativity of the corresponding attribute referred to x. In the case of our manager, e.g., selected achievable profit levels may be entailed with consequences that (s)he cannot at all assess in advance. The line connecting points M and N, with π_α(x) = 0 and therefore μ_α(x) + ν_α(x) = 1, represents elements that are compatible with the traditional fuzzy set definition. In our example, point α(100) = (0.2, 0.8, 0) would be such a point, because we can also find a fully corresponding element in Ã from the traditional fuzzy example. Lines parallel to the line connecting M and N capture elements with equal degrees of indeterminacy. In our example, elements α(300), α(400) and α(500) have equal indeterminacy degrees (0.1). Graphically, they are therefore displayed on one parallel line. Generally, the closer a parallel line is to point O, the higher is the degree of indeterminacy.

Finally, we want to present selected arithmetic operations on i-fuzzy values. Based on operations for i-fuzzy sets (Atanassov 1986; De et al. 2000), Xu (2007a) defines the following arithmetic operations for two given i-fuzzy values, α(x) = (μ_α(x), ν_α(x)) and α(y) = (μ_α(y), ν_α(y)):

α(x) ⊕ α(y) = (μ_α(x) + μ_α(y) − μ_α(x)·μ_α(y), ν_α(x)·ν_α(y)),   (1)
α(x) ⊗ α(y) = (μ_α(x)·μ_α(y), ν_α(x) + ν_α(y) − ν_α(x)·ν_α(y)),   (2)
λ·α(x) = (1 − (1 − μ_α(x))^λ, ν_α(x)^λ), λ > 0,   (3)
α(x)^λ = (μ_α(x)^λ, 1 − (1 − ν_α(x))^λ), λ > 0.   (4)

For these definitions, Xu (2007a) uses the pair notation of i-fuzzy values. Here, the resulting indeterminacy degrees π_α(x)′ are determined from the difference 1 − μ_α(x)′ − ν_α(x)′, where μ_α(x)′ and ν_α(x)′ are the results of the arithmetic operations. As already discussed in previous work (Metzger and Spengler 2017), i-fuzzy sets and i-fuzzy values have similar mathematical definitions, but their applications can pursue different goals. On the one hand, i-fuzzy values are used to condense information related to an element x. An example frequently presented in the literature is the group voting case. Imagine a group of 10 persons that are asked to vote on the implementation of a strategy. Three people vote for the implementation, five against and two abstain. The i-fuzzy value condensing this information would thus be α = (0.3, 0.5, 0.2) (see e.g., Szmidt 2014; Xu 2007b; Zhao et al. 2014). On the other hand, i-fuzzy values are often applied to model imprecision in multi-criteria decision problems. For that, e.g., one or several decision-makers are requested to (separately) assess predefined attributes of decision-relevant alternatives by use of i-fuzzy values. In this context μ_α(x) represents the degree of the positive and ν_α(x) the degree of the negative assessment with respect to these attributes. Here, π_α(x) can be an expression of either neutrality, undecidedness or unknowingness. To generate an overall evaluation of the respective alternative, all i-fuzzy values regarding the corresponding attributes are aggregated to a single i-fuzzy value.
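A minimal Python sketch of these constructs and operations, assuming the standard Atanassov/Xu definitions reproduced in Formulas (1) and (3) above; the class and function names are our own and only serve illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IFV:
    """Intuitionistic fuzzy value in pair notation (mu, nu); pi is derived."""
    mu: float   # membership degree
    nu: float   # non-membership degree

    def __post_init__(self):
        assert 0.0 <= self.mu <= 1.0 and 0.0 <= self.nu <= 1.0
        assert self.mu + self.nu <= 1.0 + 1e-9

    @property
    def pi(self) -> float:          # indeterminacy degree
        return 1.0 - self.mu - self.nu

def ifv_add(a: IFV, b: IFV) -> IFV:
    """Formula (1): a ⊕ b = (mu_a + mu_b − mu_a·mu_b, nu_a·nu_b)."""
    return IFV(a.mu + b.mu - a.mu * b.mu, a.nu * b.nu)

def ifv_scale(lam: float, a: IFV) -> IFV:
    """Formula (3): lam·a = (1 − (1 − mu_a)^lam, nu_a^lam), lam > 0."""
    return IFV(1.0 - (1.0 - a.mu) ** lam, a.nu ** lam)

# Example: the manager's assessment of the 200 k€ profit level
a200 = IFV(0.3, 0.5)
print(a200.pi)                 # indeterminacy, approximately 0.2
print(ifv_scale(0.5, a200))    # value weighted with an illustrative factor of 0.5
```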
In this way, all decision-relevant information that is available concerning an alternative is summarized and condensed into an i-fuzzy value triple (see e.g., Xu and Yager 2008). Using different ranking methods (for an overview, see Szmidt 2014), the corresponding alternatives can then be ranked and placed in a preference order. These examples show that the possible applications offered by the construct of an i-fuzzy value go beyond the set-theoretic basic functions described at the beginning of this chapter. Overall, we can say that i-fuzzy theory provides powerful instruments to map uncertain knowledge acquired from incomplete information. Especially the construct of π_α(x), which we can interpret either as undecidedness or as unknowingness, will be the key element of our model presented in the next chapter.

An intuitionistic fuzzy approach for decision problems with ambiguous information

The starting point for our model is a decision matrix as presented in Table 1. We denote alternatives by a_i (i = 1, 2, ..., n) and states by s_j (j = 1, 2, ..., m) with corresponding probabilities p(s_j). Consequences are denoted by c_ij. In a business management context, for example, a_i could represent investment alternatives, s_j various market development states and the c_ij cash flows, which depend on the respectively chosen alternative and the occurring market development state. Within our approach, we assume that the decision-maker has sufficient information to specify point probability values for all states s_j. Alternatively, we can assume that they are exogenously given. Other than that, (s)he is only able to present imprecise assessments of the utility values u(c_ij) for the respective consequences. The sources of such imprecise utility assessments can be different: On the one hand, the consequences themselves may be vague and thus have ambiguous utilities for the decision-maker. On the other hand, the respective consequences may be precisely determinable, but the corresponding utilities are not clear to the decision-maker. These cases are relevant in particular if the consequences are non-monetary. For reasons of simplicity, in the following we do not distinguish between these sources of utility ambiguity. Both can be processed equally within our approach. We rather want to focus on the formal expression and the handling of these imprecise utility assessments within ambiguous decision situations. For this, we use trivalent i-fuzzy values, which we substantially adapt for the underlying problem as follows: α_u(c_ij) = (μ_αu(c_ij), ν_αu(c_ij), π_αu(c_ij)). Table 2 shows the structure of imprecise utility assessments formally described by i-fuzzy values. We interpret the single elements of α_u(c_ij) as follows: μ_αu(c_ij) reflects the utility level which is necessarily realized according to the decision-maker's judgements. In other words, this degree corresponds to the lowest possible utility value that the decision-maker assigns to the corresponding consequence c_ij. ν_αu(c_ij) expresses the degree to which c_ij relatively displeases him. We can also understand it as a degree of relative disutility of c_ij. In addition, π_αu(c_ij) reflects the degree to which the decision-maker is unsure about the utility assessment of c_ij. The following interdependencies apply: μ_αu(c_ij) ∈ [0, 1], ν_αu(c_ij) ∈ [0, 1] with μ_αu(c_ij) + ν_αu(c_ij) ≤ 1, and π_αu(c_ij) = 1 − μ_αu(c_ij) − ν_αu(c_ij).
Thus, i-fuzzy values where π_αu(c_ij) equals 0 can be "translated" into point utility values. This is because in that case we presume that the decision-maker has sufficient information to precisely determine the utility and disutility degree of the corresponding consequence. I-fuzzy values with π_αu(c_ij) > 0 indicate an incomplete information basis regarding the utility assessment. This representation allows us to map the decision-maker's attitudes towards consequence values in a much more differentiated way, especially because it enables us to formally express his or her ignorance towards these variables. In the next step, we aggregate the imprecise utility judgments expressed by i-fuzzy values. To do this, we first apply Formula (3) to weight the i-fuzzy utilities with the corresponding state probabilities, and then aggregate them for each alternative using Formula (1). The values thus obtained reflect the decision-maker's imprecise expected utility assessment for each alternative a_i. Substantially they are also i-fuzzy values and are denoted by α_u(a_i) = (μ_αu(a_i), ν_αu(a_i), π_αu(a_i)). To derive meaningful interpretations of the single elements of α_u(a_i), we define the two following sets that are interrelated with α_u(a_i). Let G_αu(a_i) be a set of i-fuzzy values with α_u(a_i) as the reference element. This set, G_αu(a_i) = {(μ_αu(a_i) + λ_1·π_αu(a_i), ν_αu(a_i) + λ_2·π_αu(a_i))} with λ_1 ∈ [0, 1], λ_2 ∈ [0, 1] and λ_1 + λ_2 ≤ 1, describes all elements that can arise from possible (partial) redistributions of π_αu(a_i). Such redistributions apply in cases where the indeterminacy regarding an evaluated element reduces to a certain degree. Additionally, we define a subset H_αu(a_i) ⊆ G_αu(a_i) as H_αu(a_i) = {(μ_αu(a_i) + λ·π_αu(a_i), ν_αu(a_i) + (1 − λ)·π_αu(a_i))} with λ ∈ [0, 1], representing all possible total redistributions of π_αu(a_i). These are cases where the indeterminacy referred to an evaluated element fully vanishes. We assume that formal redistributions of π_αu(a_i), and therefore (partial or full) reductions of indeterminacy, result from improvements of the decision-maker's information state. For illustration, let us exemplarily assume α_u(a_i) to be (0.3, 0.2, 0.5). Mapping this element into our MNO-triangle, we can see from Fig. 3 that the set G_αu(a_i) is geometrically represented by the hatched triangle, and from Fig. 4 that its subset H_αu(a_i) is expressed by the highlighted black line. From Fig. 4 we can also see that all elements of H_αu(a_i) are bounded by two elements, which we denote by α_u(a_i)_min and α_u(a_i)_max. For our example we get α_u(a_i)_min = (0.3, 0.7, 0), which represents a full redistribution of π_αu(a_i) to ν_αu(a_i), and α_u(a_i)_max = (0.8, 0.2, 0), representing a full redistribution of π_αu(a_i) to μ_αu(a_i). As previously defined, i-fuzzy values with an indeterminacy degree of 0 can be interpreted as point utility values. Bringing all this together, we can sum up the following for the present case: an alternative a_i whose overall expected utility has been evaluated as (0.3, 0.2, 0.5) is highly ambiguous and indicates that the decision-maker has a relatively poor information state regarding this alternative. Improving this information state leads to a revision of the assessment, which formally results in a redistribution of π_αu(a_i). We assume that after the occurrence of state s the resulting consequence of the previously chosen alternative is observable.
Treating this as equivalent to an instant improvement of the information state, the corresponding utility value is thus also observable for the decision-maker. In this regard, H_αu(a_i) represents the set of expected values which anticipates all potential cases of the redistribution of π_αu(a_i) and, with it, all cases of realizable (point) expected utility values at the time of the decision. From the ex ante perspective, our example α_u(a_i) = (0.3, 0.2, 0.5) may take any expected utility value between α_u(a_i)_min = (0.3, 0.7, 0) (least favorable case) and α_u(a_i)_max = (0.8, 0.2, 0) (most favorable case). Translating these into point values, we would say that the actual expected utility value of a_i is located between 0.3 and 0.8. Hence, the decision-maker has a vague decision basis. In order to choose the alternative which maximizes his or her overall expected utility, (s)he needs further decision support.

5 There are also cases possible in which, after the occurrence of state s, the respective consequence of the previously chosen alternative a_i is observable to the decision-maker, but he or she is still not able to fully determine a precise utility value. In such cases, we would have to consider all elements of the set G_αu(a_i) instead of its subset H_αu(a_i). For reasons of simplification, we do not examine such cases in this paper.

Fig. 3 Geometric interpretation of G_αu(a_i)

In the following, we introduce two types of suitable approaches for the choice of alternatives in those situations. First, we propose to make use of intuitionistic ranking functions, which are the most common method for ranking i-fuzzy alternatives. The core elements of such ranking functions are similarity or distance measures. A broad overview and the mathematical foundations of these concepts are presented by Szmidt (2014). For our purposes, we exemplarily apply the ranking method suggested by Szmidt and Kacprzyk (2010). Within this approach, it is assumed that an alternative evaluated with (1, 0, 0) represents the ideal-positive alternative (referring to our MNO-diagram, this point is denoted by M). Possible interpretations are, e.g., that the alternative fully satisfies the decision-maker regarding his or her objectives or, per se, leads to the maximum (expected) utility for the decision-maker. The corresponding ranking values, which we denote by R(α_u(a_i)), are based on the normalized Hamming distance (Hamming 1950) between M(1, 0, 0) and the respective i-fuzzy alternative α_u(a_i), i.e. on ½·(|1 − μ_αu(a_i)| + |ν_αu(a_i)| + |π_αu(a_i)|). The lower the value R(α_u(a_i)), the better is the respective alternative a_i in terms of the extent and reliability of (positive) information concerning its expected utility. To determine the relatively best alternative for the present decision situation, we propose to use R(α_u(a_i)) as the preference value and to choose the alternative a_i that minimizes it, i.e. to apply the objective function min_i R(α_u(a_i)).
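A small Python sketch of this distance-based ranking step follows. It uses the plain normalized Hamming distance to M(1, 0, 0) as the preference value, which is an assumption for illustration; the published Szmidt and Kacprzyk (2010) ranking function may include additional weighting factors, and the input i-fuzzy expected utilities below are invented.

```python
def hamming_to_ideal(mu: float, nu: float) -> float:
    """Normalized Hamming distance between the i-fuzzy value (mu, nu, pi)
    and the ideal-positive point M(1, 0, 0)."""
    pi = 1.0 - mu - nu
    return 0.5 * (abs(1.0 - mu) + abs(0.0 - nu) + abs(0.0 - pi))

# Hypothetical i-fuzzy expected utilities (mu, nu) of three alternatives
alternatives = {"a1": (0.60, 0.20), "a2": (0.30, 0.10), "a3": (0.50, 0.00)}

ranking = {name: hamming_to_ideal(mu, nu) for name, (mu, nu) in alternatives.items()}
best = min(ranking, key=ranking.get)   # the lower the value, the better the alternative
print(ranking, "->", best)
```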
For our i-fuzzy alternatives, we elaborated that a_u(a_i)_min is the element representing the least favorable case and, with it, the lowest achievable utility value. This represents the situation where π_{a_u}(a_i) is totally redistributed to ν_{a_u}(a_i). Therefore, μ_{a_u}(a_i) is the only element that is relevant for the final assessment of a_i and thus for the decision. On that basis, for the i-fuzzy-maxmin criterion we suggest choosing the alternative with the maximal μ_{a_u}(a_i) in order to determine the relatively best alternative for the corresponding decision situation. Unlike a pessimist, an optimistic decision-maker rather focuses on the most favorable cases regarding the development of variables. Within our i-fuzzy approach, we stated that the a_u(a_i)_max are the elements representing the most favorable cases and therefore the highest achievable utility values. Formally, this expresses a total redistribution of π_{a_u}(a_i) to μ_{a_u}(a_i). Therefore, for the i-fuzzy-maxmax criterion the sum of μ_{a_u}(a_i) and π_{a_u}(a_i) is taken as the preference value, and the alternative maximizing this sum is chosen. Integrating the core ideas of the α-maxmin expected utility model by Ghirardato et al. (2004), as explained in Sect. 1, we can further determine the overall expected utility as the weighted average of the maximum and minimum expected utility for each alternative. In terms of our approach, we therefore need to refer to our previously defined set H_{a_u}(a_i), which represents all achievable combinations of the most and least favorable utility results. Formally, it expresses all possible redistributions of π_{a_u}(a_i) to μ_{a_u}(a_i) and ν_{a_u}(a_i). Contrasting the interpretation of Ghirardato et al. (2004), in our approach we do not interpret the weights as the decision-maker's ambiguity attitudes. We regard it as more suitable to stick to the original interpretation of the weights as expressions of optimism and pessimism considerations, as suggested by Hurwicz (1951). Hence, the higher the value of k, the more the decision-maker believes in achieving a favorable result. Therefore, for the i-fuzzy-Hurwicz criterion the sum of μ_{a_u}(a_i) and the k-weighted π_{a_u}(a_i) is taken as the preference value, and the alternative maximizing this sum is chosen. Which of the presented decision criteria a decision-maker should choose for the solution of the formulated problem depends on his or her ambiguity attitude. For example, a decision-maker who has a strong aversion towards ambiguity and perceives ambiguity as a threat would rather choose the i-fuzzy-maxmin criterion. A decision-maker who has a strongly positive perception of ambiguity would rather apply the i-fuzzy-maxmax criterion. The i-fuzzy-Hurwicz criterion is applicable for the formal expression of combinations of extreme attitudes towards ambiguity. In the following, we present a numerical example to illustrate our i-fuzzy approach and the application of the above elaborated decision criteria. For reasons of simplification, we consider four alternatives a_i (i = 1, 2, 3, 4) and two states s_j (j = 1, 2). Table 3 presents the corresponding problem structure, where the utility values of the underlying consequences have already been assessed by the decision-maker using i-fuzzy values. Weighting the single i-fuzzy utility values with the given probabilities according to Formula (3), we get the weighted i-fuzzy utility values (rounded to two decimal places) presented in Table 4.
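The three criteria elaborated above reduce to simple preference values on the (μ, ν, π) triples. A minimal sketch (our own variable names, with illustrative numbers rather than the entries of Table 4):

```python
# i-fuzzy decision criteria as preference values on (mu, nu, pi) triples.
def i_maxmin(mu, nu, pi):           # pessimistic: pi fully redistributed to nu
    return mu

def i_maxmax(mu, nu, pi):           # optimistic: pi fully redistributed to mu
    return mu + pi

def i_hurwicz(mu, nu, pi, k):       # k in [0, 1]: degree of optimism
    return mu + k * pi

candidates = {"a1": (0.70, 0.16, 0.14), "a2": (0.55, 0.10, 0.35)}  # illustrative
for label, crit in [("i-fuzzy-maxmin", i_maxmin), ("i-fuzzy-maxmax", i_maxmax)]:
    print(label, "->", max(candidates, key=lambda a: crit(*candidates[a])))
print("i-fuzzy-Hurwicz (k=0.5) ->",
      max(candidates, key=lambda a: i_hurwicz(*candidates[a], 0.5)))
```

Note how the pessimistic and optimistic criteria can select different alternatives when one candidate carries a large indeterminacy degree; the Hurwicz weight k interpolates between the two.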
Aggregating the weighted i-fuzzy utility values for each alternative according to Formula (1), we get the i-fuzzy expected utility values (rounded to two decimal places) for alternatives a_1 to a_4. Using the MNO-representation (Fig. 5), we can illustrate how the i-fuzzy values for alternatives a_1 to a_4 are geometrically distributed. Regarding these as reference elements, as shown in Fig. 5, we can derive that the actual expected utility of a_1 lies between 0.67 and 0.84, of a_2 between 0 and 0.77, of a_3 between 0.47 and 1, and of a_4 between 0.44 and 0.66. Finally, Table 5 presents the results we get from the application of the proposed decision criteria to the i-fuzzy expected values from the example. The preference values emphasized in bold indicate which alternative is the best and hence chosen by the decision-maker when applying the corresponding criterion. Figure 6 illustrates the geometrical solutions of the latter four results from Table 5. The presented model is not limited to the formulation and solving of decision problems where (solely) the utility values are ambiguous. Analogously, we can use its basic concept to formalize and solve problems where, e.g., probability assessments or both utility and probability assessments are imprecise. The latter case has been examined in detail by Metzger and Spengler (2017). This work also presents a comprehensive discussion of the interdependencies between selected fuzzy measures and i-fuzzy values used as substitutes for probability measures. Conclusion In this paper, we propose a model for the formulation and solving of decision problems under ambiguity. To this end, we generalize the definition of ambiguous situations, which we understand as cases in which the decision-maker has imprecise (uncertain or vague) knowledge that results from incomplete information and can affect all elements of the decision field and the objective function. Adopting decision criteria from the maxmin expected utility model (Gilboa and Schmeidler 1989) and the α-maxmin expected utility model (Ghirardato et al. 2004), we develop a decision model that combines elements of established approaches for the formal handling of uncertainty with instruments of intuitionistic fuzzy theory. In particular, we use intuitionistic fuzzy values as expressions of the decision-maker's imprecise assessments of utility values and provide selected approaches for the solution of the corresponding decision problems. The appropriateness of the applied criterion depends on the ambiguity attitude of the respective decision-maker. In this paper, we focus on the formulation and solving of decision problems where (solely) the utility values are ambiguous. Analogously, the basic concept can be used to formalize and solve problems where, e.g., probability assessments or both utility and probability assessments are imprecise. In order to elicit imprecise utility assessments of decision-makers, it is possible to apply an adapted version of the classical Bernoulli game (based on Ramsey's (1926) work). Imprecise probability values can be derived, e.g., from interval-valued probability judgements (see, e.g., Metzger and Spengler 2017). Respective applications in intuitionistic fuzzy contexts can be addressed in further research projects. The presented approach provides great potential for extensive decision-supporting contributions in different (especially economic) areas.
Similarly, the intuitionistic fuzzy approach offers a basis for modeling behavioral violations of the rationality axioms of (subjective) expected utility theory. On this basis, we suggest assessing the predictive quality of the model by means of subsequent experimental investigations. In particular, such experiments could investigate how the choice of decision criterion by a ''real'' decision-maker is affected by his or her ambiguity attitudes. Before that, however, it is important to further examine the model concept, which is still at an early stage of development. For example, other possible decision criteria should be considered and reviewed in terms of their impact on outcomes. In addition, it is important to examine to what extent classical and adapted axioms of rational behavior are (not) compatible with the presented approach.
9,643.2
2019-01-18T00:00:00.000
[ "Computer Science", "Economics" ]
To Infinity and Beyond: Some ODE and PDE Case Studies When mathematical/computational problems “reach” infinity, extending analysis and/or numerical computation beyond it becomes a notorious challenge. We suggest that, upon suitable singular transformations (that can in principle be computationally detected on the fly) it becomes possible to “go beyond infinity” to the other side, with the solution becoming again well behaved and the computations continuing normally. In our lumped, Ordinary Differential Equation (ODE) examples this “infinity crossing” can happen instantaneously; at the spatially distributed, Partial Differential Equation (PDE) level the crossing of infinity may even persist for finite time, necessitating the introduction of conceptual (and computational) buffer zones in which an appropriate singular transformation is continuously (locally) detected and performed. These observations (and associated tools) could set the stage for a systematic approach to bypassing infinity (and thus going beyond it) in a broader range of evolution equations; they also hold the promise of meaningfully and seamlessly performing the relevant computations. Along the path of our analysis, we present a regularization process via complexification and explore its impact on the dynamics; we also discuss a set of compactification transformations and their intuitive implications. Introduction Studying self-similar solutions that collapse in finite time is a topic of widespread interest in both the mathematical and the physical literature. The contexts range from scaling [1,2], to focusing in prototypical dispersive equations such as the Korteweg-de Vries (KdV) equation [3] and most notably the nonlinear Schrödinger (NLS) equation [4][5][6] on the one hand, and from droplets in thin films [7] and flow in porous media [8,9] to the roughening of crystal surfaces [10] on the other. One may try to avoid collapse (e.g. by imposing space [11][12][13] or time modulations [14,15]) or by identifying the higher order effects that preclude collapse in physical experiments [16]. One may alternatively explore what happens to the mathematical, computational (or even physical [17]) setting at, or past, the moment of collapse; see, e.g., the relevant chapter of [6]. With this latter intent, we start here from an array of simple, controllable examples and progressively explore more elaborate ones, including a "mobile" collapse point(s) scenario; we discuss and illustrate how to address that numerically. We show in this case too how complexification may lead to regularization. We then summarize our findings and present some conclusions and topics for future study. Some relevant auxiliary notions are discussed in the Appendix and the Supporting Information (SI). The ODE Context The standard textbook ODE for collapse in finite time (and its solution by direct integration) reads ẋ = x², with solution x(t) = 1/(t⋆ − t). The collapse time t⋆ = 1/x(0) is fully determined by the initial condition, and the textbook presentation usually stops here. A numerical solver would overflow close to (but before reaching) t⋆; yet we can bypass this infinity by appropriately transforming the dependent variable x near the singularity. Indeed, the "good" quantity y ≡ 1/x ≡ x⁻¹ satisfies the "good" differential equation dy/dt = −1; this equation will help "cross" the infinity (for x) by crossing zero and smoothly emerging on the other side (for y). Once infinity is crossed, we can revert to integrating the initial ("bad", but now tame again) equation for x.
To manifest the feature that the infinity crossing should be thought of as being on equal footing with any other point on the rest of this orbit, we introduce a notion of compactification [23]. Reshuffling the (hyperbolic form of the) solution, x·(t⋆ − t) = 1, compactification through the variables X = cos(θ) = (t⋆ − t − x)/(t⋆ − t + x) (3) and Y = sin(θ) = 2/(t⋆ − t + x) (4) converts this hyperbola to a circle; one can verify that indeed −1 ≤ X ≤ 1 and −1 ≤ Y ≤ 1 and also that X² + Y² = 1. This puts both relevant infinities on equal footing with all other points of the orbit along the circle. The trajectory between the point (1, 0) (the infinity in t, the steady state in x) and the point (−1, 0) (the infinity in x) can be thought of as reminiscent of a "heteroclinic connection". Such connections often arise in dynamical systems with symmetries (see e.g. [24,25]). The compactification also suggests that, provided we utilize "the right variables", i.e., the right quantities to observe the solution (e.g., in the form of this circle), we should obtain a consistent, smooth picture (with consistent, smooth numerics). Indeed, y(t) = 1/x(t) (= t⋆ − t) is a transformation in itself singular, yet one which converts the "bad" exploding variable x(t) into a "good" variable y(t), satisfying dy/dt = −1, that merely smoothly crosses 0. The following numerical protocol then naturally circumvents problems associated with infinity in a broad class of ODEs that collapse self-similarly, as power laws of time (or, importantly, as we will see below including in the SI, also asymptotically self-similarly): • Solve the "bad" ODE of Eq. (1) for a while, continuously monitoring, during the integration, its growth towards collapse. • If/when the approach to collapse is detected, estimate its (asymptotically) self-similar rate (the exponent of the associated power law, here −1) and use it to switch to a "good" equation for y, relying on the singular transformation y = 1/x with this exponent (and on continuity, to obtain appropriate initial data for this good equation). • Run this "good" equation for y until 0 for it (or ∞ for the former, "bad" equation) is safely crossed, computationally observing for x an (asymptotically) self-similar "return" from infinity. • Finally, transform back to the "bad" equation (no longer that bad, as infinity has been crossed) and march it further forward in time. This protocol has been carried out in Fig. 1 (see caption), illustrating that the dynamics can cross infinity and computation can be continued for all time, provided that the self-similar approach to infinity is adaptively detected and the associated, appropriately numerically estimated, singular transformation is used to cross it. Note that the compactification also allows the progression past infinity in time as well, when y crosses zero as time approaches positive infinity and then "returns" from negative infinity. We can now try to extend/generalize these ideas to other collapse rates (i.e. arbitrary powers/exponents of self-similarity). The collapse of ẋ = x³, whose exact solution is x(t) = 1/√(t⋆ − t), is worth examining separately. The relevant singular transformation (here y(t) = 1/x²) will take us to infinity in finite time but, at first sight, will not cross it: y(t) becomes imaginary beyond t⋆. An appropriate compactification resolves the issue, leading to perfectly regular dynamics on a circle, so that the singularity is again "bypassed" in the spirit of Fig. 1.
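A minimal numerical sketch of the four-step protocol above, for the quadratic example ẋ = x² (a hypothetical re-implementation with our own threshold and variable names, not the authors' code):

```python
# Cross infinity for xdot = x**2 by switching to the "good" variable y = 1/x.
from scipy.integrate import solve_ivp

X_BIG = 1e2                                  # detection threshold for the collapse
hit = lambda t, x: abs(x[0]) - X_BIG
hit.terminal, hit.direction = True, 1

# 1) "bad" equation until the threshold is reached
leg1 = solve_ivp(lambda t, x: x**2, (0.0, 10.0), [0.5], events=hit, max_step=1e-3)
t1, x1 = leg1.t[-1], leg1.y[0, -1]

# 2) "good" equation ydot = -1 across y = 0 (i.e., across infinity for x)
y1 = 1.0 / x1
t2 = t1 + 2.0 * y1                           # step well past the zero crossing
y2 = y1 - (t2 - t1)                          # exact solution of ydot = -1

# 3) back to the "bad" (now tame) equation on the other side of infinity
leg2 = solve_ivp(lambda t, x: x**2, (t2, t2 + 5.0), [1.0 / y2], max_step=1e-2)
print("t* detected near", t1 + y1, "| x just after crossing:", leg2.y[0, 0])
```

With x(0) = 0.5 the exact collapse time is t⋆ = 2; the run switches variables slightly before t⋆, crosses zero in y, and resumes the original equation from a large negative x.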
Yet an implicit multi-valuedness clearly arises as a crucial issue in selecting useful transformations for such exact solutions; we will return to this important issue below. For the time being, we argue that one can generalize the above notions for ODEs that asymptotically collapse self-similarly, x(t) ∼ 1/(t⋆ − t)^a, so as to produce a useful compactification of the same form as Eqs. (3)-(4), with (t⋆ − t) replaced by (t⋆ − t)^a. In this form, the dynamics "benignly" travels along the circle. Relevant examples can straightforwardly be extended to, e.g., fractional powers, although it is known from standard ODE analysis that issues of uniqueness may arise there that we do not delve into in the present work. More generally then, the self-similarly collapsing ODE dx/dt = ±x^p has the solution ±(1/(1 − p))·x^(1−p) = t − t⋆, and its scaling in time follows x(t) ∼ (t⋆ − t)^(1/(1−p)), with the collapse time once again determined by the initial data. Given a "legacy code" that integrates the ODE ẋ = F(x), we monitor its growth approaching collapse (i.e., how F(x) scales as x^p, or more generally with ||x||). Upon detection of asymptotically self-similar collapse, at sufficiently large |x| (e.g. 10² in the ODE of Fig. 1, or 10⁴ in the PDEs of the next section) we stop solving the "bad" ODE. We use instead the singular transformation y = x^(−p+1) (more generally, a transformation based on ∫ dx/F(x)) to solve the "good" y(t) ODE, which crosses 0 rather than infinity. Then, a little beyond the collapse time (beyond infinity for x(t), beyond 0 for y(t)) we simply revert to the original, "bad" (yet no longer dangerous!) ODE, with continuity furnishing the relevant matching conditions. An illustration of asymptotically self-similar blowups, where different transformations are used to cross two different infinities (the finite-time/infinite-value and the infinite-time/finite-value ones), is included in the SI. Examining such infinity crossings as regular, rather than singular, points begs an "explanation" for the mechanism of exiting the real axis along +∞ and then re-emerging on the other side at −∞ (for dx/dt = x²) or, arguably more remarkably, from +i∞ back towards the origin (in the example involving x³). In the latter case there is an obvious ambiguity: the solution might just as well be chosen to re-emerge from −i∞, and one can formally accept either choice past the collapse. This is perhaps a prototypical (and tangible) example of the "phase loss" feature argued in [6,17]. As a way of shedding further light on these features, we examine the complexified version of Eq. (1). The complexified version ż = z² leads to the two-dimensional dynamical system ẋ = x² − y², ẏ = 2xy. The real axis is an invariant subspace, retrieving our real results; yet complexification endows the dynamics with an intriguing "capability": as Fig. 1 illustrates through the (x, y) phase plane, collapse is avoided in the presence of a minuscule imaginary part. Large "elliptical-looking" trajectories are traced on the phase plane, eventually returning to the neighborhood of the sole fixed point (0, 0), which in the real case one would characterize as semi-stable. The system of Eq. (11) can be tackled in closed form, since the ODE ż = z² yields 1/z = −t + 1/z(0). For z = x + iy (z(0) = x₀ + iy₀) we obtain the explicit orbit formula in parametric form (Eqs. (12)-(13)). Eliminating time by dividing the two ODEs within Eq. (11) directly yields an ODE for y = y(x) (rather than the parametric forms of Eqs. (12)-(13)). From this ODE, one can obtain that the quantity R ≡ (x² + y²)/(2y) is an invariant of the phase-plane dynamics, and thus the latter can be written as x² + (y − R)² = R².
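A quick numerical cross-check of this circle invariant (our own sketch; the initial data and tolerances are arbitrary):

```python
# Verify that trajectories of xdot = x**2 - y**2, ydot = 2*x*y stay on the
# circle x**2 + (y - R)**2 = R**2 with R = (x0**2 + y0**2) / (2*y0).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, u):
    x, y = u
    return [x**2 - y**2, 2.0 * x * y]

x0, y0 = 1.0, 1e-3                      # tiny imaginary part regularizes the dynamics
R = (x0**2 + y0**2) / (2.0 * y0)
sol = solve_ivp(rhs, (0.0, 0.9), [x0, y0], rtol=1e-10, atol=1e-12)
x, y = sol.y
print("max relative drift off the circle:",
      np.max(np.abs(x**2 + (y - R) ** 2 - R**2)) / R**2)
```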
The trajectory thus evolves along circles of radius R in the upper (resp. lower) half-plane if y₀ > 0 (resp. y₀ < 0). Approaching the axis with y₀ → 0, the curvature of these circles tends to 0 and their radius to ∞ (retrieving the real dynamics as a special case). Fig. 1, through its planar projections, illustrates not only the radial projection of the dynamics in the x-y plane, but also the x-t and y-t dependencies. Starting with a minuscule imaginary part, the real dynamics tends to infinity; yet when the real part gets sufficiently large (somewhat in the spirit of our computations above), the imaginary part "takes over", grows rapidly, and "chaperons" the real part to the negative side. Once the real part reaches the "opposite" (absolutely equal) negative value, the imaginary part rapidly shrinks and the formerly bad, yet now benign, real equation "takes over" again. We point out here that there is also a canonical way to generalize the compactification of this complex picture to the Riemann sphere through the inverse stereographic projection, mapping (x, y) to (2x, 2y, x² + y² − 1)/(1 + x² + y²). Now the real dynamics becomes a great geodesic circle, while all other complex-plane curves become regular circles on the surface of the sphere. Under this transformation all points with x² + y² → ∞ are identified with (0, 0, 1), rationalizing the vanishing time needed to move from one to the other. For the cubic case ż = z³, the two-dimensional dynamical system becomes ẋ = x³ − 3xy², ẏ = 3x²y − y³. The corresponding phase portrait is shown in Fig. 1(c), while a typical trajectory is shown in Fig. 1(f). Instead of one "leaf" in the upper half plane there are now two leaves, one in each quadrant, see Fig. 1(d); this suggests a natural generalization to n − 1 leaves in each half plane in the case of ż = z^n. In the cubic case there is collapse both for positive and for negative initial data, and reentry along either the positive (resp. the negative) imaginary infinity (i.e., from +i∞, resp. −i∞) could be chosen (in analogy to the arbitrariness in the phase factor). However, for even infinitesimally small imaginary data, the symmetry is broken, and unique trajectories are selected along each quadrant. A small real part (accompanied by a small imaginary part), as in the bottom right panel of Fig. 1, grows until eventually (when sufficiently large) the imaginary part takes over. The real part then decays rapidly to 0, while the imaginary part decays slowly, closing the orbit in the first quadrant; this is again a natural extension of the limiting case of purely real initial data. This complex formulation also allows the quantification (in a vein similar to [26,27]) of how long it takes for initial data, say, on the real axis, to "emerge" on the imaginary axis. In the SI we show that this time tends to 0 for the transitions between +∞ and +i∞ for ż = z³ (or from +∞ to −∞ in ż = z²). A PDE Case We now turn to a PDE example, illustrating one of the ways that space dependence modifies the crossing of infinity. Motivated by dx/dt = ±x², where x⁻¹ crosses infinity at a single moment in time, we study the case where infinity is first reached in finite time and then crossed continuously in time, but (in one spatial dimension) at isolated points in space. The geometry involved is illustrated in the top left panel of Fig. 2, showing a graph of the function v(x, t) = x² + (0.1 − 0.1t), a parabola shifting its values downwards at constant speed, but without change of shape.
Initially it is everywhere positive; the tip reaches the zero level set at t* = 1 and then crosses it. The function w(x, t) = 1/v(x, t) is shown in the top right panel of Fig. 2: it reaches infinity at t* = 1 and subsequently crosses it at two points that move apart as dictated by the motion of the parabola. We can then agree that the waveform "returns from minus infinity" between these two crossing points, as the definition of w(x, t) formally suggests. Extension to higher-dimensional geometries (e.g. a paraboloid initially touching a plane at a point and then crossing it along a closed curve, such as a circle that "opens up" in time starting at the initial point, in two spatial dimensions) can also naturally be envisaged. The key observation is that the evolution of w(x, t) actually involves a (potentially asymptotically) self-similar collapse near the crossing of infinity. This suggests that, upon detection, on the fly, of such an asymptotically self-similar collapse and estimation of the associated exponents (see below) for a "bad" w(x, t) PDE, a search for a "good observable" v(x, t) be performed, so that a conceptual and computational program analogous to that of the previous Section on ODEs may be carried through to obtain, and work with, a "good PDE" in the vicinity of the collapse point. The simple, linear equation of Eq. (20) provides an "engineered", yet transparent and analytically tractable, illustration of the relevant ideas and hence will be used as our workhorse in what follows. Generic initial data in this well-posed, linear model decay and concurrently spread, asymptoting to u(x, t) = 0 at long times. We select an arbitrary level set u* = r > 0 and a modified variable w ≡ 1/(u(x, t) − r) to study level-set crossings; for initial conditions everywhere above r, and on its way to zero, u(x, t) will cross the level set r, so that w(x, t) will cross the level set at infinity. The "bad" PDE for w(x, t) is given by Eq. (21). An auxiliary tool for the analysis of (asymptotic) self-similar collapse in such equations is the so-called MN-dynamics [20,28]: a dynamic renormalization scheme rescaling space, time and the amplitude of the solution so that the self-similar solution becomes a steady state in the "co-exploding" frame, i.e., the frame factoring out the symmetry/invariance associated with the (potentially asymptotic) self-similarity. This formulation is presented as a separate, detailed Section in the Appendix for completeness. In that Section, both the general case and the special example of Eq. (21) are treated. From this formulation we can infer that w ∼ 1/(t* − t), which, in turn, suggests the choice of the "good variable" used below. In our illustrative example we use Neumann BC in [0, π] and initial conditions u(x, 0) = a cos(2x) + c (here a = 0.4, c = 1.5), so that the solution u(x, t) of Eq. (20) is available in closed form, and we choose r = 1. We do not, however, pre-assume such knowledge of u(x, t), since the equation we have to solve is the "bad" (focusing) w(x, t) equation, i.e., Eq. (21); our MN framework applied to the focusing of the w(x, t) evolution then suggests that a good observable is v(x, t) ≡ w(x, t)⁻¹, a variable that will simply be crossing 0, and thus the "good PDE" is simply the corresponding equation for v, Eq. (23). The bottom panels of Fig. 2 show representative instances just before and just after the initial encounter of the w(x, t) profile with infinity, in both its "good" v(x, t) ≡ (u(x, t) − 1) and its "bad" w(x, t) incarnations, in the spirit of Figures 2a and 2b.
The approach to infinity for w(x, t) is indeed asymptotically self-similar, as explained in the Appendix. As we approach the event, an inverted bell-shaped profile comes close to, touches, and then starts crossing through r = 1 in the variable u, or crossing through 0 in the variable v ≡ u − r, or equivalently crossing through ∞ in the variable w. Recall that our goal is to seamlessly carry out the computation without our numerical code ever realizing that (some part of) the solution is becoming indefinitely large. To achieve this, as the bad PDE solution grows towards infinity, it is adaptively tested, with a user-defined threshold for (local, asymptotic) self-similarity, i.e., for growth according to a (potentially approximate) power law. When this is numerically confirmed, a suitable power-law transformation is devised with the numerically estimated similarity exponent; in the above example the detected exponent is −1 and so the transformation is v = w⁻¹. The easiest way to realize the right observable in this case is to consider uniform initial conditions: then the PDE reduces to an ODE that asymptotically explodes as ẇ = w², suggesting the v = w⁻¹ change of observables. Importantly, the transformation has to be performed, and the "good" solution sought, over an entire spatial interval (or intervals) surrounding the approaching singular point(s). This suggests the following procedure, illustrated schematically in Fig. 3: • Upon detection of the approach to infinity (as the "tip" of the collapsing waveform grows beyond a sufficiently large value) at a given point or points inside the computational domain, we split the domain into 3 regions: (a),(c) regular ones to the left and to the right of the growing tip, where the original "bad" equation for w is being solved; and (b) a new "singular" one, in the middle, where instead of solving the equation for w, the equation for its singularly transformed variant, the "good" equation for v = w⁻¹, is solved instead. The latter transformation is selected to comply with either the self-similar analysis on the theoretical side, or the identified power law of amplitude growth on the numerical side. These equations are linked by continuity of the (transformed) observables at the domain boundaries, and standard domain decomposition numerical techniques are used [29]. The good equation simply crosses zero rather than crossing infinity, as in the ODE case. • Once zero is crossed, the initial single crossing point in the family of case examples under consideration (in one spatial dimension) "opens up" into two infinity crossings (one can visualize two waves that propagate in opposite directions, one to the left and one to the right), i.e., two zero-level-set crossings for the "good" equation. These crossings are quantified, for our example, in Fig. 3. They are bordered by the computed locations of a "high enough" absolute level (here 10⁴) for asymptotic self-similarity. To deal with the two new crossings computationally over time, the central region is subsequently split into 3 regions. The two outer ones are our "singular buffers", surrounding, and in some sense "masking", the infinity crossings to the left and to the right. But now they are separated by another, inner "regular" interval, where we can again solve the original "bad" equation, since here it is again sufficiently far from infinity.
Thus, post-collapse, we partition the domain into five regions: three regular ones (the two outer ones and the innermost one, for the "bad" equation) and two "singular buffers" for the transformed "good" equation, one around the left zero crossing and one around the right zero crossing of v, corresponding to the two infinity crossings of the bad equation for w. A nontrivial aspect of the computation is the "gluing" between the regular regions and the singular buffers. Our numerical scheme here is a simple one following [29]: for a finite difference discretization in space, (a) an explicit forward Euler time step is performed at the interface points, which provides the interior boundary conditions for the next time step (the next "computational era"), while (b) an implicit Euler time step is adopted to solve the "good" and "bad" PDE within the three (or the five) domains. The scheme can be modified to allow for different space and time steps in the different domains until the next computational era, when the new interface points will be detected, the new "interior" BCs will be computed, and the new set of BVPs resulting from the implicit timestepping in each domain will be solved. To recap the essence of the algorithm: by solving the "good" equation inside the buffer regions (and following the motion of the buffers on the fly), we ensure that the numerical simulation is never plagued by the indeterminacy associated with approaching/touching/crossing infinity. We now explore two more notions that were also examined in the ODE context. The first is compactification: self-similar PDE dynamics, although crossing through infinity, can simply be compactified as evolving over a circle or, better, on a sphere. We perform this step for a general 1D PDE (self-similar) solution of the form u(x, t) = f(ξ)/(t⋆ − t)^r (Eq. (24)), assuming independence from τ (see the MN formulation in the Appendix), where ξ is a self-similar variable, e.g. ξ = x/(t⋆ − t)^q. If max(f) > 1, we can define f̃ = f/max(f) and subsequently drop the tilde in Eq. (24), ensuring that |f| ≤ 1. Then, upon suitable definition of the variables, we can take X = ((t⋆ − t)^r − u)/((t⋆ − t)^r + u) and Y = 2√(f(ξ))/((t⋆ − t)^r + u), in which case X² + Y² = 1. In these variables, at every moment in time the trajectory can be thought of as compactified along a circle. However, as the circle itself represents an invariant shape, in this representation we cannot straightforwardly visualize the trajectory's dynamics; for this reason, we next compactify the dynamics on a sphere. We define g² = 1 − f² and can then write, using the above variables, (gX)² + (gY)² = g² = 1 − f², which can be reshuffled to read (gX)² + (gY)² + f² = 1. Choosing the three terms of the left-hand side of this relation as the squared coordinates (gX, gY, f), we observe that the dynamics can be seen as evolving along the surface of a sphere. This compactification once again underscores the possibility to think of infinity as a regular circle (rather than a point, as is the case for ODEs), a level set that is crossed by the PDE solution evolving along the surface of the sphere. (Caption of Fig. 3: the left inset encompasses two computational "eras": (1) after the selected high level (here 10⁴) for w, defining the buffer-region boundary, has been crossed, but before infinity is crossed; (2) after w has crossed infinity, but before it has (in absolute value) receded below that level. In both eras the bad (w) and good (v) solutions are shown; a computational movie of the evolution is available at https://www.dropbox.com/s/1vag91zuwhjku1a/PDEPassInfinity.avi?dl=0.)
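The buffer bookkeeping described above amounts to locating the grid regions where the "bad" field exceeds the chosen level and evolving the "good" variable v = 1/w there. A hedged sketch of that index bookkeeping (our own illustration, not the paper's solver; the toy profile and the demo threshold are ours, whereas the text works with the level 10⁴):

```python
import numpy as np

def singular_buffers(w, level):
    """Index ranges [i0, i1] of contiguous grid regions with |w| > level."""
    mask = np.abs(w) > level
    padded = np.concatenate(([False], mask, [False])).astype(int)
    starts = np.flatnonzero(np.diff(padded) == 1)
    stops = np.flatnonzero(np.diff(padded) == -1) - 1
    return list(zip(starts, stops))

# Toy post-crossing profile: the level-set function crosses zero twice, so
# w = 1/(...) makes two excursions toward +/- infinity.
x = np.linspace(0.0, np.pi, 2001)
w = 1.0 / (0.4 * np.cos(2.0 * x) + 0.2)
for i0, i1 in singular_buffers(w, level=50.0):
    print(f"singular buffer on [{x[i0]:.3f}, {x[i1]:.3f}]: evolve v = 1/w there")
```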
Finally, as in the ODE case, we discuss the possibility of complexifying the model in order to understand, as a limiting case, how infinity is crossed for purely real initial data, while it may be avoided (regularized) upon initialization with complex initial data. Using the complex decomposition w(x) = a(x) + ib(x) in Eq. (21), one can obtain a pair of real, and rather elaborate looking, equations for a and b. We expect that the presence of an imaginary part in the initial data may avoid collapse, in analogy with Figure 1. Given the quadratic nature of the nonlinearity, the quadratic ODE example is especially relevant; we expect here to observe something similar but in a PDE form, having space as an additional variable over which the profile is distributed (around the crossing "tip"). Again, the tractability of our example allows us, via the solution of Eq. (23), to perform the relevant calculation analytically, since at the level of the equation for v the complex model can be fully solved. Then, assuming v = c + id, the variable w = a + ib = 1/v = 1/(c + id) leads to a = c/(c² + d²) and b = −d/(c² + d²); obtaining (c, d) explicitly, the same can be done for (a, b). This program is carried out in Fig. 4; see also the relevant movie at: https://www.dropbox.com/s/rxekr7umxa5b3ko/complexdynamics.avi?dl=0. We reconstruct analytically the spatial profile of the real and imaginary parts of w at the different moments in time provided in the caption. While the profile tends towards collapse in the real part of the variable (and would go all the way to collapse and the dynamics of a Figure like 2 for purely real initial data), the imaginary part, in analogy to the dynamics of Fig. 1, but in a distributed sense around the tip, eventually takes over. As it does so, it forces the solution filament to "turn around on itself" in a spatially distributed generalization of the ODE of Fig. 1. Finally, the solution appears to re-emerge from the other side, practically extinguishing its imaginary part, and having avoided the crossing of infinity. This illustrates how complexification, even in the case of the PDE, results in the avoidance of collapse and the regularization of the model; the collapsing real case is a special limiting case of the more general complex one. Concluding remarks and future challenges We attempted to address here a few prototypical cases of a spectrum of problems arising in both ordinary and partial differential equations, so as to deal with the emergence of infinities during the evolution of the relevant models. In a number of cases the model at hand will become physically inaccurate, and will need to be suitably modified, as these singular points are approached; if not, our questions may be relevant for the physical realm. In any event, the questions are of particular relevance towards the mathematical analysis and numerical computation of the models at hand. In that light, we argued that it is possible in our context to perform singular transformations on demand that may sidestep, through the help of a "good equation", the computational difficulties associated with infinities, rendering them tantamount to the crossing of a regular point such as zero.
For ordinary differential equations, once the crossing has transpired, one can safely return to the original "bad" equation and continue the dynamics from there (until possibly a new infinity is approached). In the case of partial differential equations, the scenario at hand is more complex. There, the solution is distributed in space, and hence we assume, and have analyzed, the setting where a (generically assumed to be parabolic; see the relevant discussion in the Appendix) tip of a waveform approaches infinity. We have discussed in detail a scenario of initially touching infinity and then crossing it. Suitable computational "buffers" need then to be devised, where the detected singular transformation allows us to locally re-interpret (for computational purposes) the crossing of infinity as the crossing (in a transformed space) of a regular point, such as zero. These buffers need to be in constant and consistent communication, through appropriate continuity conditions, with the rest of the computational domain (the "rest of the world"). Typically, the buffers are defined by the locations at which the solution takes on a sufficiently large (absolute) value (say, 10⁴ to the left and right of the growing tip in the pre-crossing regime, or from 10⁴ to −10⁴ on the left and from −10⁴ to 10⁴ on the right in the post-crossing regime). (See also Fig. 2 and the computational movie of this evolution at https://www.dropbox.com/s/rxekr7umxa5b3ko/complexdynamics.avi?dl=0.) Our computational findings were complemented by a compactification approach, supporting the argument that infinity can be addressed in the same way as a regular point or a regular level set along the orbit. At the same time, a complexification of the model was observed to provide a regularization of the original real dynamics, avoiding the collapse of the latter and offering insight on how collapsing orbits can be envisioned as limiting scenarios of nonlinear dynamical systems within the complex plane. Naturally, there are numerous directions of interest for potential future studies. Clearly, exploring additional examples and examining whether the ideas can be equally successfully applied to them is of particular relevance. In the context of ODEs, this is especially relevant as regards vector/multi-dimensional systems. A related, especially important part in the realm of ODEs is that of convergence of algorithms, e.g. to fixed points (or extremizers) of functions. Recall that in such cases, a concern always is whether the code may diverge along the way, rather than reach a root (or an extremum). Our approach can be used to devise algorithms with the ability to systematically bypass infinities during the algorithmic iterations, and such a "boosted" algorithm may be useful towards achieving enhanced, possibly global, convergence to the roots (or extrema) of a function. This is particularly interesting now that continuous-time versions of time-honored discrete algorithms like Newton or Nesterov iteration schemes have become a research focus; see the related discussions in [30][31][32]. On the PDE side, we are envisioning (and currently starting to explore) a multitude of emerging aspects. For instance, when a distributed waveform reaches infinity at a single point in space-time, different post-collapse outcomes are possible.
For example, an alternative possibility to the infinity-crossing presented here has been argued to be that the solution may "depart" from infinity without crossing (the transient blowup in [21]), as in the case of the standard collapsing NLS equation discussed extensively in textbooks [5,6]. There, the crossing through infinity is precluded by the existence of conservation laws. Past the initial point, it is argued in [6,17] that the solution will return from infinity incurring a "loss of phase". At the bifurcation level, the work of [20] offers a suggestion of how the return from infinity manifests itself: there, a solution with a positive growth rate was identified, that was dynamically approached during the collapse stage. Yet a partial "mirror image" of that, with negative growth rate, which presumably is followed past the collapse point in order to return from infinity was also identified; see, in particular, Fig. 1 and especially Fig. 2 of [20]. It is also possible that such a "touch and return" from infinity may occur without the loss of phase as, e.g., in the recent work of [33]. In examining a nonlocal variant of NLS (motivated by PT -symmetric considerations, i.e., systems invariant under the action of parity and time-reversal), Ref. [33] identified a solution that goes to infinity in finite time that can be theoretically calculated; subsequently this solution returns from infinity and then revisits infinity again, in a periodic way, always solely touching it and never crossing. This solution is analytically available in Eq. (22) of [33] and the collapse times are given by Eq. (23) therein; perhaps even more remarkably, the model itself is integrable. In this case, infinity is reached, subsequently returned from and then periodically revisited. Such an observation would arise in our context if the "original" PDE for u (i.e., a variant of Eq. (20)) had a spatiotemporal limit cycle that attained somewhere in space an extremal value r. Then, w(x, t) ≡ 1 u(x,t)−r would feature the above phenomenology. Such cases where infinity is reached but not crossed merit separate examination. The same is true for solutions exhibiting entire intervals at infinity, whose support progressively grows (or anyway remains finite), bordered by moving "fronts"; here one may envision that the "good" equation develops compacton-like solutions [22]. A related issue that may be worth exploring with such techniques is the possibility of bursting mechanisms (e.g. [34,35] involving heteroclinic connections with entire invariant planes at infinity) and the associated emergence of extreme events in nonlinear PDEs. Generalizations of the techniques developed herein to settings where, rather than u(x, t), u x (x, t) → ∞ (or this happens for other quantities associated with the dependent variable), as is, e.g., the case during the formation of shocks, should also be interesting to explore. Effectively, our considerations here can be thought of as identifying and numerically evolving the infinity level set of the solution. Thus, a related interesting direction for future work could be to try to connect the considerations herein with ones of level set methods [36,37], adapting the latter towards capturing, e.g., the regions of the singular buffers. Equally relevant are explicit examples similar to the one herein where multiple collapses may occur and propagate. 
An intriguing such case is the defocusing scenario of the nonlinear Schrödinger equation, which, in fact, has been shown in [38] to possess solutions such as u(x, t) = 1/x, or solutions with propagating singularities at x = ±12^(1/4) t^(1/2). It is obvious that, to follow such dynamical examples, a methodology bearing features such as the ones discussed above is needed in order to bypass the continuously propagating singular points. In turn, generalizing such notions to higher dimensions (e.g., a two-dimensional variant of the analytically tractable example herein) and addressing collapsing waveforms both at points, as well as in more complex geometric settings such as curves [39], is of particular interest for future studies. It is tempting to explore whether the tools developed here may have something to add to the way we analyze collapse in well-established PDEs like the Navier-Stokes equations, or even singularities arising in a cosmological context. Several of these topics are under active consideration and we hope to be able to report on them in future publications. By L here we designate the operator involving derivatives (not necessarily a linear operator; see also the example below), while by N we designate the local nonlinearity-bearing operator. Using the ansatz of Eq. (36), in which the solution is written as an amplitude A(τ) multiplying a rescaled profile f(ξ, τ), we introduce a new scaled system of coordinates, intended to be suitably adjusted to the self-similar variation of the PDE solution: ξ is a rescaled spatial variable (taking into consideration the shrinkage, or growth, of the width), while τ is a rescaled time variable, not a priori tuned, but which will be adjusted so that in this "co-exploding" frame we factor out the self-similarity, in the same way in which, when going to the co-traveling frame, we factor out translation. This way, the self-similar solution will appear to be steady in this dynamical frame. Direct substitution of Eq. (36) into Eq. (35) yields Eq. (37), in which a and s are powers tailored to the particular problem (linear and nonlinear operators) of interest. In order to match the scalings of the two terms of the right-hand side of Eq. (37), as is required for self-similarity, we impose the corresponding condition relating a and s, and the model can then be rewritten in rescaled form. Demanding then that the time transformation be such that there is evolution towards a stationary state in this co-exploding frame, we remove any explicit time dependence by requiring that τ_t = A^(s−1) ∼ L^(−a). The stationary state in this frame then satisfies the corresponding steady equation. It should be mentioned here that this analysis already provides an explicit estimate for the growth/shrinkage of amplitude and width over time, given the assumption that A_τ/A = G = const. In particular, A_t = A_τ τ_t = A_τ A^(s−1) = G A^s, which, in accordance with the considerations of the previous section, leads to the evolution A ∼ (t⋆ − t)^(1/(−s+1)). A similar analysis can be performed for the width L. As a result of this analysis, our pulse-like entity touching (and potentially crossing) infinity will do so in a self-similar manner. For the specific example of Eq. (21), using w = A f(ξ, τ), we obtain the corresponding co-exploding-frame equation. It is then evident that the dynamics is not directly self-similar (due to the different scaling of the two terms within N), but only asymptotically self-similar. When w (and A) is small, the exponential growth associated with the linear term is dominant. However, as the amplitude increases, eventually the quadratic term takes over, leaving the linear term as an "offending" term of ever-decreasing significance for the exact self-similar evolution.
When the linear term becomes negligible, the self-similar evolution requires that A/L² = A², providing the scaling A ∼ 1/L², i.e., in this case s = 2 and a = 2 in the general formulation above. From there, all the scalings associated with self-similarity can be directly deduced, as explained previously. Supporting Information The linear ODE case The mapping of the dynamics onto a circle can also be performed for the case of simple exponential (rather than power-law, self-similar, finite-time) collapse arising from the simple linear ODE of Eq. (42), with the standard exponential solution of Eq. (43). Here the dynamics can be written in hyperbolic form (Eq. (44)), and the variables can be defined so that X² + Y² = 1. In fact, substituting the exact solution of Eq. (43), it is straightforward to realize that X = tanh(∓(t − t⋆)) and Y = sech(t − t⋆), so that the circular dynamics is a realization of the simple identity tanh² + sech² = 1. An asymptotically self-similar ODE case We have so far focused on genuinely self-similar examples; the corresponding ideas can also be extended to asymptotically self-similar cases [1] that are not genuinely self-similar in that they possess "offending" terms, yet upon approaching the singularity the self-similar terms dominate, with the offending ones playing a progressively less important role. Our approach can easily be adapted to this case. Our simple example variant here is ẋ = 2x + x² (Eq. (47)). Direct integration again yields the exact solution, from which it can be seen that the observable log(x/(x + 2)) is the one that linearly crosses through 0 (as 2(t − t⋆)). For x large, this quantity becomes approximately −2/x. Hence, indeed, close to the collapse it is the quadratic term that takes over, since the dominant behavior of x(t) is like 1/(t⋆ − t). However, as t → t⋆, the relevant asymptotics also makes explicit the (lower-order) contribution of the terms offending the self-similarity. The collapse time, denoted by t⋆, is still determined by the initial data as t⋆ = −(1/2) log(x(0)/(x(0) + 2)). Nevertheless, in this case as well, our computational prescription can be carried out (see the short numerical sketch at the end of this Supporting Information). Eq. (47) can be integrated until x becomes large; we then revert to y = 1/x, which obeys the straightforward ODE dynamics of Eq. (51), ẏ = −2y − 1 (using the transformation to obtain the initial condition y(0)), with the equally simple solution y(t) = −1/2 + (y(0) + 1/2)e^(−2t). The solution of the latter problem of Eq. (51) crosses 0 en route to its approach of the asymptotic value of −1/2. Finally, once the infinity has been bypassed, we return to the simulation of Eq. (47), as before. Mapping the dynamics to a circle. The solution of Eq. (47) can be rewritten in hyperbolic form; using the compactification in exactly the same way as in Eqs. (3) and (4) of the main text, and only replacing t⋆ − t with (e^(2(t⋆−t)) − 1)/2, the compactification scheme carries through. (a) In this case, if t → t⋆, we Taylor expand and retrieve (from the first term) the limit of exactly Eqs. (3)-(4). This is the contribution that stems from the x² term in the ODE. (b) In the case of t → 0 (or anyway far from t⋆) the exponential dominates and the (−1) coming from the x² term is irrelevant. This is the contribution that stems from the 2x term in the ODE. Time to transition Following numerous works, including [26,27], we consider a radial contour along the complex plane, i.e., the arc of a circle from the real to the positive imaginary axis.
Then, along this arc (denoted by C), we have for T, the elapsed time, T = ∫_C dt = ∫_C dz/z³. Bearing in mind the radial nature of the contour (which renders R constant), factoring out 1/R² and taking the limit as R → ∞, we obtain a vanishing result, even though the angular integral amounts to unity. That is, interestingly, it takes a finite time to reach from anywhere along the real axis an equidistant point along the imaginary axis, yet this time vanishes as we approach infinity, in line with the analytical result for ẋ = x³. In the case of ż = z², there is a similar result justifying the infinitesimal time of return there from the positive to the negative real axis.
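Returning to the asymptotically self-similar example above, the same switching prescription can be sketched numerically (our own code, not the authors'; the ODE ẋ = 2x + x² and the "good" equation ẏ = −2y − 1 are the forms given above for Eqs. (47) and (51)):

```python
# Cross infinity for xdot = 2x + x**2 via y = 1/x, which obeys ydot = -2y - 1.
import numpy as np
from scipy.integrate import solve_ivp

X_BIG = 1e3
hit = lambda t, x: abs(x[0]) - X_BIG
hit.terminal, hit.direction = True, 1

x0 = 1.0
t_star = -0.5 * np.log(x0 / (x0 + 2.0))            # exact collapse time

leg1 = solve_ivp(lambda t, x: 2 * x + x**2, (0.0, 5.0), [x0], events=hit, max_step=1e-4)
t1, y1 = leg1.t[-1], 1.0 / leg1.y[0, -1]

t2 = t1 + 2.0 * y1                                  # a little past the zero of y
y2 = -0.5 + (y1 + 0.5) * np.exp(-2.0 * (t2 - t1))   # exact "good" dynamics

leg2 = solve_ivp(lambda t, x: 2 * x + x**2, (t2, t2 + 2.0), [1.0 / y2], max_step=1e-2)
print("t* exact:", t_star, "| detected crossing near:", t1 + y1)
print("x just after infinity:", leg2.y[0, 0], "-> ends near", leg2.y[0, -1])
```

After the crossing, the trajectory relaxes toward the fixed point x = −2 (i.e., y = −1/2), consistent with the asymptotics quoted above.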
10,035.2
2017-01-01T00:00:00.000
[ "Mathematics" ]
Evolutionary Trend Analysis of Research on Immunotherapy for Brain Metastasis Based on Machine-Learning Scientometrics Brain metastases challenge cancer treatments with poor prognoses, despite ongoing advancements. Immunotherapy effectively alleviates advanced cancer, exhibiting immense potential to revolutionize brain metastasis management. To identify research priorities that optimize immunotherapies for brain metastases, 2164 related publications were analyzed. Scientometric visualization via R software, VOSviewer, and CiteSpace showed the interrelationships among literature, institutions, authors, and topic areas of focus. The publication rate and citations have grown exponentially over the past decade, with the US, China, and Germany as the major contributors. The University of Texas MD Anderson Cancer Center ranked highest in publications, while Memorial Sloan Kettering Cancer Center was most cited. Clusters of keywords revealed the following hotspots: ‘Immunology’, ‘Check Point Inhibitors’, ‘Lung Cancer’, ‘Immunotherapy’, ‘Melanoma’, ‘Breast Cancer’, and ‘Microenvironment’. Melanoma, the most studied primary tumor with brain metastases, offers promising immunotherapy advancements with generalizability and adaptability to other cancers. Our results outline a holistic overview of immunotherapy research for brain metastases, which pinpoints the forefront of the field and directs researchers toward critical inquiries for enhanced mechanistic insight and improved clinical outcomes. Moreover, governmental and funding agencies will benefit from assigning financial resources to entities and regions with the greatest potential for combating brain metastases through immunotherapy. Introduction Brain metastases pose a significant concern given their prevalence, which is expected to impact one-third of people with solid tumors [1], with the highest incidences observed in melanoma (28%), lung cancer (26%), renal cell carcinoma (11%), and breast carcinoma (7.6%) [2]. Patients diagnosed with cancer and brain metastases face a dismal prognosis, including increased morbidity and mortality [3], as well as a heavier financial burden [4]. The two- and five-year survival rates for cancer patients suffering from different primary tumors with brain metastases are estimated to be 8.1% and 2.4%, respectively. Nonetheless, the cause of death in over half of these cases is attributed to the CNS metastases [5]. Whilst the combination of treatments has been conducive to ameliorating clinical outcomes [3], it often results in long-lasting adverse effects, such as hearing loss and neurocognitive impairment. These adverse events are predominantly observed in cases where patients receive chemotherapy and radiation therapy, regardless of the improvement in tumor-specific survival. … research. Furthermore, it serves as an essential reference tool for researchers striving to achieve a comprehensive and in-depth understanding of research on immunotherapy for brain metastases.
Literature Selection Strategy and Conceptual Design of the Entire Study According to our retrieval strategy, publications in research on immunotherapy for brain metastases were obtained from the Web of Science (WoS) Core Collection electronic database. Retrieved articles were assessed by two researchers independently to avoid bias and remove duplications. The search and selection strategy resulted in a total of 2164 publications, including 1368 original articles and 796 reviews. After preliminary analysis, these publications involved 70 countries, 3334 institutions, 12,991 authors, 2963 keywords, 545 journals, and 228 funding agencies (Figure 1). Distribution and Cooperation of the Contributing Countries/Regions First, we analyzed the trend of publications and calculated total/average citations in research on immunotherapy for brain metastasis over time (Figure 2A). A regression model was used to depict the time curve of cumulative publication. The number of papers published in this field started to surge in 2014, but it was also from this year onwards that the average citations per year began to decrease annually (Supplementary Figure S1). In the past 22 years, a total of 70 countries/regions and 3334 institutions have published papers regarding immunotherapy for brain metastasis. The top 10 countries with the most publications include the United States (USA), China, Italy, Germany, France, Japan, Australia, Canada, the United Kingdom (UK), and Spain, which demonstrate extensive collaboration among themselves (Figure 2B,C). Notably, the USA and China exhibit the closest collaboration and dominate the field, collectively contributing over half of global publications (Table 1). Additionally, the links between countries/regions are primarily concentrated between North America and Europe, with strong connections between Oceania and Europe (Figure 2D). A cluster visualization map depicted the distribution of countries/regions and the cooperation relations (Supplementary Figure S2A,B).
Contributing Institutions and Funding Agencies Next, we conducted a systematic analysis of productive institutions and funding agencies. According to the results, eight of the top 10 productive institutions in terms of publication volume are from the United States, followed by Germany and Austria (Table 2). The University of Texas MD Anderson Cancer Center ranked first in publications with 90 articles, while Memorial Sloan Kettering Cancer Center ranked first in citations with 3457. In addition, the top 3 productive institutions with the highest TLS (Total Link Strength) are the M.D. Anderson Cancer Center of The University of Texas (TLS = 62,170), Harvard Medical School (TLS = 48,859), and Memorial Sloan Kettering Cancer Center (TLS = 32,713). In the VOSviewer analysis, institutional cooperation is divided into eleven closely related clusters (Figure 3A). A density visualization map of institutional cooperation was also displayed (Figure 3B). The 10 most influential funding agencies that support the investigation of immunotherapy for brain metastases are the National Institutes of Health (NIH, USA), the United States Department of Health and Human Services, the National Natural Science Foundation of China (NSFC), the NIH National Cancer Institute (NCI), Bristol Myers Squibb, Merck Company, Roche Holding, Novartis, AstraZeneca, and Pfizer (Figure 3C). Active Authors and Co-Citation Analysis In total, 12,991 authors contributed to the research in the field of immunotherapy for brain metastasis. A visualized cluster map depicts the analysis of authors and co-citations (Figure 4A). The scholar who has published the most articles is Ascierto PA (Istituto Nazionale Tumori IRCCS), and the most highly cited scholar is Kluger HM (Yale School of Medicine) in the field of immunotherapy for brain metastasis (Figure 4B,C). Furthermore, the five authors with the most publications are displayed in Figure 4D. The top 10 most co-cited authors include Long GV (University of Sydney), Robert C (Gustave Roussy and Paris-Saclay University), Goldberg SB (Yale School of Medicine), Sperduto PW (Duke University Medical Center), Hodi FS (Dana-Farber Cancer Institute), Brown PD (Mayo Clinic), and Reck.
According to the results, "Immunotherapy", "Brain Metastases", and "Melanoma" are the top 3 keywords, each occurring more than 300 times. In addition, we further clustered all the co-occurring keywords through the timeline view of CiteSpace. All the keywords can be divided into six subclusters with excellent homogeneity (Figure 5A). Citation bursts of keywords, a method used to identify keywords that are frequently mentioned during a specific period, were analyzed using CiteSpace (Figure 5B). Among the top 30 keywords with the strongest citation bursts, "malignant melanoma" (strength 24.42, 2001-2015), "metastatic melanoma" (strength 14.07, 2001-2016), and "phase 2 trial" (strength 13.04, 2015-2018) show the strongest burstiness. Furthermore, we present an overlay visualization map of keywords together with the co-occurrence analysis (Figure 5C), with keywords colored according to their average year of occurrence. "Open-label", "Survival", and "ipilimumab" are the top three co-occurring Keywords Plus terms. We show the occurrence frequency of these keywords through word cloud analysis (Figure 5D). Next, we analyzed the occurrence frequency of keywords over time. Articles published between 2000 and 2005 focused on the risk of metastasis and prognostic factors. Articles published between 2006 and 2010 focused on treating specific types of tumors with brain metastases. Papers published between 2011 and 2018 focused on novel approaches to immunotherapy for tumor brain metastases. Recent publications have focused on clinical trials of the efficacy of various immune checkpoint inhibitors against brain metastases of various tumors (Figure 6A). We re-clustered the keywords in each stage and found that they can be divided into six categories (Figure 6B). We also analyzed the themes with a decision tree algorithm and found that these keywords can be distinguished according to their occurrences and centrality (Figure 6C).

Impactful Journals and Co-Citation Analysis

We next performed a systematic analysis of influential journals and co-cited journals. There are 545 journals publishing on immunotherapy for brain metastasis. The ranking of journals with the most published articles is displayed in Table 5. The journals with the most publications and co-citations are Front Oncol (IF = 4.7, 97 publications) and J Clin Oncol (IF = 45.3, 7780 total citations), respectively. New Engl J Med, with the highest IF of 158.5, ranked first among the co-cited journals by impact factor, while Front Oncol showed the highest H-index of 56. Furthermore, the top 10 most influential journals and co-cited journals are all classified as Q1/2 according to the Journal Citation Reports (JCR) in 2022. The co-citation analysis of journals is depicted via a cluster visualization map; we found that J Clin Oncol, N Engl J Med, and Lancet Oncol are at the core of the co-citation network (Figure 7A,B). The network visualization for the most productive journals revealed that Front Oncol, Int J Radiat Oncol Biol Phys, and J Immunother Cancer are at the central position of the publication network (Figure 7C,D). A dual-map overlay revealed the correlation of research disciplines and the citation relationships among the influential journals related to immunotherapy for brain metastases (Figure 7E).
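Journal co-citation of the kind shown in Figure 7 is typically computed from the reference lists of the retrieved articles: two journals are co-cited whenever one article cites both. A minimal sketch follows; the per-article reference data are placeholders, not the actual bibliography of this corpus.

```python
# Minimal sketch: journal co-citation counts.
# `cited_journals_per_article` is a placeholder for the journals appearing in
# each retrieved article's reference list (normally parsed from the WoS export).
from collections import Counter
from itertools import combinations

cited_journals_per_article = [
    {"J Clin Oncol", "N Engl J Med", "Lancet Oncol"},
    {"J Clin Oncol", "Lancet Oncol", "Front Oncol"},
    {"N Engl J Med", "J Clin Oncol"},
]

cocitation = Counter()
for journals in cited_journals_per_article:
    cocitation.update(combinations(sorted(journals), 2))

for (j1, j2), n in cocitation.most_common(5):
    print(f"{j1} <-> {j2}: co-cited in {n} articles")
```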
Influential References and Co-Citation Analysis

Finally, we analyzed the most influential references in the field of immunotherapy for brain metastasis. The articles ranked in the top 10 by citations are summarized in Table 6. The most influential publication in the field was contributed by Sarah B Goldberg et al. in 2016 in Lancet Oncol, with a total citation count of 342. The network visualization for the most co-cited references revealed that Dudley (2005), Goldberg (2016), and Zeng (2013) are at the core of the co-citation network (Figure 8A). Hot-cited literature of recent years was explored using references with citation bursts, an evaluation method that reflects sharp changes in citation volume over a given period. Therefore, we also analyzed the top 25 references with the strongest citation bursts (Figure 8B), among which are Sperduto PW, 2012 [32]; Wolchok JD, 2013 [24]; Silk AW, 2013 [42]; Mathew M, 2013 [43]; Hauschild A, 2012 [44]; Kiess AP, 2015 [45]; Borghaei H, 2015 [46]; Larkin J, 2015 [30]; Brahmer J, 2015 [47]; Robert C, 2015 [23]; Twyman-Saint Victor C, 2015 [48]; Ahmed KA, 2016 [49]; Goldberg SB, 2016 [29]; Yamamoto M, 2014 [50]; and Tawbi HA, 2018 [35]. For articles published in 2010-2012, the burst of citations started in 2011. Generally, over the past ten years most of these articles are still cited frequently, indicating that research on immunotherapy for brain metastasis continues to flourish.
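CiteSpace detects citation bursts with Kleinberg's burst-detection algorithm. The sketch below is only a rough proxy for the idea, flagging years in which a reference's annual citation count rises well above its running baseline; it is not the algorithm CiteSpace actually implements, and the citation series is a placeholder.

```python
# Rough proxy for citation-burst detection (NOT Kleinberg's algorithm, which
# CiteSpace uses): flag years whose citation count exceeds `factor` times the
# running mean of all earlier years. The series below is a placeholder.
def burst_years(yearly_counts, start_year, factor=2.0, min_count=5):
    flagged = []
    for i, count in enumerate(yearly_counts):
        if i == 0:
            continue
        baseline = sum(yearly_counts[:i]) / i
        if count >= max(min_count, factor * baseline):
            flagged.append(start_year + i)
    return flagged

citations_per_year = [1, 2, 2, 3, 10, 18, 25, 20, 12, 8]  # placeholder counts
print(burst_years(citations_per_year, start_year=2011))
```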
General Information

Distinct from standard review articles or meta-analyses, bibliometric analysis has distinctive merits: it comprehensively encapsulates the progression trajectory of a particular research domain and pinpoints pertinent research avenues. The present study constitutes a pioneering effort to analyze the knowledge structure and to identify plausible forthcoming research frontiers concerning immunotherapy for brain metastasis via bibliometric methodology. Additionally, completed and ongoing clinical trials evaluating the effectiveness of immunotherapy for brain metastases are summarized in Tables 7 and 8, respectively. North America, Europe, and Asia, as outlined in Figure 2, dominate research on immunotherapies for brain metastases, with the United States emerging as the leading contributor: it hosts 80% of the top 10 institutes publishing related studies and has a higher number of publications, TLS, and H-index than other countries/regions. Encouragingly, China, the second most productive country in this domain, has registered a remarkable rise in research output since 2019, suggesting that the interest of developing countries has contributed positively to the rapid advancement of immunological research in brain metastases. Collaborative initiatives between leading players such as the US, at the forefront of cutting-edge research, and developing countries like China, with significant clinical and experimental caseloads, could optimize the potential efficacy of immunotherapy in managing brain metastases. Moreover, collaborative networks play a crucial role in enhancing research quality by facilitating the exchange of knowledge, resources, and expertise. Through such collaboration, researchers can pool their intelligence and practical insights to address complex research inquiries that may surpass the capabilities of a single institution. Consequently, this fosters the production of more impactful research with the potential to shape policies and practices. Additionally, cooperation among elite institutions contributes to the broader dissemination and visibility of research outcomes. These institutions often possess well-established networks and partnerships with other organizations, enabling wider dissemination of research findings. As a result, this heightened visibility leads to increased recognition and impact of the research, while also creating more opportunities for future collaborations and funding.

Of note, Dr. Manmeet Singh Ahluwalia of the Miami Cancer Institute led the authorship ranks with 24 publications and 881 citations, followed by Dr. Harriet Kluger of Yale School of Medicine and Dr. Matthias Preusser at the Medical University of Vienna, both notable for their contributions on melanoma. Given that Frontiers in Oncology published the most pertinent articles and Lancet Oncology contains the most cited paper, these journals are suggested as primary references for future practice and research.
Keywords and Emerging Hotspots

In light of the clusters and timeline views illuminating the cardinal themes and key topics of immunotherapy for brain metastases, the identified leading hotspots can be succinctly distilled into the following themes: elucidating the mechanisms of immune evasion in brain metastases, optimizing treatment strategies for patients with brain metastases, and identifying reliable biomarkers that can predict response to immunotherapy in these patients. Furthermore, researchers should aim to investigate the potential of combination therapies that can synergistically enhance the efficacy of immunotherapy in treating brain metastases.

Immunotherapy, as the primary keyword, has fueled substantial recent growth in publications related to brain metastases, with a significant emphasis on immune checkpoint inhibitors (ICIs) [16]. This intersection between the immune system and brain metastases is a fascinating and expanding area with potential clinical significance, particularly with regard to the lymphatic structure [57,58]. The majority of ongoing clinical trials have strongly favored ICIs as a viable therapeutic strategy for brain metastases. Evidence suggests that immunomodulatory factors such as PD-1 and PD-L2, as well as various cytokines, are regularly expressed in brain metastases originating from breast and lung carcinoma, as well as melanoma [28,59,60]. A noticeable discrepancy in the inflammatory microenvironment between paired primary tumors and brain metastases has been reported in patients with melanoma [61,62]. Analysis of lung cancer patients reveals that tumor cell PD-L1 expression differed between paired samples in 14% of cases, while TIL PD-L1 expression exhibited differences in over one-fourth of cases. Interestingly, some brain metastases lack the TIL infiltration, PD-L1 expression, or both, that are present at their primary lung cancer sites, despite their shared origin [63].
Encouragingly, a phase II trial investigating the CTLA-4 inhibitor ipilimumab for patients with brain metastases has produced satisfactory findings [40]. Additionally, over one-fourth of the patients were still alive at two years, indicating a prolonged benefit of immunotherapy for a specific subset of patients. However, patients who were symptomatic and on steroid treatment at the beginning of follow-up generally had bleak outcomes, although one in ten of those patients still survived for more than two years. There are ongoing studies of PD-1 inhibitors for brain metastases, agents that have already demonstrated encouraging and long-lasting activity in several cancers, including those originating from the skin, bladder, and lung. In a recent phase 2 trial that involved participants with progressing but still asymptomatic brain metastases from non-small-cell lung cancer (NSCLC) or melanoma, pembrolizumab (a PD-1 inhibitor) exhibited prominent and enduring activity in the CNS for both malignancies [29]. Among the 18 participants diagnosed with melanoma, four showed an intracranial response, while four of the 18 NSCLC participants exhibited a complete response in the intracranial lesions. At the time of data analysis, a significant proportion of patients exhibited long-lasting and persistent responses to the treatment. Moreover, nivolumab, another PD-1 inhibitor, was evaluated in NSCLC patients with untreated CNS metastases. Encouragingly, the preliminary report indicates that intracranial responses were observed in two of the twelve participants (one-sixth), who displayed a complete response within the CNS, one of whom maintained the response after nearly one year [64].

From the perspective of clinical practice, though immunotherapy has shown positive, albeit restricted, outcomes in patients with brain metastases, fundamental questions persist. Additional mechanistic investigations into the therapeutic action within the brain are required to address whether these agents act locally or respond systemically to immune stimulation. Augmenting preclinical observations with mechanistic research to further comprehend the potential anti-tumor effects of immunotherapies is crucial to enhancing the benefits of this therapy and developing more efficient treatments.
From the perspective of molecular biology, recent studies have investigated the microenvironmental characteristics of primary and metastatic brain tumors via transcriptomic and proteomic approaches. Although stromal cell composition was uniform across studies, discrepancies in the constitution and expression of immune cells were observed among various brain tumors [65]. Enrichment in distinct immune cell types was observed in different metastatic tumors, suggesting that CNS metastases shape their microenvironment differently from their extracranial origins [66]. These investigations advance our knowledge of the microenvironment's unique cell composition for different diseases at the same anatomic site and highlight the inadequacy of the current generalized therapies used to modulate the tumor microenvironment. Furthermore, these studies demonstrate that the microenvironment is neither static nor uniform [67]; even though the homeostatic niche of an organ may be similar at the outset, cancer cells infiltrate and drive local evolution in a synergistic pattern, leading to the recruitment of immunologic components specific to the type of disease and the cells involved.

It is probable that further insights into the principles of neurology-immunology-oncology crosstalk will be the coming trend. One potential avenue of exploration is the role of neuroscience drugs in ameliorating the immune-suppressive tumor microenvironment and enhancing immuno-oncology strategies. For instance, it is conceivable that by targeting the signaling pathways of neurological modulators or transmitters, the immunologic microenvironment could be altered to facilitate an anti-tumor immunologic response [68]. A comprehensive advancement in understanding the mechanisms behind the interactions between the CNS, the immune system, microbes, and cancer could offer valuable new perspectives for developing immuno-oncology strategies [69].
Beyond investigating immunotherapy for brain metastases via clinical trials and molecular research alone, these ongoing studies propose a possible pathway to align the two for better therapeutic practice: implementing targeted immunotherapies customized to the genomics of brain metastasis. It is possible that the responses of metastases to immunotherapy differ from those of primary tumors due to significant molecular pathway alterations. However, by conducting genomic profiling of the metastatic compartment, new therapeutic strategies can be devised, clinical responses can be predicted, and new intervention targets can be identified. Currently, there has been a surge in the utilization of single-cell transcriptomics and computational systems biology techniques, which have enabled the comprehensive characterization of microenvironmental changes and clonal dynamics in unprecedented detail and scale. New and emerging techniques have opened up fresh avenues for analyzing epigenetic markers, proteins, and metabolites at the single-cell and spatial level [70,71]. Recent developments in DNA-editing technology have led to the creation of inducible lineage-recording systems with high fidelity, which enable accurate tracking of cell state transitions over time [72,73]. Together with existing methodologies, such as mitochondrial mutation analysis and real-time clonal tracking based on liquid biopsy, these systems are increasingly being used in clinical settings to elucidate the sequence of tumor evolution [74]. Clinical trials utilizing ex vivo models are being considered as potential pointers prior to treatment, aiming to predict patient-specific responses and to guide clinical decision-making, as evidenced by current prospective studies [75]. Last but not least, the use of artificial intelligence is expected to revolutionize the field of clinical trial design, expediting biomarker identification and drug development [76].

Limitations and Future Direction

Immunotherapy has been a game-changer in cancer treatment; however, its impact on patients with brain metastases is not yet comparable. While present knowledge can aid both research and clinical practice in improving cancer patients' chances, the unique immunological and clinical features of brain metastases present significant challenges. These features include distinctive genetic and epigenetic alterations relative to the primary tumors and unique immune microenvironments that are likely to affect the response to immunotherapies [77,78]. It is thus critical to focus on developing improved preclinical models, rational assays, and intensive early-phase clinical trials to advance immunotherapy for brain metastases and to understand neurological toxicity.
With the rapid evolution of personalized therapies, innovation must be embraced with flexible designs that incorporate biomarkers and robust decision-making [79]. A collective effort from all stakeholders, including philanthropic organizations, governmental bodies, and other funding bodies, is crucial to addressing the existing funding gaps. The primary objective of this collaborative research is to boost patient-oriented basic and clinical investigation, educate patients and society, and facilitate interdisciplinary collaboration among scientists and physicians to improve patient outcomes. Given the complex outcomes observed in immunotherapeutic trials for cancer patients with brain metastases, close academic collaboration among all disciplines is gaining importance. Progress at the interface of these key participants' interactions is necessary for significant advancements.

In spite of the comprehensive landscape presented in this study on immunotherapy for brain metastases, the analysis was conducted exclusively with the WoS Core Collection electronic database because of its emphasis on high-quality, peer-reviewed research and its exclusion of extraneous and quasi-experimental studies. However, it is worth considering the potential merits of exploring additional databases that encompass a broader range of biomedical research, including conference proceedings and non-peer-reviewed papers. Such an exploration could offer supplementary insights with clinical implications, warranting further investigation in these domains.

Database and Study Collection

The WoS Core Collection electronic database (Clarivate Analytics, Philadelphia, PA, USA) was used to retrieve related literature published between 1965 and 2023, according to the following search strategy: #1: Topic = ("Brain metastasis" OR "Brain metastases" OR "Central nervous system metastases" OR "Intracranial metastasis" OR "Cerebral metastasis"); #2: Topic = ("Immunotherapy" OR "Immune checkpoint therapy" OR "Immune checkpoint inhibitor"); #Final data source: #1 AND #2. Only publications in the form of articles and reviews, written in English, were included for further analysis. To avoid research bias, the literature search was conducted independently by two researchers, who scrutinized the relevant articles and reviews on 20 April 2023. The language was restricted to English only. A flowchart for this study is presented in Figure 1.

Visualization and Statistical Analysis

Bibliometric visualization is commonly conducted with VOSviewer software to create maps that portray knowledge structures and networks [80]. The three most prominent visualization maps offered by VOSviewer are network, overlay, and density visualizations. VOSviewer (Version 1.6.16) was applied in this study to perform analyses of co-authorship (regarding authors, countries, and institutions) and of journal co-citation. Keywords occurring more than 20 times were utilized in the co-occurrence network analysis to identify the prevailing terms in research on immunotherapy for brain metastases.
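The Boolean search strategy above can be assembled programmatically before being pasted into the WoS advanced-search interface. The sketch below simply joins the two topic term groups with OR and combines them with AND; the use of the TS= field tag and the copy-and-paste workflow are assumptions, not details taken from the study.

```python
# Minimal sketch: assembling the WoS topic search used in this study.
# TS= is the Web of Science "topic" field tag; the resulting string is meant
# to be pasted into the advanced-search box (no API workflow is assumed here).
metastasis_terms = [
    "Brain metastasis", "Brain metastases", "Central nervous system metastases",
    "Intracranial metastasis", "Cerebral metastasis",
]
immunotherapy_terms = [
    "Immunotherapy", "Immune checkpoint therapy", "Immune checkpoint inhibitor",
]

def ts_clause(terms):
    return "TS=(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = f"{ts_clause(metastasis_terms)} AND {ts_clause(immunotherapy_terms)}"
print(query)
```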
CiteSpace (Version 6.2.R2), a prominent visualization tool created by Professor Chaomei Chen [81], was employed to generate analysis maps for the co-citation of references and authors, as well as to identify the keywords exhibiting the most substantial citation spikes in research on immunotherapy for brain metastases. An overlay dual-map of journals was also generated via CiteSpace. The parameters used in CiteSpace were as follows: Year of slice, 1; Selection criteria, Top 50; Link retaining factor, 3; Look back years, 8; e for top N, 2; Pruning, Pathfinder. Additionally, both the online bibliometric platform (website: http://bibliometric.com/) and the "Bibliometrix" package for R software (Version 4.2.3) were utilized to analyze international collaboration. The graphical representation of the data was predominantly performed with the VOSviewer and CiteSpace visualization tools.

Conclusions

In summary, the treatment options available for brain metastases have been substantially broadened by blending clinical insights with innovative biological research. The adoption of multimodal, interdisciplinary approaches that enhance treatment outcomes will tremendously benefit patients suffering from cancer-related brain metastases.

Figure 1. Literature selection strategy and conceptual design of the study.

Figure 2. Distribution of countries/regions and the co-operation relations. (A) Analysis of annual publications and citation trends from 2000 to 2022. (B) The network map visualizing international collaborations across countries. (C) The changing trend of the annual publication number in the top 10 countries from 2000 to 2022. (D) The world map visualizing the distribution of countries/regions worldwide and their collaborations, presented in a network format. Red lines indicate the strength of collaboration. This map was downloaded from the "Bibliometrix" public online website.

Figure 3. Contribution of productive institutions and funding agencies. (A) The VOSviewer visualization map shows institution co-authorship analyses overlaid. The nodes of different colors represent the institutions in different clusters, and the size of the nodes indicates their node sizes. (B) Institutions were mapped according to their spectral density. The deeper the color of a node, the higher the number of documents published by the institution. (C) The top 10 funding agencies that sponsored the highest number of studies in the field of immunotherapy for brain metastases.
Figure 4. Contribution of active authors. (A) Visualization map of authors investigating immunotherapies for brain metastases. The nodes denote authors, with bigger circles representing more publications. Lines between the nodes denote the relationship between authors on the same article, with wider lines representing more frequent collaborations. (B) Bubble diagram displaying the most published authors in the field of immunotherapy for brain metastases (related to Table 3, which summarizes the total citations and h-index of these authors). (C) Bubble diagram displaying the most cited authors in the field of immunotherapy for brain metastases. (D) Top 5 authors' production over time.

Figure 5. Analysis of keywords co-occurrence and burstiness. (A) Visualization map of the timeline view of the keywords analysis by CiteSpace. (B) Timeline distribution of the cluster analysis of the top 30 keywords. (C) Keywords Plus analysis with a network visualization map via VOSviewer. (D) Keywords representation with a word cloud.

Figure 6. Analysis of keywords vicissitude and clustering. (A) The Sankey diagram illustrates the occurrence frequency of keywords over time. (B) The keywords in each time period can be divided into six categories. (C) The decision tree algorithm revealed that the keywords could be distinguished according to their occurrences and centrality.
Figure 7. Analysis of influential journals and co-cited journals. (A) The network visualization map of the most influential journals produced with VOSviewer. (B) Bubble diagram displaying the most influential journals in the field of immunotherapy for brain metastases. (C) The network visualization map of the most co-cited journals produced with VOSviewer. (D) Bubble diagram displaying the most co-cited journals in the field of immunotherapy for brain metastases. (E) A biplot overlay of journals with research on immunotherapy for brain metastases (the left side depicts research fields covered by citing journals, the right side shows research fields covered by cited journals).

Table 1. Top 10 productive countries in research of immunotherapy for brain metastasis.

Table 2. Top 10 institutes with the most publications related to immunotherapy for brain metastasis.

Table 3. Top 10 productive authors and co-authors in research of immunotherapy for brain metastasis.

Table 4. Top 20 co-occurrence keywords in research of immunotherapy for brain metastasis. TLS, Total Link Strength.

Table 5. Top 10 journals with the most publications and co-citations in research of immunotherapy for brain metastasis. IF, Impact Factor; JCR, Journal Citation Reports.

Table 6. Top 10 co-cited research articles regarding immunotherapy for brain metastases. IF, Impact Factor; TLS, Total Link Strength.

Table 7. Key clinical trials assessing immunotherapy for cancer patients with brain metastases.

Table 8. Clinical trials underway to evaluate the effectiveness of immunotherapy for brain metastases.
7,467.6
2024-06-28T00:00:00.000
[ "Medicine", "Computer Science" ]
A new species of the catfish Neoplecostomus ( Loricariidae : Neoplecostominae ) from a coastal drainage in southeastern Brazil A new species of loricariid catfish is described from the rio Perequê-Açú and surrounding basins, Parati, Rio de Janeiro State. The new species has the accessory process of ceratobranchial 1 more slender than the main body of the ceratobranchial, and a very large sesamoid ossification, markedly greater in size than the interhyal. Additionally, the new species presents a distinct dorsal color pattern consisting of a conspicuous horseshoe shaped light blotch with a central dark area posterior to the supraoccipital. Neoplecostomus currently comprises 15 species, with most occurring in streams draining the Brazilian Crystalline Shield.The present work describes a new species endemic to four small coastal drainages in the State of Rio de Janeiro. Material and Methods Measurements were taken point to point with digital calipers to the nearest 0.1 mm, on the left side, following Zawadzki et al. (2008).Specimens smaller than 50.0 mm of SL, were not included in the morphometric and meristics analysis and are assigned as not measured.Plate counts followed Langeani (1990) and Zawadzki et al. (2008), with additional counts of the lateral plate series, following Schaefer (1997).Plates just below the pteroticsupracleithrum, surrounded by a naked area, were not included in the counts of the lateral series.Plates were counted on the left side in both cleared and stained (c&s) and alcohol-preserved specimens.The former were prepared according to Taylor and Van Dyke (1985).Measurements are presented as percentages of standard length (SL), head length (HL) or other measurements (e.g.snout length/orbital diameter, interorbital lenght/orbital diameter, interorbital lenght/mandibullary width, predorsal length/first dorsalfin ray length, caudal peduncle length/caudal peduncle depth, pelvic-fin length/caudal peduncle depth, lower caudal-fin spine/caudal peduncle depth).The presentation of measurements includes minimum, maximum, mean and standard deviation.Counts are presented as ranges with modes in parentheses.Vertebral counts included five from the Weberian apparatus and one from the hypural plate.Osteological analysis followed Pereira (2008). Fragments of the mitochondrial cytochrome oxidase c subunit I (COI) gene were sequenced from N. microps from rio Paraíba do Sul basin at Silveiras, São Paulo State (four specimens), Cunha, São Paulo State (one), the rio Guapi-Açu basin, Rio de Janeiro State (seven), and from rio Macaé basin, Rio de Janeiro State (five).The same region was sequenced from three specimens of the new species from rio Pereque-Açú, Parati in Rio de Janeiro State.Total DNA was extracted from muscle tissue using the salting out method adapted from Miller et al. (1988).The extracted DNA was precipitated with sodium acetate and ethanol, re-suspended in 50 μL of ultrapure water, and stored at -20°C. The polymerase chain reaction (PCR) profile was as follows: one initial cycle of 4 min at 94° C, followed by 35 cycles of 1 min at 94° C, 1 min at 46° C, 1 min at 72° C, and a final extension at 72° C for 5 min.The PCR products were purified and both strands were sent for sequencing by MACROGEN Korea. The vouchers of the new species used in molecular analyses are deposited at DZSJRP under catalog numbers 13914-1, 13914-2 and 20433-1.Mitochondrial DNA sequences from this work were deposited in GenBank (accession codes KU550707 for N. 
paraty from rio Perequê-Açú basin, KU608294 and KU608295 for N. microps from rio Paraíba do Sul basin, KU608296 for rio Guapi-Açu basin, and KU608297 to KU608298 for those from the rio Macaé basin). Fig. 1 Neoplecostomus sp.Pereira et al., 2003: 8 (rio Head wide and moderately depressed.Head and snout weakly rounded in dorsal view.Interorbital space slightly convex in frontal view.One median ridge from snout tip to area between nares, another one from posterior naris to anterior margin of orbit.Snout convex in lateral profile.Eye moderately small (7.8-10.7% of HL), dorsolaterally placed.Iris operculum present.Lips well developed and rounded, covered by papillae.Lower lip not reaching pectoral girdle.Two or three irregular rows of papillae posterior to dentary teeth; papillae large, conspicuous and transversally flattened.Maxillary barbel short, coalesced with lower lip, and generally bifurcated in free portion (some specimens only with fold of skin instead).Teeth long, slender and bicuspid; mesial cusp longer than lateral.Dentary rami forming an angle of approximately 120°.Dorsal-fin ii,7; origin posterior to vertical passing through pelvic-fin origin.Nuchal plate not covered by skin.Dorsalfin spinelet generally present, half-moon shaped and wider than dorsal-fin first ray base, absent in some specimens; dorsal-fin locking mechanism absent.Dorsal-fin posterior margin slightly falcate, reaching or surpassing vertical through end of pelvic-fin rays when adpressed.Adipose fin present and well developed, preceded by none, one or two azygous plates.Pectoral-fin i,6; with depressed and inward curved unbranched ray, shorter than longest branched ray.Pectoral-fin posterior margin slightly falcate, reaching or nearly reaching half pelvic-fin length when adpressed.Pelvic-fin i,5; posterior margin nearly straight, reaching or nearly reaching anal-fin insertion when adpressed.Pelvicfin unbranched ray ventrally flattened, with dermal flap on dorsal surface in males.Pectoral and pelvic-fin unbranched rays with odontodes on lateral and ventral portions.Analfin i,5; posterior margin nearly straight.Anal-fin unbranched ray only with ventral odontodes.Caudal-fin i,7,7,i; bifurcate; lower lobe longer than upper.Vertebrae 31-32 (32). Coloration.Dorsal surface ground color yellowish with light or dark brown blotches.Head with straight yellowish line from snout tip to anterior nares.Another large, less conspicuous and more laterally placed line running from snout border to slightly posterior of nares transverse line.Three other light areas between and around eyes and posterior to opercle and preopercle.Body with four transverse dark brown stripes at anterior portion of dorsalfin base and a little posterior; at posterior portion of dorsalfin base; from vertical through posterior portion of anal-fin base to adipose-fin spine; and at caudal peduncle. Posterior to supraoccipital, a conspicuous light and horseshoe shaped spot with dark center (Fig. 4a), slightly faded in some specimens (Fig. 4b).Between dorsal dark stripes, three other lighter areas with dark blotches.Juveniles lacking dark blotches or with blotches only slightly demarcated (Fig. 
4a).Ventral surface of head and body yellowish medially; light brown laterally from snout tip to region just anterior of anus; light brown posterior of anus to the caudal peduncle.All fins with irregular dark brown areas; sometimes forming inconspicuous transverse stripes.Adipose fin with dark brown spine and hyaline membrane.Etymology.The specific epithet refers to "Paraty", the original spelling of the municipality of Parati, Rio de Janeiro.Paraty (or Paratii) derived from the Tupi "parati" (the mullet Mugil curema Valenciennes, 1836) and "i" (river).A noun in apposition. Conservation status.Neoplecostomus paraty occurs in four small and independent coastal drainages which run through unprotected land and also some conservation areas, such as Serra da Bocaina National Park and Serra do Mar State Park.The extent to which these conservation areas protect this species is unknown.Outside these parks (and to some extent also within these parks) streams and rivers are threatened by a range of anthropogenic threats associated with urbanization and the growth of the tourist industry, including Parati-Cunha road construction (Avena, 2003).Although N. paraty does not meet the criteria for any category of threat (IUCN, 2016), its highly endemic status demands special attention regarding conservation actions, and highlights the importance of the maintenance and even expansion of the protected areas in the region. Molecular analyses.Partial sequences of the COI gene were taken from 21 specimens, resulting in a matrix with 663 base pairs (bp) from which 559 sites were invariant, 97 were variable, and 35 were parsimony informative.The nucleotide frequencies were, on average, 23.3% adenine, 25.7% thymine, 30.7% cytosine and 20.3% guanine.A single haplotype was found in both N. paraty and the population of N. microps from rio Guapi-Açu basin, whereas two haplotypes were found in both populations of N. microps from rio Paraíba do Sul and rio Macaé, totaling 6 haplotypes throughout the specimens sequenced.Molecular distances among all assessed populations are given in Table 2. Considering the specimens from each basin as separate species or evolutionary units, the intraspecific distance values were zero for N. paraty and N. microps from Guapi-Açu and 0.15% for both N. microps from rio Paraíba do Sul and rio Macaé, due to a single nucleotide different on one specimen each. A maximum-likelihood analysis showed that haplotypes from each basin are separate lineages well-suported by high bootstrap values (Fig. 7).Lower bootstrap values occur for the clades including more than one lineage within N. microps, suggesting that it may be a species complex.The analysis indicates, however, that N. paraty is the sister taxon to a clade containing all populations of N. microps. Discussion The new species possesses rare osteological features for the genus Neoplecostomus.The slender condition of the accessory process of ceratobranchial 1 is otherwise observed only in N. corumba, N. jaguari and N. ribeirensis, and the very large sesamoid ossification only in N. ribeirensis and N. yapo.Despite being internal, both characteristics are easily observable and distinguishable, and the latter shows a small intraspecific variability.Furthermore, the dorsal color pattern of N. 
paraty is very conservative and conspicuous, and not seen in any other species within the genus.Other Neoplecostomus species have the general pattern proposed by Langeani (1990), or show a diffuse pattern of smaller blotches, both of which are completely different patterns from that displayed by N. paraty.Thus, the color pattern is a reliable characteristic which easily distinguishes N. paraty from all other species of Neoplecostomus. Neoplecostomus paraty (as Neoplecostomus P sp.n.) and N. microps were proposed as sister species by Pereira (2008).Although some uncertainty remains regarding the relationships between these species (and the overall phylogenetic relationships within Neoplecostomus fall beyond the scope of the present paper), the molecular analyses performed herein strongly support that N. paraty is a unique evolutionary lineage (Fig. 7).The genetic distance (>2%) between N. paraty and the remaining populations of N. microps (Table 2) provides evidence for a long isolation time and supports N. paraty as a new species.Additional phylogenetic analyses are necessary to refine our understanding about the relationships and historical biogeography of N. paraty and the N. microps species complex. Fig. 5 . Fig. 5. Southeastern Brazil showing geographic distribution of Neoplecostomus paraty (red circle) and Neoplecostomus microps (white circle).Type locality indicated by yellow circle.Symbols can represent more than one locality.Ecological notes.Neoplecostomus paraty was found in streams draining the Serra da Bocaina mountain range in the Parati Municipality, with clear and cold water, direct sunlight, fast flow, large rocks at the bottom, depths between 10 to 50 cm and moderate riparian vegetation (Fig. 6).Some sites in the Parati Municipality are probably among the lowest altitudes (e.g.38 m) at which Neoplecostomus species have been collected, since the genus commonly occurs at altitudes higher than 700 m.Other loricariids collected together with N. paraty are: Kronichthys heylandi (Boulenger 1900), Pareiorhina sp., Hemipsilichthys nimius Pereira, Reis, Souza & Lazzarotto 2003 and Schizolecis guntheri (Miranda Ribeiro 1918). Description.Measurements and counts given in Table1.Body elongated and depressed.Greatest body width at cleithrum, narrowing to caudal peduncle.Dorsal body profile gently convex, elevating from snout tip to dorsalfin origin and descending to first caudal-fin ray.Greatest body depth at dorsal-fin origin.Trunk and caudal peduncle dorsally rounded in cross section.Body ventrally flattened to anal-fin origin, flattened to slightly ascending towards caudal fin.Dorsal surface of body completely covered by dermal plates, except for naked area around dorsal-fin base.Snout tip with small naked area.Ventral head surface naked except by canal plate ahead of gill openings.Abdomen with conspicuous and small dermal platelets between insertions of pectoral and pelvic fins, forming thoracic shield (heptagonal-or hexagonal-shaped), surrounded by naked areas.One or two plates, arranged linearly and surrounded by naked areas (rarely with one to three small plates in front of them), between humeral process of cleithrum and first plate of lateral series (only exception in left side of holotype -four plates in line and two small ones in front of them). Table 1 . Morphometric and meristic data of Neoplecostomus paraty.H= holotype, n= number of specimens and SD = standard deviation.
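The distance values and site counts reported above (663 bp, 559 invariant, 97 variable, and 35 parsimony-informative sites; >2% divergence between N. paraty and N. microps) can be recomputed from the aligned COI sequences. A minimal sketch follows, using short made-up sequences in place of the GenBank records cited in the text.

```python
# Minimal sketch: uncorrected p-distance and site classification for an
# alignment of COI haplotypes. Sequences are short, made-up placeholders,
# not the GenBank records cited in the text.
from itertools import combinations

alignment = {
    "N_paraty":        "ATGGCACTTAGCCT",
    "N_microps_PbS":   "ATGGTACTTAGCTT",
    "N_microps_Guapi": "ATGGTACTCAGCTT",
}

def p_distance(a, b):
    diffs = sum(1 for x, y in zip(a, b) if x != y)
    return diffs / len(a)

for n1, n2 in combinations(alignment, 2):
    print(f"{n1} vs {n2}: p-distance = {p_distance(alignment[n1], alignment[n2]):.3f}")

# Site classification: invariant, variable, and parsimony-informative columns
seqs = list(alignment.values())
invariant = variable = informative = 0
for column in zip(*seqs):
    counts = {base: column.count(base) for base in set(column)}
    if len(counts) == 1:
        invariant += 1
    else:
        variable += 1
        if sum(1 for c in counts.values() if c >= 2) >= 2:
            informative += 1
print(invariant, "invariant,", variable, "variable,", informative, "parsimony-informative")
```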
2,713.2
2016-01-01T00:00:00.000
[ "Biology", "Environmental Science" ]
BLADE LIFT HYDRAULIC SYSTEM TROUBLESHOOTING ON KOMATSU D70-LE BULLDOZER (TROUBLESHOOTING SISTEM HIDROLIK BLADE LIFT PADA BULLDOZER KOMATSU D70-LE) A bulldozer is one type of heavy equipment used for cutting, pushing, and spreading material. It has a blade on the front side whose movement is controlled by a hydraulic system. The movement of the blade, combined with the weight and travel momentum of the bulldozer, creates the mechanical power used in the bulldozer's function. Problems occurring in the hydraulic system could render the bulldozer useless. In this study, a Komatsu D70-LE bulldozer unit's hydraulic system is experiencing low power and cylinder drift, where the hydraulic cylinder cannot maintain its floating position and tends to return to its full-length rod position, thus causing the blade to drop on itself. This problem is then traced through the troubleshooting method to find the cause and solution to the problem. The cause of the low power problem in the lift cylinder is that the components of the lift cylinder have been worn out and damaged, resulting in a pressure leak in the lift cylinder, which causes the blade to drop by itself (hydraulic drift). The troubling components are the wear ring seal, piston ring seal, o-ring, dust seal, and rod seal. The way to solve the low power problem in the lift cylinder is to replace the lift cylinder seal kit with a new set of wear ring seal, piston ring seal, o-ring, dust seal, and rod seal. Introduction Heavy equipment is a mechanical device to assist humans in carrying out heavy material moving work (Peurifoy, Schexnayder, & Shapira, 2005).Initially developed for the construction sector, heavy equipment is then widely used in the mining industry (Haycraft, 2011).It is widely used to support the mining process, from opening mines, building roads, excavations, and even transporting mining materials to the next process (Mononen & Matilla, 2022). One type of heavy equipment that is often used in the mining industry is a bulldozer.It is a chain tractor that is useful for digging, digging, pushing soil or material and pulling logs or portable camps that can be operated in various fields (UT School, 2008).Bulldozer is equipped with a blade on the front side.The movement of this blade is controlled with a hydraulic system.Bulldozers, equipped with a standard blade for pushing materials, can be enhanced with additional tools on the rear side.According to UT School (2008), one such attachment is the ripper, a spur-shaped device designed to break rock and hard earth into manageable chunks that can be subsequently pushed.Another useful addition is the winch, employed for pulling materials and commonly utilized in the forestry industry, particularly for the movement of timber logs.These supplementary attachments significantly expand the versatility and functionality of bulldozers in various applications. 
For the bulldozer to work optimally it is necessary for the entire system to function properly.One system that supports bulldozer performance is the blade lift hydraulic system that raises and lowers the blade.The operator controls the blade position by lengthening or shortening the rod of the blade lift hydraulic cylinders through control levers in the bulldozer cabin (Ito, 1991) (Thariq, 2022).Blade position determines the amount of material pushed by the bulldozer, also an important features when working on an uneven terrain or when spreading material over an area.Because the blade is the main component for the bulldozer in carrying out work, interference with the blade control system can cause the bulldozer to become unusable (Uzny & Kutrowski,2019) (Zhou, Lingyu & Xiaoming, 2022). One of the Komatsu D70 LE bulldozer units operated by PT Primanuka Nunukan experienced problems with the blade lift cylinder operations.The operator has difficulty maintaining the blade in a floating position, so the operator has to adjust the blade position repeatedly during work.This problem affects the work quality of the bulldozer because it is hard for the operator to maintain a smooth and steady blade position. The purpose of troubleshooting in machinery maintenance is to localize various possible causes of machine trouble, as well as carry out repairs and prevent the same machine troubles from happening again (Prabowo, 2008) (Doddannavar & Barnard, 2005).Therefore, to restore the optimal performance of the bulldozer and prevent the same trouble from happening again, it is necessary to troubleshoot the blade lift hydraulic system (Yang, Xiao, & Ze, 2014). Research Method This research was carried out at PT Primanuka site Nunukan.The method of data collection carried out is as follows: 1. Literature Study The literature study stage aims to collect additional information relevant to the problem.Literature study helps the researcher to achieve a better understanding of the unit and the problem itself.The source of information comes from the shop manual and part book of the studied bulldozer unit, previous research articles, and the internet. Field study In the field study stage, direct observation o the troubleshooting process for problems with the Komatsu D70 LE hydraulic system blade lift is conducted.The activities carried out in the field study were observation, interviews and documentation. Observation The observation stage aims to directly observe the bulldozer unit that is suspected of having a problem, while having the unit operating in the field.Through this stage, the information and data about the symptoms of the disorder that occurs can be obtained.At this stage a visual inspection was also carried out on the parts suspected to be related to the problem that occurred in the bulldozer unit.Furthermore, during this stage, observation of the preliminary testing, disassembly, inspection, part repair and/or replacement, assembly, and testing are conducted (Arifin, 2018). Interview Interviews were conducted with the operator in charge of the bulldozer unit that was suspected of having a problem.The aim of the interview is to obtain facts about the problem experienced by the bulldozer. Documentation The data and information obtained from field study are documented to make it easier in the data processing and analysis stages as well as writing reports later. 
When working in the field, all workers and researchers are required to wear personal protective equipment such as a safety helmet, safety shoes, gloves, protective goggles, and a high-visibility vest. Tools are also needed to carry out the removal, disassembly, and inspection of components suspected of being the source of the trouble. The tools prepared include spanners, socket wrenches, pipe wrenches, a hammer, a chisel, a set of slotted screwdrivers, and special tools. The research flow chart is shown in Figure 1.

Results and Discussion

Observation
When carrying out field observations, the first step is a visual inspection of the bulldozer unit and its hydraulic system components. Visual inspection is carried out on the following components:
1. Oil tank; this inspection aims to check the quality and quantity of the hydraulic oil. During the inspection it is found that the unit has a sufficient amount of hydraulic oil of good quality.
2. Hydraulic pump; the visual inspection shows the pump working properly with no oil leakage.
3. Control valve; there are no leaks found on the control valve body and the control levers are functioning properly.
4. Hydraulic hoses and fittings; the inspection of the hydraulic hoses along the path to the blade lift cylinder shows no leakage, whether from the hoses or the fittings.
5. Hydraulic cylinder; the inspection shows no leaks around the cylinder. The cylinder rod is straight and smooth on the surface. The fittings and hoses for the hydraulic cylinders are also in good condition.

Interviews
Interviews were conducted with the operator who was working with the bulldozer when the trouble occurred; the operator is also the one who first reported the problem with the unit. The results of the interview are shown in Table 1: the operator reported that when the blade lift cylinder lever is pulled, the blade goes up but then slowly drops by itself even when the lever is in the floating position, and that when the unit has trouble he stops it in a safe place and immediately reports to the mechanic.

Preliminary Testing
The preliminary testing was carried out to determine more precisely where the trouble in the Komatsu D70-LE bulldozer occurred. The operator on duty had reported difficulty maintaining the bulldozer blade in the floating position during operation. The mechanic then checked and tested the unit by operating the blade drive hydraulic system. The hydraulic control system functions properly and the actuator follows the input from the control levers. The problem appears when the control lever is left in the floating position: the blade drops due to its own weight. This symptom shows that the lift cylinder cannot hold the blade position according to the command of the control lever. From the test result, it turned out that the problem was in the blade lift cylinder component, where the lift cylinder cannot maintain its pressure, which results in the blade going down by itself (hydraulic drift). From the preliminary testing and the results of the visual inspection, it is concluded that the source of the trouble is the blade lift cylinder. To correct this problem it is necessary to dismantle the blade lift hydraulic cylinder (Anhar & Faisal, 2021) (Simanjuntak & Novan, 2019) (Yang, Xiao, & Ze, 2014).
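The diagnostic reasoning described above (external checks first, then a drift test pointing to internal leakage) can be written down as a simple rule-based check. The sketch below is only an illustration of that logic; it is not an official Komatsu shop-manual procedure, and the component names are placeholders chosen for this example.

```python
# Illustrative sketch of the troubleshooting logic described in the text:
# if the external hydraulic components pass visual inspection but the blade
# drifts down with the lever in float, suspect internal leakage in the lift
# cylinder (worn seal kit). This is not an official Komatsu procedure.
def diagnose(checks):
    external = ["oil_tank_ok", "pump_ok", "control_valve_ok",
                "hoses_fittings_ok", "cylinder_exterior_ok"]
    failed = [c for c in external if not checks.get(c, False)]
    if failed:
        return "Inspect/repair external components first: " + ", ".join(failed)
    if checks.get("blade_drifts_in_float", False):
        return ("External circuit OK but hydraulic drift present: suspect internal "
                "leakage in the blade lift cylinder (replace seal kit).")
    return "No fault reproduced; monitor the unit during operation."

observed = {"oil_tank_ok": True, "pump_ok": True, "control_valve_ok": True,
            "hoses_fittings_ok": True, "cylinder_exterior_ok": True,
            "blade_drifts_in_float": True}
print(diagnose(observed))
```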
Tools Preparation
The tools prepared for the disassembly work are spanners, socket wrenches, pipe wrenches, a hammer, a chisel, a set of slotted screwdrivers, and special tools.

Disassembly
Disassembly aims to separate the components into smaller parts so as to facilitate the inspection stage. It must be done carefully so as not to cause new damage. Some components of the hydraulic cylinder are reusable, while others have to be replaced regardless of the condition of the hydraulic cylinder. Before removing the hydraulic cylinder, the system pressure must be relieved. The blade is rested on a solid base, the hydraulic fittings are released, and the cylinder pins are removed. The hydraulic cylinder is then removed from the blade and disassembled to reach its inner components.

Inspection and Part Recommendation
The disassembled hydraulic cylinder parts were then inspected, and it was found that the cylinder seal kit components were damaged. The problem started with the delay in performing hydraulic oil changes. Hydraulic oil used beyond its standard service life suffers a decrease in quality, especially with regard to lubrication capability (Anhar & Faisal, 2021) (Simanjuntak & Novan, 2019). Hydraulic oil also serves to reduce friction and wear, dissipate heat, and flush debris from parts in mutual sliding motion (ASM International, 1992). This corresponds to the results of the visual observations of the lift cylinder internals, which were worn: friction between the internal components of the lift cylinder became excessive because lubrication was not optimal. Periodic Service (PS) plays an important role in maintaining the performance of the unit, especially the hydraulic system and oil. More than 50% of hydraulic system problems are related to hydraulic oil (Doddannavar & Barnard, 2005) (Wen & Chuan, 2014).

The seal kit consists of a wear ring seal, piston ring seal, o-ring, dust seal, and rod seal. The damage to these components causes internal leakage in the lift cylinder, making the lift cylinder unable to maintain the hydraulic pressure required to hold the piston position. The mechanic made a part recommendation list to replace the damaged components. The recommended parts should fulfill the manufacturer standard. The part number reference is shown in Figure 4.

Assembly
Assembly is the process of assembling the components back into a ready-to-use form, returning the cylinder to its initial condition. After the hydraulic cylinder is assembled, it is reinstalled on the bulldozer body and blade, and the hydraulic hoses and fittings are reconnected. This process is done by following the manufacturer's shop manual (Prabowo, 2008).
Performance Test
After the installation is complete, a performance test is carried out on the repaired components. The bulldozer engine is started and allowed to reach its working temperature. Then, with the bulldozer stationary, the operator raises the blade until it hangs above the ground. This blade position is observed for approximately five minutes to see whether the blade still descends on its own; this test indicates whether internal leakage in the blade lift hydraulic cylinder is still occurring. After the results of this first test are considered satisfactory, the second test is conducted. The bulldozer is run forward and backward while the blade is kept in the floating position, and the position of the blade is observed during travel. If the blade remains in its designated position, it is known that the internal leakage has been repaired. The result of the second test is also considered satisfactory, so the troubleshooting process is considered complete.

Conclusion
The research findings indicate that the low power in the lift cylinder is attributed to worn and damaged components within the lift cylinder seal kit, leading to an internal leak and hydraulic drift. The failing components include the wear ring seal, piston ring seal, o-ring, dust seal, and rod seal, primarily affected by wear, tear, and age. To address the low power issue, the recommended solution is the replacement of the lift cylinder seal kit with a new set. The author expresses gratitude to the Director of Politeknik Negeri Nunukan for their support in the paper's development and acknowledges family and colleagues for their assistance in completing the research.

Figure 3. Dust seal removal.
Figure 4. Part number reference (D70 LE Lift Cylinder Part Number, n.d.).
Figure 5. New wear seal on cylinder piston.
3,280.2
2024-01-12T00:00:00.000
[ "Engineering", "Materials Science" ]
Title: Impact of confinement on the dynamics and H-bonding pattern in low- molecular weight poly(propylene glycols) Herein, we explored thermal properties, dynamics, wettability, and Hbonding pattern in various poly(propylene glycols) (PPG) of Mn = 400 g/mol confined into two types of nanoporous templates: silica (d = 4 nm) and alumina (d = 18 nm). Unexpectedly, it was found that the mobility of the interfacial layer and the depression of the glass transition temperature weakly depend on the pore size, surface functionalization, and wettability. However, interestingly, we have reported strengthening of the hydrogen bonds in samples confined in silica pores. Further, the unique annealing experiments on PPG-OH with the use of Fourier transform infrared spectroscopy revealed the reorganization of oligomers close to the interface and the formation of three distinct fractions, interfacial, intermediate, and bulk-like, in the infiltrated samples. These experiments might shed new light on the variation of the segmental/structural relaxation times due to annealing of materials of different molecular weights infiltrated into pores or deposited in the form of a thin layer. INTRODUCTION The behavior of soft materials under nanoscale spatial restriction conditions became an attractive field of research over the past decades. 1,2 A special attention was focused on understanding in detail the molecular mechanisms governing the variation in the physicochemical properties of confined compounds. 3,4 Interestingly, as shown by numerous theoretical and experimental studies, these fluctuations arise mostly because of the competition of three major effects: surface interactions, finite size, and free volume. 5−8 In fact, most investigations carried out to date demonstrated that the deviation in the molecular dynamics and phase/glass transition temperatures of confined materials is directly correlated with the size reduction realized by the decrease in both thicknesses of thin films and the pore diameter, d, of porous templates. 9 What is important, the positronium annihilation measurements carried out for liquids infiltrated into porous templates revealed that along with the change in the diameter of the nanocavities, a strong fluctuation in the free volume is noted. 10−12 At this point, it is worth quoting recent work by White and Lipson, who derived the cooperative free volume (CFV) model to describe general temperature and free volume-dependent structural relaxation behavior 10−12 of the bulk and confined materials. Briefly, one can stress that in this model, a group of molecules cooperate to obtain enough space for a rearrangement, and hence, the number of cooperating particles is inversely proportional to the free volume. This model was applied by Napolitano et al. 13 to explain enhanced dynamics of poly(4-chlorostyrene) and poly (2-vinylpiridine), prepared in a form of thin layers on the substrates differing in roughness. Moreover, recently, the CFV approach was used to describe segmental and chain dynamics of amino-terminated poly(propylene glycol) (PPG-NH 2 ) confined into native and silanized silica pores. 6 In this context, it is worthwhile to stress that a weak/negligible variation of the free volume below temperature connected to the vitrification of the interfacial molecules served as a base to formulate hypothesis that negative pressure controls dynamics of liquids confined in pores. 
8, 14−16 Along with the development of the concepts relying on the variation in the finite size and free volume, an increasing number of papers highlighted the impact of the surface effects, as a dominant factor governing behavior of confined liquids. Especially, an increasing attention was paid lately to the processes occurring at the interface, 17−19 surface interactions, 20,21 and roughness. 13,22 One can recall that there were successful attempts to predict the direction and the magnitude of the confinement effect for various glass formers both deposited as thin films 20,23,24 and infiltrated within alumina [anodic aluminum oxide (AAO)] membranes 21,25 based on the interfacial energy, γ SL . As shown, the higher γ SL , the greater deviation from the bulk behavior is expected. 21,25 Nevertheless, it should be noted that although finite size and surface effects on the dynamics of nanospatially confined liquids are very often considered separately, recent works indicated some kind of entanglement between them. In this context, it is worthwhile to remind recent atomic force microscopy measurements performed for glycerol incorporated within alumina membranes of d = 10 nm, 26 demonstrating that surface tension and most likely interfacial tension γ SL varies with the surface curvature of applied nanochannels. 27−29 Moreover, a close correlation between both effects is also well illustrated in the case of the dielectric studies on PPG infiltrated within controlled porous glasses of various pore diameters (d = 2.5−20 nm). 30−32 As shown, the glass transition temperature, T g , of infiltrated PPG revealed a nonlinear (parabolic) dependence, where the rise of T g noted for smaller pore diameter (d < 3 nm) was discussed in terms of the adsorption effects overcoming the confinement one. On the other hand, in the case of various PPG derivatives within silica pores characterized by a little larger pore sizes (d = 4−8 nm), 3,5,6 a linear reduction of T g with increasing degree of confinement was observed for both native and silanized templates. This might indicate that the impact of the surface effects should be often discussed together with the finite size (especially the pore curvatures) which might induce changes in the surface interactions and also should be taken into account to predict the behavior of the confined liquids. 33−36 In this article, we probe the counterbalance between the confinement and surface effects on the chemically modified poly(propylene glycols) (PPG) derivatives of molecular weight M n = 400 g/mol incorporated into silica (native and silanized) and alumina (AAO) membranes of various pore diameters (d = 4 nm and d = 18 nm) measured using dielectric and infrared (IR) spectroscopy as well as differential scanning calorimetry (DSC). These investigations allowed to get insight into the variation in wettability, dynamics, and H-bonded pattern in the spatially restricted samples infiltrated into both kind of membranes. MATERIALS AND METHODS 2.1. Materials. Poly(propylene glycol) (PPG-OH) and poly(propylene glycol) bis(2-aminopropyl ether) (PPG-NH 2 ) of M n = 400 g/mol with purity higher than 98% were supplied by Sigma-Aldrich. The nanoporous alumina oxide membranes used in this study (supplied from InRedox) are composed of uniaxial channels (open from both sides) with well-defined pore diameter, d ∼ 18 ± 2 nm, thickness ∼ 50 ± 2 μm, and porosity ∼ 12 ± 2%. Details concerning pore density, distribution, and so forth can be found on the Webpage of the producer. 
37 The preparation of native and functionalized silica templates are presented in the Supporting Information file. Finally, it should be mentioned that density of the confined materials is assumed to be approximately the same as the bulk material at room temperature (RT). 2.2. Methods. 2.2.1. Broadband Dielectric Spectroscopy (BDS). Isobaric measurements of the complex dielectric permittivity ε*(ω) = ε′(ω) − iε″ (ω) were carried out using the Novocontrol Alpha dielectric spectrometer over the frequency range from 10 −1 Hz to 10 6 Hz at ambient pressure. The temperature uncertainty controlled by Quatro Cryosystem using the nitrogen gas cryostat was better than 0.1 K. Dielectric measurements of native silica and silanized membranes (of d = 4 nm) filled with PPG-OH, PPG-NH 2 , and PPG-OCH 3 were placed between stainless steel plates having 5 mm diameter. Moreover, dielectric measurements on empty membranes were also carried out to evaluate their contribution which turned out to be negligible, to the measured loss spectra (see Figure S1 in the Supporting Information file). 2.2.2. Differential Scanning Calorimetry. Calorimetric measurements were carried out using a Mettler-Toledo DSC apparatus (Mettler-Toledo International, Inc., Greifensee, Switzerland) equipped with a liquid nitrogen cooling accessory and an HSS8 ceramic sensor. Temperature and enthalpy calibrations were investigated using indium and zinc standards, and the heat capacity, C p , calibration was performed using a sapphire disc. After placing crushed templates in the aluminum crucibles, they were sealed and measured over a wide temperature range with cooling and heating rates equal to 10 K/min. Each experiment was repeated three times. In addition, measurements on empty membranes were carried out (please see Figure S1 in the Supporting Information file). It was found that in the range of the studied temperature, there is no change in heat flow, indicating no contribution from the silica and alumina to the heat capacity jumps detected for the membranes filled with studied PPGs. 2.2.3. Fourier Transform Infrared Spectroscopy (FTIR). The Nicolet iS50 FTIR spectrometer (Thermo Scientific) was used to measure the FTIR spectra of the bulk and the confined PPGs samples. FTIR spectra were recorded in the 4000−1300 cm −1 frequency region with a spectral resolution of 4 cm −1 . The limited spectral range resulted from the detector saturation in the region of Si−O and Al−O stretching vibrations. Each spectrum was obtained by averaging 32 scans. The measurements were carried at RT (293 K) and glass transition temperature (T g ) determined from the BDS measurements. The low-temperature IR spectra were obtained by using a liquid nitrogen-cooled Linkam THMS 600 stage (the temperature accuracy of ±0.1°C), which was adapted to the Nicolet spectrometer. The measurements were performed at a cooling rate of 10°C/min in a nitrogen atmosphere. The time-dependent IR spectra were measured at equal intervals, that is, every 1 min after the temperature stabilization at T = 183 K. The FTIR spectra of native, silanized silica, and alumina membranes were measured and are presented in Figure S1. The −OH stretching band decomposition was performed using the MagicPlot 2.9.3 software (MagicPlot Systems, LCC). The band occurring between 3050 and 3950 cm −1 was decomposed with the use of several Gaussian functions adjusting the intensity and the width of the fitting curves. The two wavelength intervals at ca. 
2400−2500 and 3050− 3950 cm −1 (excluding the spectral region of the CH stretching band) were used for fitting. All spectral parameters were left free during the fitting procedure. The "best fit" was considered when the statistical parameter R was the lowest. One can stress that although the amount of −OH groups on silica surface can vary with the surface preparation method, 38 the peak position connected to the vibration of this moiety is located at ν Si−OH ∼3748 cm −1 . Interestingly, at this region, we did not observe any contribution from the PPG infiltrated into pores. Thus, there is no need to consider the influence of Si−OH groups from the silica membrane during the deconvolution process. In the case of alumina templates, the maximum of the OH band is found at 3640 cm −1 in the range of weak H-bonded bulk-like PPG-OH molecules. However, the analysis of the OH vibration in this frequency regime does not affect a discussion Tensiometer, GmbH Germany. The description of the instrument and procedures has been presented previously. 39,40 The measuring procedure at T = 298.2 K for all substances has been repeated dozen or more times. The temperature measurement uncertainty was ±0.1 K. The precision of contact angle measurements was 0.01°, and the estimated uncertainty was ±1.5°, whereas the uncertainty of surface tension was ±0.1 mN·m −1 . Density, ρ, required for the surface tension experiment was measured with an Anton Paar DMA 5000M densimeter with the uncertainty not worse than 0.0001 g·cm −3 . For the surface energy estimation of native and silanized silica, some of the following liquids were considered: water, ethylene glycol, diiodomethane, and glycerol. The dispersive and nondispersive part in the surface tension for these substances were taken from ref 21 RESULTS AND DISCUSSION Dielectric loss spectra of studied poly(propylene) glycols (PPG) terminated by three different groups, −OH, −NH 2 , and −OCH 3 , incorporated into native and silanized silica templates (of d = 4 nm) are shown in Figure 1. Note that dielectric spectra for bulk substances, taken from ref 41, are presented in the Supporting Information file. In all cases, dielectric data revealed the presence of the dc conductivity related to the charge transport and the segmental (α) relaxation at higher frequencies reflecting the cooperative motions of the molecules and responsible for the liquid-to-glass transition. Herein, one can stress that because of low molecular weight, M n < 1000 g/ mol, of studied PPG, 16,25 the additional mobility related to the fluctuations of the end-to-end vector of the chain ends called usually as the normal mode process cannot be observed (see Figure 1). One can also mention that often for the materials confined within silica membranes, the appearance of the interfacial process, reflecting reorientational motions of the polymers adsorbed at the surface of the pore walls, are widely reported. Interestingly, this specific relaxation is not observed in the case of studied PPG, independently to the applied porous matrix and functionalization (see Figure 1). This finding agrees with the data published by Arndt et al., 1 who studied salol and glycerol, differing in the number of hydroxyl units, infiltrated in silica pores of d = 4−8 nm. They found that while a reorientation of the adsorbed molecules can be detected for the former system (salol with one −OH group), it is not visible for the latter alcohol (glycerol with three −OH moieties). 
It was proposed to link this experimental observation to the balance between timescales of the exchange process between The Journal of Physical Chemistry C pubs.acs.org/JPCC Article core and interfacial molecules and experiments. 1 Note that if the exchange between both fractions is either fast or slow with respect to the time of the experiments, the interfacial process can be detected or not in the loss spectra respectively. However, we suppose that this phenomenon might be also somehow related to the number of hydroxyl units within the sample. Once the number of this particular moieties is equal to unity, the interfacial process is well visible as a separate loss peak, while in the case of materials with two or more −OH groups (i.e., glycerol or PPG-OH), this mode vanishes. This pattern of behavior is similar to what was previously found in the case of bulk alcohols, where for monohydroxy alcohols, an additional Debye relaxation (related to the formation of the hydrogen bonding supramolecular structure) is observed; while in polyacohols, there is no trace of this kind of mobility in the collected dielectric spectra. 26 Nevertheless to explain if there is any relationship between the appearance of the Debye process in monohydroxyalcohols and interfacial process in liquids confined in pores, further studies are required. In Figure 2, α-loss peaks of the bulk and confined samples were superimposed at the constant segmental relaxation times, τ α , for all investigated compounds. As shown, the shape of the α-loss peaks significantly broadens with increasing confinement because of the increased dynamical heterogeneity induced by the additional interactions with the pore walls. 15,42 However, it should be pointed out that the α-loss peak of PPGs within the functionalized (silanized) silica templates is, interestingly, narrower than the one recorded for PPG infiltrated into native silica. In general, this might suggest a change in the surface interactions between hydrophobic (silanized) and hydrophilic (native) templates. Note that the broader α-loss peaks within native templates might also be a result of the dynamical perturbation introduced by stronger interactions between the substrate and adsorbed molecules. It is worth mentioning that similar observations were made for other low and high molecular weight samples infiltrated in pores. 1,25,31,42 To explore in more detail, the molecular dynamics of confined materials and collected dielectric spectra were analyzed using Havriliak−Negami (HN) function with the conductivity term 43 where α HN and β HN are the shape parameters representing the symmetric and asymmetric broadening of given relaxation peaks, Δε is the dielectric relaxation strength, τ HN is the HN relaxation time, ε 0 is the vacuum permittivity, and ϖ is an angular frequency (ϖ = 2πf). Note that τ α was estimated from τ HN accordingly to the equation given in ref 44. Determined segmental relaxation times were plotted as a function of inverse temperature and shown in Figure 3. As illustrated, the τ α (T)dependences of all confined PPG is a bulk-like at a hightemperature region. However, below the temperatures denoted as T g,interfacial , they start to deviate from the bulk behavior irrespectively of the sample and porous template. As widely reported, this phenomenon is related to the vitrification of the materials adsorbed to the pore walls. 
1,6,15,45 In this context, one can mention about "two-layer" or "core−shell" models often used to discuss/interpret results of molecular dynamics simulations or quasi-elastic neutron scattering investigations. 46 In view of these simplified approaches, liquids infiltrated in pores are considered as consisted of the molecules located in the center of the pores ("core") and adsorbed to the walls ("interfacial"). They are characterized by different densities and mobilities because of additional interactions with the solid substrate. 1,60 Analysis of the data presented in Figure 3 unexpectedly revealed that the bifurcation of τ α (T)-dependences of the confined PPGs occurs at similar τ α (log τ α ∼ −5.5) independently to the terminal groups, applied porous template, and functionalization (silanized or native). It is worth adding that the deviation of τ α (T)-dependences of PPGs infiltrated into alumina templates also occurs at comparable τ α , the same as in the case of material confined in silica pores (see insets of Figure 3a,b). Note that data for PPG derivatives within AAO membranes of d = 18 nm were taken from ref 41. In this context, it should be mentioned that the similar finding has been recently reported for a primary and secondary monohydroxy alcohols incorporated within alumina and silica pores. It was shown that the temperature dependences of the Debye relaxation times, τ D (reflecting the mobility of supramolecular self-assemblies 47,48 ), deviate from the bulklike behavior at approximately the same τ D irrespectively of the porous template, chemical structure, and architecture of the supramolecular structures formed in the studied systems. 49 Moreover, recently Tu et al. 50 found the same scenario in ionic liquids confined in the native and functionalized alumina pores. This indicates that the change in the porous template, hydrophobicity, or hydrophilicity of the pore surface does not affect segmental dynamics of the interfacial layer to much because the bifurcation of τ α (T)-dependences of the core PPGs occurs at similar τ α (log τ α ∼ −5.5). Interestingly, this agrees with the molecular dynamics simulations showing that although functionalization of the pore walls influences on the dynamics of the interfacial molecules, this effect is not significant. 51 To estimate the glass transition temperature, T g , obtained, data presented in Figure 3 were fitted using the Vogel− Fulcher−Tamman (VFT) equation 52 where τ ∞ is the relaxation time at finite temperature, D T is the fragility parameter, and T 0 is the temperature, where τ goes to infinity. It should be mentioned that the two VFT functions were applied for the confined systems because of the observed deviation in the slope of τ α (T)-dependencies. The first one (high temperature VFT) was used only for an accurate determination of a point (temperature), at which the slope changes (related to the vitrification of the interfacial layer and denoted as T g,interfacial ), while, the glass transition temperatures of the confined samples (in this case of the core polymers, T g,core ) were estimated from the second, low-temperature VFT fits. The values of all calculated glass transition temperatures are listed in Table 1. Note that T g,core is defined as a temperature at which τ α = 100 s. As observed, irrespective of the terminal group and applied template, estimated values of T g,interfacial are comparable. Moreover, the same scenario can also be observed in the case of T g,core (within the experimental uncertainty). 
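For readers who wish to reproduce this analysis numerically, the sketch below outlines the fitting pipeline in Python. Because the displayed equations are not reproduced in this text, the standard Havriliak−Negami form with a dc-conductivity term, ε*(ω) = ε∞ + Δε/[1 + (iωτ_HN)^α_HN]^β_HN − iσ_dc/(ε₀ω), the usual peak-position conversion from τ_HN to τ_α, and the common VFT expression τ(T) = τ∞ exp[D_T T₀/(T − T₀)] are assumed here; the synthetic data and starting parameters are likewise illustrative.

```python
"""Sketch of the dielectric analysis pipeline: HN loss -> tau_alpha -> VFT -> T_g.

Assumptions (the displayed equations are not reproduced in the text):
 - HN loss with dc conductivity: eps*(w) = eps_inf + d_eps/(1 + (i*w*tau_HN)**a)**b
   - i*sigma_dc/(eps0*w)
 - tau_alpha from tau_HN via the usual peak-position conversion
 - VFT: tau(T) = tau_inf * exp(D_T*T0/(T - T0)); T_g defined at tau_alpha = 100 s
All numerical values below are illustrative, not data from this study.
"""
import numpy as np
from scipy.optimize import curve_fit, brentq

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def hn_loss(w, d_eps, tau_hn, a, b, sigma_dc):
    """Imaginary part of the HN function plus the dc-conductivity contribution;
    this is the model that would be fitted to the measured loss spectra."""
    eps = d_eps / (1.0 + (1j * w * tau_hn) ** a) ** b
    return -eps.imag + sigma_dc / (EPS0 * w)

def tau_alpha_from_hn(tau_hn, a, b):
    """Relaxation time at the loss-peak maximum (standard conversion)."""
    return tau_hn * (np.sin(np.pi * a / (2 + 2 * b))) ** (-1 / a) \
                  * (np.sin(np.pi * a * b / (2 + 2 * b))) ** (1 / a)

def vft(T, log_tau_inf, D_T, T0):
    """log10 of the VFT relaxation time."""
    return log_tau_inf + D_T * T0 / (T - T0) / np.log(10)

def glass_transition_temperature(log_tau_inf, D_T, T0, log_tau_g=2.0):
    """Temperature at which tau_alpha = 100 s (log10 tau = 2)."""
    return brentq(lambda T: vft(T, log_tau_inf, D_T, T0) - log_tau_g,
                  T0 + 0.1, T0 + 300.0)

if __name__ == "__main__":
    print("example tau_alpha for tau_HN = 1e-4 s, a = 0.8, b = 0.6: "
          f"{tau_alpha_from_hn(1e-4, 0.8, 0.6):.2e} s")
    # Fit synthetic log10(tau_alpha) vs T data with the VFT expression.
    T = np.array([205., 210., 215., 220., 230., 240.])
    log_tau = vft(T, -14.0, 8.0, 160.0) + np.random.normal(0, 0.05, T.size)
    popt, _ = curve_fit(vft, T, log_tau, p0=(-12.0, 5.0, 150.0))
    print("fitted VFT parameters:", popt)
    print("T_g (tau_alpha = 100 s): %.1f K" % glass_transition_temperature(*popt))
```

In this scheme, T_g,core follows from the low-temperature VFT branch as the temperature at which τ_α reaches 100 s, exactly as defined in the text.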
This implies that the performed surface modification (silanization) along with the variation in the porous matrix and pore diameter have for some reason a marginal impact on the segmental dynamics of infiltrated PPG derivatives. This observation seems to be quite surprising taking into account that to date, a simple silanization of silica, leading to a prominent change of the polarity, specific interactions (H-bonds), hydrophilicity, and hydrophobicity, generally induced difference in the behavior of materials incorporated within native and functionalized templates. 30 To better understand reported findings, we further measured contact angles, θ, and surface tensions, γ L , and calculated the interfacial tension, γ SL , prior and after surface modification. 55,56 The estimated values of contact angles and surface tensions for all studied materials and surfaces are listed in Table 2. As observed, the contact angles of investigated samples change significantly depending on their chemical structure and type of matrices. Both PPG-OH and PPG-NH 2 are characterized by comparable contact angles on native silica surface (θ ∼ 25°), which indicates good wettability. However, their θ increases because of surface modification even up to θ ∼ 37°for PPG-OH on the silanized silica surface. The increase of θ suggests, in fact, the reduction of wettability on the modified interface with respect to the native one, more likely because of the strong change in the interactions between host and guest materials. In this context, one can also add that in the case of the alumina surface, the contact angle of both PPG-OH and PPG-NH 2 is extremely low (θ ∼ 6°indicates they wet this surface perfectly). On the other hand, PPG-OCH 3 surprisingly seems to wet all surface in a similar manner as its contact angle remains comparable for all surfaces (θ ∼ 6−7°). Next, we estimated interfacial energy, γ SL , according to the Young equation (γ SL = γ S − γ L cos θ, where γ S is the surface energy). 55 Calculated values of γ SL are listed in Table 2. Interestingly, for the silanized matrices, we noted low values of interfacial energies for all examined compounds. This indicates a clear change in the interfacial interactions between materials and functionalized silica in the vicinity of the pore walls, where the dispersive interactions prevail. 56,57 In this context, one can recall studies on the thin films that revealed that the higher value of γ SL , the greater deviation from the bulk behavior because of the reduced mobility at the interface. 20,23,24 This approach was further applied to the porous materials by Alexandris et al. 21 for several polymers infiltrated within AAO membranes. 21,58 As shown, the increase of the difference between the glass transition temperature, T g , of the bulk and confined samples, ΔT g , enlarges with the rise of the interfacial tension. This relationship was well quantified by the systematic measurements of the wettability, allowing us to calculate the interfacial energy, γ SL , and T g of the spatially restricted polymers. 21,25 Further studies indicated that higher γ SL implies reduced mobility of the interfacial layers that consequently leads to greater depression of T g,core in pores. 25 In Figure 4, we have plotted the estimated values of γ SL versus ΔT g,core and ΔT g,interfacial calculated for PPGs infiltrated into alumina (d = 18 nm) and silica (d = 4 nm) templates. 
Note that ΔT g,core is the difference between T g,core and T g of bulk (ΔT g,core = T g,core − T g ), while ΔT g,interfacial is the discrepancy between T g,interfacial and T g of bulk (ΔT g,interfacial = T g,interfacial − T g ). As shown in Figure 4, both ΔT g,core and ΔT g,interfacial of all investigated PPG incorporated in porous matrices are similar (within experimental uncertainty). Surprisingly, despite a clear variation in wettability, interfacial energy and pore size (d = 18 nm vs d = 4 nm), no differences between studied systems infiltrated within alumina and silica templates can be observed. In this context, one can remind that recent studies on various PPG derivatives infiltrated into AAO membranes revealed that γ SL weakly depends on both their terminal groups and molecular weight, M n , whereas ΔT g,interfacial changes with both these factors. 59 The finding discussed above clearly indicates that although the interfacial tension is a very useful parameter to predict depression of the glass transition temperature of the polymers infiltrated into porous media, it is not sufficient to understand the complex dynamics of such heterogeneous systems. Therefore, the contribution of other factors, possibly related to the variation in the density packing and roughness of the pore walls must be considered as well. In the next step, we have performed additional DSC measurements to support/confirm results of dielectric investigations. Thermograms recorded for all studied PPG derivatives incorporated within silica templates are presented in Figure 5. As illustrated, all samples exhibit the presence of the two endothermic processes, related to the vitrification of the interfacial (denoted as T g,interfacial ) and "core" (labeled as T g,core ) molecules located above and below T g of the bulk material, respectively (so-called the double glass-transition phenomenon). 42,60,61 It should be pointed out that even for PPGs infiltrated into silanized silica templates characterized by the extremely low value of the interfacial energy, (γ SL ∼ 2 mN· m −1 ), double glass transition was detected. The value of T g,interfacial and T g,core obtained from DSC measurements for confined materials and also T g of the bulk samples were added to Table 1. Although, there are some discrepancies between the value of T g determined from calorimetric and dielectric measurements, which can be due to the difference in the The Journal of Physical Chemistry C pubs.acs.org/JPCC Article heating/cooling rate applied in both methods 41 they are similar. Furthermore, we also determined the length scale of the interfacial layer, ξ, which also can be obtained directly from DSC measurements 60 where d is the pore diameter; ΔC p,core and ΔC p,interfacial are the changes of the heat capacity at T g,core and T g,interfacial . Note that the application of eq 3 requires the following assumptions: (i) the volume of the material in the surface layer is proportional to the step change of its heat capacity, (ii) the density of the incorporated material does not change along the pore radius, and (iii) the shape of the pore is cylindrical. The values of the heat capacity and calculated thickness of the interfacial layer are listed in Table 1 and Figure 5. As observed, the estimated ξ reaches similar values for all PPGs when infiltrated into native silica membranes (within experimental uncertainty), which are comparable to those ones reported earlier for 2E1H 8 or for monohydroxy alcohols, 49 where the value of ξ oscillated around ∼1 nm. 
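As a numerical companion to the two quantities just introduced, the sketch below evaluates the interfacial energy from the Young relation quoted above (γ_SL = γ_S − γ_L cos θ) and the interfacial-layer thickness ξ from the heat-capacity steps. Since eq 3 is not reproduced in this text, a commonly used core−shell expression for a cylindrical pore, ξ = (d/2)[1 − (ΔC_p,core/(ΔC_p,core + ΔC_p,interfacial))^(1/2)], is assumed here, and all input numbers are placeholders rather than values from Tables 1 and 2.

```python
"""Illustrative calculation of interfacial energy and interfacial-layer thickness.

Assumptions:
 - Young relation gamma_SL = gamma_S - gamma_L*cos(theta), as quoted in the text.
 - A core-shell expression for a cylindrical pore consistent with assumptions
   (i)-(iii); eq 3 is not reproduced in the text, so
   xi = (d/2) * (1 - sqrt(dCp_core / (dCp_core + dCp_interfacial)))
   is used here as a stand-in.
 - All numbers are placeholders, not values from Tables 1-2.
"""
import math

def interfacial_energy(gamma_s: float, gamma_l: float, theta_deg: float) -> float:
    """gamma_SL in mN/m from surface energy, surface tension, and contact angle."""
    return gamma_s - gamma_l * math.cos(math.radians(theta_deg))

def interfacial_layer_thickness(d_nm: float, dcp_core: float, dcp_int: float) -> float:
    """xi in nm for a cylindrical pore; volume fractions are taken proportional
    to the heat-capacity steps at T_g,core and T_g,interfacial."""
    core_volume_fraction = dcp_core / (dcp_core + dcp_int)
    return 0.5 * d_nm * (1.0 - math.sqrt(core_volume_fraction))

if __name__ == "__main__":
    # Placeholder inputs (hypothetical): native silica surface, PPG-OH-like liquid.
    print("gamma_SL = %.1f mN/m" % interfacial_energy(gamma_s=45.0, gamma_l=31.0,
                                                      theta_deg=25.0))
    # Placeholder heat-capacity steps in J/(g K) for a d = 4 nm pore.
    print("xi = %.2f nm" % interfacial_layer_thickness(4.0, dcp_core=0.30,
                                                       dcp_int=0.25))
```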
However, after the silanization, ξ decreases for PPG-OH and PPG-NH 2 because of change in the interfacial interaction (i.e., suppression of H-bonds). In contrast, in the case of PPG-OCH 3 , the length scale of the interfacial layer increases after the surface modification, most likely due to increased surface-dispersive interactions. 56,57 One can add that according to the literature, ξ varies dependently to the type and strength of interactions, including hydrogen bonds. 1 It is also worthwhile to add that ξ of low-molecular-weight PPG incorporated within the alumina templates of d = 18−150 nm increases with the pore size but was relatively independent to the terminal end-groups of studied oligomers. In addition, we compared and plotted the thickness of the interfacial layer estimated for the PPG oligomer infiltrated into pores, made of silica and alumina having different pore sizes; please see the inset in Figure 5. Data for PPG infiltrated in AAO were taken from ref 41. This graph clearly illustrates that there is a linear relationship between thickness of the interfacial layer and pore diameter, d, which indicates some entanglement between both parameters. Moreover, it is worthwhile to stress that the interfacial layer estimated from calorimetry for the samples infiltrated into larger pores barely agree with those calculated for the thin films. In this particular case, ξ is around ∼2−3 nm and weakly depends on the film thickness. 62 Hence, one can state that the interfacial layer determined from calorimetry for the infiltrated systems is related to the length scale obeying molecules/polymers of much slower dynamics with respect to The Journal of Physical Chemistry C pubs.acs.org/JPCC Article the core material because of the perturbation introduced by interactions with the substrate. Taking advantage of the fact that both glass transition temperatures were well visible in the thermograms collected for the PPG confined in silica and alumina pores, we decided to find out whether this material will behave in a similar way as entangled cis-1,4-polyisoprene (PI) infiltrated in AAO membranes. 58 Just to mention that in this particular case, Politidis et al. have demonstrated that T g,interfacial is conditional and can be detected only for the samples cooled down below T g,core ; this observation allowed them to hypothesize that T g,core is a spinodal temperature. Therefore, as a subsequent point of our studies, we have performed additional calorimetric measurements using several temperature protocols to investigate the behavior of low-molecular-weight PPG derivatives infiltrated into silica templates of d = 4 nm to check how does various thermal histories influence the existence of the double glass-transition phenomenon. For this purpose, we carried out three cooling scans followed by heating in accordance to the following protocols: (1) cooling down to 184 K (deep below T g,core ), (2) cooling to 174 K (between both detected T g s), and (3) again cooling down to 184 K. Representative DSC thermograms obtained for infiltrated PPG-OCH 3 are shown in Figure 6a. Interestingly, in contrast to the data reported in ref 58, a prominent T g,interfacial appears in all registered thermograms questioning assignments of the low glass transition temperature as the spinodal temperature. 
To explain the discrepancy between results reported herein and the ones presented in ref 58, one should consider (i) different molecular weights of studied polymers (we focused only on PPG of M n = 400 g/mol), (ii) various porous templates (characterized by different finite size and surface interactions), and (iii) significantly different wettabilities and interfacial tension of PI and PPG on the alumina and silica surfaces. Furthermore, we also carried out a series of DSC measurements with different heating rates, 5−20 K/min. Selected thermograms recorded for PPG-OH incorporated within native and silanized templates of d = 4 nm are presented in Figure 6b,c. As expected, both T g s shift toward higher temperatures with the increasing heating rate. Additionally, we also observed that the length scale of the interfacial layer, ξ, increases with lowering the heating rate in the case of PPG-OH infiltrated into native silica templates ( Figure 6b); whereas for PPG-OH within the silanized silica templates, ξ seems to remain constant independently to the applied heating rate (Figure 6c). This simple experiments indicated that although the dynamics of the interfacial layer is not so much different in the vicinity of the functionalized pore walls, the length scale of the molecules adsorbed to the pore walls is affected in a more significant way. As a final point of our investigations, we have carried out additional FTIR measurements to gain information about the H-bonded pattern in the samples confined within silica and alumina nanopores. Figure 7 shows the comparison of FTIR spectra of bulk PPG-OH, PPG-NH 2 , and samples infiltrated in alumina and silica (native and functionalized) templates at RT It should be noted that the FTIR spectral data, especially obtained for alumina pores, were difficult to interpret because of additional strong contribution of the stretching vibrations of the −OH groups of the pore surface to the measured spectra. An informative band for the analysis of hydrogen bond interactions is that connected to the X−H stretching vibrations of the proton donor groups (the ν X−H ). This spectral feature occurs in the range of 3700−3000 cm −1 in the FTIR spectra of the studied systems. The band observed between 3000 and 2800 cm −1 is responsible for the stretching vibrations of the C−H groups of the carbon skeleton. The position and frequency of the ν X−H bands for the bulk samples differ between PPG-OH and PPG-NH 2 (see Figure 7). At 293 K (room temperature, RT) ν O−H band of bulk PPG-OH occurs as a single broad peak located at 3453 cm −1 , whereas the ν N−H band of PPG-NH 2 consists of three peaks at 3369, 3298, and 3201 cm −1 . Thus, the H-bonds in both PPG derivatives are of medium strength. The different profiles of the ν X−H band of PPG-OH and PPG-NH 2 correspond to the various types of Hbonded aggregates. Note that it seems that the H-bonded oligomeric structures dominate in PPG-OH, while probably a more complex H-bonding network (the ring-or chain-like structures) can be formed in PPG-NH 2 . The spectral modifications of the X−H stretching bands of PPG-OH and PPG-NH 2 accompanying the temperature drop are consistent with the trend reported in the literature and discussed in the Supporting Information file. In the next step, the FTIR approach was applied to investigate the effects of the nanoconfinement of PPG samples after their incorporation into different nanopore templates (silica and alumina). 
Representative spectra measured at different temperatures for the sample infiltrated in porous matrices are presented in Figures S4 and S5 in the Supporting Information file. The interaction of PPGs molecules within silica or alumina membranes causes noticeable changes in their IR spectra that represent the change in the hydrogen-bonded pattern in the investigated systems. At RT, the IR spectra of the confined liquids exhibit a significant redshift of the ν X−H peak frequency values, relative to the bulk samples (Figure 7). This spectral effect is associated with the existence of stronger H-bonds in PPGs under nanoconfinement. Similar results are observed in IR spectra measured after the temperature drop. In detail, the larger redshift of the O−H stretching vibrations for PPG-OH molecules is observed for the native pores (14 cm −1 ) compared to the silanized pores (11 cm −1 ) or the alumina ones(12 cm −1 ) at T g . In the case of PPG-NH 2 , the most intense peak of the ν N−H band at 3359 cm −1 is shifted by 4 cm −1 in native pores and 3 cm −1 in silanized pores relative to the bulk at T g . On the other hand, the ν N−H peak in alumina membranes shows the same position as that in the bulk. Thus, the PPG molecules within silanized silica pores exhibit the smallest spectral changes (i.e., the redshift of the ν X−H peak frequency value) relative to other systems. This is because of the weaker interactions between the host and guest material. It is also observed that the ν X−H bands measured for the confined samples are much broader than those measured for the bulk materials. This indicates that PPG molecules also exhibit greater variability in the size of the H-bonded aggregates in a confined environment. In order to address this issue more carefully, we performed additional analysis relying on the deconvolution of the spectra measured in the 3000−3800 cm −1 region for PPG-OH; please see Figures 8 and S6. This oligomer was selected and described in detail because it interacts the most with the native pores. To evaluate the contribution of specific components in this complex spectral regime, data collected for bulk and confined PPG-OH were fitted to the combination of several Gauss functions. The procedure of the fitting of the −OH stretching bands was . For the sample confined in pores, an additional Gaussian component is required to describe the FTIR spectra in the 3198−3026 cm −1 region. This additional band is assigned to the ν OH in the molecules adsorbed (AM) at the interface. The comparison of the deconvoluted IR spectra in the −OH stretching vibration region for bulk and confined PPG-OH at RT and at T g is shown in Figures 8 and S6 in the Supporting Information file. Figure S7 illustrates the temperature variations of the spectral parameters such as peak position and integrated areas obtained for each component (fraction of molecule) from the fitting IR spectra of bulk and confined PPG-OH to the superposition of several Gauss functions. All these parameters are also listed in Table S1. On the other hand in Table S2, the percentage areas of the deconvoluted profiles are shown, which give information on the different populations of H-bonded structural arrangements in PPG-OH. From the analysis of the results obtained at T g (Table S1), one finds a redshift of the maximum of the all fractions of H-bonded oligomers in confined sample with respect to the bulk material. This indicates the enhancement of the H-bonding interactions under confinement. 
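The deconvolution described above can be reproduced with a standard multi-Gaussian least-squares fit. The Python sketch below fits four components to the O−H stretching region, labelled after the fractions used in this work (FBM, IMBM, WBM, and AM); the synthetic spectrum, starting positions, and widths are illustrative assumptions, and scipy simply stands in for the MagicPlot software used by the authors.

```python
"""Sketch of the multi-Gaussian decomposition of the O-H stretching band.

The four components follow the labels used in the text (FBM, IMBM, WBM, AM);
initial peak positions/widths and the synthetic spectrum are illustrative
assumptions, not the parameters reported in Tables S1/S2.
"""
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, width):
    return amp * np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def four_gaussians(x, *p):
    """Sum of four Gaussian components; p = (amp, center, width) * 4."""
    return sum(gaussian(x, *p[3 * i:3 * i + 3]) for i in range(4))

# Wavenumber axis restricted to the O-H stretching region used for fitting.
nu = np.linspace(3000, 3800, 400)

# Synthetic "measured" spectrum built from assumed component positions.
true_params = [0.9, 3260, 60,   # FBM  (strongly H-bonded, illustrative)
               0.7, 3380, 70,   # IMBM (intermediate, illustrative)
               0.4, 3520, 80,   # WBM  (weakly bonded, illustrative)
               0.5, 3120, 55]   # AM   (adsorbed at the pore wall, illustrative)
spectrum = four_gaussians(nu, *true_params) + np.random.normal(0, 0.01, nu.size)

# All spectral parameters are left free during fitting, as in the text.
p0 = [1.0, 3250, 50, 0.8, 3400, 60, 0.5, 3550, 70, 0.5, 3100, 50]
popt, _ = curve_fit(four_gaussians, nu, spectrum, p0=p0, maxfev=20000)

# Integrated area of each Gaussian component (abs() guards against sign flips).
areas = [abs(popt[3 * i]) * abs(popt[3 * i + 2]) * np.sqrt(2 * np.pi) for i in range(4)]
for label, area in zip(["FBM", "IMBM", "WBM", "AM"], areas):
    print(f"{label}: {100 * area / sum(areas):.1f}% of the O-H band area")
```

The percentage areas printed at the end play the role of the population fractions reported in Table S2.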
As shown in Figure 8, the largest peak area in bulk sample corresponds to the fully H-bonded PPG-OH molecules. On the other hand, quite a large variety of adsorption behavior of PPG-OH on the different pore walls is observed (Figures 8 and S6). In the native pores, the adsorption process is the most dominant because the AM component has the largest percentage in the OH band profile in selected temperatures. The silanized membranes exhibit similar behavior at higher temperatures; however, below 263 K, partially H-bonded PPG-OH structures predominate. In the case of alumina templates, the AM component also dominates at 293K−233 K, whereas at low temperatures, the interactions between FBM are the strongest. Next, the change in contribution of various Gaussian components to the overall spectrum upon temperature drops was monitored. As shown in Table S1, the positions of FBM and IMBM substructures are shifted to lower wavenumbers with decreasing temperature, indicating the strengthening of the interactions between "fully" and "partially" H-bonded PPG-OH molecules. On the contrary, the blueshift of the WBM components is observed because of the growth of the degree of association of PPG-OH at T g . Simultaneously, the integrated area of the FBM and IMBM components increases and the WBM area decreases as the temperature is lowered indicating the growing organization of PPG-OH toward a fully H-bonded network. However, it should be pointed out that the FBM contribution in confined PPG-OH within the silanized silica templates exhibits opposite temperature effect, that is, it decreases with the lowering temperature. More detailed analysis based on the percentage contribution of the Gaussian component areas shows that in bulk PPG-OH, the FBM-type molecules are the dominating population, their proportion steadily increases from 48 to 62% as the temperature decreases from RT to T g . Interestingly, in all-confined PPG-OH, the AM sub-band has the largest percentage at RT. As the temperature is lowered, the adsorption process is rather reduced in favor of the fully (alumina templates) and partially (silica and alumina templates) H-bonded interactions. The greatest effect of temperature on the adsorption process is observed for the alumina membrane in which the AM contribution decreases from 51 to 23%, while the FBM and IMBM populations increases 20−42 and 24−30%, respectively. Thus, the variations of the respective populations illustrate that for confined materials, the adsorbed PPG-OH molecules are always the dominating population at RT. At T g , the interfacial H-bonded species are prevailing only in the native silica pores. As a final point of our investigations, we have performed the time-dependent FTIR measurements on the PPG-OH confined into silica and alumina membranes to verify whether the annealing at T g,core will influence on the H-bonding network. The analysis of these spectra in the ν O−H band region shows both changes in the shape of the band profile and its broadening as a function of time ( Figure 9). The peak located around 3400 cm −1 , originating from the bulk-like core The Journal of Physical Chemistry C pubs.acs.org/JPCC Article molecules, essentially shows no variation over time (the light blue area in Figure 9). Simultaneously the strong intensity growth of the sub-band at the lower wavenumber (approximately 3250 cm −1 ) in both kind of pores (indicated as a light yellow area in Figure 9) was clearly detected. 
Interestingly, this band has been assigned herein to the vibration of the hydroxyl moiety in FBM. Moreover, IR spectra recorded during annealing also revealed appearance of the sub peak at ν OH ∼ 3120 cm −1 connected to the adsorbed molecules. This band is probably because of the formation of the strongest H-bonds between PPG molecules and hydroxyl units attached to the silica and alumina templates (highlighted as light pink area in Figure 9). Growing intensity of the band observed at 3250 cm −1 during annealing can be assigned to the formation of an intermediate layer of the molecules located between core and interfacial ones. However, because of overlapping of this band with that originating from the fraction of FBM of the bulk material, it was not possible to detect it in the FTIR spectra measured at different temperatures. Moreover, analysis of the position of the new band at 3120 cm −1 indicated stronger interactions between the PPG-OH molecules and the silica surface with respect to the alumina. It is worthy to mention that this oligomer wets the alumina surface much better with respect to the silica one. Hence, it is a clear indication that enhanced wettability does not have to necessarily mean stronger interactions between host and guest material at least in the case of associating H-bonded liquids. Moreover, various strengths of the H-bonds between PPG and hydroxyl moieties attached to the silica and alumina pores suggest different chemical characters of this functional group that affects their different tendencies in formation of these specific interactions in both kinds of materials. It must be also stressed that above discussed results obtained for confined PPGs differ from those obtained for water or alcohols infiltrated in nanoporous templates. For primary alcohols incorporated to the native and silanized silica pores, we have found that the strength of H-bonds in confined samples was weaker compared to those in bulk systems which was manifested in the IR spectra as the blueshift of the ν O−H peak frequency. 49 In the case of water confined in zeolites, the O−H bands were slightly redshifted with respect to bulk water, indicating, that under confinement, molecules are relatively strongly hydrogen-bonded. 64 Moreover, the IR spectra of water confined in controlled pore glasses proved that this fluid is perturbed on very large scales (more than 10 nm), even in pores of greater diameter (d = 55 nm). The position of the connectivity band (∼150 cm −1 ) increased when the pore size decreased, suggesting stronger H-bonding interactions between neighboring water molecules. Additionally, an important decrease of the FWHM of the connectivity band was found for the spatially restricted sample (70%) which was related to a different orientation dynamics of water (up to 55 nm) as compared to bulk liquid. 65 Besides, Baum et al. 66 highlighted the predominant effect of the pore size, the kosmotropic properties and the surface ions excess on the dynamics, and the structure of water molecules in the pore and within the interfacial layer. The molecular dynamics simulations of the IR spectrum of isotopically dilute HOD in D 2 O in 2.4 nm hydrophilic, amorphous silica pores 67 showed that −OH groups are involved in weaker H-bonds to the silica oxygen acceptors than to water, leading to blueshifts in their frequencies, although this spectral effect was not observed in the measured IR spectra. This fact was explained by the smaller transition dipole moments of these −OH groups. 
Similar results were reported for HDO in H 2 O in Aerosol-OT reverse micelles of varying sizes. 68 Although, in this case, the ν O−D bands exhibited a significant blueshift relative to the bulk liquid, indicating a weakening of hydrogen bonds between the OD groups because of the presence of the interface between the water pool and the surfactant headgroups. 67,69 Herein, it is also worth to mention about work by Zanotti et al. for water molecules in Vycor (a hydrophilic porous silica glass). 70 They have found the O−H stretching sub-band at 3230 cm −1 that was interpreted as a result of the existence of a monolayer of water molecules H-bonded to the silanol (Si−OH) groups of the Vycor surface. The position of this peak revealed that the H-bonds in this system are significantly stronger than in bulk liquid water (around 3400 cm −1 ). A further more detailed understanding of the molecular interactions of water confined in mesoporous silica was presented by Knight et al. 71 The authors fit the O−H stretch region using three Gaussian curves, representing unique water populations, described as network water (NW), intermediate water (IW), and multimer water (MW). However, these studies also showed a systematic blueshift in the IR peak locations of NW, IW, and MW in confined water. The authors suspected that water in pores is congregating around surface hydroxyl groups to form islands of highly coordinated localized regions. Hence, results of Knight et al. are similar to those reported herein. 71 However, it is worthwhile to point out that FTIR measurements on PPG confined in either silica or alumina membranes clearly revealed that although there are at least three fractions of molecules differing in the H-bond pattern in these conditions, the strength of these specific interactions is much stronger with respect to the bulk materials. At the first sight, this experimental finding questions interpretation of the calorimetric data suggesting that there are two fractions of molecules differing in mobility and glass transition temperature discussed in terms of the "two-layer" (or "core−shell") model. However, it must be stressed that a change in the H-bonded pattern does not have to influence dynamics of confined systems to the extent allowing registration of the third glass transition temperature related to the vitrification of the intermediate layer. In this context, one can recall papers devoted to polymer thin films discussing the occurrence of the third (intermediate) layer. 23 As reported, T g of ultrathin poly(methyl methacrylate) (PMMA) films depends on the type of applied substrates, that is, increasing for silicon oxide (polar surface) because of the hydrogen bonding and decreasing for the nonpolar surface, together with thickness reduction. It was observed that the dynamics and T g of each layer differ from the corresponding bulk substances. This indicates that the nature of the interaction between materials and interface is one of the dominant factors in determining the glass-transition temperatures that strongly depends on the thickness of the film and the interfacial energy between the polymer and the substrate. Additionally, the formation of the third layer between the adsorbed layer and core volume was investigated for nanopores. 
72 Using calorimetric measurements, it was revealed the existence of three T g s for PMMA confined into AAO nanopores of d = 300 nm, where (i) molecules near interfaces (of T g higher than for the bulk), (ii) molecules interacting at the center of nanopores (T g lower than for the bulk), and (iii) fraction located between abovementioned (characterized by intermediate T g in a nonequilibrium state). Interestingly for PMMA restricted The Journal of Physical Chemistry C pubs.acs.org/JPCC Article into d = 80 nm of AAO, only double T g s was noted. 61 Generally, the formation of the third layer was considered as being strictly connected with the weakening of the interfacial effect between polymer matrices together with increasing pore diameter. 23,72 Besides, it was indicated that the existence of the third layer and also the shift of T g strictly depends on the thermal history (heating/cooling rate, aging/annealing procedure) of the samples and might be strictly correlated with the exchange effects between the adsorbed layer, interlayer (two layers are trapped in a non-equilibrium state), and the core volume. 23,72 Note that recently, the presence of the additional interlayer under confinement was also reported for poly-(methylphenylsiloxane) infiltrated into AAO templates, where interestingly, the third T g was also detected for small pore size, d = 18 nm. 73 One can also add that the resolving of the intermediate layer within the confined PPGs with time might, in fact, help us to understand better the reported recent shift of the segmental/ structural relaxation process of various incorporated materials upon the annealing experiments performed at following temperature conditions, T g,interfacial > T anneal > T g,core . 62,74,75 Note that at the studied range of temperatures, the examined systems are highly heterogeneous (the interfacial fraction of molecules is vitrified and the core ones are not). Briefly, as the annealing experiment proceeds, the shift of the α-relaxation peak toward lower frequencies was observed 74,75 resulting in completely different τ α (T)-dependences of confined materials with respect to the measurements performed prior to annealing (see Figure S8 in the Supporting Information file). As assumed, the density packing of core and interfacial molecules was out of equilibrium, which was recovered upon sample annealing. Consequently, the confined system moves from the one isochoric condition to the other, characterized by different densities and dynamics. 15,41,76 Nevertheless, as indicated by complementary FTIR measurements, the observed variation of structural/segmental dynamics might be also related to the variation of the H-bonding pattern or processes ongoing at the interface. It looks that during the annealing, there is a strong rearrangement of the interfacial layer leading to the formation of strong H-bonds between PPG and either silica or alumina pore walls. Moreover, the enhancement of these specific interactions is accompanied by the formation of the intermediate layer being in some distance from the pore walls. 23,41,72 Thus, in the annealed samples, we can clearly distinguish into three fractions, namely, interfacial, intermediate, and bulk-like fractions of molecules in the liquids infiltrated into pores. 
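A minimal way to quantify the annealing experiment of Figure 9 is to follow how the integrated areas of the three sub-bands redistribute with time. The sketch below normalizes assumed band areas recorded at successive annealing times into percentage populations (bulk-like near 3400 cm⁻¹, intermediate near 3250 cm⁻¹, adsorbed near 3120 cm⁻¹); the numbers are invented placeholders that merely mimic the qualitative trend described in the text.

```python
"""Tracking the redistribution of O-H sub-band areas during annealing.

Band assignments follow the text (bulk-like ~3400 cm-1, intermediate ~3250 cm-1,
adsorbed ~3120 cm-1); the area values are invented placeholders mimicking the
qualitative trend, not digitized data from Figure 9.
"""
from typing import Dict, List

BANDS = ("bulk-like (3400 cm-1)", "intermediate (3250 cm-1)", "adsorbed (3120 cm-1)")

def to_percentages(areas: Dict[str, float]) -> Dict[str, float]:
    """Convert integrated band areas into percentage populations."""
    total = sum(areas.values())
    return {band: 100.0 * a / total for band, a in areas.items()}

# Hypothetical integrated areas measured every few minutes at T_g,core.
time_series: List[Dict[str, float]] = [
    {BANDS[0]: 10.0, BANDS[1]: 2.0, BANDS[2]: 0.5},   # start of annealing
    {BANDS[0]: 10.1, BANDS[1]: 3.5, BANDS[2]: 1.2},
    {BANDS[0]: 10.0, BANDS[1]: 5.0, BANDS[2]: 2.0},   # later: intermediate band grows
]

for minute, areas in zip((0, 5, 10), time_series):
    pct = to_percentages(areas)
    summary = ", ".join(f"{band.split()[0]} {value:.0f}%" for band, value in pct.items())
    print(f"t = {minute:>2} min: {summary}")
```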
CONCLUSIONS In this work, we investigated the behavior of three poly(propylene glycol) derivatives of M n = 400 g/mol, characterized by different abilities to form H-bonds, incorporated into alumina (d = 18 nm) and silica (native and silanized, d = 4 nm) templates. These studies enabled us to explore the impact of finite size and surface interactions on the overall behavior of substances in spatially restricted systems. The observed deviation of segmental relaxation times from the bulk-like behavior indicates that, independently of the pore diameter and applied templates, they deviate at similar τ α . Interestingly, we observed that although previous reports suggested that the shift in glass transition temperatures correlates with the interfacial energy and wettability, such a relationship does not hold for PPGs infiltrated within alumina (d = 18 nm) and native and functionalized silica (d = 4 nm) templates. This indicates that besides the surface interactions and finite size (curvature), other factors, that is, specific interactions, density packing, and possibly surface roughness, should also be taken into account when discussing the behavior of the confined liquid. Additionally, for the investigated PPGs, we observed that T g,int occurs even if T g,core has not been reached before, which indicates that T g,core is not a spinodal temperature. Finally, using FTIR spectroscopy, it was also revealed that the strength of hydrogen bond interactions between the incorporated material and the interface differs depending on the applied template. Moreover, the time-dependent IR spectra recorded during annealing around T g,core revealed a rearrangement in the interfacial layer leading to the formation of very strong H-bonds between PPG and the alumina or silica pore walls. Consequently, this simple experiment allowed us to visualize and distinguish an intermediate layer (rarely reported in the literature) between the interfacial and bulk-like fractions of molecules. This completely new finding might shed new light on and allow us to better understand the shift of the structural/segmental relaxation upon annealing below T g,interfacial . It seems that variation in the H-bonds and density packing between molecules attached to the interface and the intermediate ones is responsible for this phenomenon. Supporting Information: preparation of porous silica templates, DSC thermograms, dielectric loss, and FTIR spectra of empty silica templates, nitrogen adsorption/desorption isotherms measured for native silica templates, dielectric loss spectra of bulk PPGs, dielectric loss spectra collected upon annealing of confined samples, FTIR spectra measured for bulk and confined PPGs, and their decomposition as well as temperature evolution (PDF)
12,068
2020-07-30T00:00:00.000
[ "Chemistry", "Materials Science" ]
Ubiquitin-specific Peptidase 10 (USP10) Deubiquitinates and Stabilizes MutS Homolog 2 (MSH2) to Regulate Cellular Sensitivity to DNA Damage* MSH2 is a key DNA mismatch repair protein, which plays an important role in genomic stability. In addition to its DNA repair function, MSH2 serves as a sensor for DNA replication errors provoked by DNA base analogs and binds to various DNA damage-induced adducts to trigger cell cycle arrest or apoptosis. Loss or depletion of MSH2 from cells confers resistance to certain DNA-damaging agents. Therefore, the level of MSH2 determines the DNA damage response. Previous studies showed that the level of MSH2 protein is modulated by the ubiquitin-proteasome pathway, and histone deacetylase 6 (HDAC6) serves as a ubiquitin E3 ligase. However, the deubiquitinating enzymes that regulate MSH2 remain unknown. Here we report that ubiquitin-specific peptidase 10 (USP10) interacts with and stabilizes MSH2. USP10 deubiquitinates MSH2 in vitro and in vivo. Moreover, the protein level of MSH2 is positively correlated with the USP10 protein level in a panel of lung cancer cell lines. Knockdown of USP10 in lung cancer cells leads to increased cell survival and decreased apoptosis upon treatment with the DNA-methylating agent N-methyl-N′-nitro-N-nitrosoguanidine (MNNG) and the antimetabolite 6-thioguanine (6-TG). The above phenotypes can be rescued by ectopic expression of MSH2. In addition, knockdown of MSH2 decreases the cellular mismatch repair activity. Overall, our results suggest a novel USP10-MSH2 pathway regulating DNA damage response and DNA mismatch repair. Mismatch repair (MMR) 3 is a mutation avoidance mechanism that corrects DNA replication errors and controls homologous recombination (HR) by aborting strand exchange between divergent DNA sequences (1). MMR activity begins with mismatch recognition, which is carried out either by MutSα (MSH2-MSH6) or by MutSβ (MSH2-MSH3). MutSα recognizes DNA single base pair mismatches and small deletions/insertions, whereas MutSβ recognizes large deletions/insertions (2,3). MutSα or MutSβ then recruits MutLα (MLH1-PMS2), proliferating cell nuclear antigen (PCNA), and replication protein A (RPA) to form a complex, leading to the recruitment of exonuclease 1 (EXO1) to the strand break. EXO1 then cuts the nascent DNA from the nick forward and beyond the mismatch to generate a single-strand gap, which is filled by polymerase δ using the parental DNA strand as a template. The repair is accomplished by filling the nick using DNA ligase I. Deletion or mutation of key DNA mismatch repair proteins, such as MSH2 and MLH1, can cause genomic instability (4). In addition to recognizing DNA mismatches, MutSα can recognize certain DNA-damaging agents, such as 6-thioguanine (6-TG)-, N-methyl-N′-nitro-N-nitrosoguanidine (MNNG)-, and cisplatin-induced DNA adducts, to trigger apoptosis (5). Thus, the level of MutSα controls cellular sensitivity to DNA damage. It has been reported that the level of MSH2 can be modulated by multiple means. First, when cells are treated with ultraviolet (UV) light or phorbol ester (TPA), the mRNA level of MSH2 is increased (6,7). Second, MSH2 protein is more stable as a heterodimer with MSH6 than as a monomer (8). Third, protein kinase C (PKC) phosphorylates MutSα and protects it from proteasome-dependent degradation (9). Fourth, histone deacetylase 6 (HDAC6) was recently identified as an E3 ligase of MSH2 that promotes its degradation (8). 
However, the deubiquitinating enzyme (DUB) which counteracts HDAC6 to stabilize MSH2 has remained unknown. Currently about 100 DUBs have been identified in the human genome and are classified into five families based on their sequence similarity and mechanism of action (10-14). They are 1) the ubiquitin C-terminal hydrolases (UCHs), 2) the ubiquitin-specific proteases/ubiquitin-specific processing proteases (USPs/UBPs), 3) the ovarian tumor proteases (OTUs), 4) the Josephin or Machado-Joseph disease protein domain proteases (MJDs), and 5) the Jab1/MPN domain-associated metalloisopeptidase (JAMM) domain proteins. The first four families are cysteine peptidases, while the last one comprises zinc metalloisopeptidases. Here, we report MSH2 as a new USP10-interacting protein and a new USP10 substrate. We also reveal a novel USP10-MSH2 pathway regulating MSH2 homeostasis, DNA damage response, and DNA MMR. Experimental Procedures Cell Culture and Transfection-All cell lines were grown in Dulbecco's modified Eagle's medium (DMEM) with 10% fetal bovine serum, penicillin (100 U/ml), and streptomycin (100 μg/ml), except H1299, which was grown in RPMI 1640 medium. Cells were incubated at 37°C with 5% CO2. The plasmids were transiently or stably transfected into cells, except MEFs, using Lipofectamine 2000 (Invitrogen). The plasmids were transfected into MEFs by electroporation with a Nucleofector™ device (Lonza). The anti-USP10 antibody (ab72486) was purchased from Abcam. The anti-MSH2 antibody was purchased from Calbiochem. The anti-HA antibody was purchased from Covance. The anti-Flag M2 antibody and agarose beads, the anti-β-actin antibody, MG132, cycloheximide, 6-TG, imidazole, urea, guanidine-HCl, ATP, and MTT were purchased from Sigma. MNNG was purchased from Pfaltz & Bauer, Inc. Ni-NTA resin was purchased from Clontech. Rabbit reticulocyte lysate was purchased from Promega. HA-UB was purchased from Boston Biochem. MNNG and 6-TG Treatment-MNNG was diluted in water immediately before use. Cells were treated with MNNG in FBS-free medium. Cells were then washed with medium and incubated in fresh medium at 37°C for different time periods as indicated in the figures. 6-TG was first dissolved in DMSO to make a stock solution and then dissolved in medium directly before use. Immunoprecipitation and Immunoblotting-For immunoprecipitations, cells were lysed in the LS buffer (PBS, pH 7.5, 10% glycerol, 0.1% Nonidet P-40, protease inhibitor mixture). Lysates were incubated with protein A- or protein G-agarose for 2 h for pre-clearing prior to incubation with the indicated primary antibodies for 12 h at 4°C. Immunocomplexes were collected, washed four times in TBST buffer (0.1% Tween-20 in TBS), and resolved by SDS-PAGE. For immunoblotting, samples were transferred to nitrocellulose membranes and then probed with the indicated antibodies. Bound antibodies were detected using a Chemiluminescent Detection Kit (Pierce). Establishment of USP10-knockdown Stable Clones-For Figs. 2B and 3B, the pGIPZ vector containing shUSP10-2 was transfected into A549 and H1299 cells. One day after transfection, cells were split into 3 dishes. After 24 h, 1 μg/ml puromycin was added to the medium to select positive cells. Ten days later, stable cell lines were subcloned into 60-mm dishes, and 0.5 μg/ml puromycin was added to the medium to maintain the stable clones in the subsequent culture. Establishment of the USP10-knockdown A549 Pool-Addgene's protocol was followed to produce lentiviral particles and infect A549 cells. 
Briefly, the pGIPZ vector containing shUSP10-1 or shUSP10-2 was co-transfected with psPAX2 and pMD2.G into HEK-293T cells. Lentiviral particles in the HEK-293T cell medium were harvested 24 and 36 h after transfection. Then, the lentiviral particle solution was added to A549 cells. About 24 h after infection, puromycin was added to the medium at a concentration of 1 μg/ml for A549 cells. After a 2-week selection, Western blot analyses were used to determine the knockdown efficiency. GST Pull-down Assay-GST fusion proteins were purified as previously described (22). For in vitro binding assays, glutathione Sepharose-bound GST-MSH2 proteins were incubated with cell lysates. After washing extensively with PBST (0.1% Tween 20 in PBS), the proteins bound to GST-MSH2 were resolved by SDS-PAGE and immunoblotted with the indicated antibodies. Whole Cell Extract Preparation-Whole cell extracts were prepared from 6 × 10⁷ cells as described (24). Cells were first washed with buffer A (20 mM Hepes pH 7.5, 5 mM KCl, 0.5 mM MgCl2, 0.1% PMSF, 0.5 mM DTT, 1 μg/ml leupeptin, and 0.2 M sucrose) and lysed in the same buffer without 0.2 M sucrose by passage through a 27-gauge needle. Proteins were precipitated with 65% ammonium sulfate, collected by centrifugation, resuspended in lysis buffer, and dialyzed to equilibrium in a buffer containing 20 mM Hepes (pH 7.5), 5 mM KCl, 0.1 mM EDTA, 0.1% PMSF, 0.5 mM DTT, and 1 μg/ml leupeptin. Heteroduplex Preparation and MMR Assay-DNA substrates used in this study are circular heteroduplex DNA containing a unique G-T mismatch and a strand break 3′ to the mismatch (Fig. 6). The substrate was prepared from the M13mp18-UKY phage series as described previously (25). MMR assays were performed in 20-μl reactions containing 100 ng heteroduplex DNA, 75 μg of whole cell extract, 10 mM Tris-HCl (pH 7.5), 5 mM MgCl2, 1.5 mM ATP, 0.1 mM dNTPs, 1 mM glutathione and 110 mM KCl at 37°C for 15 min, as described (24), in the presence or absence of purified human recombinant MutSα (26). Reactions were terminated by addition of Proteinase K. DNA samples were extracted with phenol and recovered by ethanol precipitation. Repair products were digested with PstI, NsiI (repair-scoring enzyme), and BglI, fractionated by polyacrylamide gel electrophoresis, and detected by Southern blot hybridization with a 32P-labeled probe. DNA products were visualized by phosphorimager. Results USP10 Interacts with MSH2-We previously identified HDAC6 as a ubiquitin E3 ligase of MSH2 (8). To identify novel MSH2-interacting proteins associated with the ubiquitin-proteasome pathway, we overexpressed HA-MSH2 in 293T cells for 48 h followed by treatment with 50 μM MG132 for 4 h. The HA-MSH2 protein was immunoprecipitated (IP-ed) with anti-HA agarose beads, and the unique bands existing in the anti-HA, but not in the control anti-IgG, immunoprecipitate were excised and analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS). The mass spectrometry analysis identified 14 peptide sequences of USP10, a ubiquitin-specific peptidase (data not shown), suggesting that USP10 exists in the anti-MSH2 immunoprecipitate. Reciprocally, endogenous USP10 was IP-ed by anti-USP10 antibodies in HeLa S3 cells. The immunoprecipitate was resolved by SDS-PAGE, and the bands were excised and examined by LC-MS/MS. MSH2 was identified in the USP10 immunoprecipitate (data not shown), suggesting that USP10 is associated with MSH2. 
We next confirmed the interaction between USP10 and MSH2 by performing co-IP assays with anti-USP10 antibodies using 293T and mouse embryonic fibroblast (MEF) cell extracts. As shown in Fig. 1A, MSH2 was only detected in the anti-USP10 immunoprecipitates (lanes 3 and 6), but not in the anti-IgG controls (lanes 2 and 5). In a reciprocal fashion, the anti-MSH2 antibody, but not anti-IgG, specifically IP-ed USP10 in HeLa cells and MEFs (Fig. 1B). Therefore, USP10 and MSH2 indeed interact with each other in vivo. To determine whether USP10 interacts with MSH2 directly or through other associated proteins, an in vitro GST pull-down assay was carried out. As shown in Fig. 1D, GST-MSH2, but not GST, was able to pull down His-USP10 (lanes 1 and 2). This result strongly indicates a direct interaction between USP10 and MSH2. We next attempted to map which region of MSH2 binds to USP10. MSH2 can be divided into five domains according to its crystal structure, namely mismatch binding, connector, lever, clamp, and ATPase (Fig. 1C) (27). As shown in Fig. 1D, the N terminus of MSH2 (1-378), but not the middle region (200-700) or the C terminus of MSH2 (624-934), binds to USP10 directly (lanes 3, 4, and 5). Thus, the major region responsible for the interaction with USP10 is located in the mismatch binding and connector domains of MSH2. We then examined which region of USP10 binds to MSH2. Human USP10 contains a ubiquitin C-terminal hydrolase domain (412-792) responsible for its deubiquitination activity (Fig. 1E). The N terminus of USP10 has been reported to interact with p53 (15) and G3BP (18). By contrast, as shown in Fig. 1F, the C terminus of USP10 interacts with MSH2 (lane 3). We also examined whether USP10's enzymatic activity influences its binding to MSH2. As shown in Fig. 1F, the USP10 catalytically dead mutant (18), USP10 (C424A), interacted with MSH2 to the same extent as the wild type did (lanes 4 and 5), suggesting that the interaction between USP10 and MSH2 is independent of USP10 enzymatic activity. USP10 Stabilizes MSH2-We previously showed that HDAC6 serves as a ubiquitin E3 ligase to promote MSH2's degradation (8). Here we explored whether USP10 can counteract HDAC6 to stabilize MSH2. As shown in Fig. 2A, in 293T cells, the level of HA-MSH2 increased significantly upon overexpression of USP10; in MEFs, the level of endogenous MSH2 increased dramatically upon overexpression of USP10 as well. To verify whether USP10 regulates the level of MSH2, we knocked down USP10 by shRNA against USP10 in A549 and H1299 cells. As shown in Fig. 2B, the level of MSH2 was significantly decreased in the USP10-knockdown A549 and H1299 cells compared with that of MSH2 in the control cells. We then examined whether USP10 influences the stability of MSH2 by measuring the half-life of MSH2. As shown in Fig. 2, C and D, the MSH2 half-life in the USP10-knockdown A549 cells is around 6 h, while the MSH2 half-life in control cells is more than 12 h, suggesting that USP10 can prolong MSH2's half-life. We next used a USP10 inhibitor, known as specific and potent autophagy inhibitor-1 (Spautin-1), to treat A549 cells. As shown in Fig. 2E, Spautin-1 was able to decrease the level of MSH2, but not USP10, in a time-dependent manner, suggesting that suppressing the deubiquitinating enzyme activity of USP10 reduces the protein level of MSH2. We next surveyed a panel of lung cancer cell lines to determine whether there is a positive correlation between USP10 and MSH2 in these cell lines. As shown in Fig. 
2, F and G, the expression of MSH2 is positively correlated with the expression of USP10, indicating that USP10 stabilizes MSH2 in lung cancer cells. USP10 Deubiquitinates MSH2-We then determined whether MSH2 is a substrate of USP10. As shown in Fig. 3A, overexpression of USP10 reduced the ubiquitination of F-MSH2 in A549 and H1299 cells. Conversely, knockdown of USP10 significantly increased the ubiquitination of MSH2 in A549 and H1299 cell lines (Fig. 3B). To directly examine the deubiquitination activity of USP10 toward MSH2, we utilized a cell-free system. We prepared ubiquitinated GST-MSH2 (Ub-GST-MSH2) as described in "Experimental Procedures" and then performed the deubiquitination assay. As shown in Fig. 3C, F-USP10 efficiently deubiquitinated MSH2 in vitro (compare lanes 2 and 3). We then tested whether USP10 is able to deubiquitinate MSH2 in vivo. As shown in Fig. 3D, USP10 WT, but not USP10 (C424A), reduced His-MSH2 ubiquitination. To ensure that the ubiquitination signal was indeed from His-MSH2 and not from its associated proteins, His-MSH2 was washed under denaturing conditions and the ubiquitination status was examined by anti-Ub Western blotting analysis. Overall, we demonstrated that MSH2 is a substrate of USP10. Depletion of USP10 Decreases Cellular Sensitivity to MNNG and 6-TG and Decreases DNA Mismatch Repair Activities in A549 Cells-The MSH2-MSH6 heterodimers recognize DNA-damaging agents, such as 6-TG, MNNG, cisplatin, carboplatin, doxorubicin, and etoposide, which form DNA adducts that cannot be removed by MMR (5). It has been well documented that the levels of MSH2 are inversely correlated with 6-TG or MNNG resistance (5). Because USP10 is able to up-regulate the level of MSH2, we set out to test whether USP10 plays a role in regulating MNNG- or 6-TG-mediated cell killing. We depleted the expression of USP10 using two different shRNAs (shUSP10-1 and shUSP10-2) in A549 cells and measured the cytotoxicity of MNNG and 6-TG by MTT assays. As shown in Fig. 4, A and B, knockdown of USP10 using both shUSP10-1 and shUSP10-2 in A549 cells significantly increased cell survival compared with the control cells. To determine whether this increased survival is due to a decrease in apoptosis, we detected the cleavage of poly ADP-ribose polymerase 1 (PARP-1) by Western blotting analyses. As shown in Fig. 1C, knockdown of USP10 with either shUSP10-1 or shUSP10-2 decreased PARP-1 cleavage upon MNNG and 6-TG treatment compared with the control. To ensure that the increased cell survival is due to the reduction of MSH2, HA-MSH2 was introduced into the A549 pools transduced with shUSP10-1 or shUSP10-2 lentiviruses, and the resulting cells displayed decreased cell survival and increased apoptosis upon 6-TG or MNNG treatment (Fig. 5). Therefore, our data suggest a USP10-MSH2 pathway governing cellular sensitivity to 6-TG and MNNG. In addition, we explored whether depletion of USP10 affects DNA MMR activities. As shown in Fig. 6, knockdown of USP10 in A549 cells significantly reduces the MMR activities, while the addition of the MutSα complex partially restores MMR activity. This result indicates that USP10 regulates the cellular MMR activities via modulating the level of MSH2. Discussion In this study, we have identified a novel MSH2-interacting protein, USP10, which stabilizes and deubiquitinates MSH2 in vitro and in vivo. The USP10-MSH2 axis regulates MSH2 and MutSα homeostasis and cellular sensitivity to DNA damage. 
We previously demonstrated an unexpected E3 ligase activity of HDAC6, which regulates MSH2 proteasome-dependent degradation (8). We have now identified a deubiquitinating enzyme, USP10, which counteracts HDAC6's activity. However, how HDAC6 and USP10 work in concert to regulate MSH2 stability is not clear. Based on our domain mapping data, HDAC6 and USP10 bind to different domains of MSH2. HDAC6 binds to the MSH2 C-terminal region (8), while USP10 binds to the MSH2 N-terminal region. Therefore, HDAC6 and USP10 may not compete with each other to bind to MSH2. Unlike one of USP10's substrates, p53, which has a very short half-life, MSH2 has a much longer half-life. We suspect that USP10-mediated deubiquitination of MSH2 may partially account for MSH2 stability. We previously identified four C-terminal lysines (Lys-845, Lys-847, Lys-871, and Lys-892) in MSH2, which can be either acetylated or ubiquitinated (8). We proposed that HDAC6 sequentially deacetylates these four sites to disassemble the MSH2-MSH6 heterodimers and ubiquitinates the MSH2 monomer. Thus, USP10 may promote deubiquitination of the MSH2 monomer to facilitate MSH2 acetylation and MSH2-MSH6 dimer formation. We are currently testing this hypothesis in the laboratory. We also found that USP10 interacts with MSH6 in vivo by immunoprecipitation assay (data not shown). However, we failed to show that USP10 physically interacts with MSH6. So it is possible that USP10 influences MSH6's function through a direct interaction with MSH2. We previously showed that upon 6-TG and MNNG treatment, the level of MSH2 ubiquitination was decreased (data not shown and Ref. 8). Thus, it is likely that USP10 is more activated under 6-TG or MNNG treatment and deubiquitinates MSH2 more efficiently. Yuan et al. reported that under stress conditions, such as ionizing radiation (IR), USP10 can be phosphorylated by ATM and translocate to the nucleus to stabilize p53 (15). We recently showed that USP10 phosphorylation was increased upon MNNG treatment. A MAPK/CDK family kinase might be responsible for USP10 phosphorylation (data not shown). Further studies are needed to elucidate how MNNG-induced USP10 phosphorylation affects its enzymatic activity to regulate MSH2 stability. FIGURE 4. Depletion of USP10 in A549 cells confers MNNG and 6-TG resistance. A, knockdown of USP10 increases cell viability upon MNNG treatment. A549 cells transduced with lentiviruses containing empty vector, shUSP10-1, or shUSP10-2 were treated with MNNG at the indicated concentrations for 2 days and 3 days. MTT assays were performed. B, knockdown of USP10 increases cell viability upon 6-TG treatment. A549 cells transduced with lentiviruses containing empty vector, shUSP10-1, or shUSP10-2 were treated with 6-TG. MTT assays were performed. C, knockdown of USP10 reduces apoptosis upon treatment with MNNG and 6-TG. A549 cells transduced with lentiviruses containing empty vector, shUSP10-1, or shUSP10-2 were treated with MNNG or 6-TG at the indicated times and concentrations. Cells were lysed, and anti-PARP1 and anti-β-actin Western blotting analyses were performed. For A and B, the error bars stand for standard error (S.E.). **, denotes p < 0.001 by a two-tailed Student's t test (n = 6). To explore whether USP10 and MSH2 can serve as biomarkers for the sensitivity of chemotherapeutic drugs in lung cancer patients, we examined the mRNA levels of USP10 and MSH2 by qRT-PCR in a cohort of non-small cell lung cancer patients under clinical trials (28). 
We found that there was a positive correlation between USP10 and MSH2. Future studies will determine the protein levels of USP10 and MSH2 in these patients to examine whether USP10 also stabilizes MSH2 in lung cancer patients. FIGURE 6. The USP10-knockdown A549 cells display a reduced MMR activity compared with the control cells. The DNA substrate (left) used in this study contains a G-T mismatch (red) placed within two overlapping restriction endonuclease sites and a strand break 172 bp 3′ to the mismatch. Restriction enzymes used for scoring the repair are shown. MMR activity of A549 cells with or without shUSP10 was determined by incubating the circular DNA heteroduplex with 75 μg of whole cell extract. Repair products were identified by Southern blot analysis using a 32P-labeled oligonucleotide probe (red bar) complementary to the nicked strand at the indicated location. MMR activities in HeLa extracts and heat-inactivated HeLa extracts (HI-HeLa) were used as a positive control and a negative control, respectively. The repaired bands were quantified by densitometry. FIGURE 5. Overexpression of MSH2 in USP10-knockdown A549 cells restores the 6-TG- and MNNG-mediated growth inhibition and apoptosis. A, A549 stable pools were transduced with vector, shUSP10-1, or shUSP10-2, and the latter two pools were transfected with HA-MSH2. Then the above five groups of cells were treated with vehicle, 10 μM 6-TG, or 10 μM MNNG for 3 days. MTT assays were performed. B, the five groups of cells as shown in 5A were treated with vehicle, 30 μM 6-TG, or 30 μM MNNG for 3 days. Cells were then harvested and lysed. Anti-PARP1, anti-MSH2, anti-USP10, and anti-β-actin Western blotting analyses were performed. For A, the error bars stand for S.E. **, denotes p < 0.001 by a two-tailed Student's t test (n = 6).
4,950.4
2016-03-14T00:00:00.000
[ "Biology", "Medicine" ]
Charge exchange of slow highly charged ions from an electron beam ion trap with surfaces and 2D materials Electron beam ion traps allow studies of slow highly charged ion transmission through freestanding 2D materials as an universal testbed for surface science under extreme conditions. Here we review recent studies on charge exchange of highly charged ions in 2D materials. Since the interaction time with these atomically thin materials is limited to only a few femtoseconds, an indirect timing information will be gained. We will therefore discuss the interaction separated in three participating time regimes: energy deposition (charge exchange), energy release (secondary particle emission), and energy retention (material modification). Introduction While photons and electrons can be tuned in terms of their kinetic energy, using ions brings one additional degree of freedom: the charge state of the projectile.By ionising more and more electrons from an atom, its potential energy, i.e. the binding energy of all missing electrons, can reach several tens of keV.The potential energy of these highly charged ions (HCIs) increases quickly with charge state, e.g. for low charge states q = 1, 2, xenon only possesses a potential energy on the order of ∼10 eV.For fully ionised xenon it can reach more than 200 keV [1].Using such projectiles in ion-interaction studies thus introduces a new regime of processes triggered by deposition of this potential energy rather than solely by the kinetic energy of the projectile. In the last decades there has been enormous progress in development of sources for HCIs [2][3][4][5][6][7][8][9].Most of them rely on subsequent removal of electrons via electron impact ionisation processes [10] that require trapping the ions until they have reached their final charge state.Examples for these sources are electron cyclotron resonance ion sources (ECRIS), where electrons are heated via microwave injection to ionise atoms/ions trapped in a magnetic bottle [11] and electron beam ion traps/sources (EBIT/EBIS) [12].In the latter a highdensity electron beam collides with gas atoms placed in a set of drift tubes forming the ion trap. In recent years, following the discovery of stable 2D materials, new insights could be gained: Using these ultimately thin surface-only samples, the interaction of HCIs with a sample can be limited to a well-defined area.This allows the exclusion of sub-surface secondary interaction contributions from measured results and thus renders studying the primary interaction of an HCI impacting on a material surface possible. Beyond that, the limited sample thickness also restricts the interaction time to a few femtoseconds.This allows indirect access to timing information of HCI-solid interactions, which are summarised in the timeline shown in figure 1: Primary ion impact and energy deposition happen within femtoseconds, followed by energy release, e.g. in the form of emission of secondary particles.Finally, energy retention leads to permanent material modification. Within this work we will review the interaction of highly charged ions with 2D materials with regard to these time regimes.The three main chapters will focus on charge exchange, secondary particle emission, and nanostructuring.In this context, we will discuss why HCIs are an ideal tool to study ultra-fast processes on a nanoscale. 
Methods Studying freestanding 2D materials brings many challenges, as, in contrast to bulk materials, a defect in the layer might expose atoms of a completely different support substrate. This can distort experimental results. Hence, it is necessary to make sure to exclude unwanted contributions in experimental scattering or emission spectra and to focus on the material layer itself. One method to do this is to perform coincidence measurements between a quantity of interest on the one hand, and a quantity that allows separation of 2D and support layers on the other. At TU Wien, for example, we employ a Dreebit EBIS-A [64] with three drift tubes operated at room temperature and a miniaturised EBIS-C1 from D.I.S Germany GmbH [65]. Using electrostatic extraction optics, HCIs are guided from the source towards an ion spectrometer discussed in detail in [66]. It hosts a multichannel plate (MCP) detector that allows the HCIs to be detected after interacting with samples. In addition to that, we use two electron detectors: a passivated implanted planar silicon (PIPS) detector biased at ∼30 kV placed behind an extraction grid of a few hundred volts, and an electrostatic hemispherical energy analyser (HEA). The former is used for electron emission statistics measurements, i.e. to measure the total number of emitted electrons per incident ion [67], and the HEA to determine the electron energy distribution [68]. The whole spectrometer is operated in a coincidence mode correlating ion and electron signals. This gives access to the ion time of flight and therefore the particles' energy loss, a quantity permitting the differentiation of 2D sample and support structures. A detailed explanation of the coincidence technique as well as its application can be found in [69,70]. Similar setups (with and without coincidence options) are also used in [71-77], where some also include x-ray detectors [35,78] or secondary neutral mass spectrometers for sputtered particles [79]. HCI-surface interaction Before going into detail let us review the general current understanding of HCI-solid interaction: an HCI with its high potential energy influences the electronic landscape of a sample while it is approaching. According to the classical over-the-barrier model proposed by Burgdörfer et al [80], one can estimate that several Å above the surface there is a critical distance r_c at which a resonant and classically allowed electron transfer (over the barrier) leads to a population of the projectile's high n-shells. This forms a hollow atom [81,82], where n ∼ q, the initial ion charge state [80]. While further approaching the surface, this hollow atom starts to de-excite via radiative and non-radiative channels. The ultra-fast de-excitation measured in experiments [32,33,83,84] presented a bottleneck problem for a long time, since it could not be explained by decay rates of common mechanisms like Auger-Meitner neutralisation, autoionisation or side-feeding [85]. Only recently the interatomic Coulombic decay (ICD) [86-91] was proposed to be the dominant process in the de-excitation of HCIs [85]: within this two-center Auger-Meitner process (related to Auger-Meitner de-excitation [92]), the hollow atom's de-excitation energy is transferred to the sample. This can lead to the emission of electrons or also to material modification. A schematic of the chronology of the interaction of an HCI with a sample surface is depicted in figure 2. 
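As an illustrative aside (not taken from the reviewed work): the classical over-the-barrier picture gives a simple estimate of this critical capture distance, commonly quoted as r_c ≈ sqrt(2q)/W in atomic units, with W the work function of the target. The short Python sketch below evaluates this estimate for a few charge states; the 4.6 eV work function is an assumed, roughly graphene-like value and the numbers are for orientation only.

```python
import math

BOHR_IN_ANGSTROM = 0.529177   # atomic unit of length in angstrom
HARTREE_IN_EV = 27.2114       # atomic unit of energy in eV

def critical_distance_angstrom(q, work_function_ev):
    """Classical over-the-barrier estimate r_c ~ sqrt(2*q)/W (atomic units),
    i.e. the ion-surface distance at which resonant electron capture into
    high n-shells of the approaching ion becomes classically allowed."""
    w_au = work_function_ev / HARTREE_IN_EV
    return math.sqrt(2.0 * q) / w_au * BOHR_IN_ANGSTROM

for q in (2, 20, 30, 40):   # incident charge states
    rc = critical_distance_angstrom(q, work_function_ev=4.6)
    print(f"q = {q:2d}:  r_c ≈ {rc:4.1f} Å")
```

For low charge states this estimate gives a few Å, while for charge states of a few tens it grows to the nanometre range, which is why hollow-atom formation already sets in well before the ion reaches the surface.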
Energy deposition: charge exchange As already discussed by Niels Bohr over a century ago [93], particles traveling in a material accommodate to an equilibrium charge state defined by balanced electron capture and loss cross sections.For slow highly charged ions, with velocities v smaller than the Bohr velocity v 0 , this equilibrium charge state is q eq → 0. Reaching this equilibrium charge marks the first part of the HCI-solid interaction shown in figure 1, namely charge exchange and energy deposition. However, using thick bulk samples restricts access to the ion charge state after the impact, except one uses special geometries like grazing incidence scattering [32,94].This challenge may be overcome by studying thin foils or the thinnest possible solid samples, 2D materials.For these samples, projectiles can be detected after the interaction in a transmission geometry. Experiments with thin foils (and fast projectiles) already indicated that the equilibrium charge state is reached within only a few layers of material thickness [33,83].Studies with freestanding carbon nano membranes (CNMs) [95] and singlelayer graphene (SLG) [96] were conducted to perform a systematic investigation of charge neutralisation times in a material.Figure 3(a) shows an exemplary spectrum of 130 keV Xe 30+ ions transmitted through SLG.Three different areas can be distinguished: (i) There is a fraction of ions transmitting through cracks/uncovered areas of the sample leading to random coincidences.These particles impinge on the detector with the initial narrow angular distribution provided from the EBIT and the original charge state q in of the highly charged projectile ion.(ii) Ions transmitting through clean sample areas will form an intermediate charge state distribution with mean value q sample , which is affected by the ion parameters.E.g., the distribution shifts up/down when increasing/decreasing the projectile velocity.Scattering of the particles in the sample also leads to a broadening of the distribution in horizontal direction.An in-depth analysis of Creutzburg et al [97] showed that for higher scattering angles the exit charge state decreases, i.e. more electrons are captured by the projectile.This is related to the trajectory of the ion: Closer collisions lead to larger scattering angles and increased deexcitation rates and thus a smaller exit charge state.(iii) Some particles going through support or contaminated areas neutralise completely, i.e. q support ∼ 0. There, multiple scattering events occur, which lead to an even broader scattering distribution than in the previous case (ii). 
Variation of ion velocities and charge states, respectively, showed an exponential charge decay of the form

n_e(τ) = q_in − q_out = q_in (1 − exp(−τ/τ_n)),   (1)

where n_e is the number of electrons captured by the ion, q_in is the incident charge state, and q_out is the mean exit charge state [33,96,98,99]. τ is the interaction time and τ_n a charge-state-dependent neutralisation time constant. Depending on the charge state, for SLG, τ_n in the range of 1-5 fs was found [96]. A consecutive study using freestanding multilayers of graphene (bilayer graphene = BLG, trilayer graphene = TLG) is summarised in figure 3(b). We could show that, by taking into account the prolonged interaction time due to an increased number of material layers, the charge exchange with 1-3 layers of graphene can be universalised: in the figure, data for three xenon charge states (q = 20, 30, 40) are shown for various inverse velocities 1/v scaled with the number of material layers n_L. The data again follow the exponential behaviour of equation (1) and suggest that the neutralisation depends solely on the interaction time the ion spends in close proximity to the sample [100]. As soon as porous materials come into play, another distribution occurs: Creutzburg et al found a fourth distribution at higher charge states by recording charge exchange patterns of highly charged ions transmitted through monolayers of MoS2. This distribution could be explained by the ion transmitting through nano-sized pores in the material, so that the interaction (time) with the material itself is limited. Then only a smaller number of electrons may be stabilised (i.e. de-excite by ICD into orbitals with high binding energy, preventing their peel-off or autoionisation) and higher exit charge states are found [75]. Similar results were discussed in [95] for CNMs. This effect, which is very sensitive to the sample structure, was discussed as a possible analysis technique for material textures [101]. The first stage of the interaction of HCIs with solids is dominated by ultrafast electron transfer from the target material to the projectile and a subsequent deexcitation of the ion stabilising the captured electrons. Since this deexcitation of the HCI goes hand in hand with the deposition of its potential energy within the material, a lot of energy is pumped into the electronic system of the target in a short time (∼fs). Additionally, Wilhelm et al could even find an increased kinetic energy loss with increasing charge exchange [95], further increasing the amount of excitation energy in the material. Several release mechanisms are possible for the material to relax after this excitation, which will be covered in the following sections. Energy release: secondary particle emission Common energy release mechanisms cover the translation of excitation energy into the emission of (possibly high-energetic) secondary particles, i.e. 
electrons, sputtered target atoms and x-rays.For the latter two, there exist many literature data for studies including bulk samples, however, only scarce information is available for HCI interaction with (freestanding) 2D materials.There is solely a study by Skopinski et al using MoS 2 layers on Au (111) substrate revealing that sputtered Mo atoms by highly charged xenon ions have energies on the order of only 1 eV [102]. For the emission of x-rays, experiments with argon as well as xenon on SLG were performed [103].In the case of argon, H-like (q = 17) as well as bare (q = 18) projectiles were used.In general the spectra are very well comparable to literature data on bulk surfaces with similar K α to K β intensity ratios [104,105].In the case of bare argon, due to two vacancies in the n = 1 shell, additional hypersatellite lines (both K α and K β ) were measured.Their relative intensity is higher than the satellite line, implying that the first vacancy is filled (1s 0 → 1s 1 ) faster than the second, remaining vacancy (1s 1 → 1s 2 ).Using xenon ions (22 ⩽ q ⩽ 35) showed another interesting observation: Increasing the number of vacancies in the M-shell leads to a linear shift towards higher energies for the emitted x-rays from the 4f → 3d-transition resulting from the de-excitation.This was attributed to the fact that the average number of spectator electrons at the time of the transition decreases [106]. First experiments regarding the electron emission from freestanding SLG were performed by Schwestka et al [107].They could show that (depending on the kinetic energy) up to 100 electrons are being emitted by a single HCI impact (with a charge state of q = 40).A majority of these electrons have energies well below 15 eV, which is in good agreement with the proposed deexcitation scheme in [85].The interatomic Coulombic decay, namely, suggests that many low-energy electrons should be emitted within the deexcitation cascade of the HCI.Follow-up experiments published in [68] confirmed this finding of mainly low-energy electrons with complex coincidence measurements of the electron energy distribution using a hemispherical energy analyser in addition to the retarding field analysis performed in [107]. By comparing the electron yields of graphene and MoS 2 , we could also demonstrate that the high yield found for graphene in [107] is strongly dependent on the specific material used: Under the same conditions the electron yield of semiconducting MoS 2 amounts to only 1/6 when compared to SLG (cf figure 4).This was explained qualitatively using a charge patch building up in the material layer which stays longer in MoS 2 than in graphene due to the electronic material properties.This hinders low-energy electrons from escaping the material in MoS 2 leading to a reduced measured yield, especially in the low-energy regime [68].Calculations were performed based on a model presented in [108] to investigate the charge carrier dynamics quantitatively in a graphene and MoS 2 layer upon HCI impact, respectively.Results showed that even in the semiconductor the charge dissipates within ∼fs, which demonstrates electron emission to be prompt upon HCI impact and strongly influenced by the charge dynamics in the material.Studying electron emission can thus give access to ultrafast carrier dynamics in a probed material. 
To conclude this second phase of ion-solid interaction: the deexcitation of the hollow atom formed in step 1 (charge exchange, discussed in section 3.1) and the accompanying potential energy deposition may trigger the emission of secondary particles through either sputtering [102], radiative decay [103] or non-radiative decay [68,107]. Energy retention: material modification Another release mechanism of deposited energy besides the emission of secondary particles is the permanent modification of the material due to energy retention. For bulk samples, when material-specific threshold combinations of kinetic and potential particle energies are exceeded [30], a formation of hillocks or pits could be observed [30,61-63,109]. However, most studies focused on semiconducting and insulating materials, as for metals only few experiments showed successful nanostructuring using slow highly charged ions [110,111]. As an explanation for this behaviour one typically uses the charge mobilities in the respective materials: while in metals the deposited energy can be dissipated very fast, in semiconductors and insulators a confinement of the energy around the impact position leads to a translation of the excitation energy to the lattice system and further on to the formation of nanostructures. Only recently, experiments with gold nanoislands [60] and nanolayers [112] could demonstrate that in these limited volumes material modifications induced by HCI impact become possible. A similar behaviour was documented for ultrathin and two-dimensional materials: on the one hand, Gruber et al irradiated freestanding layers of graphene, a semimetal, with HCIs without producing any damage [96]. Note that for graphene layers placed on a substrate HCI-induced defects were discussed in [113,114] and for 2 in [115]. On the other hand, Kozubek et al irradiated semiconducting freestanding MoS2 layers with slow highly charged xenon ions. Using transmission electron microscopy they located ion-induced nanometre-sized pores. Just like for hillocks and craters in bulk materials, the pore size in the MoS2 layer also increases with increasing potential energy [116]. A similar behaviour was already found, prior to these experiments with 2D materials, using 1 nm thick carbon nano membranes. In these insulating layers, the pore diameters found were even larger and tunable via the projectile potential energy, up to 15 nm [117]. These experiments showed a clear trend that nanoscale material modification depends on the electronic properties of the investigated material. Creutzburg et al tested this hypothesis by fluorination of single-layer graphene, which makes the material insulating. Indeed, the material thereby becomes susceptible to pore formation using HCIs [118]. 
In [119], we combined the findings discussed above (both in terms of pore formation susceptibility and neutralisation times) using van der Waals heterostructures: a stack of monolayers of graphene and MoS2 was irradiated with HCIs, where the result strongly depends on the orientation of the material layers. If MoS2 faces the ion beam first, pores can be located in the MoS2 layer using transmission microscopy afterwards (cf figure 5). The other way round, irradiating the graphene layer first yields no pores in the MoS2 layer. As the ion deposits almost 90% of its potential energy within the very first material layer, the graphene layer on top acts as a shield preventing the MoS2 layer beneath from getting damaged. This surface sensitivity was also strengthened by experiments using up to three layers of MoS2 on graphene, where HCI-induced pores could only be found in the first two layers (with a decreased pore diameter for the second material layer). This emphasises the suitability of highly charged ions for material modification [120]. In summary, this final phase of the electronically driven material modification is characterised by bond weakening due to excitations and atom removal by either Coulomb repulsion (charged surface atoms, high kinetic energy release) or desorption (neutral surface atoms, hyperthermal energies). Theoretical modelling Besides the experimental work summarised in the previous sections, a lot of effort was also put into finding theoretical descriptions to model the processes discussed above. Both empirical and first-principles models were used to try to describe the energy loss and charge exchange of the ions as well as the effect on the target material. Guo et al [121] proposed a semi-classical model allowing the study of energy loss and charge exchange of slow highly charged ions. Rainbow scattering, the classical 2D scatter pattern of highly charged ions in thin foils, was first discussed by Petrović et al [122] and then for graphene with proton projectiles by Ćosić et al [123]. In terms of first-principles approaches, the GPAW package [124-127] is a time-dependent density functional theory (TDDFT) approach using the Ehrenfest dynamics scheme. It has been successfully applied to model the energy transfer of low-charged ions while transmitting through 2D materials [128] and should be, in principle, applicable also to highly charged ions (at least with empty subshells). The same applies to other TDDFT approaches [129,130]. However, TDDFT does not include any electronic transitions, e.g. inter- or intra-atomic Auger-Meitner processes like ICD. To take that into account, Wilhelm and Grande prepared the simulation package TDPot (Time-Dependent Potential) [131], which randomly chooses impact parameters in the 2D lattice. The deexcitation is subsequently modelled with the interatomic Coulombic decay using either extrapolated interatomic-distance-dependent experimental rates [131] or ab initio calculated rates for the system of multiply charged ions transmitted through graphene [100]. The inclusion of these additional effects allows a more accurate description of the final ion charge state after the interaction. The model was effectively tested for xenon ions interacting with freestanding layers of graphene and MoS2 [75]. 
Charge exchange and neutralisation are also included in the Green-function-based hopping model prepared by Balzer and Bonitz [108], which shows good agreement with graphene transmission experiments. The model was further extended to simulate the charge dissipation in the material after the HCI impact to understand the influence on electron emission [68]. A related approach was taken by Grossek et al [132], who also introduce an excitation in a 2D material to mimic an HCI impact: here, a honeycomb lattice with the graphene lattice constant is used, but other materials can be addressed using adjusted hopping times. A relaxation of the material is analysed, where atoms with kinetic energies above their binding energy are assumed to be emitted from the material. Therewith, pore formation susceptibilities of 2D materials with different electronic properties can be examined. Finally, Kozubek et al [133] further applied the two-temperature model to HCI-induced pores in hexagonal boron nitride, which showed good agreement with experimental results achieved using hBN samples on various substrates. Conclusion and outlook The results discussed above summarise how slow highly charged ion transmission experiments through freestanding 2D materials can be used to unravel the time dynamics of ion-solid interaction. Transmission times are limited to only femtoseconds; nevertheless, almost complete neutralisation can be observed depending on charge state/velocity combinations. Neutralisation times on the order of femtoseconds could thus be derived. Following the ion impact, secondary particles are emitted. So far, the focus has been on studying electron emission, which was found to happen promptly after the ion hits the material. In the long term, energy retention leads to modification of the material, i.e. nanopore formation in the case of monolayers. There, a strong dependence of the pore formation susceptibility of a material on its electronic properties could be observed: in experiments with xenon ions up to charge states of q = 40, pores were introduced in semiconducting and insulating layers but not in semimetals. Simulations, however, also predict pore formation in the latter for the case of very high charge states and potential energies, respectively. These charge states exceed the current possibilities of experiments with EBITs. Great efforts are currently being made to make these measurement parameters available in the future, e.g. at GSI Darmstadt, where the S-EBITs as well as the HITRAP facility with a complex deceleration system shall be able to produce slow ions up to bare uranium [18]. Figure 1. Timeline of (highly charged) ion interaction with a material. Figure 2. Schematic representation of the interaction of a highly charged ion with a 2D material. (a) Shows the hollow atom formation, which sets in via resonant electron transfer several Å above the sample. (b) After interacting with the material, e.g. capturing electrons as well as emitting electrons from the sample, the de-excited ion leaves the material in a much lower charge state. Reprinted figure with permission from [68], Copyright (2022) by the American Physical Society. Figure 3. Charge exchange spectroscopy. (a) shows the exit charge state distribution of 130 keV Xe 30+ transmitted through single-layer graphene (SLG). Three distinct distributions can be seen: the incident beam with charge state q_in, the distribution for the sample with q_sample ∼ 15.7 and the support structure q_support ∼ 0. 
In (b) the number of captured electrons n_e = q_in − q_out is given as a function of the inverse projectile velocity 1/v scaled with the number of material layers n_L: data for SLG, bilayer graphene (BLG) and trilayer graphene (TLG) show a universal exponential behaviour for each incident charge state Xe q+ for q = {20, 30, 40}. Adapted from [100]. CC BY 4.0. Figure 4. Electron yield from monolayers of graphene and MoS2 induced by highly charged Xe ions. Reprinted figure with permission from [68], Copyright (2022) by the American Physical Society. Figure 5. Nanostructure formation using highly charged ions. Van der Waals stack of an MoS2 monolayer on top of single-layer graphene. After impact of 170 keV Xe 38+ a pore is visible in MoS2 with the graphene beneath still intact. Reprinted with permission from [119]. Copyright (2020) American Chemical Society.
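As an illustrative aside (not part of the reviewed work), the neutralisation law of equation (1) can be fitted directly to transmitted charge-state data to extract the time constant τ_n. The minimal Python sketch below does this with scipy; the interaction times and captured-electron numbers are made-up placeholder values, not measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

Q_IN = 30  # incident charge state, e.g. Xe30+

def captured_electrons(tau_fs, tau_n_fs):
    """Equation (1): n_e(tau) = q_in * (1 - exp(-tau / tau_n))."""
    return Q_IN * (1.0 - np.exp(-tau_fs / tau_n_fs))

# Hypothetical interaction times (fs, proportional to n_L / v) and mean
# numbers of captured electrons derived from exit charge-state spectra.
tau_fs = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
n_e    = np.array([8.0, 13.5, 17.5, 20.5, 24.0, 26.0])

(tau_n,), _ = curve_fit(captured_electrons, tau_fs, n_e, p0=[2.0])
print(f"fitted neutralisation time constant: tau_n ≈ {tau_n:.2f} fs")
```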
5,656
2024-02-28T00:00:00.000
[ "Materials Science", "Physics" ]
A look‐locker acquisition scheme for quantitative myocardial perfusion imaging with FAIR arterial spin labeling in humans at 3 tesla Purpose A novel method for quantitative measurement of myocardial blood flow (MBF) using arterial spin labeling (ASL) in a single breath‐hold is presented, evaluated by simulations, phantom studies and in vivo studies and tested for reproducibility and variability. Methods A flow‐sensitive alternating inversion recovery (FAIR) ASL method with Look‐Locker readout (LL‐FAIR‐ASL) was implemented at 3 tesla. Scans were performed on 10 healthy volunteers and MBF measured in three slices. The method was investigated for reproducibility by Bland‐Altman analysis and statistical measures, the coefficients of reproducibility (CR) and variation (CV) are reported. Results The MBF values for the basal, mid, and apical slices were 1.04 ± 0.40, 1.06 ± 0.46, and 1.06 ± 0.38 ml/g/min, respectively (mean ± SD), which compare well with literature values. The CV across all scans, 43%, was greater than the between‐session and within‐session values, at 16 and 13%, respectively, for the mid‐ventricular slice. The change in MBF required for detection, from the CR, was 61% between‐session and 53% within‐session for the mid‐ventricle. Conclusion This study shows the feasibility of the LL‐FAIR‐ASL method for the quantification of MBF. The statistical measures reported will allow the planning of future clinical research studies involving rest and stress measurements. Magn Reson Med 78:541–549, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. INTRODUCTION Cardiovascular magnetic resonance (CMR) imaging is a powerful tool for the investigation of common pathologies of the heart such as congenital heart disease, coronary artery disease, and cardiomyopathies (1). One of the most useful processes that CMR allows us to investigate is the perfusion of blood in the myocardial tissue. In clinical practice, this is achieved with the use of an intravenous extracellular contrast agent, such as Gadolinium-DPTA (2), where the transit of the agent through the capillary bed of the myocardium can be observed. These first-pass techniques provide clinically applicable methods of imaging perfusion, but suffer from several drawbacks, such as significant problems with image artifacts (3), cost, the risk of the contrast agent itself to patients with renal conditions (4), and the difficulty of performing multiple serial evaluations owing to the lingering presence of the contrast agent. An attractive alternative to first-pass perfusion CMR techniques and other, nuclear based modalities, such as single-photon emission computed tomography (SPECT) (5,6) or positron emission tomography (PET) (7,8), would be to use arterial spin labeling (ASL). ASL is a noninvasive technique which uses magnetic labeling strategies to allow the proton spins present in the blood water to act as an endogenous contrast agent (9,10). 
In one of the simplest ASL experiments, known as flow-sensitive alternating inversion recovery (FAIR) (11,12), two images or sets of images are collected, one in which a globally selective (GS) inversion pulse is applied before the readout (the control image) such that the spins of the myocardial tissue of interest and those of the inflowing blood are both inverted. The second image set is preceded by a slice-selective (SS) inversion through a slice of the myocardium (the tag or label image) such that the myocardial tissue in the imaging slice is again inverted but the blood water spins flowing into the image slice remain at thermal equilibrium. The effect of the presence of these inflowing spins in the SS experiment is to cause an apparent shortening of the longitudinal relaxation time (T1) of the tissue. It is the size of the difference in the observed T1 values between the two measurements that allows us to quantitatively evaluate the blood flow to the myocardium. The measurement of T1 values in the myocardium, often in the form of T1 mapping techniques, has emerged as a useful tool in CMR (13). One established, robust method often used for T1 mapping is the modified Look-Locker inversion recovery (MOLLI) sequence (14-16) and its variants, which allow for the measurement and parametric mapping of the myocardial T1 in vivo, in a single breath-hold. In this approach, an inversion pulse is applied and the relaxation of the longitudinal magnetization is sampled following an inversion time, TI, then again following the equivalent of one R-R interval, measured by electrocardiogram (ECG) gating, and so on to build up a relaxation curve allowing the quantification of T1. Different variants of the technique use different inversion-readout schemes (3(3)3(3)5 for MOLLI, 5(1)1(1)1 for ShMOLLI, and so on, indicating the number of Look-Locker images acquired and the duration of the gaps between acquisitions, all in units of heart beats). In this work, a method for measuring myocardial blood flow (MBF) noninvasively using a FAIR labeling scheme combined with a Look-Locker acquisition (LL-FAIR-ASL) (17) is presented. The described method allows for the acquisition of both FAIR inversion states (SS and GS) within a single breath-hold. The method is assessed by way of simulation, phantom study, and in vivo application. The method is further used for quantitative analysis of MBF in healthy volunteers. MBF is measured in three slices in humans with ASL for the first time and is assessed for reproducibility and variability. Sequence Design The LL-FAIR-ASL sequence was implemented on a 3 tesla (3 T) whole-body scanner (TIM Trio, Siemens, Germany). The sequence consisted of two HS8 adiabatic inversion pulses (18), one SS, one GS, each with a duration of 10 ms, time/bandwidth product R = 40 and b = 3.45. Each inversion pulse was followed by a block of five cardiac-triggered balanced steady-state free precession (bSSFP) readouts (19). The readout consisted of LISA excitation pulses, which are Gaussian-like, and included five initial ramp-up pulses and a final half-alpha "restore pulse." The two inversion blocks were separated by a gap of three heartbeats to allow for some recovery of the magnetization within a manageable breath-hold, as shown in Figure 1, thus forming a MOLLI 5(3)5 regime. Both SS and GS inversions were collected in a single thirteen-heartbeat breath-hold. The ordering scheme of the sequence within this thirteen-heartbeat breath-hold (SS-GS or GS-SS) was varied. 
Sequence parameters were based closely on those used successfully in MOLLI implementations and included a 35° excitation flip angle, an image slice thickness of 8 mm, an initial TI of 115 ms (with the following values being 115 ms + RR, 115 ms + 2RR, ...), TR/TE of 3 ms/1.5 ms, a 320 mm field of view with 75% phase resolution and 6/8 partial Fourier, a 192 × 144 image matrix (interpolated to 384 × 288) and an acceleration factor of GRAPPA 2 (20). All inversion pulses and readouts occurred during the diastolic phase of the cardiac cycle. This timing guaranteed that the signal recovery was not corrupted by through-plane bulk cardiac motion, by restricting inversions and readouts to the most stable cardiac phase. Due to the inversion recovery nature of the sequence, at least 15 s was allowed between scans to allow for full relaxation of the magnetization. Simulation Bloch simulations of the sequence were performed in MATLAB (The Mathworks Inc., Natick, MA) with the acquisition parameters defined above and not accounting for magnetization-transfer effects. The inversion pulses were simulated fully as HS8 pulses with the parameters matching those described above and the power set as per the scanner maximum output voltage to simulate the actual B1 power used on the scanner, to ensure the adiabatic condition is met. The evolution of the longitudinal magnetization, Mz, after inversion, and the influence of the readout blocks, based on these simulations, is shown in Figure 1. Simulations were performed for a range of T1 values (100-2000 ms) and heart rates (40-140 bpm) to test the robustness of the sequence against variations in these parameters. To determine an appropriate value for the thickness of the SS inversion pulse, a range of thicknesses (8-35 mm) was simulated for the stated range of T1 values, with a fixed image slice profile with a thickness of 8 mm full width at half maximum, to investigate the interaction between the HS8 inversion pulse profile and the image slice profile. The inflow was not simulated, such that the performance of the sequence in measuring only the relaxation under the two different inversion conditions could be investigated, to ensure that any difference measured in vivo was due to blood flow and not systematic error. Phantom Studies A phantom imaging study was carried out to further investigate the performance of the LL-FAIR-ASL sequence in the absence of flow. The data gathered were used to investigate the dependence of T1 and T1* on heart rate with the LL-FAIR-ASL sequence by plotting these values against a range of simulated heart rates. The length of the gap between the two inversion blocks (measured in heartbeats) was also varied to investigate the effect of imperfect recovery before the second inversion. These experiments were carried out on an integrated calibration phantom distributed by Dr. S. Piechnik, Oxford, for the HCMR study (Clinicaltrials.gov ID: NCT01915615). The phantom was designed to monitor the stability of T1 mapping techniques, and provides a T1 range of approximately 300-3000 ms and a T2 range of approximately 60-3000 ms. The phantom consists of nine test objects made of agar/carrageenan water gels doped with nickel chloride. The study was carried out on the 3 T scanner using a 32-channel body receive array. The simulated ECG trigger was varied from 40 bpm to 140 bpm in 20-bpm intervals. 
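To give a feel for the Look-Locker saturation effect addressed in the Simulation subsection above, here is a deliberately simplified, self-contained Python sketch (not the authors' MATLAB Bloch simulation): the inversion is treated as ideal and each five-pulse bSSFP block is crudely approximated as an instantaneous scaling of Mz by cos(35°)^5. It only illustrates qualitatively how the fitted apparent T1* falls short of the true T1 and shortens further with increasing heart rate.

```python
import numpy as np
from scipy.optimize import curve_fit

def look_locker_samples(t1_ms, rr_ms, n_readouts=5, ti0_ms=115.0,
                        flip_deg=35.0, pulses_per_block=5):
    """Ideal inversion followed by free T1 recovery; each readout block is
    approximated as an instantaneous partial saturation of Mz."""
    sat = np.cos(np.radians(flip_deg)) ** pulses_per_block
    ti = ti0_ms + rr_ms * np.arange(n_readouts)
    mz, t_prev, samples = -1.0, 0.0, []
    for t in ti:
        mz = 1.0 + (mz - 1.0) * np.exp(-(t - t_prev) / t1_ms)  # recovery
        samples.append(mz)       # signal sampled at this readout
        mz *= sat                # crude effect of the readout block
        t_prev = t
    return ti, np.array(samples)

def fit_t1_star(ti, s):
    """Three-parameter fit S(TI) = A - B exp(-TI/T1*)."""
    model = lambda t, a, b, t1s: a - b * np.exp(-t / t1s)
    (a, b, t1s), _ = curve_fit(model, ti, s, p0=[1.0, 2.0, 1000.0], maxfev=5000)
    return a, b, t1s

for bpm in (40, 80, 120):
    ti, s = look_locker_samples(t1_ms=1200.0, rr_ms=60000.0 / bpm)
    a, b, t1s = fit_t1_star(ti, s)
    print(f"{bpm:3d} bpm: apparent T1* ≈ {t1s:4.0f} ms (true T1 = 1200 ms)")
```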
The resulting images were loaded into MATLAB for analysis, where regions of interest (ROIs) were drawn by hand within the confines of each tube and propagated throughout an image series to calculate the T1 values of the samples. In Vivo Studies ASL image series were acquired on 11 healthy volunteers (31 ± 7 years old; 71 ± 9 kg; two female) at 3T, in basal, mid-ventricular, and apical short axis slices using the 32-channel body receive array. All volunteers were recruited in accordance with the ethical practices of our institution and their informed consent obtained. Each of the volunteers was scanned twice, on separate days, making 22 scan sessions in total. The data from one subject were discarded due to excessive motion within the image series. In 17 of the 20 successful scans, the ASL acquisitions in the mid-ventricular slice were repeated. B0 shimming was performed in a volume over the left ventricle (LV) covering all three slices. In each slice, the sequence as described in Figure 1 was run three times with the SS inversion block preceding the GS inversion block (called the SS-GS ordering scheme), then run a further three times with the GS inversion block preceding the SS (GS-SS), making for a total of six measurements, from which a single value of MBF could be calculated. An SS inversion thickness of 24 mm was used based on the results of the simulation and phantom scans described. This inversion slab and the imaging slice were positioned as described in Figure 1c. In all cases, ECG triggering and breath-holding were used. No motion correction or image registration was used. FIG. 1. A schematic of the LL-FAIR-ASL pulse sequence over 13 heartbeats for breath-holds 1, 3, and 5 (a) and breath-holds 2, 4, and 6 (b). 180° pulses (one SS, one GS, within a single breath-hold, the order of which is varied) are each followed by 5 bSSFP readouts (shown as gray boxes), separated by an R-R interval. The evolution of Mz throughout the sequence is shown in red. (c) A four-chamber view of the heart with the position of the imaging slice in red, and the position of the SS inversion slab in green. Image Analysis All image series were loaded into MATLAB and regions of interest (ROIs) were drawn by hand for both the left ventricular myocardium and the blood pool. For the myocardium, the epicardial and endocardial borders were drawn and the myocardium divided into segments as per the American Heart Association (AHA) model (21). The apparent T1 values (T1*) for the myocardium for both the SS and global inversion blocks were calculated, as was the T1 of blood from the global data, by using a three-parameter least squares fit as described by

S(TI) = A − B exp(−TI/T1*),   [1]

where S is the signal intensity recorded at inversion time TI and A, B, and T1* are the fitted parameters. Where T1 values are reported, the correction described by Deichmann and Haase for FLASH images (22),

T1 = T1* (B/A − 1),   [2]

was used, although it is only strictly applicable in the small tip angle regime. The phase data of the most fully recovered image from each inversion block were used to correct the polarity of the magnitude images on a pixel-by-pixel basis before fitting (23). 
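To make these analysis steps concrete, here is a minimal Python sketch (not the authors' MATLAB code) of the per-pixel polarity restoration and the three-parameter fit of Equation [1], followed by the Deichmann-Haase correction of Equation [2]. The array shapes, phase threshold and ROI signal values are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import curve_fit

def restore_polarity(magnitude, phase, ref_index=-1):
    """Return signed IR images: pixels whose phase differs by more than pi/2
    from the phase of the most fully recovered image (ref_index) are treated
    as still inverted and given a negative sign."""
    dphi = np.angle(np.exp(1j * (phase - phase[ref_index])))  # wrap to (-pi, pi]
    return np.where(np.abs(dphi) > np.pi / 2, -magnitude, magnitude)

def fit_t1_star(ti_ms, signal):
    """Three-parameter fit S(TI) = A - B exp(-TI/T1*); returns A, B, T1*."""
    model = lambda ti, a, b, t1s: a - b * np.exp(-ti / t1s)
    p0 = [signal.max(), signal.max() - signal.min(), 1000.0]
    (a, b, t1s), _ = curve_fit(model, ti_ms, signal, p0=p0, maxfev=5000)
    return a, b, t1s

def deichmann_haase(a, b, t1_star):
    """Equation [2]: T1 = T1* (B/A - 1), valid for small tip angles."""
    return t1_star * (b / a - 1.0)

# Hypothetical mean ROI signal over one 5-readout inversion block
# (initial TI 115 ms, R-R interval 1000 ms), polarity already restored.
ti = 115.0 + 1000.0 * np.arange(5)
roi = np.array([-0.82, 0.41, 0.68, 0.74, 0.75])
a, b, t1_star = fit_t1_star(ti, roi)
print(f"T1* = {t1_star:.0f} ms, corrected T1 = {deichmann_haase(a, b, t1_star):.0f} ms")
```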
The myocardial blood flow (MBF) for each slice was calculated from these data by the Belle quantification model (24), MBF = (λ/T1,blood)(T1,GS/T1,SS - 1), where λ = 0.92 mL/g (25) is the blood-tissue partition coefficient of water, T1,blood is the relaxation time of the blood pool, and T1,GS and T1,SS are the values of the longitudinal relaxation time calculated for the myocardium from the GS and SS experiments, respectively. For the in vivo study, the T1,GS and T1,SS values used were the observed T1*s. As has been discussed previously (26,27), where the Deichmann-Haase correction is used the ratio of the relaxation times remains constant, as the fitted values of A and B should remain the same for both the GS and SS cases. Thus, use of the correction would have no effect on the final value of MBF. The use of T1* is discussed further later in this work. Reproducibility and Variability Analysis The in vivo data were used to perform Bland-Altman analysis (28) to assess reproducibility and variation. The mean difference in each case and the value of ±1.96 times the standard deviation (SD) were calculated, which represent the upper and lower 95% confidence bounds. When normalized to the mean value of MBF, these values give the coefficient of repeatability for both the between-session (CR_BS) and within-session (CR_WS) cases. The variability of the MBF estimates was assessed by the coefficient of variation (CV, equal to the ratio of the standard deviation to the mean) for the whole sample (CV_all), and for each subject between-session (CV_BS) and within-session (CV_WS). Segmental Analysis Values of MBF were calculated for each of the 16 standard myocardial segments as defined by the AHA (21) to assess the viability of the ASL technique at the segmental level. For each segment, these are reported as the mean and standard deviation across all the volunteers. The coefficient of variation (CV_seg) is also reported for all segments. Simulation T1* data were plotted for heart rates of 40, 80, and 120 bpm and for the full range of input T1s. Figure 2 shows the simulated SS and GS T1*s to agree well in the absence of flow, with an R2 value of 0.9996, and both T1*s shorten with increasing heart rate. Figure 3 shows the results of simulations to determine an appropriate value for the thickness of the SS inversion pulse. Results for both T1 and T1* are plotted, showing the dependence of the calculation on the extent of the inversion. The values are stable down to an inversion thickness of 23 mm and then decrease with decreasing thickness. The effect is more marked for T1 than for T1*. For an input T1 of 1200 ms, approximately in the myocardial range, the T1* drops from a maximum of 928 ms with the thickest inversion slice to 860 ms with the thinnest, a drop of 7%. The calculated T1, however, drops from 1043 ms with the thickest slice to just 63 ms with the thinnest, a drop of 94%. Phantom Studies An example plot of T1 and T1* against simulated heart rate is shown in Figure 4 for a gap of three heartbeats. The dependence of T1* on heart rate can be clearly seen for both the SS-GS and GS-SS ordering schemes, decreasing steadily as the heart rate increases. The calculated T1, however, is stable for the first inversion block in the ordering scheme, whether SS or GS, with a value of 1431 ± 8 ms over the range of heart rates, but decreases rapidly with increasing heart rate, and therefore decreasing recovery time, for the second inversion block.
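As a numerical illustration of the Belle quantification above, the short sketch below evaluates the expression using the reported partition coefficient and otherwise assumed, illustrative relaxation times (none of these are measured values from this study).

```python
# Worked example of the Belle-type MBF calculation. lambda is the reported
# partition coefficient; the T1*/blood-T1 values are assumed for illustration.
lam = 0.92            # mL/g, blood-tissue partition coefficient of water (ref 25)
t1_blood_s = 1.9      # s, assumed blood relaxation time at 3T
t1_gs = 0.940         # s, assumed myocardial T1* from the global inversion
t1_ss = 0.908         # s, assumed myocardial T1* from the selective inversion

t1_blood_min = t1_blood_s / 60.0
mbf = lam / t1_blood_min * (t1_gs / t1_ss - 1.0)   # mL/g/min
print(f"MBF = {mbf:.2f} mL/g/min")
```

With these assumed inputs the result is on the order of 1 mL/g/min, i.e., the scale of the resting values reported later; note that only the ratio of the GS and SS relaxation times enters, which is why the correction discussed above leaves MBF unchanged.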
Thus, the calculation of T1 is dependent on the ordering scheme, whereas the calculation of the T1*s is not. In Vivo Studies The in vivo data, for all scans in all three slices, are presented in Figure 6. Reproducibility and Variability Analysis Values for the coefficients of variation and reproducibility, for both the between-session and within-session cases, are presented in Table 1. The CV across all scans was calculated as 39, 43, and 36% in the basal, mid-ventricular, and apical slices, respectively. The CV_BS was 14, 16, and 17% in the base, mid, and apex; and the CV_WS was 13% in the mid-ventricle. The CR_WS was 53% in the mid-ventricle, compared with CR_BS values of 72, 61, and 85% in the base, mid, and apex. Segmental Analysis The mean values and standard deviations of segmental MBF are reported in Table 2, along with the coefficient of variation, for all 16 segments. The range of mean segmental MBF values is 0.87-1.52 mL/g/min. These values are also plotted in Figure 7a, along with previous literature values of MBF in four segments in the mid-ventricle (septum and anterior, lateral, and inferior walls) using 15O PET (29). The numbering of the segments is described in Figure 7b. DISCUSSION In this study, a noninvasive method for quantitatively measuring myocardial blood flow using arterial spin labelling was developed and tested by means of simulation, phantom experiments, and in vivo studies. The variation and reproducibility of the method were then investigated in three short-axis slices, whereas previous cardiac ASL studies have been restricted to the mid-ventricle. The Bloch simulations of the LL-FAIR-ASL sequence show good agreement between the SS and GS T1*s, as presented in Figure 2. Thus, the MBF calculation is not biased by error in the calculation of the T1* ratio. This is true for the three simulated heart rates (40, 80, and 120 bpm), despite the expected shortening of T1* with increasing heart rate due to saturation from multiple high-flip-angle readouts, as seen in the simulated evolution of magnetization in Figure 1. As observed in Figure 3, the calculation of T1* and T1 is compromised at lower values of the inversion slice thickness relative to the imaging slice thickness. This occurs due to differences between the inversion profile and the image slice profile. The effect is far more marked in the calculation of T1 because of the reliance of the Deichmann-Haase correction on inversion efficiency. At lower inversion thicknesses the efficiency is low because the slice contains a mixture of inverted and noninverted spins. By comparison, T1* has little dependence on inversion efficiency, providing further motivation for its use over T1. These data informed the choice of a 24-mm inversion thickness for the in vivo application. Varying the simulated heart rate in the phantom scans demonstrates that the values of T1* are far more stable than those of T1. Although T1* is dependent on heart rate, as demonstrated in Figure 4, the T1* values calculated for the SS and GS cases exhibit the same dependence, meaning that heart rate does not bias the calculation of MBF, as it is calculated from a ratio of the GS and SS T1*s. As discussed above, the T1 correction is highly dependent on the inversion efficiency, and in each ASL ordering scheme the second of the two inversions suffers from apparently poor efficiency due to the insufficient time for relaxation between blocks that is imposed by achievable breath-hold durations.
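To make the repeatability statistics quoted above concrete, the following is a minimal Python sketch of the Bland-Altman bias, coefficient of repeatability, and coefficient of variation as defined in the Methods; the paired MBF values are synthetic, not data from this study.

```python
# Bland-Altman bias, coefficient of repeatability (CR) and coefficient of
# variation (CV) for paired MBF measurements (mL/g/min); values are synthetic.
import numpy as np

mbf_s1 = np.array([1.10, 0.95, 1.30, 1.05, 0.88, 1.20])   # session 1
mbf_s2 = np.array([1.00, 1.10, 1.18, 0.92, 1.02, 1.35])   # session 2

diff = mbf_s1 - mbf_s2
bias = diff.mean()                      # Bland-Altman mean difference
loa = 1.96 * diff.std(ddof=1)           # half-width of the 95% limits of agreement
all_vals = np.concatenate([mbf_s1, mbf_s2])
mean_mbf = all_vals.mean()

cr_bs = loa / mean_mbf * 100            # between-session CR, % of the mean MBF
cv_all = all_vals.std(ddof=1) / mean_mbf * 100
print(f"bias {bias:+.2f} mL/g/min, CR_BS {cr_bs:.0f}%, CV_all {cv_all:.0f}%")
```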
T1* was used for all MBF calculations, as the Deichmann-Haase correction is only strictly applicable in a small-angle regime and for inversion pulses applied to spins at equilibrium magnetization (22). T1* is independent of inversion efficiency and, as stated earlier, the ratio of GS to SS T1*s is the same as the ratio of GS to SS T1s. The LL-FAIR-ASL method is attractive as it provides a noninvasive alternative to SPECT, PET, and first-pass perfusion CMR. Using this method, MBF can be measured quantitatively in a single slice in six 13-heartbeat breath-holds. Each breath-hold contains both SS and GS acquisitions, which ensures that the position of the heart and the heart rate are the same for each pair. The ability to acquire an SS/GS pair in a single breath-hold is an important feature of the LL-FAIR-ASL sequence, as the measured changes in T1* are small and could easily be masked by differences in heart position or heart rate between separate breath-holds. Published values of resting MBF from comparable techniques are summarized in Table 3 (29-36). While the reproducibility and variability of similar techniques have been investigated in mice (37,38), to our knowledge this has not been carried out in human myocardium. Resting MBF values for healthy volunteers have previously been shown to be heterogeneous in studies using PET (29) and first-pass perfusion CMR (39). The high observed values of CV_all across all subjects, 39, 43, and 36% for the basal, mid, and apical slices, respectively, show that our results reflect this heterogeneous nature. This variability, calculated as the coefficient of variation, is comparable to similar measures reported for preclinical cardiac ASL (37,38). The values of CV_BS and CV_WS calculated for individual subjects were all below the values reported for CV_all, except for a single apical slice, which gave a CV_BS of 43%. However, the mean values of CV_BS for each slice, and of CV_WS for the mid slice only, are significantly lower than CV_all, with a maximum of 17%. This shows that the variation in results exhibited by an individual subject in each case is much less than that across the population as a whole. The CV_BS reflects the variation due to the method plus re-setup effects such as repositioning, re-localization, and re-shimming, whereas the CV_WS primarily reflects the methodological effects. The Bland-Altman analysis showed the mean difference, in both the between-session case for all three slices and the within-session case for the mid slice only, to be close to zero, with all bar one of the data points lying within the ±1.96 SD bounds. The values of CR give an indication of the change in MBF required to be detected above systematic errors. The between-session CRs of 72, 61, and 85% for the basal, mid, and apical slices show the level of repeatability expected across repeat scans; they indicate the change in MBF required to demonstrate a difference over time and are useful to consider when planning a longitudinal study in a patient group. The within-session CR of 53% for the mid-ventricular slice gives a useful indication of the detectable change in MBF. Thus, a change in MBF would be reliably detectable when rising from the mean mid-ventricular MBF measured with LL-FAIR-ASL, 1.08 mL/g/min, to a value of 1.74 mL/g/min between sessions or 1.65 mL/g/min within a session. During vasodilator stress, the change in MBF in nonischemic segments of the heart has been shown to be between 300 and 420% (35,40,41), or between 3.24 and 4.54 mL/g/min based on the mean mid-ventricular MBF reported here. Therefore, the expected change in global MBF under stress should be detectable with the LL-FAIR-ASL method.
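The detectable-change figures quoted above follow directly from the mean MBF and the coefficients of repeatability, as this short check shows.

```python
# Smallest MBF regarded as a real change = mean resting MBF * (1 + CR),
# using the mid-ventricular values reported in this study.
mean_mbf = 1.08                  # mL/g/min, mean mid-ventricular resting MBF
for label, cr in [("between-session", 0.61), ("within-session", 0.53)]:
    print(f"{label}: detectable above {mean_mbf * (1 + cr):.2f} mL/g/min")
```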
Segmental MBF is an important measure in disease, as it allows investigation of MBF changes in the arterial territories of the myocardium. Previous publications using PET (29) and ASL (35) have commented on the spatial heterogeneity of resting MBF within a slice. The relative dispersion, here called the coefficient of variation, was reported as 13% with PET and 68% (range, 11% to 152%) with ASL, compared with 67% (range, 49% to 98%) with LL-FAIR-ASL. This difference in error between the ASL methods and PET is also reflected in the data plotted in Figure 7 and is perhaps expected given the inherently low signal-to-noise ratio of ASL and the low resolution of the PET method, noted as a limitation by its authors and reported as 8.43 × 8.33 × 6.6 mm3 full width at half maximum, compared with 1.7 × 2.2 × 8 mm3 with LL-FAIR-ASL. However, given the expected increase in MBF under stress, the difference in perfusion reserve between ischemic and normal segments should be detectable. This study served to validate the LL-FAIR-ASL technique for measurement of resting MBF in healthy humans. Future studies will be required to validate the method in patient populations under stress and to compare the results with standard methods such as first-pass perfusion CMR. Image registration methods may be used to compensate for residual motion within breath-holds, particularly in studies involving patients. In addition, more complex models for quantification of MBF, taking account of such effects as arterial transit time, the partial inversion of the blood pool during SS inversion, and the perturbation of the inversion recovery by the readouts, could be implemented. However, these studies are beyond the scope of this work. CONCLUSIONS The LL-FAIR-ASL sequence for quantitative measurement of MBF was shown to be robust and efficient when performed in vivo. The method is completely noninvasive, requiring no contrast agent, and provides resting MBF values in healthy volunteers that compare well with the literature and display the previously reported heterogeneity within healthy volunteer groups. MBF values are reported globally in three slices with a cardiac ASL technique for the first time. The variability of the method compares favorably with published values for similar techniques used in preclinical imaging; to our knowledge, such an assessment has not previously been performed in human volunteers. The method was shown to be sensitive enough to detect the increase in MBF expected under stress conditions in future studies. The presented results should prove useful in the planning of clinical research studies using this sequence to quantitatively measure MBF at rest and under stress in healthy volunteers or in patient groups. ACKNOWLEDGMENT C.T.R. is funded by the Wellcome Trust and the Royal Society.
Regulatory mechanism of circular RNAs in neurodegenerative diseases Abstract Background Neurodegenerative disease is a collective term for a category of diseases that are caused by neuronal dysfunction, such as Alzheimer's disease (AD), Parkinson's disease (PD), and amyotrophic lateral sclerosis (ALS). Circular RNAs (circRNAs) are a class of non‐coding RNAs without the 3′ cap and 5′ poly(A) and are linked by covalent bonds. CircRNAs are highly expressed in brain neurons and can regulate the pathological process of neurodegenerative diseases by affecting the levels of various deposition proteins. Aims This review is aiming to suggest that the majority of circRNAs influence neurodegenerative pathologies mainly by affecting the abnormal deposition of proteins in neurodegenerative diseases. Methods We systematically summarized the pathological features of neurodegenerative diseases and the regulatory mechanisms of circRNAs in various types of neurodegenerative diseases. Results Neurodegenerative disease main features include intercellular ubiquitin–proteasome system abnormalities, changes in cytoskeletal proteins, and the continuous deposition of insoluble protein fragments and inclusion bodies in the cytoplasm or nucleus, resulting in impairment of the normal physiological processes of the neuronal system. CircRNAs have multiple mechanisms, such as acting as microRNA sponges, binding to proteins, and regulating transcription. CircRNAs, which are highly stable molecules, are expected to be potential biomarkers for the pathological detection of neurodegenerative diseases such as AD and PD. Conclusions In this review, we describe the regulatory roles and mechanisms of circRNAs in neurodegenerative diseases and aim to employ circRNAs as biomarkers for the diagnosis and treatment of neurodegenerative diseases. | INTRODUC TI ON Neurodegenerative diseases are slow-progressing diseases caused by progressive damage and selective dysfunction of neurons in the central and peripheral nervous system, with an age of onset usually between 50 and 70 years. 1The main pathogenesis mechanisms of neurodegenerative diseases included abnormalities in the intercellular ubiquitin-proteasome system, changes in cytoskeletal proteins, and continuous deposition of insoluble protein inclusion bodies in the cytoplasm or nucleus, such as β-amyloid deposition, neurofibrillary degeneration, and Lewy body formation. 2These factors lead to subsequent pathological changes, including oxidative stress, neuroinflammation, abnormal autophagosome/lysosomal system, and programmed cell death. 1 Circular RNAs (circRNAs) are a class of small-molecule noncoding RNAs (ncRNAs) with special biological functions that are widely expressed in the cells, tissues, and organs of multiple species, 3 such as Homo sapiens, 4 Mus musculus, 5 Caenorhabditis elegans, 6 and Drosophila melanogaster. 7CircRNAs were enriched in the synapses of neurons [8][9][10] and regulated the pathological process of neurodegenerative diseases via various signaling pathways, including the nuclear factor kappa-B (NF-κB) signaling pathway, 11 Wnt/β-catenin pathway, 12 and mitogen-activated protein kinase (MAPK) pathway. 13is review discusses the pathogenesis of neurodegenerative diseases and circRNAs in four major neurodegenerative diseases with the aim of laying the groundwork for exploring circRNAs as biomarkers for the diagnosis and treatment of age-related neurodegenerative diseases. 
| Classification of neurodegenerative diseases Several criteria, including clinical symptoms, the anatomical region of neuronal dysfunction, and major molecular or protein conformational variants, had been utilized to classify neurodegenerative diseases 1,14,15 (Table 1). For example, based on clinical symptoms, neurodegenerative diseases could be classified as Alzheimer's disease (AD), Parkinson's disease (PD), Huntington's disease (HD), and amyotrophic lateral sclerosis (ALS). In addition, neurodegenerative diseases can also be named amyloidosis, tauopathy, alpha-synucleinopathy, and transactivation response DNA-binding protein 43 (TDP-43) proteinopathy according to the major molecular or protein conformational variants. These major molecules and proteins have specific conformational properties that influence neurodegenerative diseases. | Aging and neurodegenerative diseases Aging refers to a series of degenerative changes that occur in tissues, organs, and the whole body over time as the body matures, cell function gradually declines, and cells eventually die. 16,17 Various pathogenic factors of neurodegenerative diseases were closely related to the aging process, such as abnormal deposition of proteins, DNA damage, mitochondrial dysfunction, cellular aging, metabolic dysfunction, dysregulation of nicotinamide adenine dinucleotide (NAD+) levels, oxidative stress, stress response, telomerase inactivation, and inflammation. 18 Some of these features have been observed in AD and PD. 18,19 Then, exogenous administration of NAD+ could prolong the lifespan of C. elegans and improve the pathogenesis and pathological characteristics of age-related neurodegenerative diseases. 20 Furthermore, the inhibition of mTOR signaling by rapamycin enhanced neuroprotection and inhibited cellular senescence. 21 Therefore, exploring the link between neurodegenerative diseases and aging characteristics may lead to new therapeutic strategies for such diseases. | Abnormal deposition of proteins Neurodegenerative diseases are characterized by progressive damage and selective dysfunction of neurons, associated with pathologically altered proteins deposited in the human brain as well as in peripheral organs. Abnormal conformational protein deposition impaired normal physiological processes of the neuronal system. 22 These proteins included the β-amyloid protein (Aβ), tauopathies, synuclein-alpha (SNCA), and TDP-43. 23 The deposition of abnormal proteins affected the function of neurons in brain tissue to varying degrees, resulting in cognitive and functional impairments 24 (Figure 1). Although biochemical modification of proteins is a potential therapeutic target and biomarker in neurodegenerative diseases, there is currently no effective clinical method for accurately identifying the inclusion bodies formed by abnormal proteins in the course of neurodegenerative diseases. | Function of circRNAs CircRNAs are a class of novel noncoding RNAs that are covalent closed-loop structures without the 5′ caps or 3′ poly(A) tails that linear RNAs possess, forming a continuous ring structure through covalent bonds.
25CircRNA molecules have three cyclization patterns in different organisms: exon-skipping or lariat-driven circularization, direct back-splicing or intron-pairing-driven circularization, and RNAbinding protein-driven circularization 26,27 (Figure 2A).CircRNAs were highly abundantly expressed in the brain 8 and involved in the regulation of biological processes through various regulatory mechanisms.For example, circRNAs regulated the expression of target genes through microRNAs (miRNAs) (Figure 2B).9][30] CircRNAs could also interact with specific RNA-binding proteins, thereby affecting the expression of related proteins (Figure 2C).CircRNA muscleblind (circMbl) was generated by circularization of the second exon of the Drosophila mbl gene, which in turn regulated circMbl levels by binding to the Mbl produced by its native mbl gene. 31Another circRNA glutamate ionotropic receptor AMPA type subunit 1 (circGRIA1) was abundantly expressed in the brain and could bind to glutamate receptor 1 to regulate synaptic plasticity and improve age-related synaptic function. 32Furthermore, circRNAs play important roles in cell biology as transcriptional regulators (Figure 2D).For example, circRNA human antigen R (circHuR) interacted with CCHC-type zinc finger nucleic acid binding (CNBP) to inhibit the binding of CNBP to the HuR promoter, thereby downregulating HuR expression levels and inhibiting the pathological process of gastric cancer. 33Finally, although circRNAs were noncoding proteins, some recently discovered circRNAs function as coding proteins (Figure 2E).field of ncRNAs, will reveal more important functions and mechanisms with the development of new technologies in the future. | Expression patterns of circRNAs in neurodegenerative diseases CircRNAs are involved in the regulation of synaptic function, and their expression is age-related and tissue-specific.CircRNAs had a high expression level in the mammalian brain and accumulated in neuronal tissue with age, 8,35,36 which might be due to their high stability. 37CircRNAs were highly conserved and specifically expressed in mouse, human, and Drosophila brain tissues. 8,36Eighty percent of all circRNAs in mouse neurons with high expression levels were detected in the human brain. 8CircRNAs in the Drosophila brain were the most abundant among all tissues. 36The above results show that circRNAs have similar expression patterns in the brains of different model organisms, suggesting that circRNAs can stably exist in brain development and neuronal systems and can be used as biomarkers for brain pathologies. Highly enriched circRNAs in the brain have extremely important roles in a variety of neurodegenerative diseases.Several dysregulated circRNAs have been identified in neurodegenerative diseases (Table 2).For example, 344 dysregulated circRNAs were detected in the brains of 6-month-old AD mice, of which 192 were upregulated and 152 were downregulated, and there were 244 dysregulated cir-cRNAs in the brains of 9-month-old AD mice, of which 142 were upregulated and 102 were downregulated. 38CircRNAs are abundant and dynamically expressed, suggesting their role in neurodevelopment as well as in the pathogenesis and progression of neurological diseases. | CIRCRNA S INVOLVED IN REG UL ATI ON OF NEURO DEG ENE R AT IVE DIS E A S E S CircRNAs could affect a variety of brain diseases, such as brain tumors, and acute and chronic neurodegenerative diseases, by affecting angiogenesis, neuronal plasticity, autophagy, apoptosis, and inflammation. 
39At present, multiple circRNAs involved in neurological diseases by participating in important components of the synaptic system and presynaptic and postsynaptic neurons, thereby affecting the development and biogenesis of the nervous system, such as circRNA regulating synaptic be exocytosis 2 (circRims2), 40 circRNA E74-like ETS transcription factor 2 (circElf2), 41 and circRNA SHOC2 leucine-rich repeat scaffold protein (circSHOC2) 42 (Table 3). | CircRNAs in AD AD is a neurodegenerative disease typically of insidious onset and characterized by the presence of neurotic plaques and neurofibrillary tangles. 43AD had two basic pathological features: (1) interneuronal neurofibrillary tangles composed of abnormally modified tauopathies and (2) accumulation of Aβ and formation of senile plaque deposits. 44In addition, inclusion bodies of TDP-43 protein had also been found in AD patients. 45,46To date, several circRNAs have been found to play a role in the regulation of AD pathological processes (Figure 3). | CDR1as 8][49] CDR1as had a powerful miRNA sponge function and regulated the levels of target genes by adsorbing miRNAs. 50Multiple miR-7 binding sites were found on CDR1as, which could increase the expression of the downstream target gene ubiquitin-conjugating enzyme E2A (UBE2A). 50E2A typically coordinated the clearance and degradation of amyloid and damaged proteins by the proteolysis of 26S proteasomes during the ubiquitination cycle. 51,52In sporadic AD brains, the regulation of the ubiquitination cycle involves a genetic defect that may lead to the inability to clear Aβ peptides from the cytoplasm. 53Therefore, the study of the miRNA sponge function of circRNAs is a key approach to the important epigenetic regulatory mechanisms of circRNAs in the central nervous system pathogenic gene expression program. | CircHDAC9 CircRNA histone deacetylase (circHDAC9) in the mouse hippocampus had multiple binding sites for miR-138 to promote the expression of the target gene sirtuin 1 (Sirt1), which increased the content of Aβ and further led to synaptic and learning/memory deficit damage in neuronal function. 56The expression level of circHDAC9 was decreased in the serum of patients with AD and those with mild cognitive impairment. 56These results suggest that circHDAC9/miR-138/Sirt1 plays an important regulatory role in synaptic dysfunction and abnormal Aβ splicing, providing a new therapeutic target for AD patients. | CircHOMER1 CircRNA homer protein homolog 1 (circ HOMER1) was produced by alternative splicing of the HOMER1 gene.The linear and circular expression levels of HOMER1 were significantly downregulated in the development of prefrontal cortex and pluripotent stem cellderived neurons in patients with schizophrenia and bipolar disorder. 57The HOMER1 was involved in synaptic plasticity, learning, and memory, and affected Aβ deposition in the cerebral cortex. 57rthermore, circHOMER1 had multiple binding sites for miR-651, targeting presenilin-1 (PSEN1) and presenilin-2 (PSEN2) genes to affect the development of neuronal synapses. 4Further research needs to determine how circHOMER1 affects the expression of its native gene, HOMER1. | CircCwc27 CircRNA spliceosome-associated cyclophilin (circCwc27), derived from the CWC27 gene, was highly expressed in neurons, upregulated in the temporal cortex and plasma of APP/PS1 mice and AD patients and has an impact on cognition, neuropathology, and transcriptome in APP/PS1 mice. 59CircCwc27 bound to Pur-α in the cytoplasm. 
60e interaction between Pur-α and circCwc27 was significantly enhanced in APP/PS1 mice, altering the transcription of AD-associated genes and APP proteins. 60Pur-α was involved in brain development, synaptic plasticity, and memory retention and played a key role in gene transcription regulation. 61Knockdown of circCwc27 reduced the expression level of APP and the production of Aβ. 59 Thus, circR-NAs could directly affect the regulation of AD-related pathological proteins and become promising AD therapeutic targets with clinical transformation potential. | CircAβ-a CircAβ-a, derived from the Aβ coding region of the APP gene, was detected in the brains of AD patients and nondementia control groups, and encoded a polypeptide (19.2 kDa) associated with Aβ175. 62This was a new method to synthesize Aβ proteins from circRNAs.Biogenesis of circAβ-a did not require this specific mutation, unlike a specific mutation in the APP gene during AD pathology. 63It was likely that all human individuals produced circAβ-a, suggesting that it may play a role in the pathogenesis of sporadic AD and that the translation of circRNAs could be activated and controlled under certain conditions. 34,63,64The Aβ ratio of circAβ-a translation Aβ175 processing to APP full-length protein-derived peptides might be an important indicator for diagnosing AD pathology.The multiple differentially expressed circRNAs had been identified in the hippocampus of AD model APP/PS1 mice, which were involved in the Hippo, cGMP-PKG, and cAMP signaling pathway, affecting functions such as axon guidance and platelet activation in neuronal synaptic systems. 5,38A variety of differentially expressed circRNAs act on sponges of miRNAs to regulate target genes.For example, both mmu_circ_0001125 and mmu_circ_0000672 regulated the expression of cystatin F (Cst7) by binding mmu-miR-351-5p. 5As an endosomal/lysosomal cathepsin inhibitor, Cst7 is involved in cathepsin activity in the lysosomal pathway, and reduced the phagocytic ability of activated microglia to promote clearance of Aβ. 65 Therefore, the ceRNA regulatory network is an important epigenetic data of AD pathology, suggesting that various dysregulated circRNAs in AD pathology can serve as important biomarkers and therapeutic targets for neurodevelopment. | CircRNAs in Parkinson's disease PD was a common progressive neurodegenerative disorder that was characterized by tremors and bradykinesia. 66The main pathological feature of PD was the degeneration of the Lewy body or the Lewy nerve synapses due to SNCA aggregation, resulting in the formation of filamentous cytoplasmic inclusions, accompanied by lesions in the substantia nigra and dysregulation of dopamine homeostasis. 67st circRNAs regulate PD through a miRNA mechanism (Figure 4). | Circzip-2 CircRNA zinc-regulated transporters and iron-regulated transporterlike protein-2 (Circzip-2), derived from zip-2 genes, was significantly downregulated in the PD model of C. elegans, and formed a competitive inhibitory cleavage with its parental gene zip-2. 6The zip-2 protein directly affected the expression level of SNCA and the content of reactive oxygen species. 6In addition, circzip-2 mitigated the pathological process of PD by binding to miR-60.The protective mechanism of circzip-2 against PD could be applied to the design of specific therapeutic molecules and the development of effective diagnostic tools for neurodegenerative diseases. 
| CircSNCA CircRNA alpha-synuclein (circSNCA), sourced from the SNCA gene was downregulated in the 1-methyl-4-phenylpyridinium (MPP+)-treated SH-SY5YPD cell model.CircSNCA regulated the expression of SNCA mRNA by binding to miR-7. 68SNCA protein was related to neurotoxicity and anti-apoptotic pathways and was the most important protein in PD pathological abnormal protein deposition.Therefore, downregulation of circSNCA could affect the expression of SNCA mRNA in the native gene, thereby reducing neuronal apoptosis and inducing autophagy in PD patients. 68 | CircSAMD4A CircRNA sterile alpha motif domain containing 4A (circSAMD4A) originated from SAMD4A and promoted apoptosis and autophagy. 69Methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) and MPP+ were neurotoxins that affected the pathological process of PD. 70 Human neuroblastoma cells (SH-SY5Y) were treated with MPTP and MPP+ to establish a PD cell model.70 In this model, circSAMD4A regulated the levels of MPTP and MPP+ and participated in the apoptosis and autophagy of dopaminergic neurons through the AMPK/mTOR pathway 71 .Furthermore, the binding of miR-29c-3p to circSAMD4A attenuated the cytotoxicity of MPTP or MPP+ in dopaminergic neuronal cells 71 .Therefore, circSAMD4A might be as a promising diagnostic biomarker and therapeutic target for PD. | CircDLGAP CircRNA DLG-associated protein 4 (circDLGAP4) expression was decreased in MPTP-induced PD mouse model and MPP + -intoxicated PD cell models.In vitro, circDLGAP4 might be promoted in the development of PD by affecting SH-SY5Y and MN9D cell viability, apoptosis, mitochondrial damage, and autophagy. 72In addition, circDLGAP4 exerted its function by regulating miR-134-5p. 73CircDLGAP4/miR-134-5p also regulated the activation of CREB signaling and the expression of CREB target genes. 73CircDLGAP4/miR-134-5p/CREB axis could explain the pathogenesis of PD in human and mouse models. | CircTLK1 CircRNA tousled-like kinase 1 (CircTLK1), derived from the TLK1 gene, regulated tumor cell proliferation, metastasis, and myocardial ischemia/reperfusion injury. 74,75Then, circTLK1 was significantly increased in MPTP-induced PD mouse models. 76Knockdown of circTLK1 inhibited apoptosis and toxicity in PD pathological mouse model cells and improved cell viability. 76In addition, circTLK1 acted as a miR-26a-5p sponge to regulate its target gene [death-associated protein kinase 1 (DAPK1)] and improved neurological dysfunction caused by middle cerebral artery occlusion and reperfusion in vivo. 76e circTLK1/miR-26a-5p/DAPK1 regulatory axis highlights the role of circTLK1 in the pathogenesis of PD and provides a new theoretical basis for the development of effective treatments for PD. | Circ_0070441 Circ_0070441 originated from the SNCA gene, and its expression was upregulated in the PD cell model. 68In a PD model constructed with MPP + -treated SH-SY5Y cells, circ_0070441 upregulated the expression level of the target gene insulin receptor substrate 2 (IRS2) by adsorbing miR-626. 77The IRS2 protein is involved in the pathological process of PD through affecting the Aβ level and reducing the cellular neurotoxins. 78In the regulatory mechanism of PD, the ceRNA network involved in circ_0070441 may provide a new target for PD treatment. | CircSLC8A1 CircSLC8A1 (solute carrier family 8 member A1) was increased in cultured cells, and exposed to the oxidative stress-inducing agent paraquat. 
79CircSLC8A1 had seven binding sites for miR-128 and is strongly bound to the microRNA effector protein Argonaute 2(Ago2).HD. 84 More CAG repeat expansions were generally associated with an earlier age of onset, and male sperm had a greater potential for repeat variation.Therefore, HD was often associated with genetic predisposition in men. 85,86[89] One study found 19 downregulated and 4 upregulated circRNAs between mouse PC12 cell lines expressing wild-type huntingtin protein as a control and mHTT protein. 90In this study, 16 downregulated circRNAs came from the same chromosomal region of the Rere gene, and the remaining three downregulated circRNAs came from other chromosomal regions. 90From the analysis of the mechanism, 15 of the 23 differentially expressed circRNAs were related to the MAPK pathway, and 16 were involved in the dopaminergic synaptic pathway.The pathological mechanism of dopamine in HD has been widely demonstrated. 90The MAPK pathway had important effects on cell proliferation and division, the stress response, differentiation, and apoptosis; c-Jun N-terminal kinase (JNK) and p38 in the MAPK pathway were the main signaling factors involved in the pathogenesis of HD. 91 In summary, circRNAs might regulate the pathological process of HD through dopaminergic synapses and MAPK pathways, and their specific biological functions require further exploration. Another study found that circHTT 2-6 derived from the exons 2, 3, 4, 5, and 6 of the HTT gene was enriched in in the frontal cortex of HD patients. 92When circHTT (2-6) was overexpressed in HEK293 and SH-SY5Y cells, no change in the CAG repeat region of HD was detected, which decreased the cell proliferation, nuclear area, and altered subcellular localization of the HTT protein. 92These results demonstrate the overexpression of circHTT undergoes HD-related pathological changes, but its specific functional mechanism needs further study. | CircRNAs in ALS ALS is a heterogeneous neurodegenerative disease whose main pathological features are the degeneration of upper motor neurons that project to neurons in the brainstem and spinal cord and lower motor neurons that the brainstem or spinal cord projects onto muscles.ALS patients had features of TDP-43 proteinopathy, such as loss of TDP-43 in the nuclei of neuronal cells and cytoplasmic aggregated with skeletal-like or dense morphology in residual motor neurons. 93 addition, circRNAs have been involved in the regulation of fused in sarcoma (FUS) proteins during ALS. 94For example, circ-Hdg-frp3 took part in protecting neuronal function and integrity in neuronal cells, whereas mutant FUS protein (mtFUS) affected the localization of circ-Hdgfrp3 under oxidative stress conditions.95 The mtFUS could lead to abnormal accumulation of cytoplasmic FUS protein and increased mitochondrial translocation, resulting in excessive mitochondrial fission and damage, eventually leading to neuronal death, which was a major pathological feature in some ALS patients. 12In summary, circRNAs are likely to be involved in the pathological regulation of ALS. Many dysregulated circRNAs were also found in ALS patients, including 151 downregulated circRNAs, most of which were involved in the pathological process of ALS. 96Hsa_circ_0000567 derived from the SETD3 gene, and SETD3 was the histone methyltransferase that regulated muscle differentiation in mouse. 
97 Hsa_circ_0023919 originated from the PICALM gene, which is involved in clathrin-mediated endocytosis at neuromuscular junctions; single-nucleotide polymorphisms upstream of this gene were associated with AD. 98 Simultaneously, the authors evaluated the correlation between the expression levels of the circRNAs and their potential association with clinical data. 96 Three circRNAs (hsa_circ_0000567, hsa_circ_0023919, and hsa_circ_0088036) were negatively correlated with patient age at the time of blood collection. 96 The sensitivity and specificity of these three circRNAs were as high as 90% in patients with ALS, significantly higher than those of the most representative ALS biomarkers, phosphorylated neurofilament heavy chain and neurofilament light chain. 96 Therefore, circRNAs have great potential as biomarkers for ALS diagnosis and treatment.

CircRNA zinc finger protein 609 (circ-ZNF609), which encoded a protein, had an open reading frame, and the basic elements of its start and stop codons were the same as those of linear transcripts. 34 In summary, circRNAs mainly function by acting as miRNA sponges, regulating transcription, binding proteins, and translating polypeptides.

CircCORO1C regulated the expression of amyloid precursor protein (APP) and SNCA by acting as a sponge for miR-105. 4 APP and SNCA were important risk factors for the course of AD. 58 CircCORO1C reduced the accumulation of Aβ and SNCA in neurons and alleviated the pathological process of AD.

(... and p16) and inflammatory factors (tumor necrosis factor-α and NF-κB), which reduced the expression of the AD marker proteins tau and Aβ, thereby delaying the pathological process of AD. 54 The process of aging was often accompanied by the pathological processes of neurodegenerative diseases, and many related features of aging have been found in neurodegenerative diseases. 55

4.1.8 | The other circRNAs - competitive endogenous RNA in AD

80-82 Oxidative stress is considered an important cause of many neurodegenerative disorders. Thus, circSLC8A1 can regulate oxidative stress activation, which has great significance in the pathology of PD.

TABLE 1 Classification of neurodegenerative diseases.
FIGURE 1 Protein deposits in various neurodegenerative diseases. AD, Alzheimer's disease; HD, Huntington's disease; PD, Parkinson's disease; ALS, amyotrophic lateral sclerosis; MSA, multiple system atrophy; FTD, frontotemporal dementia.
FIGURE 2 Molecular functions of circular RNAs (circRNAs). (A) CircRNA molecules produce three cyclization patterns: (1) exonic circRNAs, (2) exon-intron circRNAs, and (3) intronic circRNAs; (B) circRNAs act as miRNA sponges to regulate the activity of miRNA target genes; (C) circRNAs bind to RNA-binding proteins; (D) circRNAs directly participate in protein translation, affect protein-coding function, and regulate transcription; (E) circRNAs encode peptides or proteins and affect their biological functions.
FIGURE 3 Regulatory mechanisms of circRNAs in AD. Red words indicate the upregulated noncoding RNAs and blue words indicate the downregulated noncoding RNAs. CircRNA, circular RNA; miR, microRNA.
TABLE 3 Function of circRNAs in neurodegenerative diseases. Abbreviations: AD, Alzheimer's disease; ALS, amyotrophic lateral sclerosis; HD, Huntington's disease; PD, Parkinson's disease.

| CONCLUSION The brain is the most plastic organ, and its circuits are tightly regulated and modified throughout an organism's lifespan. The high abundance of circRNAs in the brain indicates that they are involved in the regulation of the nervous system. The diversity of causative factors in neurodegenerative diseases means that blocking one or two pathways cannot significantly reduce overall neuronal dysfunction and loss. With the continuous deepening of research on neurodegenerative diseases, multi-channel and multi-targeted treatments can improve the symptoms of patients, regulate brain function, and play a therapeutic role. However, the course of neurodegenerative diseases often involves cognitive impairment in the middle and late stages, when treatment can only slow down the development of the disease and cannot fundamentally reverse the damage to neural networks. Therefore, the high stability and tissue specificity of circRNAs make them important pathological detection markers, with important guiding significance for the early diagnosis and treatment of neurodegenerative diseases. Current research mainly focuses on circRNAs in the neurodegenerative diseases AD and PD, and research on circRNAs in other neurodegenerative diseases needs to be supplemented. However, the powerful role of circRNAs in AD and PD is likely to be important in other pathologies. Further studies on circRNA structure and function will improve our understanding of the pathogenesis of neurological diseases and lead to the development of new diagnostic and treatment methods.

AUTHOR CONTRIBUTIONS Feng Xiao and Deying Yang: concept and design, literature search, manuscript preparation, manuscript editing; Jiamei Li, Siqi Wang: literature search, manuscript editing and review; Deying Yang, Zhi He: conceptual design, writing guidance, manuscript review; Mingyao Yang, Xiaolan Fan: conceptual design, directed review; Taiming Yan: manuscript review. Feng Xiao and Deying Yang edited the manuscript, and all authors approved the final version of the review.
Flow Behavior through Porous Media and Displacement Performance of a SILICA/PAM Nanohybrid: Experimental and Numerical Simulation Study Nanoparticles (NPs) have been proposed as additives to improve the rheological properties of polymer solutions and reduce mechanical degradation. This study presents the results of the retention experiment and the numerical simulation of the displacement efficiency of a SiO2/hydrolyzed polyacrylamide (HPAM) nanohybrid (CSNH-AC). The CSNH-AC was obtained from SiO2 NPs (synthesized by the Stöber method) chemically modified with HPAM chains. Attenuated total reflection-Fourier transform infrared spectroscopy, field emission gun-scanning electron microscopy, X-ray diffraction, and thermogravimetric analysis were used to characterize the nanohybrid. The injectivity and dynamic retention tests were performed at 56 °C in sandstone cores with a porosity of ∼26% and permeabilities of 117 and 287 mD. A history matching of the dynamic retention test was performed to determine the maximum and residual adsorption, IPV, and residual resistance factor (RRF). A laboratory-scale model was used to evaluate the displacement efficiency of CSNH-AC and HPAM through numerical simulation. According to the results, the nanohybrid exhibits better rheological behavior than the HPAM solution at a lower concentration. The nanopolymer sol adsorption and IPV (29.7 μg/g rock, 14.5) are greater than those of the HPAM solution (9.2 μg/g rock, 10), which was attributed to the difference between the rock permeabilities used in the laboratory tests (HPAM: 287 mD and CSNH-AC: 117 mD). The resistance factor (RF) of both samples gradually increases with the increase in shear rate, while the RRF slightly decreases and tends to balance. However, the nanopolymer sol exhibits greater RF and RRF values than the polymer solution due to the strong flow resistance of the nanohybrid (higher retention in the porous media). According to the field-scale simulation, the incremental oil production could be 295,505 and 174,465 barrels for the nanopolymer sol and the HPAM solution, respectively (compared to waterflooding). This would represent an incremental recovery factor of 11.3% for the nanopolymer sol and 6.7% for the HPAM solution. INTRODUCTION Polymer injectivity determines how easily a polymer solution can be injected and propagated through a reservoir formation. 5 It is a critical characteristic because a reduction in injectivity can affect the cash flow of a polymer flooding project due to high pumping costs or delays in oil production. 6 Polymer rheology and retention are the main factors that reduce injectivity. Polyacrylamide solutions have exhibited dilatant behavior when propagating in porous media due to their elastic character, 7−10 which increases the resistance to flow. 11 However, if the stretch rates that cause the dilatant behavior are high enough, then polymer chains can suffer mechanical degradation, yielding viscosity loss. Mechanical entrapment occurs when polymer molecules are large relative to the size of the pores. 2,18,19 Some mechanisms observed in mechanical entrapment are hydrodynamic retention, 15,20,21 bridging adsorption, 22,23 and trapping in dead-end pores. 24 Hydrodynamic retention is caused by osmotic forces, which temporarily trap polymer molecules in stagnant regions of the porous media. 25,26 Adsorption occurs due to the interaction between polymer molecules and the rock surface (especially between the polar groups of the polymers and the polar points available on the rock surface).
27,28 Adsorption affects the solution concentration and the effectiveness of the mobility control at the displacement front because the polymer is removed from the injected fluid. The amount of polymer adsorbed strongly depends on polymer concentration, 29,30 polymer charge, 31 permeability, 7,17,32 clay and iron content, 33,34 salinity, and pH. 35,36 Static and dynamic methods are used to measure polymer adsorption in laboratory-scale experiments. In the static method, the polymer concentration is measured before and after soaking sand or crushed rock samples in the polymer solution. Polymer adsorption is determined by dividing the loss of mass from the solution by the weight of the sand or crushed rock sample. This method is simple and inexpensive. However, the results may not represent field values because the surface area and the minerals exposed to the polymer may be different from those available in dynamic experiments, 29 the wettability of the crushed rock may be different from that of the reservoir rock, 37 and the polymer that can be mechanically entrapped is not measured. 33 There are different methods for measuring the adsorption under dynamic flow conditions. In the first method, a polymer solution is injected at a constant frontal advance velocity into a linear core or sand pack until the normalized effluent concentration reaches unity. In the second method, polymer injection is switched to water or brine injection after the normalized effluent concentration reaches unity, and the mobile polymer is displaced from the pore space. 29 Polymer retention in both methods is determined by the material balance. Another method is the one proposed by Loetsch et al., 38 Hughes et al., 39 and Osterloh and Law. 40 In this method, a slug of polymer solution is injected into a linear core or sand pack with a tracer. After the normalized concentration for both the polymer and the tracer reaches unity, the injection is switched to brine or water. Subsequently, a second slug of the polymer is injected with a tracer. Polymer retention and inaccessible pore volume (IPV) are determined by using the front part of the effluent curves during the two injection stages. IPV is calculated as the difference in area between the polymer-breakout curve and the tracer-breakout curve during the second injection stage. In the last method (concentration profile method), two polymer slugs are injected following the same procedure as explained previously. Adsorption is calculated from the cutoff values of the normalized concentration at 0.5 for both polymer slugs. IPV is calculated as 1 minus the value of the normalized concentration at 0.5 of the second polymer slug. 26 According to results presented by Abdullahi et al. 47 and Maghzi et al., 48 the NPs prevent the electrical shielding effect caused by the presence of cations in the polymer solution because the ion−dipole interactions occur between the cations and the oxygen on the NP surface instead of between the cations and the amide groups of the polymer molecules. Few studies on the flow behavior of polymer nanohybrids through porous media have been reported. For this reason, more investigations are needed to improve our knowledge of the underlying enhanced oil recovery (EOR) mechanisms of polymer nanohybrids.
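Referring back to the dynamic retention and IPV measurement methods described above, the following is a hedged Python sketch of the breakthrough-curve bookkeeping they rely on: locating the pore volume at which the normalized concentration reaches 0.5 and forming a material-balance estimate of retained mass. The exact working equations differ between the cited methods, and all numbers here (effluent curves, pore volume, rock mass, concentration) are illustrative assumptions rather than data from this study.

```python
# Breakthrough-curve bookkeeping for a polymer/tracer slug; data are synthetic.
import numpy as np

pv = np.linspace(0.0, 3.0, 301)                       # pore volumes injected
tracer  = 1 / (1 + np.exp(-(pv - 1.00) / 0.05))       # tracer front at ~1.0 PV
polymer = 1 / (1 + np.exp(-(pv - 1.15) / 0.08))       # polymer front delayed by retention

def pv_at_half(curve):
    """PV injected when the normalized effluent concentration reaches 0.5."""
    return np.interp(0.5, curve, pv)

# Net lag of the polymer front: retention delays it while IPV advances it;
# the methods above separate the two effects using the second slug.
front_delay = pv_at_half(polymer) - pv_at_half(tracer)

# Material-balance estimate of retained mass for the first slug:
c0, pore_vol, rock_mass = 750.0, 20.0, 160.0          # ppm (ug/mL), mL, g (assumed)
dpv = pv[1] - pv[0]
lost_pv = np.sum(tracer - polymer) * dpv              # PV of solution not produced
retention = lost_pv * pore_vol * c0 / rock_mass       # ug of polymer per g of rock
print(f"front delay = {front_delay:.2f} PV, retention ~ {retention:.1f} ug/g rock")
```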
For the reasons stated above, the aim of this study is to evaluate the effect of surface-modified SiO 2 NPs on the flow behavior in porous media and the oil displacement efficiency of the HPAM solution.Displacement tests were performed to quantify the polymer retention, the IPV, and the resistance and residual resistance factors (RRFs) of the HPAM solution and the nanopolymer sol.The history matching of the dynamic retention test was performed by using the STARS module of CMG.The history matching parameters were used to predict the displacement efficiency for injecting 0.3 PV of 550 ppm of nanopolymer sol and 750 ppm of HPAM solution. 2.2.Nanohybrid Synthesis.The detailed synthesis of the nanohybrid used in this work has been reported previously by the authors. 60The SiO 2 NPs were prepared by adding tetraethyl orthosilicate (TEOS, 1 mL) under vigorous stirring to a solution of ammonium hydroxide and ethanol (1:5 ratio) at 90 °C.After 3 h, the SiO 2 NPs were recovered by centrifugation and dried for 24 h at 90 °C.The SiO 2 NPs were modified with APTES (nSiO 2 -APTES) following the proce-dure proposed by Chen et al. (2009). 61The nanohybrid (CSNH-AC) was obtained by dispersing 2 g of nSiO 2 -APTES in a THF/water solution at 400 rpm and then adding 3 g of HPAM powder.The reaction was carried out at room temperature for 24 h.Thereafter, CSNH-AC was recovered by centrifugation and washed with 2-propanol.Finally, the product was dried at 60 °C for 24 h. 2.3.Nanohybrid Characterization.The size and morphology of the CSNH-AC were characterized through field emission gun−scanning electron microscopy (FEG− SEM) (QUANTA FEG 650 model, Thermo-Fisher Scientific, USA) at a high vacuum and an accelerating voltage of 25 kV.The X-ray diffraction pattern (XRD), used for structural analysis, was performed with a Bruker D-8 A25 DaVinci X-ray diffractometer (D8 ADVANCE, Bruker, Billerica, MA, USA) with CuKα radiation and a LynxEye detector at a voltage of 40 kV.FTIR spectra were collected on a Bruker Tensor 27 FTIR spectrometer (Alpha, Bruker, USA).Thermogravimetric analysis (TGA) was performed using a TA2050 TGA analyzer (TA Instruments, INC., USA).For the TGA measurements, a mass of 5 mg of the nanohybrid or the HPAM was heated from 25 to 800 °C at a heating ramp of 10 °C/min in a nitrogen atmosphere. 2.4.Fluid Preparation and Filtration.The formation and injection brine composition are presented in Table 1.Each brine was filtered through a 5.0 μm MCE membrane filter (Merck Millipore, USA) before use.The formation brine was employed in the core saturation and permeability measurements, while the injection water was used for the preparation of the nanopolymer sols and the polymer solutions.For this, a mass of 5 g of HPAM powder or nanohybrid was added into the injection water to prepare the stock solutions of 5000 ppm, respectively.Each sample was stirred at 200 rpm for 48 h before dilution into the required concentration.30 ppm of KSCN were dissolved into the polymer solution or the nanopolymer sol to determine IPV and adsorption. Rheology of the Nanopolymer Sol and Polymer Solution. 
The flow curves of the nanopolymer sols and the polymer solutions were measured on an MCR502 rheometer (Anton Paar, Austria) with a concentric cylinder geometry (the measuring bob and measuring cup had radii of 13.329 and 14.463 mm, respectively) over the range 4−424 s−1. A strain amplitude of 1% was selected to ensure that the samples fell within the linear viscoelastic region. The rheological behavior of the samples was well described by the Carreau−Yasuda model. 29 The uncertainty of the reported values remained between ±1 and 4%. 2.6. Core Flooding Tests at 100% Sw. The properties of the sandstone plugs are listed in Table 2. These properties were measured by following the procedures described by McPhee et al. 62 All tests were performed at 56 °C because it is the reservoir temperature of the Colombian field selected to evaluate the performance of the synthesized nanohybrid. The polymer solutions and nanopolymer sols were filtered and preheated before injection. For the preshearing process, 300 mL of sample were pressurized with nitrogen and passed through a capillary (ID 1/8″). 2.6.1. Resistance Factor and Residual Resistance Factor. First, the sandstone core plugs were vacuumed and saturated with formation brine. After that, the plug was mounted in the setup, the formation brine was injected at different flow rates (0.067, 0.167, 0.333, and 0.5 mL/min), and the corresponding pressure drops were recorded. The absolute permeability was calculated by Darcy's law. The salinity of the plugs was changed by injecting different formation/injection brine ratios until the plugs were fully saturated with the injection brine. Then, the brine injection continued at 0.067, 0.167, 0.333, and 0.5 mL/min, and the corresponding pressure drops were recorded. Second, the polymer solutions (750 and 950 ppm) or the nanopolymer sols (550 and 750 ppm) were injected at the same flow rates used in the previous step, followed by brine injection. All the stable pressure drops were recorded and used to calculate the RF and the RRF, 13 which are defined as RF = ΔPp/ΔPw (1) and RRF = ΔPwp/ΔPw (2), where ΔPw is the pressure drop during brine injection, ΔPwp is the pressure drop during brine injection after polymer flooding, and ΔPp is the pressure drop during polymer or nanopolymer sol injection. Dynamic Polymer Adsorption and IPV. The material balance method was used to measure the adsorption and IPV of the 750 ppm polymer solution and the 550 ppm nanopolymer sol. For this, each sample with 30 ppm of KSCN tracer was injected (until C/Co in the effluents was equal to 1), followed by injection of brine (until the polymer concentration in the effluents was close to zero). All fluids were injected at a rate of 0.067 mL/min. The effluents were collected to determine the KSCN, HPAM, and CSNH-AC concentrations through UV−vis analysis (DR5000, Hach, USA). For the UV−vis measurements, two 1 mL aliquots of the effluents were taken and treated with iron chloride hexahydrate (FeCl3·6H2O, Merck, USA) to determine the KSCN concentration, and with sodium hypochlorite and glacial acetic acid to determine the HPAM and CSNH-AC concentrations. 63 The procedure was repeated for the second batch of the polymer/nanopolymer sol and tracer solution. IPV and adsorption were calculated from eqs 3 and 4, 26 which are evaluated at the pore volumes injected at which C/Co = 0.5 for the first and second polymer slugs. The shear rate (γ) in porous media was calculated from eqs 5 and 6. 64,65 Finally, the effective viscosity of the polymer solution in porous media was determined from eq 7. 66
In these equations, C is the polymer concentration in the effluent, Co is the initial polymer concentration, Q is the flow rate (cm3/min), A is the surface flow area of the porous media (cm2), ϕ is the porosity (fraction), K is the absolute permeability (cm2), Rp is the pore radius (cm), α is the formation shape factor, assumed to be 1 (dimensionless) for the sandstone plugs, μeff is the effective viscosity of the polymer (cP), and μw is the viscosity of water (cP). The concentrations of the nanopolymer sol and the HPAM solution were selected to reach mobility ratios close to one (1.2 and 1.6, respectively) to minimize viscous fingering in the core flooding tests. The mobility ratios were calculated from eq 8, M = (Krw/μw)/(Kro/μo) (8), where Krw is the water-effective permeability, Kro is the oil-effective permeability, μw is the water viscosity, and μo is the oil viscosity. 2.7. Numerical Simulation. The numerical simulation was performed using a laboratory-scale model built in commercial software (CMG STARS). Also, the CMOST module was used to perform the history matching of the laboratory tests, combining advanced statistical analysis and machine learning. The fundamental grid dimensions were 100 × 5 × 5 for X, Y, and Z, respectively, and the total number of blocks was 2500. The properties of each model are summarized in Table 3. Some of these data correspond to the results obtained from the rheological and rock-fluid experiments for the nanohybrid sol and the polymer solution on the laboratory scale. The producer and injector wells were placed at the edges of the numerical grid, representing the inlet and outlet of the core holder. Although laboratory cores are physically cylindrical, the numerical models were built in Cartesian coordinates by adjusting the surface flow area and the pore volume (Figure 1). The history matching provides considerable understanding of the transport mechanisms. For this reason, the model input data were adjusted until the minimum difference between the results of the simulated model and the laboratory data was obtained. The methodology is based on the correct representation of the phenomena occurring during the core flooding tests and the adjustment of some uncertain properties. 67 It is a typical inverse problem in which the result is known (the laboratory production and pressure history), and the input parameters that allow the model to reproduce this result must be determined. In this case, the input parameters adjusted were the permeability reduction factor, the dynamic adsorption, and the IPV. These parameters were selected as a result of the sensitivity analysis because they had the greatest impact on the objective functions of the history-matching process. Once the best match was obtained, the new values of the fitting parameters were used to predict the displacement efficiency for the CSNH-AC and HPAM solutions in the same laboratory-scale model, to evaluate the performance of both products under the same conditions. Oil recovery by waterflooding was used to represent a typical base scenario on the laboratory scale, followed by chemical flooding. To evaluate the volumetric sweep efficiency of the CSNH-AC and HPAM injection on a field scale, an inverted five-spot injection pattern was built in a box model (Figure 2). The fundamental grid dimensions were 47 × 47 × 10 for X, Y, and Z, respectively, and the total number of blocks was 22,090. The PVT properties of water and dead oil are listed in Table 4. The porosity and permeability were defined through geostatistics. The pattern has an area of 20 acres, a pore volume of 3.75 million barrels, and a volume of oil in place of 2.62 million barrels. One injection well at the center of the pattern area was controlled by 600 BPD as the maximum injection rate and 3000 psi as the maximum bottom-hole pressure. Four producer wells around the injector were regulated by 1000 BPD as the maximum liquid rate and 1000 psi as the minimum bottom-hole pressure. The chemical injection was evaluated as a tertiary recovery method by injecting a 0.1 PV slug followed by chase water. The goal of injecting a small slug was to demonstrate that the performance of the CSNH-AC is better than that of the HPAM under this condition.
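Referring back to the Carreau−Yasuda description of the flow curves (Section 2.5), the following is a minimal Python sketch of fitting that model to a viscosity sweep. The shear-rate range matches the rheometer sweep (4−424 s−1), but the viscosity points and initial guesses are illustrative, not the measured data of this study.

```python
# Fit of the Carreau-Yasuda model to a synthetic flow curve.
import numpy as np
from scipy.optimize import curve_fit

def carreau_yasuda(gamma, eta0, eta_inf, lam, a, n):
    """eta(gamma) = eta_inf + (eta0 - eta_inf)*[1 + (lam*gamma)^a]^((n-1)/a)."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * gamma) ** a) ** ((n - 1.0) / a)

gamma = np.array([4.0, 10.0, 25.0, 60.0, 150.0, 424.0])   # shear rate, 1/s
eta   = np.array([12.0, 10.5, 8.2, 6.0, 4.1, 2.6])        # viscosity, cP (synthetic)

p0 = (13.0, 1.0, 0.2, 2.0, 0.5)            # eta0, eta_inf, lambda, a, n
popt, _ = curve_fit(carreau_yasuda, gamma, eta, p0=p0,
                    bounds=([0, 0, 0, 0.1, 0], [100, 5, 10, 5, 1]))
for name, val in zip(["eta0", "eta_inf", "lambda", "a", "n"], popt):
    print(f"{name} = {val:.3f}")
```

The fitted zero-shear and infinite-shear viscosities bracket the measured curve, which is the behavior described for the nanopolymer sol and HPAM solution: a higher low-shear viscosity that decays toward the brine viscosity at high shear rates.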
To evaluate the volumetric sweep efficiency of the CSNH-AC and HPAM injection on a field scale, an inverted 5-point injection pattern was built in a box model (Figure 2).The fundamental grid dimensions were 47 × 47 × 10 for X, Y, and Z, respectively, and the total number of blocks was 22,090.The PVT properties of water and dead oil are listed in Table 4.The porosity and permeability were defined through geostatistics.The pattern has an area of 20 acres, a pore volume of 3.75 million barrels, and a volume of oil in place of 2.62 million barrels.One injection well at the center of the pattern area was controlled by 600 BPD as the maximum injection rate and 3000 psi as the maximum bottom hole pressure.Four producer wells around the injector were regulated by 1000 BPD as the maximum liquid rate and 1000 psi as the minimum bottom hole pressure.The chemical injection was evaluated as a tertiary recovery method by injecting a 0.1 PV slug followed by chase water.The goal of injecting a small slug was to demonstrate that the performance of the CSNH-AC is better than that of the HPAM under this condition. Nanohybrid Characterization. 3.1.1. SEM and XRD Results.The SEM micrographs of CSNH-AC, HPAM, and nSiO 2 -APTES are presented in Figure 3.As shown in the images, the nSiO 2 −APTES have a spherical morphology and size of 150 nm (Figure 3a).The HPAM polymer has an amorphous morphology (Figure 3b), while the nanohybrid (Figure 3c) exhibits a well-formed structure with the NPs attached to the polymer at specific sites.The micrograph of the nSiO 2 particles used to synthesize the nSiO 2 −APTES are not shown in Figure 3, but they have a spherical morphology and size of 85 nm. Figure 4 shows the diffractograms of nSiO 2 -APTES, CSNH-AC, and HPAM.The spectra of HPAM exhibit two broad halo peaks located at 2θ values of 23 and 40°.The spectra of the nSiO 2 -APTES exhibit a broad peak centered at around 2θ = 21.6°.Upon hybridization of the nSiO 2 -APTES with HPAM, this peak signal shifted to higher 2θ values.This was attributed to the attachment of the organic functional groups of the polymer onto the surface of NPs, which tends to reduce the scattering power of the amorphous silica. In conclusion, all XRD patterns are typical of amorphous materials because the atoms are randomly distributed in threedimensional space.In this case, the X-rays were scattered in many directions, giving rise to a halo distributed over a wide range of 2θ, not following Bragg's Law. 3.1.3.TGA Results.TGA curves of CSNH-AC and HPAM are displayed in Figure 6.Both curves have three stages according to the peaks associated with the mass changes, which were identified as stage 1, from room temperature to 270 °C; stage 2, between 270 and 350 °C; and stage 3, >350 °C.The weight loss in stage 1 was 18% for HPAM and 21.1% for CSNH-AC, corresponding to the remaining adsorbed water or volatile solvents in each sample.The weight loss in stage 2 was 10.5% for HPAM and 9.9% for CSNH-AC, and it was assigned to the thermal decomposition of the amide and carboxylate groups of the polymer.Stage 3 corresponds to the decomposition of the C−C bonds from the HPAM backbone. 71n our previous work, 72 it was reported that the weight loss between 350 and 600 °C of the nSiO 2 -APTES was 2.8%, which was attributed to the thermal decomposition of the aminopropyl groups.This weight loss is not significant in comparison to that reported for the HPAM and the CSNH-AC (>20%) in the same conditions. Nanopolymer Sol and Polymer Solution Characterization. 3.2.1. 
Rheology. As stated earlier, the viscosity data of the HPAM solution and the nanopolymer sol (Figure 7) follow the Carreau−Yasuda model. The model parameters are presented in Table 5. The nanohybrid sol exhibited slightly higher viscosities at shear rate values below 100 s−1 than the HPAM solution, despite its lower concentration, due to the NP/polymer interaction.73 At higher shear rates, the viscosity of both solutions drops until they reach the brine viscosity (the infinite-shear-rate viscosity of the Carreau−Yasuda model).

3.3. Dynamic Adsorption and IPV. The breakthrough curves of the HPAM, the nanopolymer sol, and the tracer (KSCN) slugs are shown in Figure 8. The first nanopolymer sol slug had a later breakthrough than the tracer (Figure 8a), showing that nanohybrid retention predominates over the effect of the IPV. The retention occurs by mechanical entrapment, which is the primary mechanism for the slow recovery of the nanohybrid after the breakthrough.74 Also, the breakthrough of the first and second slugs of the HPAM solution happened later than that of the tracer due to polymer retention (Figure 8b). For the HPAM solution and the nanohybrid sol, the second tracer slugs broke through earlier than the first ones because the polymer retention during the first injection reduced the pore volume available to the tracer.

The breakthrough time difference method was used to measure the IPV of the HPAM solution and the nanopolymer sol (eq 3). This method provides better accuracy in determining IPV than the areal difference method when mechanical entrapment occurs during the core flooding test.74 The nanopolymer sol exhibits higher mechanical entrapment and IPV than the HPAM solution (Table 6) due to its tridimensional network conformation. Also, these parameters could be affected by the low permeability of the rock used in the experimental test.75

3.4. Mobility (RF) and Permeability (RRF) Reduction. The RF and RRF values of the HPAM solution and the nanopolymer sol at different shear rates were calculated from eqs 1 and 2 and are shown in Figure 9a,b. The curve of effective viscosity was obtained by eq 7 and is presented in Figure 10. For the HPAM solution, the RF gradually increases with the increase in shear rate, while the RRF slightly decreases and tends to stabilize. It has been previously reported that the increase in the injection rate of the polymer solution (shear rate) produces the elastic deformation of the polymer molecules by hydrodynamic forces,32 leading to an increase in the effective viscosity and the RF.66 In contrast, the increase in the injection rate of the chase water causes a reduction in the RRF due to the scouring of the retained polymer molecules in the porous media. The RF and RRF values of the nanopolymer sol are greater than those of the HPAM solution but exhibit the same trend with an increase in the injection rate. This can be attributed to the strong flow resistance of the nanohybrid (high retention in the porous media). Due to the high RRF values obtained for both products, some changes in the methodology should be considered, such as the use of higher permeability rocks, the injection of the polymer/nanohybrid until C/C_o = 0.5 to prevent filter cake formation, and an increase in the pore volumes of brine injected in the postflush. These changes could improve the estimation of these parameters, which are vital to the proper design of field-scale polymer projects.

History Matching.
The objective functions for the history matching were the pressure drops recorded during the core flooding tests and the breakthrough curves shown in Figure 8. Figures 11 and 12 show the modeling of the laboratory production curves of the nanopolymer sol and the HPAM solution, respectively.500 possible solutions were run by the probabilistic simulator (CMG-CMOST).Figure 13 shows the history matching of the pressure drop during the CSNH-AC flooding at the laboratory scale.Table 7 presents the parameters used in the probabilistic simulation for history matching.It was observed that the best solution (red line) accurately predicts the breakthrough of the first slug of the nanopolymer sol and the HPAM.However, all solutions predicted an anticipated tracer breakthrough.Two reasons can be attributed to this result: (1) the use of a homogeneous conceptual model to represent the average properties of the core sample (porosity and permeability) presents limitations for reproducing the possible heterogeneities in the core plugs, and (2) the tracer concentration was not determined in realtime, causing a difference between the actual breakthrough time and the reported one. The best-fit parameters for the three target functions of both core flooding tests are presented in Table 8.The calculated maximum adsorptions for the CSNH-AC and HPAM are significantly higher than those obtained in the laboratory tests.Polymer adsorption is considered a reversible process that depends on the polymer concentration, rock composition, salinity, and hardness.In the numerical simulation, the reversibility of the adsorption is represented by two modeling parameters: maximum and residual adsorption.The maximum adsorption includes mechanical entrapment, hydrodynamic retention, and chemical adsorption. 2,67When flow conditions change in porous media (i.e., velocity, flow direction, and polymer concentration), some retained chemicals are released 29 but another amount remains adsorbed (residual adsorption) by chemical and/or physical interactions between the polymer backbone and the rock surface. 68For this reason, the estimated and measured adsorption values differ. Lower values of IPV and RRF than those obtained by the laboratory test were predicted by the history matching of CSNH-AC and HPAM.However, the calculated residual adsorption that fits the HPAM model is higher than the laboratory value.RRF, IPV, and desorption values depend on the core heterogeneity.Adsorption can reduce the flow path, leading to a reduction in effective permeability. 32,69Therefore, if adsorption decreases, IPV and RRF decrease.Since the history-matching data reproduced the performance of the chemical slugs in the laboratory tests, they were used to forecast the oil production in the sector model (Figure 2). 
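Conceptually, the history matching described here is an inverse problem: adjust the uncertain inputs (permeability reduction factor, maximum adsorption, IPV) until a forward simulation reproduces the recorded pressure and effluent histories. The sketch below illustrates that loop with generic SciPy optimization; run_forward_model is a hypothetical stand-in for the reservoir simulator (CMG STARS/CMOST are commercial tools and are not reproduced here), and the "observed" data are synthetic.

```python
import numpy as np
from scipy.optimize import differential_evolution

def run_forward_model(params, times):
    """Hypothetical placeholder for the forward reservoir simulation.
    It should return a simulated pressure/effluent history for a given
    parameter set (RRF, maximum adsorption, IPV). The toy response below
    exists only so the script runs end to end."""
    rrf, ads_max, ipv = params
    return rrf * (1 - np.exp(-times / (1.0 + 10.0 * ads_max))) - ipv * np.exp(-times)

times = np.linspace(0.1, 10.0, 50)                         # pore volumes injected (illustrative)
observed = run_forward_model([12.0, 0.03, 0.15], times)    # synthetic "laboratory" history

def misfit(params):
    """Sum-of-squares objective between simulated and observed histories."""
    return float(np.sum((run_forward_model(params, times) - observed) ** 2))

bounds = [(1.0, 50.0),    # permeability reduction factor (RRF)
          (0.0, 0.1),     # maximum adsorption (arbitrary units)
          (0.0, 0.4)]     # inaccessible pore volume (fraction)
result = differential_evolution(misfit, bounds, seed=1, tol=1e-8)
print("best-fit (RRF, adsorption, IPV):", np.round(result.x, 4))
```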
The model parameters presented in Table 8 were used to predict the displacement efficiency of the nanopolymer sol and the HPAM solution (Figure 14) in the laboratory-scale simulation model (model 1, Table 3).The waterflooding was performed by injecting 10 PV.Then, 0.3 PV of polymer solution (750 ppm) or nanopolymer sol (550 ppm) was injected, followed by 23 PV of water.The incremental recovery factors (compared to the waterflooding) of the WF/HPAM/ WC and the WF/CSNH-AC/WC schemes were 4.1 and 5.4%, respectively.An acceleration of the oil production was observed for the HPAM injection, although the final recovery factor of the nanopolymer sol was 1.3% higher at a lower concentration (Figure 14, green line).This oil recovery is comparable to that obtained in the laboratory tests previously reported by Corredor et al. (2021), 73 where the displacement experiments showed that the nanopolymer sol increased the cumulative oil recovery by 2.2% OOIP compared to the HPAM solution.These results were attributed to the reduction of the capillary forces, the increment of the viscous forces, 73 and the contact of unswept oil areas due to the piston-like displacement of the chase water after the nanopolymer sol injection. The differential pressures obtained by numerical simulation and the laboratory displacement tests 73 were similar.The differential pressures estimated by numerical simulation during the CSNH-AC and HPAM injections were 34.8 and 9.1 psi, respectively.Meanwhile, the maximum differential pressures reached on laboratory tests were 21.8 and 10.2 psi when 0.4 PV of CSNH-AC (550 ppm) and HPAM (750 ppm) were injected into the porous media, respectively.The results of the CSNH-AC injection suggest that the nanohybrid was able to reduce the water permeability (log jamming), 70 allowing the nanopolymer sol to contact unswept zones and displace the oil trapped in the porous media.However, special attention should be paid to the injectivity of the nanopolymer sol.The model presented in Figure 2 and Table 4 was used to perform a field-scale simulation.The HPAM concentration was increased from 750 to 850 ppm to reach the target apparent viscosity of 5 cP in porous media, while the CSNH-AC concentration was kept at 550 ppm. Figure 15 shows the predicted cumulative oil production for water and the HPAM/ nanohybrid injection.The injection of the nanopolymer sol and the HPAM solution (0.1 PV) increased the oil production by 295,505 and 174,465 barrels, respectively, compared to waterflooding.This will represent an incremental recovery factor of 11.3% for the nanopolymer sol and 6.7% for the HPAM Increasing the concentration of the CSNH-AC from 550 to 1200 ppm will produce an additional 60,550 barrels of oil (2.3% of incremental recovery factor).Instead, the incremental oil production from increasing the HPAM concentration from 850 to 1500 ppm will be 35,500 barrels (1.35% of incremental recovery factor).Even after injecting 1500 ppm of HPAM, the incremental oil production is lower than that of 550 ppm of CSNH-AC.Nonetheless, the optimal chemical concentration for a field application should be established based on the operational conditions (i.e., injectivity) and the economic feasibility. 
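The incremental recovery factors quoted for the field-scale forecast follow directly from the incremental barrels and the oil in place of the sector model; the snippet below simply reproduces that arithmetic with the numbers given in the text.

```python
ooip_bbl = 2.62e6   # oil in place of the 5-spot sector model (from the text), barrels

cases = {
    "CSNH-AC 550 ppm, 0.1 PV slug": 295_505,
    "HPAM 850 ppm, 0.1 PV slug": 174_465,
    "CSNH-AC 550 -> 1200 ppm (additional)": 60_550,
    "HPAM 850 -> 1500 ppm (additional)": 35_500,
}

for label, bbl in cases.items():
    print(f"{label:40s} {100 * bbl / ooip_bbl:5.2f} % OOIP")
# The first two lines reproduce the ~11.3% and ~6.7% incremental recovery factors.
```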
CONCLUSIONS This study reports the results of the retention experiment and numerical simulation of the displacement efficiency of a SiO 2 / HPAM nanohybrid.The nanohybrid was characterized by attenuated total reflection−Fourier transform infrared spectroscopy (ATR−FT-IR), FEG−SEM, XRD, and TGA.The results showed that the nanohybrid exhibits better rheological behavior than the HPAM solution at a lower concentration. The RF and RRF values of both samples are shear-dependent. The RRF values decrease by increasing the shear rate (injection rate) due to the scouring of the retained polymer molecules in the porous media by the chase water.In contrast, the RF values increase with the increase in shear rate due to the deformation of the adsorbed polymer/nanohybrid layer by hydrodynamic forces.The nanohybrid exhibited greater retention and IPV than the HPAM solution due to its tridimensional network conformation and because it was injected in a lower permeability core.The incremental recovery factors predicted by the field-scale simulation were 11.3 and 6.7% for the nanopolymer sol and the HPAM solution (as compared to waterflooding), respectively.More oil production with less chemical injection may widen the applications of nanohybrids for the EOR process, but further experiments should be performed. a Circular core section approximated to a square one with the same area open to flow.b Considered to be the effective permeability to water. Figure 1 . Figure 1.Model used to simulate the laboratory experiments. Figure 2 . Figure 2. Sector model used for the numerical simulation of the nanopolymer sol and HPAM solution injection. Figure 11 . Figure 11.Experimental and predicted breakthrough curves of (a) 550 ppm of nanopolymer sol and (b) 30 ppm of KSCN at Sw = 1. Figure 12 . Figure 12.Experimental and predicted breakthrough curves of (a) 750 ppm of HPAM solution and (b) 30 ppm of KSCN at Sw = 1. Figure 13 . Figure 13.Experimental and predicted pressure drop curves for the injection of 550 ppm of nanopolymer sol and 30 ppm KSCN. Figure 14 . Figure 14.Comparative oil recovery factor of the nanopolymer sol and HPAM flooding. Table 1 . Formation and Injection Brine Composition Table 2 . Properties of the Sandstone Plugs Used for the Retention Tests Table 3 . Parameters Used for the History Matching of the Numerical Simulation Model Table 4 . Model Properties Used for Numerical Forecasting Table 5 . Viscosity Carreau−Yasuda Parameters for the HPAM Solution and the CSNH-AC Nanohybrid at 56 °C Table 6 . Adsorption and IPV of the Nanohybrid and HPAM at Sw = 1 Table 7 . Parameters Used in the Probabilistic Simulation for History Matching Table 8 . Model Parameters for Polymer and Nanohybrid Flooding
7,016.8
2024-02-07T00:00:00.000
[ "Materials Science", "Engineering" ]
Few-body semiclassical approach to nucleon transfer and emission reactions A three-body semiclassical model is proposed to describe the nucleon transfer and emission reactions in a heavy-ion collision. In this model the two heavy particles, i.e. nuclear cores A$_1(Z_{A_1}, M_{A_1})$ and A$_2(Z_{A_2}, M_{A_2})$, move along classical trajectories $\vec R_1(t)$ and $\vec R_2(t)$ respectively, while the dynamics of the lighter neutron, n, is considered from a quantum mechanical point of view. Here, $M_i$ are the nucleon masses and $Z_i$ are the Coulomb charges of the heavy nuclei ($i=1,2$). A Faddeev-type semiclassical formulation using realistic paired nuclear-nuclear potentials is applied so that all three channels (elastic, rearrangement and break-up) are described in an unified manner. In order to solve these time-dependent equations the Faddeev components of the total three-body wave-function are expanded in terms of the input and output channel target eigenfunctions. In the special case when the nuclear cores are identical (A$_1 \equiv$ A$_2$) and the two-level approximation in the expansion over target functions the time-dependent semiclassical Faddeev equations are resolved in an explicit way. To determine the realistic $\vec R_1(t)$ and $\vec R_2(t)$ trajectories of the nuclear cores a self-consistent approach based on the Feynman path integral theory is applied. I. INTRODUCTION When trying to describe nuclear collisions, compound and halo nuclei, or, for instance, complex nuclear fusion reactions, few-body models are extremely useful and can play a very important role in the field [1,2]. For example, in works [3][4][5] the authors widely use various few-body models of complex nuclei for numerical computation of different systems and nuclear reactions. In an older paper [6] a detailed few-body approach has been developed for calculation of an important problem in nuclear astrophysics, namely the first two 0 + levels in the nucleus 12 C which was considered as a model three α-particle system [7][8][9]. Specifically, well known three-body Faddeev equations [10] were used in [6,9]. Further, in the case of heavy-ion collisions a three-body semiclassical model has been introduced in [11]. Once again a Faddeev-type formulation was utilized featuring single-term separable (non-local) potentials between particles. For solution of the few-body equations a simplified semiclassical model was applied, where heavier nuclear cores of the system followed along straight-line classical trajectories. Therefore, the resulting model featuring the semiclassical Faddeev equations become a set of coupled time-dependent integral equations. More generally, in the case of the heavy ion collisions [12] various semiclassical models have been formulated and applied, see for example [13][14][15]. However, these approaches mainly used simple straight-line model trajectories [16]. Also, there are other interesting and important problems in the field of heavy-ion collisions such as neutron and a charge transfer and emission reactions [17]. In the framework of the Faddeev approach these channels can be treated in a unified manner. Nontheless, in [14] an interesting attempt has been made to expand this process. In this work the author tried to apply a semiclassical Pechukas formalism [18] to obtain more realistic classical trajectories of heavy nuclear particles. Pechukas's method was originally developed for atomic, molecular collisions and chemical reactions. 
This theory expands on Feynman's interpretation of quantum mechanics, which is based on path integrals [19]. Usually, semiclassical methods and models, such as [18], allow us to gain even deeper insight into different few-body or many-body physical systems. They also enable us to introduce even more realistic classical trajectories of heavier particles in models, i.e. to take into account quantum-mechanical corrections in a self-consistent manner [18]. Generally speaking, such a combination of few-body models and methods together with semiclassical models, where the dynamics of heavier particles can be separated from the dynamics of lighter particles, seems to be quite useful. The same approach has already been developed and widely used in some problems of chemical physics for molecular dynamics [20] and even for the description of many-body systems, see for example [21].

In the current work we develop a semiclassical model for a few-body treatment of neutron transfer and emission reactions in heavy-ion collisions at different impact energies. The (A_1, n) + A_2 system is shown in Fig. 1, where A_1 and A_2 are the heavy nuclear cores which move along classical trajectories R_1(t) and R_2(t). The inter-distance vector R_12(t) = R_1(t) − R_2(t) is also shown in Fig. 1 together with the coordinate r of the third particle, i.e. the neutron n; ρ is the impact parameter, v_1 (v_1') and v_2 (v_2') are the initial (final) velocities of the heavy particles, and O is the center-of-mass of the three-body system. The semiclassical model of a time-dependent set of Faddeev equations is used. However, in contrast to Revai's approach [11], we formulate the model with the use of two local (realistic) paired nuclear-nuclear potentials: between the A_1 particle and n, and between A_2 and n. The heavy nuclei move along classical trajectories, while the motion of the relatively light neutron n (m_n ≪ M_Ai) in their nuclear fields is treated from a quantum mechanical point of view. In this model the heavy particles can move along complex Coulomb trajectories. This is particularly important for lower energy collisions and small impact parameters, i.e. at ρ ≈ 0, when the use of simple straight-line trajectories does not provide an appropriate approximation [22]. In this work we employ the self-consistent Pechukas method [18], which provides a proper way to determine the true trajectories of the heavy classical particles [14,22]. In the next section we delineate our semiclassical formalism. The self-consistent Pechukas approach is also explained and applied to the three-body semiclassical system shown in Fig. 1.

II. SEMICLASSICAL MODEL

In this section a few-body semiclassical model is presented for the transition of a single neutron, n, from one heavy center to another and for the n-emission process, in which the particle reaches the continuous spectrum. In order to describe these processes in a unified way the few-body Faddeev equation approach is applied. To solve these equations a modified close coupling method is used in this work [23]. This method provides a set of coupled time-dependent differential equations with unknown expansion coefficients.

A. Time-dependent few-body Faddeev equations

The time-dependent integro-differential Faddeev equations [10] can be written in the following form, where H_0 is the kinetic energy operator of the three particles, r_jk and R_l are the Jacobi coordinates, μ_jk and M_l are the corresponding reduced masses, and V_jk are the two-body potentials.
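As a small concrete aid to the notation just introduced, the reduced masses entering the kinetic energy operator follow the standard Jacobi-coordinate definitions. The snippet below evaluates them for the 16O + 16O + n system considered later in the paper; the nuclear mass is taken as a round 16 amu purely for illustration.

```python
# Standard Jacobi reduced masses for a three-body partition with pair (j, k) and spectator l:
#   mu_jk = m_j * m_k / (m_j + m_k)
#   M_l   = m_l * (m_j + m_k) / (m_j + m_k + m_l)
def jacobi_masses(m_j, m_k, m_l):
    mu_jk = m_j * m_k / (m_j + m_k)
    M_l = m_l * (m_j + m_k) / (m_j + m_k + m_l)
    return mu_jk, M_l

m_O16 = 16.0    # approximate mass of 16O in amu (illustrative round value)
m_n = 1.0087    # neutron mass in amu

# Pair (A1, n) with A2 as the spectator, as in the (A1, n) + A2 entrance channel.
mu_13, M_2 = jacobi_masses(m_O16, m_n, m_O16)
print(f"mu_13 = {mu_13:.4f} amu, M_2 = {M_2:.4f} amu")
```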
As mentioned above we consider the third particle (neutron, electron or muon) to be the light one, i.e.: Then, the heavy particles 1 and 2 can be considered as moving along classical trajectories R 1 (t) and R 2 (t). For the treatment of this situation and description of the light particle dynamics we use instead of three coupled, time-dependent Faddeev equations just two Faddeev-like equations [24,25]: Here, R(t) is the relative vector between particles A 1 and A 2 , where the time dependence is determined according to classical mechanics. The motion of the light particle 3 (n -neutron) is treated quantum mechanically, p r = ∇ r /i is the momentum operator corresponding to the relative variable r between third particle n and the center of mass of particles A 1 and A 2 . The relative vectors in the subsystems (13) and (23) are denoted by x and y, respectively. To solve these equations (5), we expand the wave function components Ψ k ( r, R(t), t) into the solutions Φ k3 n ( r, R(t), t) of the respective subsystem's Schrödinger equation: That is, we can write the summation (integration) runs accross the whole discrete and continuous spectrum. For the functions ϕ k3 n as being given by Inserting the expansion (7) into (5), we obtain for the coefficients C k n a set of coupled equations: where The matrix elements W jk nm (R(t), t) are obtained by integrating the potentials in Eq. (5) between the channel functions (8), The equations (5) are then to be solved under the initial conditions which implies that for the coefficients C j n ( R(t), t): For reactions at low energies the relative nuclear velocities are practically zero in the respective unities. The exponential factor in eq. (8), hence, can be replaced by the unit and the matrix elements (13) which simplifies to: In order to obtain the capture probabilities |C 2 n (t ∼ ∞)| 2 we have to solve the system of coupled ordinary differential equations (10)- (11). Note that its ingredients and initial conditions are described in (12), (15) and (16). When solving the resulting coupled set of equations for the expansion coefficients, it is observed that its solutions C k n ( R(t), t) tend toward an asymptotic value C k n (ρ) which depends, of course, on the impact parameter ρ. The elastic and transfer semiclassical cross sections of the three-particle collisions are [18]: and respectively, where (dσ/dΩ) cl is the cross section of the classical scattering of A 1 and A 2 which are heavy nuclear cores. For the break-up channel of the reaction one can delineate: where W b−up is the neutron emission probability and k 0 (t) is its wave function in the continuous spectrum: that is: finally one can obtain the following formula for the three-body break-up, i.e. neutron emission process: The triple-differential cross section of this process is: B. Application of Pechukas's self-consistent approach To obtain the true trajectories of the heavy classical particles or nuclear cores A 1 (Z A 1 , M A 1 ) and A 2 (Z A 2 , M A 2 ) one can employ the Pechukas self-consistent method [18] based on the Feynman path-integral theory [19]. 
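Before turning to the Pechukas trajectory correction of Sec. II B, a minimal numerical illustration of the coupled amplitude equations (10)-(11) may be helpful: for a two-level system they can be integrated directly along a prescribed classical trajectory. Everything in the sketch below is an assumed toy model (a straight-line trajectory R(t) = sqrt(ρ² + v²t²), an exponential coupling W(R) = W0 exp(−R/a), degenerate channel energies, and ħ = 1); the actual matrix elements would come from Eq. (15) with the nuclear potentials.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed toy parameters (hbar = 1); not taken from the paper.
W0, a = 0.8, 2.0           # coupling strength and range of W(R) = W0 * exp(-R/a)
rho, v = 3.0, 0.5          # impact parameter and relative velocity
R = lambda t: np.sqrt(rho**2 + (v * t)**2)    # straight-line trajectory of the cores
W = lambda t: W0 * np.exp(-R(t) / a)          # off-diagonal matrix element (assumed form)

def rhs(t, C):
    """i dC1/dt = W(t) C2,  i dC2/dt = W(t) C1  (degenerate two-level case)."""
    C1, C2 = C[0] + 1j * C[1], C[2] + 1j * C[3]
    dC1 = -1j * W(t) * C2
    dC2 = -1j * W(t) * C1
    return [dC1.real, dC1.imag, dC2.real, dC2.imag]

# Initial condition (12): the neutron starts bound to core A1.
sol = solve_ivp(rhs, [-200.0, 200.0], [1.0, 0.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
C1 = sol.y[0][-1] + 1j * sol.y[1][-1]
C2 = sol.y[2][-1] + 1j * sol.y[3][-1]
print("elastic probability  |C1|^2 =", round(abs(C1)**2, 4))
print("transfer probability |C2|^2 =", round(abs(C2)**2, 4))
```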
In accordance with the method a reduced propagator containing exact information about the reaction β → α can be written using continual where S 0 [ R(t)] is the classical action of the heavy particle moving along R(t); T αβ [ R(t)] are the transition amplitudes used for finding a quantum particle at t 2 in the state |α if at The time-dependent behaviour of the amplitudes T αβ is determined by the Hamiltonian h(t) [18]: In accordance with the self-consistent method [18] a basic variational principle is: Variation of Eq.(26) gives the Newton equations for the dynamics of classical particles in the effective potential field V( R(t)). It takes into account an interplay between the classical and quantum degrees of freedom in the semiclassical system, i.e. quantum-mechanical corrections from the third particle n [18]: where Here, |α(t, t ′′ ) and |β(t, t ′ ) are two solutions of the time-dependent Schrödinger equation with different boundary conditions [18]: and Therefore Next, because the Coulomb potential U c (R t ) between A 1 and A 2 is a constant (c-number) in the quantum r-space one can write down: where The three-body hamiltonian is: Thus, the classical part of the three-body problem (A 1 , A 2 , n) can be resolved in a selfconsistent way. In practice, it can be realized, for example, by few iterations: 1) for an arbitrary R (0) (t), e.g. straight-line trajectories, we solve the quantum part of the problem. The Eqs.(10) -(15) should be solved to get the unknown amplitudes C 1 n (t → ∞) and C 2 n (t → ∞) as time-dependent functions. 2) Now the effective potential V( R(t)) can be computed. To determine R (1) (t) one can employ the expression [27] where J = ρ √ 2ME; ρ is an impact parameter; E is a collision energy and M is the reduced mass of the nuclear cores . Within the next step one needs to compute R j points as a function of time t j . In order to obtain a smooth function one can make a spline R (1) (t) = α Z αj (t − t j ) α , t j ≤ t ≤ t j+1 and obtain a first approximation for R (1) (t), and so on (i=1, 2, 3,.., iterations). The cross-section for the reaction is [18], [27]: where and here r m is a maximum of R when the root is zero. III. QUOTIENT ANALYTICAL SOLUTION OF THE SEMICLASSICAL FAD-DEEV EQUATIONS In this section we consider a special case of a neutron transfer reaction when the heavy nuclei A 1 (Z A 1 , M A 1 ) and A 2 (Z A 2 , M A 2 ) are identical particles and then we restrict ourselves to the two-level approximation in the expansion (7), i.e. n = m = 1. To describe the matrix elements we delineate: and the binding energies in the two channels are equal too: E 13 n=1 = E 23 m=1 . So it turns out that the equations (10)- (11) can be solved in an explicit way: Now, taking into account that: energy, M is the reduced mass, e is the elementary charge. Thus, the three-body transfer cross section can be written down as: Here θ is the scattering angle of the classical particles [27]: and also from [27]: (45) Thereby our final result for the semiclassical three-body neutron transfer cross-section is: Let us now proceed to a calculation of the effective "quantum-classical" potential V(R(t)) between A 1 and A 2 in the transfer channel. To define amplitudes for the reaction we have to adopt the limit t → ∞ which is equivalent to t = t ′′ in Eq. (31) here |α corresponds to the outgoing (A 2 , n)-bound state wave function where ν denotes a quantum state of the system (A 2 , n), e.g. ν = 1. 
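Before completing the construction of the effective potential below, the two-step self-consistent cycle just described can be summarized as a short iteration loop. The sketch below is a structural skeleton only: solve_quantum_amplitudes and effective_potential are hypothetical placeholders for the close-coupling solution (Eqs. (10)-(15)) and for V(R(t)) of Eqs. (27)-(28), the units are schematic, and the trajectory is obtained by integrating Newton's equations for the relative motion rather than by the quadrature of [27].

```python
import numpy as np
from scipy.integrate import solve_ivp

# --- hypothetical placeholders ------------------------------------------------
def solve_quantum_amplitudes(trajectory):
    """Stand-in for the close-coupling solution of Eqs. (10)-(15) along a given
    trajectory; should return the time-dependent amplitudes C1(t), C2(t)."""
    return None  # not implemented in this sketch

def effective_potential(R, amplitudes, Z1=8, Z2=8, e2=1.44):  # e^2 ~ 1.44 MeV*fm
    """Coulomb term plus a quantum correction built from the amplitudes
    (Eqs. (32)-(33)). The zeroth iteration keeps only the Coulomb part."""
    coulomb = Z1 * Z2 * e2 / R
    correction = 0.0 if amplitudes is None else 0.0  # placeholder for the W-dependent term
    return coulomb + correction
# ------------------------------------------------------------------------------

def classical_trajectory(E, rho, M, amplitudes, t_max=500.0):
    """Planar Newtonian motion of the relative coordinate in V(R) (schematic units)."""
    v0 = np.sqrt(2.0 * E / M)
    y0 = [-t_max * v0, rho, v0, 0.0]                  # x, y, vx, vy at t = -t_max

    def rhs(t, y):
        x, yy, vx, vy = y
        R = np.hypot(x, yy)
        dV = (effective_potential(R + 1e-6, amplitudes) -
              effective_potential(R - 1e-6, amplitudes)) / 2e-6   # numerical dV/dR
        return [vx, vy, -dV * x / (R * M), -dV * yy / (R * M)]

    return solve_ivp(rhs, [-t_max, t_max], y0, max_step=1.0)

# Self-consistent loop: straight line -> amplitudes -> new potential -> new trajectory.
traj, amps = None, None
for iteration in range(3):
    amps = solve_quantum_amplitudes(traj)
    traj = classical_trajectory(E=10.0, rho=5.0, M=8.0, amplitudes=amps)
print("final relative distance:", np.hypot(traj.y[0][-1], traj.y[1][-1]))
```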
Next, |β(t ′ , t ′′ ) corresponds to the total wave function of our three-particle system: and for the nucleon transfer channel we have: Thus, the effective potential is: where: Finally: where amplitudes C (A 1 ,n) and C (A 2 ,n) are from Eqs. (40)-(41). Now one can do the following: in the first step (or we could name it as a zero-th (0-th) approximation) we only retain the Coulomb interaction in the effective potential V (0) (R t ) = (Z A 1 Z A 2 )e 2 /R, and then calculate the 0-th approximation to the amplitudes C In the second step we can then compute the quantum corrections to the effective potential V (1) (R t ) then in the first approximation one can compute the amplitudes (∞) and of course one could continue the process if desired. Let us write down the matrix element W(R(t)) where V j3 (x) is a local interaction, e.g. a potential pit for the bound system 16 O-n which gives bound states and ϕ (A j ,n) ν ( r − R j (t)) are its wave functions. The results obtained can be used to describe the one n-transfer reaction between two 16 O nuclei. This example has also been numerically calculated in [11] using single-term separable potentials and straight-line trajectories. In the framework of the current formalism the simple expressions (43)-(46) and (54) have been derived by taking into account the Coulomb potential between nuclear cores and using the local nuclear potentials between A i and n (i = 1, 2). The elastic and transfer reaction cross-sections are obtained using the self-consistent Pechukas method. In turn the Pechukas method takes into account the interplay between the classical and quantum degrees of freedom in the semiclassical system and is consistent with the conservation laws of energy and angular momentum [18]. It is essential to note here, that the same consideration as above could be carried out for the three-body break-up channel. This is a very attractive and complicated problem in the field of the heavy-ion collisions. Namely, a neutron emission reaction and/or a charge particle, such as the α-particle emission process. In the case of such reactions, for instance the α-particle emission, in Eqs. (19)- (22) we would need to apply Coulomb asymptotic wave function in the three-body continuum. Obviously, the effective potential between the heavy particles A 1 and A 2 will also be different in the three-body break-up channel. IV. CONCLUSION We have formulated a semiclassical approach for a model three-body system with two heavy nuclear cores A 1 and A 2 moving along classical trajectories and a lighter particle n, i.e. neutron. The three-body system is shown in Fig. 1. The quantum dynamics of n is described based on the few-body quantum-mechanical Eqs. (4)-(5) with realistic (local) nuclear-nuclear potentials V 13 ( x) and V 23 ( y). The classical dynamics of A 1 and A 2 are described based on Newtonian (non-relativistic) mechanics Eq. (27). However, this becomes important, with the use of the Pechukas self-consistent method we could take into account the interplay between classical and quantum degrees of freedom in the system and thereby obtain even more realistic trajectories for the classical particles. Therefore, the proposed method is divided into two parts: the 1st part is the quantum-mechanical problem for a lighter particle "n" dipped into the nuclear potential pits of the heavy particles A 1 and A 2 , the second part is the classical problem for two heavy nuclear cores interacting by the quantum and classical (Coulomb) self-consistent potential V( R(t)), i.e. 
Eqs. (32)-(33). Also, it would be appropriate to make few comments about the semiclassical Faddeev-type equations, i.e. Eqs. (4)- (5). First of all, the constructed coupled equations satisfy the Schrödinger equation exactly. Secondly, the Faddeev decomposition avoids the over-completeness problems. Therefore, two-body subsystems are treated in an equivalent way and the correct asymptotic is guaranteed [23]. The current method simplifies the solution procedure and provides the correct asymptotic behavior of the solution. Finally, the Faddeev-type equations have the same advantages as the original Faddeev equations, because they are formulated for the three-body wave function components Ψ 1 ( r, R(t), t) and Ψ 2 ( r, R(t), t) with correct physical asymptotes. In the solution of the time-dependent Eqs. (4)-(5) one needs to consider the number of channels n which are needed to be included in the close-coupling expansion (7). This is an important issue, because n controls the number of coupled differential equations to be numerically solved, i.e. Eqs. (10)- (11). However, in the actual numerical computation one could only retain a few states in Eq. (7). For example, it is quite reasonable to expect that for closed shell nuclei, e.g. A i ≡ 4 He, 12 C or 16 O, just one or two states should be predominate during low energy collisions. Next, the expression (8) is true for an inertial coordinate system, i.e. when v =˙ R(t)=const. In the case of the realistic trajectories ˙ R i (t) =const and one needs to make considerable alterations in the expression and in the theory. However, at low energies when v ≈ 0 the exponent multiplier is approximately equal to one, i.e. e imn v r−i mn 2 v 2 t ≈ 1. In conclusion, as mentioned in the introduction, few-body semiclassical models in nuclear physics can help us gain deeper insight into complex nuclear processes. Specifically, in the case of identical heavy nuclei and a two-level approximation in the expansion (7) the resulting set of coupled differential Eqs. (10)-(11) can be resolved analytically, i.e. expressions (40)-(41). This analytical solution might be useful, for example, in the investigation of the nucleus 13 C, e.g. in the collision 13 C + 12 C → 12 C + 13 C; 12 C + 12 C + n. The structure of 13 C=( 12 C, n) can be important for low energy reactions in the s-process neutron source in stars 13 C(α, n) 16 O, see for example [28][29][30]. Also, we would like to note, that a possible relativistic expansion of the semiclassical theory presented above in this paper would be a very useful future work. ) ) ) ) ) ) ) ) ) ) ) ) ) ) ) ) ) ) ⑥ FIG. 1. Neutron (n) few-body quantum dynamics between two classically moving nuclear cores: A 1 =(M 1 , Z + 1 ) and A 1 =(M 2 , Z + 2 ). 
Here, M i , Z + i are masses and Coulomb charges respectively of A i (i = 1, 2), O is the center-of-mass of the three body system, r is the coordinate radius-vector of n, R 12 (t) = R 1 (t)− R 2 (t) is the separation vector between (M 1 , Z + 1 ) and (M 2 , Z + 2 ), R 1 (t) and R 2 (t) are the radius-vectors of A 1 and A 2 ( R 1(2) (t) are not shown in this figure), t is the time in the system, v 1 and v 2 are initial at t → −∞ velocities of (M 1 , Z + 1 ) and (M 2 , Z + 2 ) respectively, v ′ 1 and v ′ 2 are final at t → +∞ velocities of (M 1 , Z + 1 ) and (M 2 , Z + 2 ) respectively, ρ is the impact parameter of the three-body collision: [(M 1 , Z + 1 ), n]+(M 2 , Z + 2 ) → [(M 2 , Z + 2 ), n]+(M 1 , Z + 1 ); (M 2 , Z + 2 )+(M 1 , Z + 1 )+n, where the nucleon transfer from (M 1 , Z + 1 ) to (M 2 , Z + 2 ) and the three-body break-up channels are presented here. The nuclear interaction between n and A i (i = 1, 2) depends on the distances | x| and | y| between n and A 1 and between n and A 2 respectively ( x and y are not shown in this figure).
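As a closing illustration of the matrix element W(R(t)) introduced in Sec. III, the sketch below evaluates the corresponding three-dimensional overlap integral by simple Monte Carlo. The Gaussian bound-state wave functions and the Woods-Saxon-like well are illustrative assumptions, not the actual 16O-n potential or its eigenfunctions.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(r_vec, center, b=2.0):
    """Normalized Gaussian bound-state wave function centered on a core (assumed form)."""
    d2 = np.sum((r_vec - center) ** 2, axis=-1)
    return (np.pi * b**2) ** -0.75 * np.exp(-d2 / (2.0 * b**2))

def V_well(r_vec, center, V0=-45.0, R0=3.0, a=0.65):
    """Woods-Saxon-like potential well (MeV) around a core (assumed parameters)."""
    d = np.linalg.norm(r_vec - center, axis=-1)
    return V0 / (1.0 + np.exp((d - R0) / a))

def W_of_R(R_sep, n_samples=200_000, L=20.0):
    """W(R) = <phi_A2 | V_13 | phi_A1> by uniform Monte Carlo over a box of side 2L (fm)."""
    R1 = np.array([-R_sep / 2.0, 0.0, 0.0])      # core A1 position
    R2 = np.array([+R_sep / 2.0, 0.0, 0.0])      # core A2 position
    pts = rng.uniform(-L, L, size=(n_samples, 3))
    integrand = phi(pts, R2) * V_well(pts, R1) * phi(pts, R1)
    return (2.0 * L) ** 3 * integrand.mean()

for R_sep in (4.0, 8.0, 12.0):
    print(f"R = {R_sep:4.1f} fm   W(R) ~ {W_of_R(R_sep):8.4f} MeV")
```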
5,103
2013-07-09T00:00:00.000
[ "Physics" ]
Optimizing the operating time of wireless sensor network A difficult constraint in the design of wireless sensor networks (WSNs) is the limited energy resource of the batteries of the sensors. This limited resource restricts the operating time that WSNs can function in their applications. Routing protocols play a major part in the energy efficiency of WSNs because data communication dissipates most of the energy resource of the networks. There are many energy-efficient cluster-based routing protocols to deliver data from sensors to a base station. All of these cluster-based algorithms are heuristic. The significant benefit of heuristic algorithms is that they are usually very simple and can be utilized for the optimization of large sensor networks. However, heuristic algorithms do not guarantee optimal solutions. This article presents an analytical model to achieve the optimal solutions for the cluster-based routing protocols in WSNs. Introduction There is a common problem in energy efficiency considerations in wireless sensor networks (WSNs): maximizing the amount of data sent from all sensor nodes to the base station (BS) until the first sensor node is out of battery. In sensor networks, sensors send data to each BS periodically during each fixed amount of time. Thus, the problem is the same as maximizing network operation lifetime until the first sensor node run out of battery. Numerous studies have been done on the energy efficiency using cluster-based routing in WSNs [1][2][3][4][5]. Cluster-based routing was originally used to solve the scalability problems and resources-efficient communication problems in wire-line and wireless networks [6,7]. The method can also be used to perform energyefficient routing in WSNs. In the cluster-based routing, nodes cooperate to send sensing data to a BS. In this routing, a network is organized into clusters and nodes play different roles in the network. A node with higher remaining energy can be elected as the cluster head (CH) of each cluster. This node is responsible to receive data from its members in the cluster and to send the data to the BS. However, all of the above-mentioned cluster-based routing work is heuristic. The real benefit of heuristic algorithms is that they are usually very simple and can be used for the optimization of large sensor networks. However, in general, heuristic algorithms do not guarantee optimal solutions. In this article, an analytical model is used to obtain the optimal solutions for the above clustering lifetime problem. The basic idea is to formulate the problem as an integer linear programming (ILP) problem and to utilize ILP solvers [8] to compute the optimal solutions. These solutions are employed to evaluate the performance of previous heuristic algorithms. These analytical models are used to formulate the system lifetime problem into a simpler problem, find the optimum solution for the system lifetime problem, and evaluate the performance of heuristic models. This article is organized as follow. The following section summarizes previous work in energy efficiency using cluster-based routing. Then, an analytical model of the cluster-based routing is developed. The model is first implemented by an analysis of a simple network with one cluster. After that, the analysis is extended for more complex cases of multiple clusters. A new heuristic cluster-based routing is also proposed. Finally, the simulation results of the analytical model, old heuristic solutions, and the new ones are presented and discussed. 
Previous work in energy efficiency using cluster-based routing In a cluster-based routing, higher remaining energy nodes can gather data from low ones, perform data aggregation, and send the data to a BS. Nodes in networks are grouped into clusters, and nodes that have higher remaining energy are elected as the CHs. In each cluster, the nominated CH node receives and aggregates data from all sensor nodes in the cluster. Usually, the sizes of the data of all sensors are the same and the aggregated data at the CH node has the same size with the data of every sensor in the cluster. As the data are aggregated in the CH node before reaching a BS, this technique reduces the amount of information sent to the distant BS, hence saves energy. For example, if each sensor in the cluster sends a message of 100 bits to the CH node, then the CH node sends the aggregated message of 100 bits to the BS. Details are given in [2,6,9]. As shown in Figure 1, all nodes in Cluster 1 send data to the CH. The node aggregates the data with its own data and sends the final data to the BS. In sensor applications, every sensor node sends data periodically to its BS. Initially, every node starts with the initialized battery storage. A round of data transmission is defined as the duration of time to send a unit of data to the BS. At the end of each round, every sensor node loses an amount of energy which is used to send a unit of data to the BS. The lifetime of sensor networks is defined as the total number of rounds sending data to the BS until the first node is off. Heinzelman et al. [1,2] proposed a Low-Energy Adaptive Clustering Hierarchy (LEACH). In LEACH, the operation of the protocol is divided into rounds. Each round consists of the setup and the transmission phase. In the setup phase, the network is divided into clusters and nodes negotiate to nominate CHs for the round. In more details, during the setup phase, a predetermined fraction of nodes, p, elect themselves as CHs as follows. A node picks a random number, r, between 0 and 1. If (r<T(n)) then The node becomes a CH for the current round else The node remains a non-CH node where T is a threshold value given by: where G is the set of nodes that are involved in the CH election. The selected CHs for the round advertise themselves as the round's new CHs to the rest of the nodes in the network. All the non-CH nodes decide on the cluster to which they want to belong to. The decision is based on the distance to the closest CH. In the transmission phase of LEACH, the elected CH collects all the data from nodes in its cluster, aggregates these data, and forwards them to a BS. In the next rounds, the process is repeated and CH positions are reallocated among all nodes in the network to extend the network lifetime. For examples, as can be seen from Figure 2, the role of CH for Zone 1 is moved from Node 2 to Node 1 and the role of CH for Zone 2 is moved from Node 4 to Node 3 in the next round of data transmission. Therefore, the energy dissipation of these nodes during the network operation is balanced. The LEACH protocol ensures that every node can become a CH exactly once within 1/p rounds. This will not give the optimum network lifetime, as sensor nodes that are far away from the BS will consume more energy than closer nodes to send data to the BS. Therefore, nodes, which are close to BS, need to become CHs more frequently than other nodes. There are some LEACH variants to address the above issues in LEACH protocol [3,[10][11][12][13]. Saha Misra et al. 
[3] proposed the energy enhanced-efficient adaptive clustering protocol for distributed sensor networks. CHs can be formed based on the residual energy of each node. The residual energy is calculated for every node after each round of transmission. Every node transmits a code containing the information about its residual In cluster-based routing, networks are divided into clusters, in which a node is elected as the CH for each cluster. energy and its identification. If this residual energy is more than the ones of all other nodes in the same sub-area, then the node is the CH for that round in this sub-area. Otherwise, it can detect the node that has the maximum residual energy and elects this node as the CH. A different approach was used by the authors of [4,5] who add the current energy information of sensor nodes into Equation (1). where E current is the current energy of Node n and E initial is the initial energy of the node. If (r < T(n)) then The node becomes a CH for the current round else The node remains a non-CH node Simulation results showed that the lifetime of the network with the scheme is improved 30% compared with the LEACH algorithm under the same experiments for LEACH. After the design of LEACH protocol, these authors further proposed a new centralized version called LEACH_C in [2]. Unlike LEACH, LEACH_C utilizes the BS for creating clusters. During the setup phase, the BS receives the information about the location and the energy level of each node in the network. Using this information, the BS decides the number of CHs and configures the network into clusters. To accomplish this, the BS computes the average energy of nodes in the network, and nodes that have energy storage below this average cannot become CHs for the next round. From the remaining CH nodes, the BS uses the simulated annealing (SA) algorithm to find the k optimal CHs. The selection problem is an NP-hard problem [14,15]. The solution attempts to minimize the total energy required for non-CH nodes in sending data to the corresponding CHs. As soon as the CHs are found, the BS broadcasts a message that contains a list of CHs for all sensors. If a node CH's ID matches its own ID, the node becomes a CH. Otherwise, the node determines its TDMA slot for its data transmission from the broadcast message and turns off its radio until the transmission phase. The transmission phase of LEACH_C is identical to that of LEACH. Under the same experimental settings, LEACH_C improves LEACH from 30 to 40%. Besides cluster-based routings [10][11][12][13], there is also a chain-based one. Lindsey and Raghavendra [16] proposed one type of chain-based protocol called powerefficient gathering in sensor information systems (PEGASIS), which is near optimal for gathering data in sensor networks. PEGASIS forms a chain among sensor nodes so that each node will receive data from a near neighboring node and transmit data to another near neighbor. Gathered data move from a sensor node to the nearest neighbor, are aggregated with the neighbor's data, and eventually reach a determined CH before finally being transmitted to the BS. Figure 3 illustrates the ideas of the PEGASIS protocol. In this round of data transmission, Node 3 is elected as the CH. Node 5 transmits data to Node 4, and Node 4 fuses the data with its own data and transmits the fused data to Node 3. Similarly, Node 1 transmits data to Node 2, and Node 2 transmits the fused data to Node 3. 
Finally, Node 3 fuses the data of the other nodes with its own data and transmits the final fused data to the BS. The data fusion function can be any function, e.g., minima, maxima, and average, depending on specific applications. Nodes take turns equally to be the CH so that the energy spent by each node is balanced. In other words, each node becomes a CH once for every n rounds of data transmission, where n is the number of sensor nodes. The comparison between the chain-based routings and cluster-based routings were done extensively in [9] and this is not mentioned here as this article only focuses on cluster-based routing. In the next section, an analytical model is presented to achieve the optimal solutions for the frequency of CHs of sensor nodes. The basic idea is to formulate the problem as an ILP problem and to utilize ILP solvers [8] to compute the optimal solutions. These solutions are employed to evaluate the performance of previous heuristic algorithms. Analytical model for optimizing the lifetime of sensor network with one CH In order to minimize the complexities of the clustering problem, the wireless radio energy dissipation model is not used. This assumption does not change the validation of any simulation result. A very simple energy usage model is given as where S denotes a source node, Ddenotes a destination node, E(S) is the energy usage of node S, and dis the distance from S to D. This formula states that the energy required to transmit a unit of data is proportional to the square of the distance to a destination, and there is no energy spent at the destination. In this section, α is set to 1. Let us analyze a very simple network to establish a general method that can be applied for any complicated problem. Figure 4 shows a simple network topology in which there are five nodes that lie on a line. The nodes are located equally from position 0 to position 80 m and the BS is located on the position 175 m. In sensor applications, every sensor node sends data periodically to the BS. A round of data transmission is defined as the duration of time to send a unit of data to the BS. Therefore, the lifetime of sensor networks is defined as the total number of rounds of sending data to the BS until the first node is off. It is assumed that every node starts with the equal initial battery storage of 500,000 units. The problem is maximizing the total the number of rounds of sending data to the BS until the first sensor node runs out of battery. In each round of operation, every node must transmit a unit of data to the BS. It is also assumed that only one node acts as the CH in each round of transmission and the role is reallocated among all nodes so the system lifetime is maximized. The analytical model needs to compute the optimal usage of nodes as CHs under the battery constraint of every sensor. Let us denote x j , ∀j∈ [1. . .5] to be the number of rounds, which Node j becomes a CH and c j i be the energy consumption of Node i, to deliver a unit of data in each round, when Node j becomes a CH, ∀i, j∈ [1. . .5]. As there are five nodes and only one CH, there are five possible choices for the CH in each round and there are also five energy usages for these five sensor nodes, respectively. This is shown in Table 1 CH, c 5 1 is (80 -0) 2 = 6400, the energy dissipation of Node 1 when Node 1 becomes a CH, c 1 1 is (175 -0) 2 = 30625. The optimum number of transmission rounds (or system lifetime) for the network is written as the following ILP problem. 
Maximize: where E i is the initial battery storage of node i. Formulation (3) states that the total number of rounds must satisfy the battery storage constraint of every sensor node. Table 2 shows the optimum result obtained from (3) when the battery capacity increases from 125,000 to 50 million units. When the battery size is large enough (greater than 1 million units), the number of rounds that each node becomes a CH increases almost linearly with the battery capacity (e.g., the number of rounds of each node is nearly doubled when the battery capacity is increased from 1 to 2 million). Simplification of formulation (3) Formulation (3) can be converted to a linear programming (LP) formulation as given below: Maximize: where the condition of variables being integers is removed. There are two cases to use the formulation to obtain the optimization solutions: (1) E i → ∞ then the solution of (4) becomes the solution of (3) (2) E i ≠ ∞ then the solution of (4) is the approximation of the solution of (3) Formulation (4) can remove the NP-hard characteristic of the ILP formulation (3). Therefore, the optimization solution can be solved by the simplex method [8,9]. In the next section, we will verify the solutions obtained from both formulations. A simple network topology of 11 nodes is given in Figure 5. All nodes are located equally on the line. The nodes are located equally from position 0 to position 100 m (separated each 10 m) and the BS is located on the position 175 m. In the simulation, each node starts with an equal amount of initial energy of 500 million units. The lifetime problem for the network is first formulated as an ILP problem using (3). Then the LP formulation as in (4) is used to calculate the approximate solutions. Table 3 shows that the solutions given by both methods are almost identical. Therefore, the formulation of (4) can be an approximating solution of (3). Also, Nodes 10 and 11 never become a CH as they are too far from other nodes. Node 1 will never become a CH as it is too far from the BS. Analytical model for optimizing the lifetime of sensor network with multiple CH The previous section assumes a very simple case when there is only one CH. It is obvious that for the simple network of Figure 4, too many CHs will drain the energy of all sensor nodes very quickly as the nodes have to send data to the distant BS. This is not true for the other network topologies. The network considered in the analysis section has 20 nodes. The network topology is given in Figure 6. All nodes are located equally on the two lines. For the network, one CH could not be enough, as other non-CH nodes would consume energy significantly to deliver a unit of data to the CH in each round. Table 4 shows the performance of the network with a variable number of clusters. The simulation result shows that two CHs will minimize the total energy consumption to send data to the BS. When the number of CHs is more than one, it is much more complicated to obtain optimum solutions. The number of possible combinations of CHs isO(n k ), where n is the number of sensor nodes and k is the number of CHs. Furthermore, with a selected solution of CHs, each sensor has k choices to select its CH. Therefore, the method of finding the optimum solution includes two optimization processes: optimization of the position of CHs and optimization of gathering traffic to the CHs. In order to design an analytical model for complex cases with multiple CH in sensor networks, Theorem 1 is stated and proved. 
Theorem 1: Consider two ILP problems with the same objective function and the same variables, if the set of coefficients of ILP problem 2 is smaller than the set of coefficients of ILP problem 1, respectively, for all of these coefficients, then the optimal solution of Problem 2 is higher than that of Problem 1. Consider two ILP problems: Problem 1: Maximize: Problem 2: Maximize: X n j¼1 x j Subject to: Definition: O 1 is the optimal solution of Problem (5 Simple problem 2: Subject to: x 1 þ 2:5x 2 ≤20 2:5x 1 þ 3:5x 2 ≤20 Applying Theorem 1 for two simple problems (1) and (2), as the coefficients of the constraint functions (7) are all higher than those of (8) respectively, the optimal solution Figure 5 A simple topology of 11 nodes on a line. Table 3 The number of rounds each node i becomes a CH solved by formulations (2) and (3) Node i Formulation of (7) must be smaller than that of (8). This result is verified by using the ILP solver in [8]. The optimal solution of Simple problem (1) is 6 while the optimal solution of Simple problem (2) is 8. This theorem is important because in many cases, this is very hard to calculate O 1 . One of the reasons is that working out all coefficients c j i is impossible. Based on the theory, we know that O 2 can be an upper bound of O 1 , or all the feasible solutions of Problem 1 are bounded by O 2 . Theorem 2: Given a clustering sensor network with k CHs, connection from non-CH nodes to the closest CH node of the k CHs provides the optimal lifetime for the clustering network. In more detail, we are given a set of n sensors located in two-dimensional space R 2 . Let us define S as the set of ways to select k CHs in the given set of n sensors. If every CH is different to the remaining k − 1 CHs, the number of elements in S is n k . However, in the theorem, some CHs might be the same and these same CHs are considered as one CH. Therefore, the number of elements in S is n k elements. Let us define s n k (i) as the ith element in S where i in (1. . .n k ). Let us define c i j as the energy usage of Node j consumes, when the ith element in S is selected as the CHs. Let us define n i as the number of rounds, which the ith element in S is selected as the CHs. Let us define E j as the initial energy of Node j and O as the optimal solution of the following ILP problem: Maximize: Subject to: The energy c i j is equal to the energy dissipation of Node j to send a unit of data to the closest sensor node in the ith element in S. Then, O is the optimal lifetime for the sensor network with k CHs. Subject to: As c' i j ≥ c i j ∀i∈S, ∀j∈ [1. . .n], since c i j is equal to the energy dissipation of Node j to send a unit of data to the closest sensor node in the ith element in S, any optimum solution O' of (10) is smaller than the optimum solution O obtained by (9) as Theorem 1. This statement is illustrated in Figure 7. As the result, O is the global optimum solution for maximizing the operation time with k CHs. ■ Calculation of coefficients for Problem (9) The energy coefficients c i j of formulation (9) for a network of n nodes with k CHs can be calculated as follows: For every combination of k CHs from the n nodes For every node from the n nodes If (the node is a CH) then End of code where d toCH is the distance from the sensor node to the closest CH from the k CHs, d toBS is the distance from the sensor node to the BS. 
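The coefficient-generation pseudocode above, combined with formulation (9), maps onto a few lines of Python. The sketch below enumerates the CH selections for a small illustrative network (repeated picks collapse, so selections with fewer than k distinct CHs are covered, as in Theorems 2 and 3), builds the energy coefficients, and solves the LP relaxation of (9) with SciPy; the node layout, battery size, and the use of linprog in place of an exact ILP solver are assumptions made for illustration.

```python
import numpy as np
from itertools import combinations_with_replacement
from scipy.optimize import linprog

rng = np.random.default_rng(1)
nodes = rng.uniform(0, 100, size=(10, 2))       # illustrative 10-node network (m)
bs = np.array([50.0, 175.0])                    # illustrative BS position (m)
E = np.full(len(nodes), 1_000_000.0)            # initial energy per node (units)
k = 2                                           # number of CHs

# CH selections: combinations with repetition, so repeated CHs collapse to < k distinct CHs.
selections = sorted({tuple(sorted(set(s)))
                     for s in combinations_with_replacement(range(len(nodes)), k)})

# coef[i, s]: energy node i spends per round under selection s (d^2 energy model):
# CH nodes send to the BS, all other nodes send to their closest CH.
coef = np.zeros((len(nodes), len(selections)))
for s_idx, chs in enumerate(selections):
    for i, pos in enumerate(nodes):
        if i in chs:
            coef[i, s_idx] = np.sum((pos - bs) ** 2)
        else:
            coef[i, s_idx] = min(np.sum((pos - nodes[j]) ** 2) for j in chs)

# Formulation (9), LP relaxation: maximize sum_s n_s  s.t.  coef @ n <= E,  n >= 0.
res = linprog(c=-np.ones(len(selections)), A_ub=coef, b_ub=E,
              bounds=[(0, None)] * len(selections), method="highs")
print("optimal lifetime (rounds, LP bound):", round(-res.fun, 1))
used = [(selections[s], round(res.x[s], 1)) for s in np.argsort(-res.x)[:5] if res.x[s] > 0]
print("most used CH selections:", used)
```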
Figure 8 shows that, for the current selection of k = 3 CHs and n = 15 nodes, the energy coefficient of Node 2 is equal to d_24^2 and the energy coefficient of Node 1 is equal to d_1^2.

Figure 7: Connection from Node 1 to any CH will dissipate more energy than connection to CH 1 (the closest CH of Node 1).
Table 5: The average energy dissipated (units) per round and the number of rounds over the number of CHs.

Theorem 3: The problem formulation in (9) provides the optimum solution for maximizing the operation time of any clustering network with a number of CHs smaller than or equal to k.

Proof: As stated in Theorem 2, S is the set of ways to select k CHs from the given set of n sensors. In each combination selection, some CHs might be identical, and these identical CHs are considered as one CH; in this case, the number of CHs is less than k. Therefore, any network of fewer than k CHs is a special element in S in which some CHs are the same. ■

It is of interest to know the optimum solution for the network topology in Figure 6. Every sensor node begins with 1 million units of energy and the above-mentioned simple energy model is used. Table 5 shows the optimum system lifetime versus the number of CHs. The results show that the network achieves the optimum solution with two CHs. It is also of interest to see the distribution of the optimum CHs among the 20 sensor nodes in Figure 6. The distribution depends on the positions of the sensors. The energy model used is the d^2 energy model (gamma = 2). Figure 9 shows the five pairs of nodes that are chosen as CHs most frequently. The results show that the pair of nodes (7, 17) is the most preferred pair of CHs. This is due to the fact that these nodes are not very far from the BS or from the rest of the nodes; as such, they can act as intermediate CHs to deliver data to the BS. The five pairs are selected as CHs for 56% of the total number of rounds.

The same experiments are carried out on the same network with the "power 4" (gamma = 4) model, given by E(S) = α · d^4 and E(D) = 0, where S denotes a source node, D denotes a destination node, E(S) is the energy usage of node S, and d is the distance from S to D. This formula states that the energy required to transmit a unit of data is proportional to the "power 4" of the distance to the destination, and that no energy is spent at the destination. For the rest of this section, α is set to 1. Figure 10 shows the simulation results when α is set to 1. Compared to the previous results, the CHs move closer to the BS. This is because, when the "power 4" model is used, the energy of the CH nodes is drained quickly; as such, these nodes need to be closer to the BS. The five pairs are selected as CHs for 58% of the total number of rounds.

A simplified LEACH_C protocol (AVERA)

As mentioned in the Section "Previous work in energy efficiency using cluster-based routing", LEACH_C utilizes the BS for creating clusters. During the setup phase, the BS receives information about the location and the energy level of each node in the network. Using this information, the BS decides the number of CHs and configures the network into clusters. To do so, the BS computes the average energy of the nodes in the network. Nodes whose energy storage is below this average cannot become CHs for the next round. From the remaining possible CH nodes, the BS uses the SA algorithm to find the k optimal CHs. The selection problem is an NP-hard problem.
If the BS is also far away from main power sources and is energy- and processing-limited, it is impractical for the BS to run LEACH_C, as doing so creates significant delay and requires significant computation. In this case, we modify the LEACH_C algorithm by removing the SA algorithm process. In more detail, our algorithm AVERA is implemented as follows.

AVERA: In every round, select k CHs randomly from the m sensor nodes whose energy level is above the average energy of all nodes.

Figure 9: Percentage of the total number of rounds that each pair of nodes is a pair of CHs for the d^2 energy model (patterns of cluster-heads, gamma = 2).
Figure 10: Patterns of cluster-heads, gamma = 4.

Simulation and comparison

Most previous work on WSN lifetime [1-5] used the energy consumption model and the energy dissipation parameters given in [9]. These data are kept the same in our experiments to make the comparison between our proposed algorithms and previous ones feasible. The power transmission coefficients for free space (ε_FS) and multipath (ε_MP) follow [9]. From these parameters, the output power of a transmitter over a distance d follows the free-space (d^2) model when d < d_o and the multipath (d^4) model when d ≥ d_o, where d_o is set to 82.6 m. The value of E_elec follows the experiments in [1,2,17-19] and is set to 50 nJ/bit. In summary, the total transmission energy of a message of k bits in sensor networks is calculated by E_Tx(k, d) = E_elec · k + ε_FS · k · d^2 for d < d_o and E_Tx(k, d) = E_elec · k + ε_MP · k · d^4 for d ≥ d_o, and the reception energy is calculated by E_Rx(k) = E_elec · k, where E_elec, ε_FS, ε_MP, and d_o are given above.

First, the optimum number of CHs for these networks is studied. In the experiments, 100 random 80-node sensor networks are generated, and each node begins with 1 J of energy. The sensor positions and the BS position follow the same settings used in [1-5,9,18,19]. During the sensor operation, every sensor node sends data periodically to the BS. A round of data transmission is defined as the duration of time needed to send a unit of data (4000 bits) to the BS. Each round consists of a setup phase and a transmission phase. In the setup phase, the network is divided into clusters and the nodes negotiate to nominate the CHs for the round. In the LEACH_C and AVERA protocols, each node sends an energy level message to the BS (20 bits); the BS decides the CHs for the round and sends a broadcast message (200 bits) about the decision to all sensor nodes. In the transmission phase, each elected CH collects the data from the nodes in its cluster and forwards the data to the BS. After each round, every sensor node loses an amount of energy for the data transmission in the round; the amount depends on the distance from the sensor to its CH or to the BS. The lifetime of the sensor network is measured as the total number of rounds of sending data to the BS until the first node runs out of energy. LEACH, LEACH_C, and AVERA are run over the 100 network topologies while the number of CHs is varied from 1 to 8, and the system lifetime and the energy dissipation per round are recorded for each number of CHs. Figure 11 shows that the energy dissipation per round is minimized for LEACH, LEACH_C, and AVERA at 3 to 4 CHs. This result agrees well with the analytical model and with the results presented in [1,2,17].
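The following is a minimal sketch of the radio energy model just described and of the AVERA setup phase defined above. E_elec and d_o are taken from the text, while the amplifier coefficients ε_FS and ε_MP are the values commonly used in the LEACH literature and are assumptions here, as are the function names and the fallback rule when too few nodes exceed the average energy.

import random

E_ELEC = 50e-9        # J/bit (given in the text)
D_O    = 82.6         # m (given in the text)
EPS_FS = 10e-12       # J/bit/m^2, assumed free-space amplifier coefficient
EPS_MP = 0.0013e-12   # J/bit/m^4, assumed multipath amplifier coefficient

def e_tx(bits, d):
    # Transmission energy of a `bits`-bit message over distance d.
    if d < D_O:
        return E_ELEC * bits + EPS_FS * bits * d ** 2
    return E_ELEC * bits + EPS_MP * bits * d ** 4

def e_rx(bits):
    # Reception energy of a `bits`-bit message.
    return E_ELEC * bits

def avera_select_chs(energies, k):
    # AVERA setup phase: pick k CHs at random from nodes above the average energy.
    avg = sum(energies) / len(energies)
    eligible = [i for i, e in enumerate(energies) if e > avg]
    if len(eligible) < k:                  # fallback not specified in the text
        eligible = list(range(len(energies)))
    return random.sample(eligible, k)

# Example: energy to send one 4000-bit data unit over 60 m, and one round's CHs
print(e_tx(4000, 60.0), e_rx(4000))
print(avera_select_chs([1.0, 0.4, 0.9, 0.7, 0.2, 0.8], k=3))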
Validation of the analytical model

In this section, the performance of LEACH, LEACH_C, and AVERA is compared against the optimum solution from the analytical model. The number of CHs is set to three in all methods. All methods are run over the above 100 random 80-node network topologies, and the ratio between the lifetime of each of the three protocols and the optimum is recorded. For the calculation of the optimum solution, we use the GNU Linear Programming Kit (GLPK) and its MIP solver. GLPK is a free GNU software package for solving large-scale LP and MIP problems [8]. GLPK provides two methods to solve LP and MIP problems: (1) create a problem in the C programming language that calls GLPK API routines, or (2) create a problem in a text editor and use the standalone LP/MIP solver to solve it. We use method 2 to calculate the optimum solution. Figure 12 shows that both AVERA and LEACH_C perform very closely to the optimum solution, while LEACH achieves only about 70% of the optimum. The computation time of all three protocols is also recorded over the 100 network topologies: 1.6, 2.5, and 173.2 s for LEACH, AVERA, and LEACH_C, respectively. This shows that the new protocol AVERA provides a reasonably good operation time while requiring far less processing from the BS.

Conclusion

This article has presented several energy-efficient cluster-based routing protocols. In sensor networks, the BS only requires a summary of the events occurring in the environment, rather than each sensor node's individual data. To exploit this property, sensor nodes are grouped into small clusters so that the CH nodes can collect the data of all nodes in their cluster and aggregate them into a single message before sending that message to the BS. Since all sensor nodes are energy-limited, the CH role should be reallocated among all nodes in the network to extend the network lifetime. The determination of adaptive clusters is not an easy problem. We start by analyzing simple networks with one CH in order to obtain an effective solution, and the model is then extended to networks with multiple CHs. Heuristic algorithms are also proposed to solve the problem. Simulation results show that the LEACH solution performs quite far from the optimum, as it does not directly take the remaining energy of the sensor nodes into account, while both the AVERA and LEACH_C solutions perform very closely to the optimum solution. Note that the computation time of AVERA is also only about 1.4% of that of LEACH_C.
Growth of professional noticing of mathematics teachers: a comparative study of Chinese teachers noticing with different teaching experiences The last decade has witnessed increasing interest in the study of teacher noticing in mathematics education research; however, little is known about the growth of teacher noticing and how it is influenced by teaching practice. Departing from the expert-novice-paradigm, in this paper we address this research gap by a cross-sectional study that investigates how Chinese mathematics teachers’ noticing is affected by their developmental stage, measured by the length of their teaching experience. The study included 152 pre-service teachers at the end of their initial teacher education, 162 early career teachers with one to five years’ teaching experience, and 123 experienced mathematics teachers with more than 15 years’ teaching experience, who participated in a video-based assessment of their noticing competency conceptualized by the sub-facets of perception, interpretation, and decision-making. Our findings indicate a nearly linear growth in teacher noticing among Chinese mathematics teachers, with significant differences identified between pre-service and experienced teachers and only small differences between pre-service and early career teachers. Analyses using the method of Differential Item Functioning (DIF) further suggest that pre-service and early career teachers demonstrated strengths in aspects more related to reform-oriented or Westernized approaches to mathematics teaching, such as working with open-ended tasks, identifying characteristics of cooperative learning, and mathematical modeling tasks. By contrast, experienced teachers demonstrated strengths in perceiving students’ thinking, evaluating teachers’ behavior, and analyzing students’ mathematical thinking. Our findings further highlight that the three sub-facets of teacher noticing develop differently within the three participating groups of teachers. These findings suggest that teaching experience acts as one influential factor in the development of teacher noticing in the Chinese context. Introduction The past decade has witnessed increasing interest in teachers' professional noticing, particularly in mathematics education research (e.g., Schack, Fisher, and Wilhelm 2017;Sherin, Jacobs, and Philipp 2011a, b). Teachers' noticing is generally accepted as a critical component of mathematics teaching expertise and an important factor in the improvement of teaching effectiveness generally, and students' mathematical achievements in particular (Sherin et al. 2011a, b). Therefore, a clear understanding of the development process of teacher noticing seems essential not only for understanding the construct of teacher noticing but also for the effective understanding and promotion of the growth of teacher noticing. Hitherto, little empirical evidence has been available to answer the question of "What trajectories of development related to noticing expertise exist for prospective and practicing teachers?" posed by Schoenfeld (2011, p. 234) nearly ten years ago. The study by Jacobs, Lamb, and Philipp (2010) is one of a few that used a cross-sectional design to compare the similarities and differences of teacher noticing among four groups of mathematics teachers with different teaching experience and-especially-professional development experience. Although they observed variability within each teacher group, consistent patterns and a monotonic growth were evident across the four groups. 
Schoenfeld's (2011) question suggests that the patterns of strengths and weaknesses of teacher noticing should be investigated in relation to the level of teaching experience in classrooms. Therefore, the growth of teachers' noticing skills seems to be "worthy of attention" (Jacobs et al. 2010, p. 193) and studies investigating patterns are overdue. However, most studies on the development of teacher noticing are intervention studies aimed at identifying effective ways to enhance teacher noticing. Mainly video-based, the pre-and post-test results generally suggested clear improvement of teacher noticing (Santagata et al., 2021, this issue). The intervention compounds various effects that must be considered, such as increased teaching experience, differences in the observed videos, and a learning effect from seeing the same videos several times (Simpson, Vondrová, and Žalská, 2018). Moreover, such studies offer only limited insight into the possible development trajectories of teacher noticing since many interventions took place during the pre-service training period and do not reflect the impact of teaching practice on the growth of teacher noticing without specific interventions. However, teachers' professional growth in general, and the development of teacher expertise in particular, is a "complex and continuing process" (Wilkie 2019, p. 96). Therefore, although teaching experience alone is insufficient for the development of teacher noticing (Jacobs et al. 2010;Simpson et al. 2018), teaching experience is undoubtedly necessary for the growth of teachers' noticing. Teachers, especially in-service teachers, will continue learning to teach mainly within a specific school community through informal learning opportunities, such as mentoring, observing other teachers' teaching, and peer group discussion (Kyndt, Gijbels, Grosemans, and Donche, 2016;Patrick 2010). Therefore, studies within a more authentic context are necessary to obtain a more comprehensive picture of the development of teacher noticing. Teacher noticing has been described as "socially and culturally constructed" (Louie 2018, p. 61), referring to approaches that frame learning to teach mathematics as a cultural activity. Therefore, teacher noticing development should be more thoroughly understood as a process that takes place within a particular sociocultural context. Hitherto, however, most studies of mathematics teacher noticing in general, and of teacher noticing growth in particular, have been conducted in Western contexts. Little empirical evidence is available from non-Western countries, for example, from China as an influential East Asian country with a specific mathematics teacher education culture. In China, pre-service teachers are trained at specific teacher training institutions, called normal universities, with a strong focus on subject matter knowledge. Normal universities generally provide four-year bachelor programs for pre-service teachers. Around 60% of pre-service teacher curriculum hours are devoted to mathematics subject courses, such as advanced algebra, analytical geometry, functional analysis, abstract algebra, and topology (Paine, Fang, and Wilson 2003;Li, Huang and Shin, 2008). Owing to the lack of pedagogical content knowledge and teaching skills, newly graduated teachers in China are not regarded as qualified but as "semi-finished products" (Paine et al. 2003, p. 216), who will need to learn and continue to develop their teaching skills when they enter teaching positions in schools. 
The most common way for in-service teachers to develop their teaching skills in China is through school-based professional activities, such as compulsory mentoring programs, open lessons, and exemplary lessons (see Li and Huang, 2018). Departing from these manifold research gaps, in this study we seek to answer the following research questions: (1) How do teachers' noticing skills develop globally in relation to their teaching experience? We addressed this question by comparing three cohorts of Chinese teachers with different degrees of teaching experience (preservice teachers, early career teachers, and experienced teachers). For this purpose, we examined whether a growth in teacher noticing in relation to the degree of teaching experience can be identified and how it can be characterized. At a more fine-grained level, in the study we aimed to answer the second research question: (2) Is it possible to identify strengths and weaknesses of these different cohorts of secondary school mathematics teachers concerning different sub-facets of noticing and different aspects of noticing, and if yes, which of these are present? As the study included an East Asian context, we examined whether the same trend of growth can be identified as was reported in the few already existing studies carried out in Western contexts. Mathematics teachers' noticing and our own theoretical approach Owing to the complexity of teaching and the different areas of focus in empirical studies, teacher professional noticing is defined in various ways, and different aspects of teaching practice are considered (Sherin et al. 2011a, b). For example, Sherin and others defined teacher professional noticing as comprised of the following three components: (a) identifying what is important in an instructional setting; (b) making connections between specific events and broader principles of teaching and learning; and (c) using knowledge about the context to reason about a situation (Sherin 2007;van Es 2005, 2009). Jacobs et al. (2010) subsequently refined the definition and proposed that teacher professional noticing includes (a) attending to students' strategies, (b) interpreting students ' understanding, and (c) deciding how to respond based on students' understanding. To date, these two definitions of teacher noticing are the most widely cited conceptualizations in the field of teacher noticing. Sherin et al. (2011a, b, p. 5) noted the consensus within research that this construct can be characterized as consisting of at least two components, namely, attending to important classroom incidents, and making sense of events in an instructional setting including interpreting and reasoning. Depending on the conceptualization of the last component, it may include instructional responses by the teachers. These two components were considered as the two phases of teachers' noticing, and were described as consequential, interrelated, and cyclical. The current study TEDS-East-West is embedded in the TEDS-M (Teacher Education and Development Study in Mathematics) research program, specifically the TEDS-Follow-up study. Overall, TEDS-East-West aims to compare the influence of mathematics teachers' professional competence on instructional quality and students' mathematics achievement between China and Germany. The TEDS-FU study extended the cognitive approach to teacher competence (namely teacher knowledge and beliefs) to include situation-specific competence facets referring to the approach of 'noticing' (Kaiser et al. 2015;. 
In the extended theoretical framework developed within the TEDS-M research program, teacher noticing was defined as consisting of the following three facets: (a) perceiving particular events in an instructional setting; (b) interpreting the perceived activities in the instructional setting; and (c) decision-making, either anticipating responses to students' activities or proposing alternative instructional strategies (Kaiser et al. 2015, 2017). These three sub-facets of noticing, namely perception, interpretation, and decision-making, were called the 'PID model'. This approach refers to the expert-novice paradigm, in which the construct perception was widely used to describe the first phase of teachers' actions in an instructional setting, restricting the construct to observable, discernable incidents (Berliner 2001; Carter et al. 1988). The construct of attending was deliberately not used, although it is the more usual terminology in the noticing discourse, as this construct is already strongly connected to interpreting what is important in the classroom (Sherin et al. 2011a, b). In order to allow an empirical separation of the different facets of teachers' noticing, our conceptualization refers to the construct of perception. In contrast to other frameworks, this definition not only requires teachers to perceive and interpret particular events but also to make decisions and develop reasonable proposals for the continuation of classroom activities. This definition of teachers' noticing is not restricted to noticing of students' mathematical thinking, as in most of the earlier frameworks. This understanding of noticing comprises a broad understanding of the whole classroom situation and the aspects important for the quality of mathematics teaching, such as the design of mathematical teaching and learning processes, the potential for students' cognitive activation, individual learning support, and classroom management (Yang et al. in press). Overall, this model differentiates teacher professional noticing into two sub-domains: noticing based on general pedagogy (P_PID) and noticing based on mathematics pedagogy (M_PID). As the results reported in the present study were developed within this research program in an East Asian context, the theoretical framework used in studies concerning this conceptualization of teacher noticing was employed in this study.

Differences in teacher noticing in relation to teaching experience

In recent years, many researchers have investigated the development of teacher noticing. The most popular approach is to adopt the expert-novice paradigm by comparing teachers at different developmental stages or levels of expertise (e.g., Jacobs et al. 2010). Beginning in the 1980s, within general research on expertise (Chi et al. 1981), studies compared the differences in knowledge and classroom teaching behaviors between novice, early career, proficient, and expert teachers (e.g., Berliner 2001). Although the main focus of these studies was not teacher noticing, they "can be regarded as precursors" (Lachner, Jarodzka, and Nuckles 2016, p. 198) to current studies on teacher noticing, since many aspects of teacher behaviors are essentially related to teacher noticing. In earlier studies on teacher expertise, novice teachers were found to attend mainly to surface-level events, such as student behavior and disciplinary issues, and were sometimes able to attend to only one event while ignoring others (Berliner 2001; Leinhardt, Putnam, Stein, and Baxter 1991; Tsui 2003).
By contrast, expert teachers were better able to read critical cues from students and attend to classroom teaching events swiftly, holistically, and accurately. Expert teachers exhibited greater ability to interpret the attended events in greater detail and with more insight. For example, novice teachers were unable to provide in-depth explanations or struggled to develop accurate interpretations of what they noted (Carter et al. 1988). By contrast, expert teachers could relate teaching principles and concepts to the noticed events and could therefore make knowledge-based interpretations (Berliner 2001; Tsui 2003). Furthermore, expert teachers were better able to make ongoing adjustments to their teaching or make fast decisions (e.g., Livingston and Borko 1989). Indeed, teaching with flexibility has been widely proposed as a characteristic of expert teachers (e.g., Berliner 2001, 2004), which implies that expert teachers will make necessary and proper decisions during teaching. More recently, empirical studies have also compared the differences in teacher noticing between expert and novice teachers based on the theoretical frameworks of teacher noticing mentioned above. For example, Huang and Li (2012) found in their study that both expert and novice teachers attended to the development of students' mathematics knowledge and mathematical thinking ability, but expert teachers paid greater attention than novice teachers to developing higher-order mathematical thinking and mathematics knowledge, with less attention to teachers' direct guidance. Wolff, Jarodzka, van den Bogert, and Boshuizen (2016), comparing novice and expert teachers from diverse subjects on the basis of eye-tracking technology, found that expert teachers' perception was more knowledge-driven and focused; furthermore, expert teachers were more likely to attend to critical cues and interpret them in relation to relevant classroom management issues. By contrast, novice teachers' perception was found to be more image-driven and scattered; they described more superficially salient cues. A cross-sectional study by Jacobs et al. (2010) that compared four cohorts of mathematics teachers with different degrees of teaching experience identified a significant monotonic trend for all three facets of noticing skills (attending, interpreting, and decision-making). They found that pre-service mathematics teachers struggled with all three noticing facets; early career teachers showed evidence of attending to students' strategies and interpreting their understanding; almost all advanced teachers and teacher leaders demonstrated evidence of attending to students' strategies and some evidence of interpreting students' understanding; and most emerging teacher leaders demonstrated superior expertise in deciding how to respond.

Methodology

The study reported in this paper was conducted between 2016 and 2019 within the frame of the TEDS-East-West study embedded in the TEDS-M research program.

Participants

The present study's sample consisted of the following three cohorts: 1) 152 pre-service teachers close to completing their four years' pre-service teacher education at undergraduate level; 2) 162 early career teachers with one to five years' teaching experience; and 3) 123 experienced teachers with over 15 years' teaching experience at junior secondary school level (grades 7-9). The 152 pre-service mathematics teachers were recruited from two normal universities in China.
Fifty-one percent completed their teaching practicum in junior or lower secondary schools (Grades 7-9) and the others completed teaching practicum in senior or upper secondary school (Grades 10-12). Overall, all had four months of practical experience of school teaching lasting one semester, and all were trained to teach secondary school mathematics after graduation. The 162 early career teachers were chosen from 18 provinces in China. Among them, 126 were part-time Master's degree students in mathematics education with at least one year of teaching experience in junior or senior secondary school. All were trained in the mathematics department at a normal university to teach junior (lower) and senior (upper) secondary school mathematics. The other 36 early career teachers with a first degree in mathematics education were chosen from the 18 junior secondary schools in which the highly experienced teachers were working. Among the 162 early career teachers, 59% were female and 17% taught in rural schools. The 123 highly experienced junior secondary school mathematics teachers with teaching experience ranging from 15 to 36 years were chosen from different school types and school locations (rural and urban). In China, junior secondary school teachers teach only one subject, in this case mathematics. These teachers were chosen from 18 junior secondary schools in the same district in Chongqing, Western China's largest administrative area. Among them, 48% were female, and 35% taught in rural schools when the assessment was carried out. Assessment instruments The instruments used in the present study to test participants' noticing skills were adapted from the instruments designed in the TEDS-Follow-Up and TEDS-Instruct/Validate projects within the TEDS-M research program (for a detailed description of the adaptation and validation process, see Blömeke 2018, 2019). The original three video-vignettes were developed within the TEDS-M research program to assess German mathematics teachers' professional noticing. As the instrument has already been described in other publications, we refrain from describing item examples and refer to descriptions in various publications (e.g., Kaiser et al. 2015). The video assessment examined teachers' mathematics instruction-related noticing (M_PID) and general pedagogical noticing (P_PID) and distinguished the following three facets: perception, interpretation, and decision-making. Both the P_PID items and M_PID items required teachers to notice mathematics classroom teaching holistically-that is, items related to almost all aspects of classroom teaching. Critical incidents in mathematics teaching and a range of typical teaching phases in a mathematics lesson were covered in the three video-vignettes, which explored the topics of functions, volumes and surfaces of geometrical solids. The three video-vignettes were based on scripted plots rather than episodes from real mathematics classroom teaching. Each video-vignette lasted around four minutes to provide participants with an overview of the whole lesson. Background information about the class and lessons prior to the lesson shown was also provided to help participants achieve a more comprehensive understanding of the teaching. After watching each of the three videos, the mathematics teachers were asked to answer several items related to each of the videos within 15-20 min (around 60 min altogether). In total, there were 38 items (22 P_PID and 16 M_PID, see Fig. 
1 for a sample item) based on Likert scales (four categories ranging, for example, from 'wholly correct'" to 'incorrect') to assess teachers' perception. Thirty-six constructed-response items (18 P_PID and 18 M_PID) were used to assess the teachers' interpretation and decision abilities. An expert rating was implemented in developing the test instrument to decide which answer could be regarded as correct with respect to the rating scales. A coding manual was developed and piloted before it was used in the German projects to improve its reliability and validity. Various different approaches, such as curricular analyses of the mathematical content and comprehensive expert workshops, were employed to ensure the instruments' content validity (Hoth et al. 2016). To adapt the instruments for the Chinese context, the instruments were translated into Chinese and were checked by two mathematics education researchers and four junior secondary school mathematics teachers. Necessary modifications were made to several expressions used in the instrument. Several items closely related to the German mathematics curriculum and heterogeneity or multiple cultural backgrounds of students were deleted since they did not match the situation in China. Three Chinese junior secondary school mathematics teachers and their students retook the three video-vignettes and performed exactly as their German counterparts did. To further validate the instrument of teacher noticing in a Chinese context, both qualitative and quantitative methods, including content validity, 'elemental validity' (Hill, Dean, and Goffney 2007;Kane 2001), and construct validity, were employed to evaluate the psychometric properties of M_PID and P_PID, respectively (see Yang et al. 2018). In the video-vignette the working processes of three cooperating pairs have been observed more closely. These working processes are to be examined from two perspectives: (a) mathematics education and (b) pedagogics. (a) Mathematics education perspective In each of the three approaches the task is represented and solved mathematically in a specific way. Please describe (in note form) the essential aspects of the approaches in a contrasting mode from a mathematics education view. Please name -if possible -the corresponding technical terms. (b) Pedagogics perspective Please describe (in note form) for each of the three pairs in a contrasting mode the essential aspects of the way the two students cooperated in their work. Scaling and data analysis The data analysis comprised the following steps. First, the open response items were coded according to the coding manual's rubrics. Independent raters coded 56 of the questionnaires; good Cohen's Kappa values were reached (k > 0.79 and K average = 0.86). For all open response items, items with no response or incorrect responses were scored 0, and each correct answer was scored 1 (for items with several sub-items, the sum of the correct answers was calculated). After completion of coding, the relative item difficulties for a one-parameter (Rasch model) item response theory (IRT) model were calculated separately on P_PID and M_PID. Items with extreme difficulty were removed from the final analysis because they showed weak discrimination and did not substantially contribute to the measurement of the construct (Bond and Fox 2007). The internal consistency of the remaining items in P_PID and M_PID were estimated using Cronbach's alpha reliability coefficient: these ranged from 0.712 to 0.789. 
Subsequently, a multi-group graded response model was applied for both the P_PID and M_PID aspects of their subfacets (perception, interpretation and decision-making). Owing to the low number of items in the decision-making sub-facet, decision-making and interpretation were merged, supported by the strong relationship between these sub-facets. The calibration of the a-parameters (item discriminations) and b-parameters (item difficulties) was conducted based on three teacher groups (concurrent calibration). Thus, item parameters were constrained to be equal in all groups, ensuring the same metric in the groups. The person parameters (maximum likelihood estimates) were then computed. The overall P_PID and M_PID and the four sub-facets person parameters of the three groups were transformed to a scale with an average score of 500 and standard deviation of 100. One-way ANOVAs were performed separately to examine the differences among the three groups of teachers, and post hoc (Scheffé) significance tests were further conducted to examine the differences between each of the two groups of teachers. Furthermore, three pairwise differential item functioning (DIF) analyses were conducted with the main aim of identifying which items were typically in favor of which group of teachers. DIF is typically used to investigate item bias (Holland and Wainer 1993): an item is labeled as exhibiting DIF when different groups of test-takers have different probabilities of answering the item correctly after controlling for their abilities on the construct being measured. DIF analyses have also been used to identify potential strengths and weaknesses of specific groups as a complement to other methods in examining cultural influences in cross-cultural comparative studies (e.g., Blömeke, Suhl, and Döhrmann 2013;Mesic 2012;Yang et al. 2019) or to examine the influences affecting learning in specific subjects (Gess, Wessels and Blömeke 2017). In the present study, DIF analyses were performed to identify possible strengths and weaknesses of a specific group of teachers at the item level due to differences in their teaching experience defined by group samples (Sect. 3.1). DIF was detected using manifest logistic regressions (Swaminathan and Rogers 1990), which can detect uniform and non-uniform DIF (Hambleton et al. 1991). An item showing uniform DIF indicated that a specific group of teachers outperformed another group systematically throughout all the ability levels. If an item showed non-uniform DIF, the teaching experience factor significantly impacts only teachers with either higher or lower competence. The magnitude of DIF in the present study was further classified into three levels according to the thresholds proposed by Jodoin and Gierl (2001): ΔR 2 ≤ 0.035 is negligible, 0.035 < ΔR 2 ≤ 0.07 is moderate, and 0.07 < ΔR 2 is large. After the items with DIF had been detected and classified, content analysis of the items with DIF was further conducted to identify explanations. Overall performance differences on P_PID and M_PID The overall teacher noticing performance results from the aspects of P_PID and M_PID and the two sub-facets under each for the three teacher groups are presented in Table 1. The overall mean of the three groups was transformed to 500 test points with a standard deviation of 100 test points. 
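As a concrete illustration of the DIF screening described above, the following is a minimal sketch of uniform and non-uniform DIF detection with logistic regression and a Nagelkerke pseudo-R^2 effect size, assuming dichotomously scored items; the column names, the pseudo-R^2 variant, and the simulated data are illustrative assumptions rather than the study's exact implementation.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def nagelkerke_r2(fit, n):
    # One common pseudo-R^2 used to express the DIF effect size.
    cox_snell = 1 - np.exp(2 * (fit.llnull - fit.llf) / n)
    return cox_snell / (1 - np.exp(2 * fit.llnull / n))

def dif_for_item(df):
    # df columns: score (0/1 item score), theta (ability estimate), group (0/1).
    n = len(df)
    m0 = smf.logit("score ~ theta", df).fit(disp=0)
    m1 = smf.logit("score ~ theta + group", df).fit(disp=0)
    m2 = smf.logit("score ~ theta + group + theta:group", df).fit(disp=0)
    uniform = nagelkerke_r2(m1, n) - nagelkerke_r2(m0, n)       # uniform DIF
    nonuniform = nagelkerke_r2(m2, n) - nagelkerke_r2(m1, n)    # non-uniform DIF
    return uniform, nonuniform

def classify(delta_r2):
    # Thresholds of Jodoin and Gierl (2001) as cited in the text.
    if delta_r2 <= 0.035:
        return "negligible"
    return "moderate" if delta_r2 <= 0.07 else "large"

# Small simulated example with built-in uniform DIF favouring group 1
rng = np.random.default_rng(0)
demo = pd.DataFrame({"theta": rng.normal(size=400),
                     "group": rng.integers(0, 2, size=400)})
logit_p = demo.theta + 0.6 * demo.group
demo["score"] = (rng.random(400) < 1 / (1 + np.exp(-logit_p))).astype(int)
u, nu = dif_for_item(demo)
print(classify(u), classify(nu))

In the study's setting, group would code the pair of teacher cohorts being contrasted and theta the person parameter from the IRT scaling, with the analysis repeated item by item.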
As Table 1 indicates, an initial pattern can be identified: a monotonic increase in the numerical values of mean scores of both the overall P_PID and M_PID and their sub-facets, which can be interpreted as almost linear growth of teacher noticing amongst the three groups of teachers. Aside from the perception sub-facet of P_PID, one-way ANOVA results showed significant differences among the three teacher groups for all other aspects and sub-facets. However, as Table 1 demonstrates, the post hoc analysis results showed significant differences only between preservice and experienced teachers and early career and experienced teachers. No significant differences could be identified between pre-service and early career teachers, indicating no significant development between pre-service and early career teachers. Indeed, as Table 1 indicates, pre-service teachers' achievement in the (sub)facets related to P_PID was only about 0.15 standard deviations lower than that of early career teachers. The differences in the mean scores related to M_PID (sub) facets were even smaller (less than 0.1 standard deviations). Another pattern can be identified, namely, that the difference in the interpretation and decision-making sub-facets among the three groups of teachers is much greater than the difference in perception. As shown in Table 1, experienced teachers outperformed pre-service and early career teachers by 0.92 to 1.5 standard deviations in the sub-facets of interpretation and decision-making in P_PID and M_PID. However, in the sub-facet of perception, the differences were much smaller. This result suggests that it is relatively difficult for pre-service and early career teachers to develop noticing skills related to interpretation and decision-making. Specific strengths and weaknesses in P_PID as indicated by DIF To identify possible strengths and weaknesses in the P_PID sub-facets of the three groups of teachers, three pairs of DIF analyses were carried out separately between each of the two teacher groups. Table 2 summarizes the distribution of the uniform DIF results considering the two facets of P_PID (perception and interpretation/decision-making) between two of the three teacher groups since no items showed nonuniform DIF. As Table 2 demonstrates, the low number of DIF differences between pre-service and early career teachers can be identified as a clear pattern, which confirms the overall results. We found that more items showed DIF between pre-service and experienced teachers and early career and experienced teachers. By contrast, only three items showed moderate DIF between pre-service teachers and early career teachers. Between the early career and experienced teacher groups, more items were found to favor early career teachers on the sub-facet of perception, and by contrast, more items were found to favor experienced teachers on the sub-facet of interpretation and decision-making. However, between the pre-service and experienced teacher groups, a similar number of items were found to favor both groups on the sub-facet of perception, and relatively more items were again found to favor experienced teachers. Content analysis was further conducted on the items with DIF for the sub-facet of perception. 
For items with DIF between pre-service and early career teachers, the item favoring pre-service teachers was related to classroom management ("it takes a long time for the students to calm down and the lesson to start") and the item favoring early career teachers was related to students' behavior ("most students take an active part in the lesson"). For items with DIF between early career and experienced teachers, content analysis revealed that the main focus of the items favoring early career teachers was to investigate teachers' perceptions related to classroom management (e.g., "it takes a long time for the students to calm down and the lesson to start") and students' behavior, focusing particularly on students' participation in discussion (e.g., "the students take part in the lesson discussion"). The only item favoring experienced teachers was related to the accuracy of teachers' instruction ("the teachers' instruction is very precise"). For items with DIF between pre-service and experienced teachers, the items favoring pre-service teachers were mainly related to the teachers' behavior ("the teacher presents the central question of the lesson orally and in writing"). By contrast, the four items favoring experienced teachers were mainly related to students' thinking (e.g., "the teacher ensures that students have time for individual thinking"). Concerning the sub-facet of interpretation/decision-making we found that for items with DIF between pre-service and early career teachers, only one item favored early career teachers, requiring teachers to suggest methods to make their teaching less teacher-centered. For items with DIF between early career and experienced mathematics teachers, the same item was again found to favor early career teachers. The three items favoring experienced teachers contained more teaching-related reflections, such as evaluating why the teachers' comment is not helpful for developing students' cognitive activities or suggesting teaching methods to use instruction time efficiently, and identifying the teaching phase and modifying it accordingly to better address the differences in individual students' mathematical abilities. Interestingly, for items with DIF between pre-service and experienced teachers, the same item (requiring teachers to suggest teaching methods to make teaching less teacher-centered) was found to favor pre-service mathematics teachers, and the same three items were found to favor experienced teachers. Overall, the DIF results on the P_PID items indicated no significant difference between pre-service and early career teachers. However, for the sub-facet of perception, pre-service mathematics teachers were found to be stronger than experienced teachers in aspects related to teachers' behavior. Early career teachers exhibited strengths in paying attention to students' roles, particularly students' opportunities for participation in discussions. By contrast, experienced teachers exhibited strengths in items related to teaching accuracy and students' thinking. In addition, for the noticing sub-facet of interpretation/ decision-making, pre-service and early career teachers exhibited strengths in suggesting methods to make teaching less teacher-centered, a more recent approach to teaching inspired by Western approaches. By contrast, experienced teachers performed better in evaluating teachers' behavior and more traditional lesson organization, either by suggesting methods to effectively use lesson time or by analyzing individual students' answers. 
Specific strengths and weaknesses in M_PID as indicated by DIF To identify which items on the sub-facets of M_PID typically favored which specific group of teachers, three separate pairs of DIF analyses were conducted. Again, no items showed non-uniform DIF. Table 3 summarizes the distribution of the DIF results considering the two facets of M_PID (perception and interpretation/decision-making) between each pair of the three groups of teachers. As Table 3 indicates, for the sub-facet perception referring to M_PID, no items showed DIF between pre-service and early career mathematics teachers. Moreover, when compared with experienced teachers, only one item was found to favor early career or pre-service teachers, with relatively more items favoring experienced teachers. For the sub-facet of interpretation and decision-making referring to M_PID, generally speaking, between each pair of the three groups of teachers, similar numbers of items were found to favor each of them. However, as Table 3 indicates, more items demonstrated DIF between pre-service and experienced teachers and early career and experienced teachers. Content analysis was further conducted on all items demonstrating DIF. For the sub-facet of perception of M_PID, the one item that was found to favor early career or preservice teachers was the same item that required the teachers to judge whether the task had the characteristics of an openended task. The other items favoring experienced teachers required teachers to evaluate the correctness of a student's statement, whether a specific topic such as function and space and form is important during teaching, or whether the task shown in the video is already accessible for lower grade students, or required to determine whether the statement of one individual student was helpful in solving the main task shown in the video-vignette. For the sub-facet of interpretation/decision-making referring to M_PID, for items with DIF between pre-service and early career teachers, the only item favoring pre-service teachers required teachers to use a term with a meaning similar to the meaning of 'enactive' to describe the critical characteristics of students' group work. The two items favoring early career teachers both related to the aspect of 'reality'; for example, the teacher mentioned that the topic closely relates to reality and modified the lesson task to make it more realistic. For items with DIF between early career and experienced teachers and pre-service and experienced teachers, the items were all found to relate to the same tasks. For the items favoring early career teachers or pre-service teachers, four are from a cooperation task requiring the teachers to highlight the critical characteristics of students' group work in relation to mathematics education, more specifically distinguishing the different kinds of representation in mathematics, such as enactive, iconic, or symbolic representation. The other items required the teachers to modify the lesson problem to make it more realistic or foster students' modeling competence (the latter only favoring pre-service teachers). Two of the items favoring experienced teachers were related to the same problem, which required the teachers to provide three indicators from the answer given by one student shown in the video that she solved the problem using a purely algorithmic approach without deeper understanding. 
Another item required the teachers to identify the main difference between a student's statement and his classmates' statements from a mathematical perspective (i.e., to correctly answer the item, the teachers needed to notice that the student was deducing or making a linear assumption). The last two items also related to the cooperation task, which required the teachers to identify the different approaches ('enactive' and 'symbolic') to describe the critical characteristics of students' work. Overall, the content analysis of the characteristics of the items with DIF on the aspect of M_PID emphasizes that, for the sub-facet of perception, no major differences exist between pre-service and early career teachers. Compared with experienced teachers, both pre-service and early career teachers demonstrated weaker professionalism on the aspects related to evaluating the correctness and usefulness of a student's statement and making a judgment concerning a specific aspect of mathematical content. By contrast, the early career and pre-service teachers were found to demonstrate stronger professional noticing on judging whether a task was open-ended. For the sub-facet of interpretation and decision-making, stronger differences could be identified between pre-service and experienced teachers and between early career and experienced teachers. In general, the pre-service and early career teachers were found to demonstrate stronger professional noticing on modern and more Westernized themes, such as mathematical modeling or cooperative learning. By contrast, the experienced teachers showed professional noticing strengths on aspects related to more abstract topics and the inner nature of mathematics, such as using a specific term to describe students' group work and analyzing students' mathematical thinking.

Discussion

Our main aim in the present study was to compare teacher noticing among three teacher cohorts with different degrees of mathematics teaching experience, namely pre-service, early career, and experienced teachers, to describe the development of teachers' noticing influenced by teaching experience and to identify possible patterns of teacher noticing among teachers at specific developmental stages. The results suggest that, for the two facets of teacher noticing investigated in the present study, an almost linear growth in teacher noticing can be traced as teaching experience increases. However, post hoc analysis demonstrated significant differences only between pre-service and experienced teachers, and between early career and experienced teachers. No significant differences were identified between pre-service and early career teachers for either P_PID or M_PID and their sub-facets. The findings are thus consistent with those of Jacobs et al. (2010), who also identified monotonic trends for all three sub-facets of teacher noticing among teachers with different professional development experiences of students' mathematical thinking. Such findings further suggest that teaching experience indeed acts as a main, though not sufficient, factor in the development of teacher noticing (Schoenfeld, 2011). The linear growth of teacher noticing among the three teacher cohorts and the weak differences between pre-service and early career teachers may first be explained by the tradition of Chinese mathematics teacher education.
As mentioned above, pre-service mathematics teachers have few opportunities to improve their teaching practice skills within initial teacher education and are expected to develop their teaching skills further after they enter teaching positions (Paine et al. 2003). Such traditions hinder newly graduated teachers from the acquisition of necessary mathematics pedagogical content knowledge and teaching practice skills. It is thus understandable that there are no significant differences in teacher noticing between pre-service and early career teachers. However, the school-based professional development culture provides every in-service Chinese teacher with the opportunity to continuously develop his or her practical skills (Han and Paine 2010;Lu, Kaiser, and Leung 2020). Therefore, the existence of linear growth in teacher noticing among Chinese mathematics teachers is understandable as well as the significant differences between pre-service and experienced teachers and early career and experienced teachers. The relationship between knowledge and teacher noticing may also further help to explain the present study's findings. It has hitherto been widely accepted that teacher knowledge has a fundamental impact on teacher noticing (Schoenfeld 2011;König et al. 2014;Yang et al. in press). In terms of teacher knowledge, such as mathematics content knowledge and mathematics pedagogical content knowledge, empirical studies have found that pre-service and early career mathematics teachers performed significantly more poorly than experienced teachers (Han, Ma, and Wu 2016;Kleickmann et al. 2013). However, early career teachers' knowledge remained largely unchanged during their first years of teaching (Blömeke et al. 2015a, b). Therefore, owing to the differences in teacher knowledge foundation among teachers with different teaching experience, it can be expected that they will also perform differently on noticing-related tasks. DIF detection results reveal further interesting differences among the three teacher cohorts. First, several items showed DIF between pre-service and early career teachers. As reported above, only six items in total showed DIF, with four favoring early career teachers and two favoring pre-service teachers. The findings again confirm that relatively little development occurs in terms of teacher noticing between pre-service and early career teachers. However, it is worth highlighting several small differences between these two groups of teachers. Those items favoring early career teachers are mainly related to the perception of students' roles within teaching and learning processes, suggesting teaching methods to make teaching less teacher-centered and modifying instructional tasks. However, the tasks favoring preservice teachers are mainly related to classroom management. These findings suggest that although the development is small, after a couple of years of teaching in the classroom, Chinese early career teachers shift their focus from classroom management to students' roles and instructional methods. Similar findings have been found in non-Chinese contexts. After an intervention period, teachers were also found to attend to salient characteristics of mathematics instruction and students' situations, such as students ' mathematical engagement (e.g., McDuffie et al. 2014;Mitchell and Marin 2015;Stockero, Rupnow and Pascoe 2017). 
Secondly, more items were found to show DIF between pre-service and experienced teachers and between early career and experienced teachers, which again confirms the significant difference between less-experienced and experienced teachers. For the sub-facet perception within P_PID, the items favoring pre-service mathematics teachers were mainly related to teachers' behaviors, the items favoring early career teachers were related to students' discussion opportunities, and the items favoring experienced teachers were related to accuracy of teaching and students' thinking. To a certain degree, these results are consistent with the findings in previous studies. As reviewed above, according to expertise research, pre-service mathematics teachers typically attend to disciplinary issues and teacher moves during teaching rather than the salient characteristics of mathematics instruction (Berliner 2001; McDuffie et al. 2014). Moreover, earlier studies from expertise research found that, compared with teachers at other developmental stages, expert teachers organize learning processes quite differently: expert teachers tend to spend considerable time at the beginning of the school year establishing classroom norms and routines so that they can focus on teaching afterwards (Berliner 2004; Tsui 2003, 2009). As experienced teachers are used to well-established classroom routines, they may pay less attention to classroom management issues such as teaching disruptions but attend more to events closely related to teaching, such as students' errors and thinking. Thirdly, relatively more items concerning the sub-facet interpretation/decision-making were found to favor the experienced teachers, requiring them to use either pedagogy-related or mathematics-related knowledge to interpret students' work and develop corresponding decisions, such as suggesting ways to use lesson time effectively, and analyzing students' mathematical thinking or answers. Similar findings have been reported in earlier research; for example, previous studies found that expert or more experienced teachers were more able to interpret students' understanding based on the mathematical elements used (e.g., Callejo and Zapatera 2017) or showed superior expertise in deciding how to adequately respond to their perception of students' understanding (Jacobs et al. 2010). The differences between the experienced teachers and the pre-service and early career teachers in the sub-facet interpretation/decision-making can be explained by the differences in the knowledge foundation among the three groups of teachers. As mentioned above, experienced mathematics teachers were found to possess a more solid foundation in MCK and MPCK (Han et al. 2011). Recent studies on teacher noticing found that MPCK was more strongly needed during the process of interpreting and responding to students' mathematical understanding (Sánchez-Matamoros, Fernandez and Llinares 2019). Therefore, a profound understanding of mathematics itself and more developed and accessible MPCK allowed the experienced mathematics teachers in the study to identify students' mathematical misconceptions and to propose different ways to deal with these misconceptions, with approaches spanning from graphic to symbolic explanations. Moreover, China's school-based professional development culture also helps to explain the differences.
As mentioned above, Chinese teachers are expected to further develop their teaching skills after they have begun teaching through school-based activities (Han and Paine 2010;Lu et al., 2020). In these activities, experienced teachers will typically take the leading role to help newly graduated teachers devise different lesson plans for the same topic or to comment on or evaluate less-experienced teachers' teaching with the aim of repeatedly modifying lessons to improve teaching quality (Huang, Su and Xu, 2014). Therefore, it was expected that the experienced teachers in this study would exhibit superior skills in modifying lessons and suggesting ways to facilitate students' understanding. The identified differences in teacher noticing related to the sub-facet interpretation/decision-making among the three groups of teachers can be further explained by results from the expertise research, including expert teachers' ability to teach more flexibly and swiftly and to provide more meaningful decisions for further teaching (Berliner 2001, 2004;Livingston and Borko 1989). Since the experienced teachers in the study all had over 15 years' teaching experience, it was expected that many would have developed the expertise to interpret teaching events meaningfully and adjust their teaching flexibly. Aside from the patterns elaborated above, several other findings merit discussion. First, concerning the sub-facet of interpretation/decision-making from P_PID, compared with experienced teachers, pre-service and early career teachers were better able to identify the essential weaknesses or shortcomings in students' group work, as shown in the video-vignettes. Furthermore, for the same sub-facet from the perspective of M_PID, pre-service and early career teachers demonstrated stronger noticing skills concerning aspects related to open-ended tasks or modifying modeling tasks. Moreover, for the sub-facet of perception of P_PID, early career teachers demonstrated specific strength on the aspects related to students' opportunities for discussion. These more unexpected results can be interpreted and understood considering the mathematics education tradition and culture in China. Group work, students' discussion, and mathematical modeling are relatively new topics in China, introduced only in the most recent mathematics curriculum from 2000, which encouraged cooperative learning and mathematical modeling. Previously, mathematics teaching in China could be described as relatively traditional, with dominant teacher talk, routine tasks, and many mathematical exercises (Leung 2001). Consequently, experienced teachers in the study may have lacked the knowledge or experience to organize effective cooperative mathematics learning and carry out mathematical modeling and could therefore neither identify students' weaknesses in group work nor modify modeling tasks. Furthermore, experienced teachers may even doubt the usefulness of group work and modeling tasks in teaching. Recent studies found that teachers' transmissive beliefs hinder them from professionally observing classroom situations (Meschede, Fiebranz, Möller and Steffensky 2017). Finally, the present study's findings further confirm that the three sub-facets of teacher noticing (perception, interpretation, and decision-making) may develop differently and at different paces. More specifically, both the overall and DIF results suggest that perception is better developed at the pre-service and beginning stages of teaching.
The pre-service and early career teachers were better able to perceive events related to classroom management, teacher behavior, and process-oriented mathematical skills such as open-ended tasks and mathematical modeling, which they may have learned during their university study. By contrast, the findings reported above suggest that the sub-facets of interpretation and decision-making are more difficult to develop. This is consistent with findings identified in other contexts (e.g., Callejo and Zapatera 2017;Jacobs et al. 2010). As these authors observe, the sub-facets of interpretation and decision-making, especially those related to mathematics instruction, may need more deliberate practice to achieve a certain level of proficiency. In comparison, the sub-facet of perception may be the easiest to develop, particularly in relation to general pedagogical issues. However, more studies are needed in this respect before secure conclusions can be drawn. Conclusions and limitations Although many previous studies investigated the development of teacher noticing in mathematics education, little empirical evidence has hitherto been available in relation to possible patterns of strengths and weaknesses in noticing for teachers at different developmental stages. The present study's central goal was to close this gap by evaluating the growth of teacher noticing among three cohorts of mathematics teachers with different lengths of teaching experience, within authentic contexts rather than intervention contexts, as in many of the previous studies. As few studies have been conducted in non-Western cultures, this study seemed overdue, as noticing is considered a culturally shaped construct (Louie 2018;Yang et al. 2019). As reported above, the findings in the present study indicate a nearly linear growth of teacher noticing among Chinese mathematics teachers, as was pointed out in the study by Jacobs et al. (2010). However, significant differences were identified only between pre-service and experienced teachers; no significant differences appeared between pre-service and early career teachers. DIF results further suggest that for the noticing sub-facet of perception, pre-service mathematics teachers tended to demonstrate strength in relation to teacher behavior and judging whether a task was open-ended; early career teachers demonstrated particular strength in relation to students' opportunities for discussion; and experienced teachers tended to show strength in aspects related to teaching accuracy, students' thinking, and correctness in students' statements. For the noticing sub-facet of interpretation and decision-making, pre-service and early career teachers exhibited strength in aspects related to more reformed or Western approaches to mathematics teaching, such as identifying critical characteristics of cooperative learning and mathematical modeling-related tasks. By contrast, experienced teachers showed strength in relation to evaluating teachers' behavior and analyzing students' mathematical thinking. The present study's findings also indicate differences in the development rates of the three sub-facets of teacher noticing. Although the present study is one of the few empirical studies hitherto to have used a cross-sectional design in a non-laboratory setting in China to investigate the development process of teacher noticing, its limitations should be mentioned.
First, the experienced teachers were mainly chosen from one administrative area, and the pre-service teachers were mainly chosen from two normal universities in China. Therefore, the samples may not be sufficiently typical and representative to reflect the general situation or diversity of mathematics teacher noticing in China. Future studies should consider teachers from a wider geographical range and include teachers from other school levels, such as primary or senior upper secondary school level. In addition, only three cohorts of teachers with different teaching experiences were included. Further studies are needed, including teachers with a greater variety of teaching experience. Such studies would provide further insightful and meaningful information about teachers' noticing trajectories in mathematics education. Funding Open Access funding enabled and organized by Projekt DEAL. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
11,354
2021-01-20T00:00:00.000
[ "Mathematics", "Education" ]
Failure Characteristics and Mechanism of Multiface Slopes under Earthquake Load Based on PFC Method Understanding the failure mechanism and failure modes of multiface slopes in the Wenchuan earthquake can provide a scientific guideline for the slope seismic design. In this paper, the two-dimensional particle flow code (PFC2D) and shaking table tests are used to study the failure mechanism of multiface slopes. The results show that the failure modes of slopes with different moisture content are different under seismic loads. The failure modes of slopes with the moisture content of 5%, 8%, and 12% are shattering-shallow slip, tension-shear slip, and shattering-collapse slip, respectively. The failure mechanism of slopes with different water content is different. In the initial stage of vibration, the slope with 5% moisture content produces tensile cracks on the upper surface of the slope; local shear slip occurs at the foot of the slope and develops rapidly; however, a tensile failure finally occurs. In the slope with 8% moisture content, local shear cracks first develop and then are connected into the slip plane, leading to the formation of the unstable slope. A fracture network first forms in the slope with 12% moisture content under the shear action; uneven dislocation then occurs in the slope during vibration; the whole instability failure finally occurs. In the case of low moisture content, the tensile crack plays a leading role in the failure of the slope. But the influence of shear failure becomes greater with the increase of the moisture content. Introduction The landslide caused by earthquakes is a common natural disaster in mountain areas [1][2][3]. Thousands of geological disasters were triggered by the Wenchuan earthquake (Ms = 8.0) in western Sichuan, China, most of which (more than 60,000) were landslides [1,4]. There are dozens of large-scale landslides (>10 million m³) and more than 100 landslides with an area of more than 50,000 m² [1,4]. Landslides triggered by earthquakes can easily cause large numbers of casualties and economic losses. Therefore, it is important to study the failure mode and mechanism of slopes under earthquakes, which is of great significance to carry out earthquake defense and disaster reduction [5,6]. Gang et al. [7] studied the dynamic failure modes of the bedding slope and counterbedding slope by the shaking table test. The results show that the main dynamic failure modes of the bedding slope include a vertical tensile crack at the rear of the slope, bedding slide of the strata along the weak intercalation, and rock collapse from the crest of the slope. In contrast, the dynamic failure modes of the counterbedding slope mainly include horizontal and vertical dislocation fissures, weak interlayer extrusion, and breakage at the crest. Liu et al. [8] performed a shaking table test to study the failure mode of slopes with horizontal soft and hard interbeddings under frequent microseisms. The failure mode of the slope is summarized as follows: creep-opening tensile cracks at the shoulder and opening-pressing cracks at the slope bottom, development of secondary joints in the back end, development of secondary joints near the slope surface, shearing in soft layers, slope sliding, and accumulation of blocks at the slope bottom. Li et al. [9] studied qualitatively the failure mechanism of the Hongshiyan landslide and the stability of the remnant slope, combining the on-site investigation with unmanned aerial vehicle (UAV) three-dimensional imaging technologies.
The failure mechanism is summarized as tension-crushing-shattering-sliding. Deng et al. [10] pointed out that the dynamic failure mode of the bedding rock slope with zigzag asperities is mainly characterized by the vertical tensile crack at the slope rear edge and integral slipping of the slope along the bedding surfaces, and the deformation pattern of the sliding surface was mostly determined by undulating angles and normal stress. Hou et al. [11] used a two-dimensional particle flow program (PFC2D) to investigate the dynamic process and hyperactivity mechanism of the loose deposit slope on the Ya'an-Kangding Expressway. The results show that the porosity of the surface slope generally increases with increasing seismic-wave loading time, while the porosity within the slope remains unchanged. Abe et al. [12] simulated and analyzed the dynamic characteristics of the slope model with different inclined weak interlayers by the MPM (material point method). Chang and Tabada [13] used the discrete element model to simulate the Jiufengershan avalanche caused by the earthquake under various assumptions of rock properties, water table height, and boundary shear strength. Tang et al. [14] used the PFC2D model to simulate the motion behavior of landslides caused by earthquakes. The above studies are mainly based on slopes with a single free face. However, some slopes were damaged with multiple free faces in the Wenchuan earthquake [15]. Yang et al. [16] simulated the failure process of a double-sided high and steep slope based on the continuous medium discrete element method (CDEM), combined with shaking table test data. The results show that the stress concentration appears at the top of the sliding mass at first, and then a part of the tension-shear failure points appears, which expands from the top toward the toe of the sliding mass along the structural plane. Finally, the rupture of the toe leads to a landslide. Through model tests, the dynamic response of double-sided slopes under strong earthquakes was studied by Xiao et al. [17]. It is found that the failure of different forms of double-sided slopes under seismic waves is mainly caused by the repeated tension-shear effect and co-shear effect in both directions. Yang et al. [4] studied the failure process of a double-sided slope with high moisture content under seismic load based on shaking table tests. The test results show that the slope failure has undergone a gradual deformation process, and the slope failure mode is a creeping landslide. Through the analysis of present studies, it is shown that the failure mechanism of multiface slopes with different moisture content under earthquakes is not clear. The mechanism of crack formation and evolution during slope failure needs to be studied deeply and carefully. In this paper, the seismic instability characteristics and mechanism of three kinds of multiface slopes with different moisture content will be investigated based on PFC2D and shaking table tests. Test Details To verify the effectiveness of the numerical modeling, a set of shaking table model tests was carried out. A 3 m × 2 m electro-hydraulic servo vibration table was used. The maximum displacement amplitude of the shaking table is 100 mm, and the effective load is 20 t. It can output acceleration within the range of 0.05 g to 1.5 g, and the frequency range is 0.5-100 Hz. The model box used in the test is a self-developed rigid model box with smooth glass on one side, which is 2 m long, 0.8 m wide, and 1.5 m high, as shown in Figure 1.
In this paper, the slope with multiple free faces was simulated by a double-sided slope. In the model box, the double-sided slope is built by layered filling, and the filling height of each layer is 10 cm. In the process of slope filling, the soil is rolled and compacted under constant external force to keep the same porosity in the slope. The final compactness of the model slope is 86.4%, which is medium dense. The height of the double-sided slope is 800 mm, the width of the crest of the slope is 360 mm, and the slope angle is 50°. White sand with vertical zonal distribution is placed on the inner side of the glass wall to observe the deformation of the soil in the test. The test model is shown in Figure 2. To monitor and study the dynamic characteristics and displacement of the slope, 15 acceleration sensors and 7 displacement sensors are arranged in the slope. At the same time, cameras are set on the front and side of the slope to record the whole process of slope failure, as shown in Figure 3. A sine wave was used to simulate the excitation load. The test frequency is controlled at 3 Hz, and the amplitude is increased step by step from 0.1 g to 0.6 g (g is gravity acceleration). The failure of the slope is observed within the loading time of 12 s. If there is no obvious damage, the slope is considered stable under this loading amplitude. Numerical Models of PFC The numerical simulation method of particle flow (Particle Flow Code) is based on the discrete element method proposed by Cundall [18]. This method can be used to study the mechanical properties and behavior of media from a microscopic point of view. Constitutive Model and Parameters. In the two-dimensional PFC model, the particles are represented by rigid disks, and these discrete particles are subjected to force only at the contacts. When the force acting on the contact point is larger than the contact strength, these particles can be separated from each other, making the model object deform and displace. The force and motion of the particles follow the basic principles of Newton's laws. The constitutive relation of the soil material can be realized by the microcontact and bonding mode between particles. For homogeneous geotechnical materials, a large number of studies have shown that it is more reasonable to use the parallel bonding model [11,19,20]. In this regard, the parallel bond model is adopted in this paper, under which the particles have normal strength, tangential strength, normal stiffness, and tangential stiffness. Thus, the bonds can resist tension, shear, and torsion. The normal stiffness and tangential stiffness can be expressed in terms of the bond parameters [20], where k_n is the normal contact stiffness, k_s is the tangential contact stiffness, E_c is Young's modulus of each parallel bond, R is the bond radius, λ is the radius multiplier used to set the parallel-bond radius, L is the bond length, and I is the moment of inertia. The soil used in the test is a mixture of fine sand and clay at a ratio of 1:1, with three kinds of moisture content: 5%, 8%, and 12%. The numerical model adopts the same soil parameters as the similar materials mentioned above, and the specific values are shown in Table 1. Boundary Conditions and Ground Motion Input. In PFC, the wall, represented by rigid lines, is used as the basic boundary unit.
To prevent the dynamic force from reflecting at the boundary of the model, the wall at the bottom of the slope is set with a certain value of damping, and the damping force is directly substituted into the equation of motion, as shown in equation (2): F(i) + Fd(i) = M(i)A(i), where F(i), M(i), and A(i) are the generalized force, mass, and acceleration components, respectively, and Fd(i) is the damping force. There are mainly viscous damping and local damping. In this paper, through a large number of trial calculations and sensitivity analyses, it is determined that the tangential and normal viscous damping is 0.157 and the local damping is 0.219. The field investigation shows that even in the extreme earthquake area of the Wenchuan earthquake, the horizontal seismic force plays a leading role in the instability of roadbed engineering [21]. Therefore, as a reasonable simplification, only the horizontal ground acceleration is considered in this paper. A sine wave is used to simulate the excitation load. In PFC, acceleration cannot be applied directly to the slope, so it can only be applied as a velocity. That is, for a = A sin(2πft), v = −A cos(2πft)/(2πf), which is applied to the bottom of the slope. The load frequency is 3 Hz, and the amplitude is in the range of 0.1 g to 0.6 g. The failure of the slope is observed within the loading time of 12 s. If there is no obvious damage, the slope is considered to be stable under this loading amplitude. Two geometric dimensions were used in the numerical model: one is consistent with the model test size, and the other is 10 times larger than the model test size. The numerical model and the location of the monitoring points are shown in Figure 4. Figure 5 shows the failure process of slopes with different water content under earthquakes. It can be seen from Figure 5 that the numerical simulation results are consistent with the shaking table test results, which shows the effectiveness of the numerical model in this paper. The failure process of the slope with 5% moisture content is that the soil slips along the shallow layer of the slope. A large vertical displacement occurs at the top of the slope, but no obvious horizontal displacement can be observed within the slope, indicating that the shear stress is not the main factor at this time. However, there are two obvious shear slip surfaces in the slope with 8% moisture content, which are almost symmetrical along the middle axis of the slope. When the slope is unstable, the shear failure occurs along the sliding surface. The failure mode can be described as the tension-shear-slip mode, which is the same as the failure mode of the Wangjiayan and Daguangbao landslides in the Wenchuan earthquake [15]. With the increase of moisture content to 12%, there is no obvious sliding surface within the slope when the slope fails. However, there are many cracks in the slope, and the horizontal displacement in the slope is obvious when loading for 1 s. The slope remains relatively stable until the loading time reaches 3 s, after which the upper soil collapses and accumulates downward. In this condition, the shattering-collapse slip failure occurs in the slope, which is similar to the failure in the new area of the new Beichuan middle school [15]. Comparing the numerical simulation maps of the three slopes, it is obvious that, with the increase of moisture content, the distribution range and spacing of slope joints expand, and the settlement at the top of the slope increases.
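As a small illustration of the velocity-controlled input described above, the Python sketch below converts the stepped sine acceleration (3 Hz, amplitudes of 0.1 g to 0.6 g) into a boundary velocity history using the stated relation v = −A cos(2πft)/(2πf). The 0.1 g increment between levels and the 1 ms time step are assumptions made for illustration; the paper only gives the amplitude range and the 12 s duration per level.

import numpy as np

G = 9.81                 # gravitational acceleration, m/s^2
FREQ = 3.0               # loading frequency, Hz (from the paper)
STEP_DURATION = 12.0     # each amplitude level is applied for 12 s (from the paper)
AMPLITUDES_G = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]   # assumed 0.1 g increments
DT = 1e-3                # assumed time step for the generated history, s

def velocity_history(amp_g, freq=FREQ, duration=STEP_DURATION, dt=DT):
    # Velocity time history corresponding to a = A sin(2*pi*f*t),
    # using v = -A cos(2*pi*f*t) / (2*pi*f) as stated in the text.
    a_amp = amp_g * G
    t = np.arange(0.0, duration, dt)
    v = -a_amp * np.cos(2.0 * np.pi * freq * t) / (2.0 * np.pi * freq)
    return t, v

# Example: velocity history for the first loading level (0.1 g), which would be
# assigned to the bottom wall of the PFC model as a boundary velocity.
t, v = velocity_history(AMPLITUDES_G[0])
print(f"peak boundary velocity at 0.1 g: {np.max(np.abs(v)):.4f} m/s")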
Based on the analysis of the failure mechanism of the three kinds of slopes with different moisture content, it is found that, under the seismic load, the connection between surface soil particles is gradually destroyed, which results in decreases in the tensile and shear strength. With the continuous vibration, the surface soil slips from the top to the bottom of the slope under the action of seismic force, accumulates at the foot of the slope, and slips on the shallow surface as a whole. Research studies show that, with the increase of soil moisture content, the dynamic shear modulus of soil decreases and the damping ratio increases, so the soil becomes more prone to plastic deformation [22,23]. Shear failure occurs in the soil of the slope with 8% moisture content to form a continuous slip surface, and finally, the sliding body slips along the slip surface as a whole. For the slope with 12% moisture content, the dynamic shear modulus of the soil is further decreased, which results in serious damage of the soil by shearing stress. The obvious horizontal uneven dislocation can be seen in Figure 5(e). Consequently, the soil is cut into small pieces under the action of shear, and the stress is redistributed. With the increase of vibration time, instability and collapse occur. Dynamic Response of Slope. To further explore the characteristics of slope failure under seismic load, the dynamic response of the slope was studied based on the surface monitoring points (A1, A6, A10, and A13). The displacement time-history curves and velocity time-history curves for the three groups of working conditions are shown in Figure 6, in which the lines of monitoring points 1, 6, 10, and 13 are represented by black, red, blue, and green curves, respectively. It can be seen from Figure 6 that the horizontal displacement of the slope with 5% moisture content decreases with the increase of height. At 0.15 s, the displacement and velocity show some changes. The velocity and displacement of monitoring points 1 and 6 were the largest, and their curves roughly coincide with each other. Furthermore, a peak value occurs in the velocity time-history curve at this time and then decreases gradually, which tends to coincide with the peak value of the upper soil. The curves of monitoring points 10 and 13 are separated, and the displacement and velocity of monitoring point 13 are relatively small. Through the analysis of the time-history curves of velocity and displacement, there is a large slip at monitoring points 1 and 6 and a larger speed at the moment of sliding, and the bond between monitoring points 10 and 13 has been destroyed. The curves of monitoring points 1 and 6 did not separate until 0.8 s, which indicates that although the displacement of the lower soil was larger than that of the upper soil, the loosening time of the internal soil at points 1 and 6 was later than that of the upper soil. The slope with 8% moisture content is mainly affected by the shear failure; its displacement and velocity time-history curves abruptly change at 0.25 s, and the velocity and displacement at monitoring point 13 decrease, while the curves of the other three monitoring points remain coincident. Combined with the analysis of Figures 5(c) and 5(d), it can be seen that the lower three points slide along the shear plane in the same sliding body, while the failure of the upper soil is due to the collapse of the lower soil.
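The analysis above repeatedly identifies the instant at which two monitoring-point curves separate (for example, points 1 and 6 at 0.8 s). A small helper of the following kind makes that reading reproducible; the tolerance value and the toy histories below are invented for illustration and are not data from the study.

import numpy as np

def separation_time(t, d_a, d_b, tol=1e-3):
    # Return the first time at which two displacement histories diverge by more
    # than `tol` (same length units as the histories), or None if they never do.
    diff = np.abs(np.asarray(d_a) - np.asarray(d_b))
    idx = np.argmax(diff > tol)          # first index exceeding the tolerance
    if diff[idx] <= tol:                 # threshold never exceeded
        return None
    return t[idx]

# Toy example: two histories that coincide until roughly 0.8 s.
t = np.linspace(0.0, 3.0, 3001)
d1 = 0.01 * t**2
d2 = np.where(t < 0.8, d1, d1 + 0.005 * (t - 0.8))
print(separation_time(t, d1, d2, tol=1e-4))   # ~0.82 s for this toy data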
The displacement and velocity of monitoring point 1 suddenly increase at 2.25 s, indicating that the soil at monitoring point 1 has slipped out from its original position. Besides, the displacement and velocity time-history curves of monitoring point 6 also increase sharply at 2.6 s, indicating that when the soil at monitoring point 1 slips out from its original position, monitoring point 6 immediately slips out from its original position. It can be seen that the displacement time-history curves for 8% moisture content are much denser than those for the slope with 5% moisture content, which indicates that the damage degree of the bond between particles for 8% moisture content is lower than that for 5%. The displacement and velocity time histories for 12% moisture content almost coincide before 2.25 s, and the displacement increases steadily with the increase of vibration time, which shows that the bond failure between soil particles in the slope with 12% water content is lower. The transmission effect between particles is strong, and relative sliding does not occur easily between particles. After 2.25 s, the displacement and velocity at monitoring points 1 and 6 suddenly increase and the soil slips out from its original position, while the displacement and velocity at monitoring points 10 and 13 still develop relatively stably. By comparing the displacement and velocity time-history curves for the three kinds of moisture content, it can be found that the displacement and velocity for 5% moisture content are significantly smaller than those for 8% and 12%. The displacement of a monitoring point is related to the velocity of nearby soil particles. During the failure of the slope with 5% moisture content, the amplitude of the input sine wave is small, and the overall vibration of the slope is lower. The horizontal seismic force does less work on the soil particles. For this reason, the bond between the soil particles in the shallow topsoil for the 5% moisture content is destroyed into a loose accumulation. In addition, it can be seen from Figure 5 that the slope with 5% moisture content has a shallow surface slip, and the interior of the slope is relatively stable. Based on the above analysis, with the increase of moisture content, the displacement time-history curves of the slope become closer, and the velocity time-history curves before failure are relatively much closer, indicating that the relative motion between particles is reduced. Development Process of Microfissures. PFC2D can simulate the development process of interparticle fractures. Normal bond failure between particles leads to tensile failure, and tangential bond failure leads to shear failure. In this section, black represents the cracks caused by shear failure and red indicates the cracks caused by tension. It can be seen from Figure 7 that the cracks in the slope with 5% moisture content develop rapidly, and shear cracks are mainly in the lower part of the slope. From the previous analysis, the slope began to slide at 0.1 s. There are only tensile cracks in the shallow part of the slope at 0.3 s, and the shear cracks are concentrated at the toe of the slope. The displacement and velocity time-history curves at monitoring points 1 and 6 remain coincident at 0.15 s, which shows that the soil at the lower part has not yet split, while the separation of the curves at the upper monitoring points 10 and 13 indicates that the soil around monitoring points 10 and 13 has become loose due to tension.
At 0.5 s, more tensile cracks develop in the lower shear region, which plays a leading role in the slope failure. The displacement and velocity time-history curves at monitoring points 1 and 6 are separated, which shows that the soil slides under the action of seismic force and self-weight stress. Overall, the slope with 5% moisture content is relatively loose. At the initial stage, the tensile damage of the slope surface is serious. The development of tensile cracks in the upper part of the slope causes the damage of the bond between particles, and the soil at the toe of the slope slides as a whole due to local shearing. In a very short time, the development of the tensile cracks at the slope toe and the destruction of the bond between particles lead to slope failure under the action of earthquakes and self-weight stress. The failure mode of the slope with 5% moisture content is shattering-shallow sliding. From Figure 8, it can be seen that the failure of the slope is related to the connection of the local shear failure surfaces. When loading for 0.5 s, it can be seen that the small cracks first propagate upward, and the tensile cracks develop along the shear cracks. As can be seen from Figure 6(c), at 0.5 s, the displacement time-history curve of monitoring point 13 has separated from the other three curves, indicating that an overall local shear slip has occurred in the lower part of the slope. At this time, the integrity of the soil at the top of the slope is good, indicating that only a small amount of settlement has taken place on the top of the slope. When loading for 1 s, the local shear plane runs through the slope to form the slip surface, and the slip lines intersect at the lower part of the crest of the slope. With the increase of vibration time, the soil on the crest collapses downward with the sliding of the soil on the two sides of the slope. From Figure 9, it can be seen that the shear cracks develop radially from the bottom of the slope to the top, and the tensile cracks develop along the shear cracks. After loading for 1 s, the soil inside the slope breaks up into soil blocks of different sizes, forming a fracture network. It can be seen from Figure 6(e) that the displacement time-history curves of the monitoring points of the slope nearly coincide before failure. With the increase of loading time, uneven dislocation occurs in the interior of the slope, leading to the collapse of the slope. Development of the Number of Cracks in the Slope. Fractures include tensile cracks and shear cracks. As can be seen from Figure 10, with the increase of moisture content, the number of fractures and tensile cracks at slope failure gradually decreases, while the shear cracks increase. This shows that tensile cracks mainly develop in the interior of the slope under the action of earthquakes but are not necessarily the main factor causing damage. The slope with 5% moisture content is relatively loose, and tensile failure occurs easily under the earthquake, whereas for 8% and 12% moisture content, the double-sided slopes are damaged mainly by shear stress. From Figures 10(a) and 10(c), it can be seen that the number of fractures in the slopes with 5% and 12% moisture content is relatively stable when the loading time is 1 s and increases slowly with the increase of vibration time. Shear slip occurs in the slope with 8% moisture content, and the cracks increase gradually during the sliding of the slope.
In short, with the increase of moisture content, the ability of the slope to resist tensile cracking increases, so attention should be paid to shear failure in slope reinforcement. Conclusions In this paper, the failure mode and mechanism of multiface slopes with different moisture content under earthquake action were studied by using a two-dimensional particle flow program (PFC2D). The following conclusions can be drawn: (1) The failure modes of slopes with different moisture content under earthquake action are different. The failure mode of the slope with 5% moisture content is shattering-shallow sliding, the failure mode of the slope with 8% moisture content is tension-shear-slip, and shattering-collapse slip failure occurs at 12% moisture content. (2) The failure of a multiface slope is a gradual failure process under the action of earthquakes. The shallow layer of the slope with 5% moisture content is seriously damaged by tension under the action of earthquakes; the upper part of the slope is unstable due to the damage of the bond between particles. For the slope with 8% moisture content under the action of earthquakes, shear cracks develop inside the slope. With the increase of vibration time, the local shear failure plane runs through into a slip plane, and the slope slides along the slip surface. Under the action of earthquakes, a fracture network develops in the slope with 12% moisture content due to shear failure inside the slope. (3) With the increase of water content, the fractures and tensile cracks in the slope decrease gradually, while the shear cracks increase gradually. In the case of low moisture content, the tensile cracks play a leading role in the failure of the slope. With the increase of moisture content, the influence of shear failure on slope failure increases, and more efforts should be placed on reducing the shear cracks. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that they have no conflicts of interest.
6,206.2
2021-08-18T00:00:00.000
[ "Geology" ]
A Backbone-reversed Form of an All-β α-Crystallin Domain from a Small Heat-shock Protein (Retro-HSP12.6) Folds and Assembles into Structured Multimers* The structural consequences of polypeptide backbone reversal ("retro" modification) remain largely unexplored, in particular, for the retro forms of globular all-β-sheet proteins. To examine whether the backbone-reversed form of a model all-β-sheet protein can fold and adopt secondary and tertiary structure, we created and examined the recombinant retro form of a 110-residue-long polypeptide, an α-crystallin-like small heat-shock protein, HSP12.6, from C. elegans. Following intracellular overexpression in fusion with a histidine affinity tag in Escherichia coli, purification under denaturing conditions, and removal of denaturant through dialysis, retro-HSP12.6 was found to fold to a soluble state. The folded protein was examined using fluorescence and CD spectroscopy, gel filtration chromatography, non-denaturing electrophoresis, differential scanning calorimetry, and electron microscopy and confirmed to have adopted secondary structure and assembled into a multimer. Interestingly, like its parent polypeptide, retro-HSP12.6 did not aggregate upon heating; rather, heating led to a dramatic increase in structural content and the adoption of what would appear to be a very well folded state at high temperatures. However, this was essentially reversed upon cooling with some hysteresis being observed, resulting in greater structural content in the heated-cooled protein than in the unheated protein. The heated-cooled samples displayed CD spectra indicative of structural content. The structure of a naturally occurring globular protein is determined by its amino acid sequence. The amino acid sequence has a definite polarity with the C=O group of every residue forming a peptide bond with the N-H group of the next residue. Here, we explore the structural-biochemical consequences of reversing the polarity of the polypeptide backbone through the creation of a novel protein with an amino acid sequence that is the exact reverse of the sequence of a naturally occurring protein, the α-crystallin-like small heat-shock protein HSP12.6 from Caenorhabditis elegans (1). The consequences of effecting such a transformation have previously been explored both in theory and in experiment. Among works dealing with polypeptides that are large enough to be called proteins, the following discussions are worthy of note. (a) Guptasarma (2,3) hypothesized that the retro form of an all-β-sheet protein would fold into a topological mirror image of the structure adopted by the parent sequence through mirror imaging of the entire scheme of side chain-side chain interactions facilitating folding. The dihedral angles characterizing each residue in the parent structure would thus change both sign as well as definition in the mirror-imaged structure because of the replacement of C=O by N-H and vice versa. As a consequence, every φ would become −ψ and every ψ would become −φ. Notably, with β-sheets, such a transformation could conceivably allow each residue to remain in a β-sheet configuration (3). However, with α-helices, such a transformation would be expected to effect a change in the handedness of the helix, and so mirror imaging would not ordinarily occur for single helices and all-helix protein structures; rather, it would occur only with isolated helices in predominantly β-sheet structural contexts.
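The hypothesized mapping of dihedral angles under backbone reversal can be written down directly. The Python sketch below applies the (φ, ψ) → (−ψ, −φ) rule, together with reversal of the residue order, to a toy list of β-strand-like dihedrals; the numerical values are idealized examples, not angles taken from HSP12.6, and the function name is illustrative.

def mirror_retro_dihedrals(parent_dihedrals):
    # Hypothesized dihedrals of a backbone-reversed (retro) chain folding into a
    # topological mirror image of its parent: residue order is reversed and each
    # (phi, psi) pair maps to (-psi, -phi), reflecting the swap of C=O and N-H.
    # parent_dihedrals: list of (phi, psi) tuples in degrees, N- to C-terminus.
    return [(-psi, -phi) for (phi, psi) in reversed(parent_dihedrals)]

# Toy example: three residues in an idealized beta-strand region (around (-120, 120)).
parent = [(-120.0, 120.0), (-115.0, 125.0), (-130.0, 135.0)]
print(mirror_retro_dihedrals(parent))
# [(-135.0, 130.0), (-125.0, 115.0), (-120.0, 120.0)] -- still beta-region values,
# consistent with the argument that beta residues can remain beta after reversal.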
In other words, only helices that could pay the energy penalty for switching handedness through stabilization by packing contacts with other mirror-imaged substructures would undergo the transformation. (b) Skolnick and colleagues (4) performed folding simulations with the retro form of an all-α protein, the B domain of Staphylococcal protein A, and showed that 3 of 12 folding simulations led to mirror-imaged structures, whereas the remaining nine simulations folded into α-helical structures. (c) Another result with α-helical proteins was obtained experimentally by Grutter and colleagues (5) who showed that the retro form of a GCN4 leucine zipper folds into a structure (determined crystallographically) that is almost identical to the parent structure and not a mirror image of the parent structure. The similarity of the parent and retro structures was explained on the basis of the fact that there was a 2-fold palindrome in the hydrophobicity profile of the protein, intersecting the central cavity in the structure of the protein. (d) Importantly, whereas the retro forms of α-helical proteins did display a tendency to fold, an experimental attempt to reverse the backbone of a protein containing β-sheets produced a polypeptide that displayed no tendency to fold. Serrano and colleagues (6) demonstrated that the retro form of an Src homology 3 domain does not fold. However, structural modeling carried out by the same group established that a mirror image topology was feasible, especially in combination with folding to a molten-globule state rather than a rigid unique structure (6). Since only ~40% of an Src homology 3 domain constitutes β-structure, with the remaining polypeptide being folded into rigid loop-like structures that do not qualify to be called secondary structure (the 60-residue-long domain has only ~25 residues of 60 forming β-strands (7)), the question of what would happen upon reversal of a larger protein with a much greater propensity to form β-sheets has thus far remained open to question. We decided to examine the consequences of reversing the sequence of a larger all-β-sheet protein. We chose as a parent sequence for backbone reversal the 110-residue-long HSP12.6 from C. elegans. This is the smallest known homolog of all of the proteins belonging to the family of α-crystallin-like heat-shock domains (1). Based on actual determination of the structures of several members of this family (8,9), α-crystallin-like domains are currently accepted to be proteins of a defined all-β-fold. Our investigations reveal that the reversed sequence too folds and assembles into a multimer like the parent. Molecular Genetic Manipulations and Design of Constructs. From the sequence of the gene encoding the parent protein, Hsp12.6 (1), the sequence of a novel gene encoding retro-HSP12.6 (RETHSP) was created by reversing the sequence of codons used by the parent gene. The DNA encoding the reversed sequence (retro-HSP12.6) was then synthesized through a combination of contract synthesis of double-stranded oligonucleotides and our own molecular genetic manipulations to derive constructs encoding retro-HSP12.6 with a choice of restriction sites flanking the sequence for insertion into the vector pQE30 (Qiagen) to facilitate expression in fusion with a His6 affinity tag.
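Reversing "the sequence of codons used by the parent gene" means flipping the reading frame codon by codon rather than base by base, so that translation of the new open reading frame yields the reversed amino acid sequence. The short Python sketch below illustrates the difference on an invented toy sequence; the codon-table entries shown are standard, but the sequences have nothing to do with the actual HSP12.6 gene.

# The codon-table entries below are standard genetic-code assignments; the
# sequences are invented toy examples, not the HSP12.6 gene.
CODON_TABLE = {"ATG": "M", "CGT": "R", "GGC": "G", "AGC": "S", "CAT": "H"}

def retro_gene(coding_sequence):
    # Reverse a coding sequence codon by codon (not base by base), so that the
    # encoded protein is the exact reverse of the parent amino acid sequence.
    codons = [coding_sequence[i:i + 3] for i in range(0, len(coding_sequence), 3)]
    return "".join(reversed(codons))

def translate(coding_sequence):
    return "".join(CODON_TABLE[coding_sequence[i:i + 3]]
                   for i in range(0, len(coding_sequence), 3))

parent_orf = "ATGCGTGGCAGCCAT"          # encodes M R G S H (toy example)
retro_orf = retro_gene(parent_orf)      # CATAGCGGCCGTATG
print(translate(parent_orf), "->", translate(retro_orf))   # MRGSH -> HSGRM

In practice the reversed insert would be supplied with its own start codon and cloning sites from the expression vector, as described for pQE30 above.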
Finally, two forms of retro-HSP12.6 were created. (i) The first form, RETHSP-1, was a backbone-reversed form of the 110-residue-long parent sequence flanked by N- and C-terminal extensions. The N-terminal extension consisted of 12 residues incorporating a 10-residue affinity tag (MRGSHHHHHH) and an additional two residues contributed by the cloning site (GS). The C-terminal extension of nine residues (VDLQPSLIS) was entirely due to the choice of restriction sites at the multiple cloning site of pQE-30, as the vector's own stop codon was used. (ii) To make the second form, RETHSP-2, the C-terminal extension was removed through inclusion of a stop codon immediately after the backbone-reversed HSP12.6 sequence. Expression of proteins from both constructs was first checked in XL1-Blue, which was the cloning host. The sequence of the insert in the positive clone was confirmed through automated DNA sequencing on an ABI 310 Prism sequencer, and the plasmid was transformed into the expression host M15pREP4. The sequences of HSP12.6, RETHSP-1, and RETHSP-2 are shown in Table I for reference. Expression, Purification, and Folding. The expression of both proteins in the expression host Escherichia coli M15pREP4 was low, but this was compensated for by setting up larger culture volumes. For expression, the cells were grown overnight and a 1% secondary inoculum was added to an appropriate volume of LB. Cells were induced with 1 mM isopropyl-1-thio-β-D-galactopyranoside at an optical density of 0.6 and harvested 4 h after induction. Harvested cells were suspended in lysis buffer containing denaturant (8 M urea, 0.1 M NaH2PO4, 0.01 M Tris-Cl, pH 8.0) and lysed by sonication. The lysate was centrifuged at 18,000 × g for 1 h, and the supernatant thus obtained was loaded onto a nickel-nitrilotriacetic acid column in the presence of the denaturant, urea. Washing (with 8 M urea, 0.1 M NaH2PO4, 0.01 M Tris-Cl, pH 6.3) and elution (with 8 M urea, 0.1 M NaH2PO4, 0.01 M Tris-Cl, pH 5.9 and pH 4.5) were also done under denaturing conditions. Dialysis of the eluted protein to remove denaturant was done against 20 mM Tris. After dialysis, concentration was effected through centrifugation under vacuum to the point at which the protein started to precipitate, after which the sample was centrifuged and the supernatant was taken for an estimation of the concentration of soluble protein. This was taken as the maximally concentrated solution of the protein since upon further concentration the protein precipitated. The change in molarity of buffer following such concentration was also estimated by reckoning for the change in volume effected by centrifugal vacuum concentration. Spectroscopy and Microcalorimetry. Protein concentrations were estimated through UV absorption measurements at 280 nm using a predicted molar extinction coefficient of 12,780 M⁻¹ cm⁻¹ for proteins encoded by both RETHSP-1 and RETHSP-2. Fluorescence spectra were collected on a PerkinElmer LS-50B spectrofluorimeter with variable excitation and emission bandpasses as appropriate, using excitation with light of 280 nm and scanning protein emission between 300 and 400 nm. CD spectra were collected at intervals of 1 nm on a Jasco J-810 spectropolarimeter through scanning of wavelengths from 250 to 200 nm using a protein concentration of 0.4 mg/ml and a cuvette path length of 0.2 cm. CD signals below 200 nm could not be collected because the spectra were noisy. Consequently, no attempt was made to estimate secondary structural contents from these data.
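CD data acquired in this way are conventionally converted to mean residue ellipticity before spectra are compared, a quantity referred to later in the text. The Python sketch below applies the standard conversion; the acquisition parameters (0.4 mg/ml, 0.2 cm path) are taken from the text, the molar mass and residue count are those given later for RETHSP-2, and the -6 mdeg raw signal is an invented placeholder rather than a reported value.

def mean_residue_ellipticity(theta_mdeg, conc_mg_per_ml, path_cm, mol_mass_da, n_residues):
    # Convert an observed CD signal (millidegrees) to mean residue ellipticity
    # (deg cm^2 dmol^-1) using the standard relation
    #     MRE = theta_obs * MRW / (10 * path * conc),
    # where MRW = molar mass / (number of residues - 1).
    mrw = mol_mass_da / (n_residues - 1)          # mean residue weight, g/mol
    return theta_mdeg * mrw / (10.0 * path_cm * conc_mg_per_ml)

# Example with the acquisition parameters given in the text and the RETHSP-2
# values mentioned later (~13,970 Da, 122 residues).
print(mean_residue_ellipticity(-6.0, 0.4, 0.2, 13970.0, 122))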
Calorimetry was carried out using a Microcal MC-2 ultrasensitive microcalorimeter using a protein concentration of 80 μM and a scan rate of 60 K/h. Fourier Transform Infrared spectra for the solid precipitate of RETHSP-1 obtained through concentration beyond the solubility limit were collected on a PerkinElmer Spectrum BX instrument with the protein sample placed between two calcium fluoride windows at a resolution of 1 cm⁻¹, taking an average of 32 scans. Chromatography and Non-denaturing Gel Electrophoresis. Gel filtration chromatography was performed on a Pharmacia SMART system using an analytical Superdex-200 column (bed volume ≈2.4 ml, void volume 0.8 ml) and a flow rate of 0.1 ml/min through use of 0.05-ml protein samples of concentration 0.4 mg/ml. The fractionation range of the column was 600,000 Da, and the exclusion limit was 1,600,000 Da. Non-denaturing gel electrophoresis for the determination of the native molecular weight of RETHSP-2 was carried out by the standard procedures involving: (i) determination, plotting, and (linear) fitting of changes in the relative mobilities of protein samples as a function of gel acrylamide percentage by a least squares fitting method, followed by (ii) plotting of the negative value of the slope thus obtained for each standard protein against its known native molecular weight (Ferguson plot), least squares fitting of these data to a straight line, and interpolation of the value of the slope obtained for RETHSP-2 into the plot. Electron Microscopy. Transmission electron microscopy studies were carried out through use of routine negative staining procedures employing phosphotungstic acid and uranyl acetate on a JEOL 1200 EX-2 microscope. RESULTS AND DISCUSSION Solubility of RETHSP-1 and RETHSP-2. No precipitation was observed during dialysis-based removal of denaturant from solutions of either RETHSP-1 or RETHSP-2 following His6 tag-based affinity purification on nickel-nitrilotriacetic acid-agarose columns in the presence of urea. The protein samples obtained were found to be soluble up to a concentration of 1.2-1.5 mg/ml for RETHSP-1 and ~0.6 mg/ml for RETHSP-2. Concentration beyond these values led to protein precipitation. Spectrofluorimetric Characterization. The two forms, RETHSP-1 and RETHSP-2, displayed wavelengths of maximal fluorescence emission (λmax) of 348 and 351 nm, respectively, indicating only very nominal burial of their aromatic residues (two tryptophans and one tyrosine). The emission spectrum of RETHSP-2 is shown in Fig. 1A. RETHSP-1 and RETHSP-2 Appear to Be Trimers/Tetramers at Low Concentrations. Because both proteins were soluble, we examined their quaternary structural status using gel filtration chromatography. On an analytical SMART Superdex-200 column, RETHSP-2 eluted at 1.47 ml (Fig. 1B), corresponding to a molecular mass of ~52-53 kDa. In some preparations of the protein, a minor population was also found to elute close to the void volume of the column (0.8-0.9 ml), which had a bed volume of 2.4 ml and a fractionation range of 10,000-600,000 Da with an exclusion limit of 1.3 × 10⁶ Da. RETHSP-2 has a polypeptide molecular mass of 13,970 Da, indicating that the majority population eluting at ~1.47 ml, falling within the optimal fractionation range for the column, is predominantly tetrameric.
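The Ferguson analysis described above is straightforward to reproduce numerically. The Python sketch below follows the same two steps (per-protein retardation coefficients from log relative mobility versus gel percentage, then a log-log fit over the standards); all of the numbers in it are invented toy values, not the gels or standards actually used in the study, and the function names are illustrative.

import numpy as np

def retardation_coefficient(gel_percentages, relative_mobilities):
    # Fit log10(relative mobility) against gel acrylamide percentage and return
    # the negative of the slope (the retardation coefficient, Kr) for one protein.
    slope, _ = np.polyfit(gel_percentages, np.log10(relative_mobilities), 1)
    return -slope

def ferguson_mass_estimate(standard_masses_kda, standard_krs, unknown_kr):
    # Ferguson plot: fit log10(Kr) vs log10(mass) for the standards, then
    # interpolate the unknown's Kr to a native molecular mass (kDa).
    slope, intercept = np.polyfit(np.log10(standard_masses_kda), np.log10(standard_krs), 1)
    return 10 ** ((np.log10(unknown_kr) - intercept) / slope)

# Toy numbers only: gel percentages, mobilities, standards, and Kr values are invented.
gels = np.array([6.0, 8.0, 10.0, 12.0])
rf_unknown = np.array([0.62, 0.48, 0.37, 0.28])
kr_unknown = retardation_coefficient(gels, rf_unknown)
standards = np.array([14.2, 29.0, 45.0, 66.0, 272.0])          # kDa
kr_standards = np.array([0.020, 0.033, 0.046, 0.060, 0.150])
print(f"estimated native mass: {ferguson_mass_estimate(standards, kr_standards, kr_unknown):.1f} kDa")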
In comparison with the value of 52-53 kDa that was obtained through gel filtration, determination of the native molecular mass at low protein concentration by non-denaturing gel electrophoresis (see Fig. 6, C and D) yielded an estimated molecular mass of 45-46 kDa. Both of these estimates turn out to lie between the values expected for trimeric (~42 kDa) and tetrameric (~56 kDa) states of a ~14-kDa polypeptide like RETHSP-2. Thus, three possibilities apply. (i) The molecule is assembled into a tetramer that behaves as a smaller species on account of compactness. (ii) The molecule is assembled into a trimer that behaves as a larger species on account of being swollen, or (iii) the molecule forms a mixed population of trimers and tetramers existing in equilibrium. Because techniques used to determine multimeric molecular masses, including non-denaturing gel electrophoresis, gel filtration chromatography, dynamic light scattering, and analytical ultracentrifugation, are all influenced to varying extents by molecular shape and effective hydrodynamic volume, which need not correlate perfectly with size for non-spherical species, estimates of native molecular mass do not always correspond to expected multiples of subunit molecular weight. We emphasize that dynamic light scattering or analytical ultracentrifugation data could provide more accurate information concerning whether the population is mostly trimeric or tetrameric, and we are organizing to perform these experiments. Meanwhile, we have obtained preliminary plate-like crystals of the protein, and attempts are being made to refine crystallization conditions toward eventual structure determination, which should resolve the issue. Notably, the occasional observation of a soluble higher order multimer at the void volume of the Superdex-200 column indicates that this assembly may also be capable of associating further into larger multimers approaching sizes of 600,000 or more, especially at high protein concentrations. Evidence of Formation of a Higher Order Multimer upon Concentration. As already mentioned, RETHSP-1 has a solubility limit of ~1.2-1.5 mg/ml. Beyond this protein concentration, protein precipitates are obtained. Whereas the gel filtration studies reported above used the sample remaining in the supernatant after concentration, examination of the precipitated protein using transmission electron microscopy and negative staining showed the presence of a globular, bead-like form with a diameter of roughly 18-20 nm (Fig. 2, panels A and B). We are in the process of analyzing multiple images of these beads to attempt partial three-dimensional reconstruction. Fourier Transform Infrared spectra of these beaded samples clearly show the presence of secondary structure (Fig. 2C) with C=O stretch band maxima at 1629 and 1651 cm⁻¹, indicative of predominantly β-sheet with some helical content. Evidence of Secondary Structure Formation. The CD spectrum of any protein is a linear combination of the contributions of peptide bonds in various secondary structures. The CD spectrum of RETHSP-1 (Fig. 3A) indicates β/α-secondary structural content together with a substantial component of randomly coiled structure. It may be noted that for completely randomly coiled polypeptides, no negative ellipticity signal is observed between 250 and 210 nm. Only at wavelengths below 210 nm are spectra observed to show a negative band that dips to display a minimum at ~198 nm.
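The trimer-versus-tetramer ambiguity above is simple arithmetic on the subunit mass; a few lines of Python make the comparison explicit. The subunit mass and the two experimental estimates are taken from the text; everything else is illustrative.

SUBUNIT_DA = 13970.0          # RETHSP-2 polypeptide mass quoted in the text

def oligomer_masses(subunit_da, max_n=4):
    # Expected masses (kDa) of the first few oligomeric states.
    return {n: n * subunit_da / 1000.0 for n in range(1, max_n + 1)}

print(oligomer_masses(SUBUNIT_DA))   # {1: 13.97, 2: 27.94, 3: 41.91, 4: 55.88}
# Both experimental estimates (52-53 kDa by gel filtration, 45-46 kDa by native
# PAGE) fall between the trimer (~42 kDa) and tetramer (~56 kDa) values, which
# is why the text leaves the trimer/tetramer question open.
print("gel filtration estimate / subunit:", 52.5 / (SUBUNIT_DA / 1000.0))  # ~3.8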
Since out of a total of 131 residues RETHSP-1 contains 21 residues that comprise N- and C-terminal extensions that are in any case not expected to participate in structure formation (16% of the chain) and, furthermore, because any negative CD signal resulting from random coil is nearly 4-5 times stronger than that due to a β-sheet configuration for any peptide bond (i.e., in a plot of mean-residue ellipticity), the CD spectrum of RETHSP-1 might arguably be expected to be dominated by a negative band at ~198 nm, however, with significant negative mean residue ellipticity being visible even at wavelengths above 210 nm. Gratifyingly, this is what is seen. When the C-terminal extension is removed, as in RETHSP-2, the length of the chain reduces to 122 residues, bringing down the number of extra residues to 12 (only 9% of the chain) from 21. This would be expected to reduce the contribution of residues in a randomly coiled configuration to the CD spectrum and thus lead to a shift of the band minimum to a longer wavelength. As shown by the CD spectrum of RETHSP-2, such a shift is exactly what is seen. RETHSP-2 shows a band minimum at 210 nm together with another band minimum at ~230 nm (Fig. 3B). Notably, the mean residue ellipticity over the entire spectral range is enhanced by a factor of almost 2.5 over that seen with RETHSP-1, indicating that this polypeptide is significantly more structured even though there is still some random coil component. Heating Causes Structural Consolidation Rather than Unfolding. To examine whether heating of the protein is characterized by an endothermic reaction resulting in a change in enthalpy associated with unfolding, differential scanning calorimetry was carried out with RETHSP-1. The scan showed a small, almost indiscernible endothermic reaction followed by a dramatic and unexpected exothermic reaction at high temperatures (Fig. 4). Such an exothermic reaction could only be due to the formation rather than destruction of non-covalent contacts. Such contacts could be because of either aggregation or further structural consolidation within the polypeptide. To examine whether any aggregation occurs upon heating, we heated the proteins at 90°C for 15 min and found no visible sign of aggregation. Gel filtration of cooled samples also showed no signs of high molecular weight species (see Fig. 6A). One further test excluding any possibility of aggregation was a monitoring of the HT voltage associated with the transmission of light through the sample in a CD spectrometer during heating of the sample. The HT voltage in a spectropolarimeter rises to compensate for reductions in light intensity not associated with differential absorption of left and right circularly polarized light when the detector is starved for light through absorption or scattering. The HT voltage, therefore, is extremely sensitive to changes in the level of scattering of transmitted light. We found no significant changes in HT voltage associated with heating, signaling a lack of aggregation during heating. Because there was no evidence of aggregation and suspecting that structural consolidation could indeed have occurred in the sample, we examined the CD spectrum of RETHSP-2 at high temperature (Fig. 5A). We also examined the nature of changes in ellipticity associated with heating and cooling of RETHSP-2 (Fig. 5B) to further confirm the changes in CD signal associated with heating and investigate the extent to which such changes are reversible for this protein. As is evident from Fig.
5A, the protein is significantly more structured at high temperatures than at room temperature. Fig. 5B reinforces this conclusion, demonstrating that there is a gradual increase in the negative ellipticity at 218 nm during heating (effected at a rate of 5°C/min) as well as a clear reduction in negative ellipticity associated with cooling (effected at the same rate). (FIG. 3. Far-UV CD spectra of retro-HSP12.6 constructs. Panel A, the retroprotein, RETHSP-1, at room temperature. Panel B, the retroprotein, RETHSP-2, at room temperature. Panel C, the retroprotein, RETHSP-2, following heating to 90°C and cooling to room temperature.) This finding shows that the structural consolidation effected through heating is largely reversed upon cooling. However, some hysteresis is also clearly seen to be associated with the process (Fig. 5B), with the signal at 218 nm not returning to its original value upon cooling. Thus, the structural gains effected through heating are not entirely lost upon cooling, and as a result, the heated and cooled protein (Fig. 3C) shows a negative band with a minimum at 215-216 nm and a second band at ~230 nm, indicating a clear predominance of β-sheet configuration and a well folded state of the polypeptide even upon return to room temperature. It may be emphasized once again that, perhaps because the heat-induced changes do not reverse completely, the spectrum of the heated-cooled protein (Fig. 3C) is different from that of the unheated protein (Fig. 3B), which displays a band minimum at ~210 nm, as well as from the spectrum of the protein at the high temperature of 92°C (Fig. 5A), which most resembles the CD spectrum of a well folded naturally occurring protein. The differences in signal intensities among the various spectra, especially at lower wavelengths, conform to what is expected for enhancement of β-sheet content at the expense of unstructured content. As is well known, the negative ellipticity associated with a peptide bond in a β-sheet configuration is much lower than that associated with a peptide bond in a random coil. The intriguing semi-reversible thermally induced consolidation of structure described above caused us to carry out a further comparative examination of RETHSP-2 in the unheated, heated, and heated-cooled states. The unheated sample and the heated-cooled sample were chromatographed through gel filtration (Fig. 6A) and seen to contain the same dominant trimeric/tetrameric population eluting at 1.47 ml on a Superdex-200 column. As can be seen, there was no evidence of any additional species eluting at the column's void volume (0.8-0.9 ml), indicating that there was no generation of high molecular weight aggregates through the process of heating and cooling. However, a slight difference can be seen in the width at half-height of the elution as well as in the volume at which the elution begins for the heated-cooled sample. Thus, the gross quaternary structural status of RETHSP-2 is not changed through heating and cooling, although, as already pointed out, clearly some of the structural gains effected through heating are retained by the molecule after cooling, evident from the hysteresis seen in the ellipticity signal as a function of temperature (Fig. 5B) as well as from differences in the CD spectral shapes of unheated, heated, and heated-cooled samples (Figs. 3B, 5A, and 3C, respectively). Fluorescence emission spectroscopy at different temperatures did not shed light on structural transitions.
The 351-nm emission (λmax) of the protein alluded to earlier (Fig. 1A) remained at 351 nm, even at 90-92°C as well as upon cooling to room temperature, displaying only a reversible reduction in intensity with increased temperature but no change in other spectral characteristics (Supplemental Fig. 1). Therefore, it would appear that the exposed tryptophan residues of the protein remain largely exposed to the solvent, even in the course of the heat-induced structural consolidation and the resettlement into a more structured state upon cooling. Fluorescence quenching carried out for unheated and heated-cooled samples reinforces this conclusion. Stern-Volmer plots (Fig. 6B) show that the accessibility of the fluorescing aromatic residues is virtually unaltered between the unheated and the heated-cooled samples. To further investigate the conclusion from the gel filtration data, which indicated that the unheated and heated-cooled samples have similar sizes despite their structural differences, non-denaturing gel electrophoresis was carried out. Gels of four different acrylamide percentages were run. Variations in the relative mobilities of five different protein standards as a function of varying gel density were analyzed and used to construct a Ferguson plot. A representative gel (10% acrylamide) is shown (Fig. 6C) with unheated and heated-cooled samples run in the last two lanes, establishing that the hydrodynamic volumes of these two forms are entirely similar. The Ferguson plot (Fig. 6D) shows that both forms correspond to a molecular mass of 45-46 kDa, as mentioned earlier. CONCLUSIONS Retro-HSP12.6 appears to fold and assemble into multimeric states that further associate to form large globular structures. At low protein concentrations, the polypeptide displays secondary structural content and no tendency to aggregate, despite possessing solvent-exposed aromatic residues. Secondary structural content is enhanced through heating and largely lost through cooling, as evidenced by CD spectroscopy, with calorimetry showing an exothermic reaction upon raising of the temperature without attendant molecular aggregation. Thus, heating of this backbone-reversed all-β heat-shock protein results in enhancement of structural content, perhaps because improved hydrophobic interactions among residues at high temperatures facilitate further hydrogen-bonding interactions and greater structure and stability. At the high temperature of 92°C, the protein shows a CD spectrum not unlike that of any folded naturally occurring protein. Upon cooling, most of the structural content gained through heating is lost, but nevertheless some hysteresis is seen and the process is not fully reversible, such that the heated-cooled protein shows greater structural content than the unheated protein and displays a CD spectrum indicative of a considerably better folded state. Most intriguingly, heating is not associated with any aggregation. Independently, concentration of the protein was observed to lead to assembly of the molecule into larger bead-like structures, which are precipitation-prone and show high secondary structural content. Preliminary plate-like crystals of the protein have been obtained and attempts are being made to refine crystallization conditions to carry out further structural analysis.
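As a pointer for readers, the Stern-Volmer plots referred to above rest on the standard collisional-quenching relation below; this is a general formula, not one stated in the paper, and the quencher in this study is acrylamide.

```latex
% Standard Stern-Volmer relation for collisional quenching (general formula;
% F0 and F are intensities without and with quencher, [Q] is quencher
% concentration, and K_SV reports on fluorophore accessibility).
\frac{F_0}{F} \;=\; 1 + K_{\mathrm{SV}}\,[Q]
```

Nearly identical slopes for the unheated and heated-cooled samples therefore indicate essentially unchanged solvent exposure of the fluorescing residues.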
FIG. 6. Characteristics of RETHSP-2 prior to heating and following cooling to room temperature after heating at 90°C. Panel A, elution from a Superdex-200 column of unheated and heated-cooled samples as indicated. Panel B, Stern-Volmer plots of acrylamide quenching of protein fluorescence for unheated (open triangles) and heated-cooled (open circles) samples. Panel C, representative non-denaturing PAGE (10% acrylamide, stacking/resolving gels of pH 8.8). The first five lanes on the left show the markers, carbonic anhydrase, chicken egg albumin, bovine serum albumin, urease, and α-lactalbumin, respectively, with isoforms visible where present. The last two lanes, respectively, correspond to unheated and heated-cooled samples of RETHSP-2. Panel D, Ferguson plot (both axes in log10 scale) showing five standard protein molecular masses in kDa plotted against the negative values of the slopes (Kr) of individual linear (least-squares) fits obtained for each protein through initial plotting of relative mobility versus gel acrylamide percentage. The interpolation of the value of the slope obtained for RETHSP-2 is shown by horizontal and vertical lines.
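For illustration, the sketch below shows how such a Ferguson analysis can be assembled numerically: log relative mobility versus gel percentage gives each protein's retardation coefficient, and a log-log calibration of retardation coefficient against molecular mass allows the unknown to be interpolated. All mobility values and helper names are hypothetical placeholders, not data from this study.

```python
import numpy as np

# Hypothetical relative mobilities (Rf) of the marker proteins measured on
# native gels of four acrylamide percentages (values are illustrative only).
gel_percent = np.array([7.5, 10.0, 12.5, 15.0])
standards = {                                  # kDa : Rf at each gel %
    "alpha-lactalbumin": (14.2, [0.82, 0.74, 0.66, 0.59]),
    "carbonic anhydrase": (29.0, [0.70, 0.58, 0.48, 0.39]),
    "chicken egg albumin": (45.0, [0.60, 0.46, 0.35, 0.27]),
    "bovine serum albumin": (66.0, [0.52, 0.37, 0.26, 0.18]),
    "urease": (272.0, [0.30, 0.17, 0.09, 0.05]),
}

def retardation_coefficient(rf_values):
    """Slope of log10(Rf) versus gel percentage; -slope is the protein's Kr."""
    slope, _ = np.polyfit(gel_percent, np.log10(rf_values), 1)
    return -slope

# Calibration: log10(molecular mass) versus log10(Kr) for the standards.
masses = np.array([m for m, _ in standards.values()])
krs = np.array([retardation_coefficient(rf) for _, rf in standards.values()])
calib_slope, calib_intercept = np.polyfit(np.log10(krs), np.log10(masses), 1)

# Interpolate the mass of an unknown (e.g. RETHSP-2) from its measured Kr.
kr_unknown = retardation_coefficient([0.55, 0.41, 0.30, 0.22])  # illustrative Rf values
mass_unknown = 10 ** (calib_slope * np.log10(kr_unknown) + calib_intercept)
print(f"Estimated molecular mass: {mass_unknown:.0f} kDa")
```

Because the retardation coefficient depends on hydrodynamic size rather than charge, identical interpolated masses for the unheated and heated-cooled samples support the conclusion that their hydrodynamic volumes are the same.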
6,146
2003-07-18T00:00:00.000
[ "Biology", "Chemistry", "Materials Science" ]
Collision Avoidance of 3D Rectangular Planes by Multiple Cooperating Autonomous Agents We develop a set of novel autonomous controllers for multiple point-mass robots or agents in the presence of wall-like rectangular planes in three-dimensional space. To the authors’ knowledge, this is the first time that such a set of controllers for the avoidance of rectangular planes has been derived from a single attractive and repulsive potential function that satisfies the conditions of the Direct Method of Lyapunov. The potential or Lyapunov function also proves the stability of the system of the first-order ordinary differential equations governing the motion of the multiple agents as they traverse the three-dimensional space from an initial position to a target that is the equilibrium point of the system. The avoidance of the walls is via an approach called the Minimum Distance Technique that enables a point-mass agent to avoid the wall from the shortest distance away at every unit time. Computer simulations of the proposed Lyapunov-based controllers for the multiple point-mass agents navigating in a common workspace are presented to illustrate the effectiveness of the controllers. Simulations include towers and walls of tunnels as obstacles. In the simulations, the point-mass agents also show typical swarming behaviors such as split-and-rejoin maneuvers when confronted with multiple tower-like structures. The successful illustration of the effectiveness of the controllers opens a fertile area of research in the development and implementation of such controllers for Unmanned Aerial Vehicles such as quadrotors. Introduction The motion planning and control (MPC) of mobile robots or agents is a challenging task and an interesting problem that has attracted considerable attention from the robotics community over the last couple of decades. The design of a particular robotic system and motion planning are usually treated independently [1]. Typically, MPC algorithms are applied to systems with fully fixed geometric and kinematic features, while the system design in robotics takes into account robustness, stiffness, workspace volume, obstacle avoidance schemes, and other performance features. The principal goal of any MPC problem is to find the design that best optimises the motion between given configurations [2-7]. In an MPC problem, multiple robots are favoured as they are able to cooperate for faster and more efficient results [4, 5, 8-10]; multiagent operations are likewise preferred in many other fields [11]. Path planning or MPC algorithms for mobile robots operating in an environment cluttered with obstacles are usually grouped according to the methodologies used to generate the geometric path, namely, the road map techniques, cell decomposition algorithms, and artificial potential field (APF) methods [4, 12]. These path planning algorithms have a common objective, which is to find the shortest and most optimal geometric path taking into account the moving objects and obstacles in the workspace [13-15]. While the calculation of a hindrance-free path may address many significant issues in industrial settings where the robot may move cautiously, it is inadequate and practically futile when the robot needs to move at reasonably high speeds, for example, multiple mobile robots navigating through dynamic cluttered situations or autonomous vehicles navigating in highway traffic.
In this research article, we use Lyapunov controllers, constructed via the Lyapunov-based Control Scheme (LbCS), essentially an APF method, for the control and stability of a system of point-mass mobile robots that, in theory, can take on reasonably high velocities. The LbCS has been employed to warrant point and posture stabilities in the sense of Lyapunov for MPC of various robotic systems, such as car-like mobile robotic systems [4], mobile manipulators [16], tractor-trailer systems [12, 17], and swarming [18]. We utilise the control scheme to derive and extract centralised velocity-based control laws for point-mass mobile robots. Contributions. The novelty of this paper is the ease of developing autonomous controllers for the avoidance of three-dimensional wall-like rectangular planes by a mobile robot or agent while it is in motion, using a technique known as the Minimum Distance Technique (MDT). The ability to do this opens up many possibilities. Walls can be used to model buildings and towers, windows, and doors. They can be used to model highways and tunnels. When we deal, for instance, with autonomous Unmanned Aerial Vehicles (UAVs), it is now possible to model a drone's performance in the face of such obstacles as buildings and tunnel walls, and its maneuverability inside, and exit from, buildings cluttered with rectangular objects. For disaster surveillance and in urban war simulations and situations, this maneuverability is critical [19, 20]. The MDT was introduced by Sharma et al. [21] to create parking bays for the posture control problem of robotic systems and to avoid the sides of a bay, modelled as straight lines. The MDT uses APF functions for the avoidance of the boundaries of the parking bay. In this paper, we extend the methodology to encompass rectangular planes. The MDT involves the computation of the minimum distance from the centre of the point-mass mobile robot to the surface of the rectangular plane and the avoidance of the resultant point on the surface of the rectangular plane. The avoidance of the nearest point on the surface of the rectangular plane at any time t ≥ 0 ensures that the point-mass mobile robot avoids the whole plane. As we shall see, this algorithm helps in simplifying the navigation laws. There are, of course, other methods for the obstacle avoidance of polygons. The most recent was proposed by Arantes et al. [22], who discussed path planning approaches for dynamic systems in which nonconvex constraints are handled within a model-predictive control formulation that plans discrete-time control and state sequences simultaneously through a constrained optimisation. The optimisation problem that needs to be solved in this case is a mixed-integer linear program (MILP) when the dynamics are linear and the obstacles are represented by combinations of polytopes, with no uncertainty present. The problem with this particular approach lies in the jumps between time steps, which could result in a trajectory cutting through an obstacle, given that the method is only concerned with satisfying the constraints at discrete points in time, as shown in Figure 1(a). Arantes et al. devised a new approach to suppress this problem by imposing constraints that require every pair of adjacent states to be on the same side of an obstacle, as shown in Figure 1(b) [23]. Furthermore, comparing the approach of Arantes et al. with the MDT, the latter results in a smooth, continuous path for the avoidance of irregularly shaped (rectangular-plane) obstacles.
An illustration of the MDT for the avoidance of a rectangular plane is shown in Figure 1(c). The main contributions of this paper are summarised as follows: (1) The design of the velocity algorithm for a point-mass mobile robot, which is based on a Lyapunov function that acts as an energy function of the system. The velocity algorithm ensures safe, collision-free trajectories that converge to the intended target. (2) The design of the velocity algorithm for the point-mass mobile robot, which is based on the development of a Lyapunov function that acts as an energy function of the system. The velocity algorithm applied here is altogether different from the ones in the literature. Consistently enduring velocities are utilised; nonetheless, the robot needs to stop after it has accomplished its objective. This stop should not be brought about abruptly by a truncation of speed; rather, the robot should slow down its motion and afterwards come to rest. The velocity algorithm and the objective target intended for the robot guarantee a protected and safe stop at the goal and furthermore guarantee that the robot stays there. (3) A three-dimensional rectangular-plane obstacle avoidance scheme using the MDT. While in motion, the distance between the point-mass robot and the closest point on the surface of the wall is computed and the point-mass robot avoids this point on the surface of the wall, resulting in the avoidance of the entire wall. In addition, we only consider the wall closest to the point-mass robot en route to its target. Consequently, our obstacle avoidance scheme is more straightforward compared with, for instance, the avoidance schemes used in artificial potential strategies where all of the obstacles are considered in parallel [4, 17]. (4) Stability analysis pertaining to the kinodynamic system. We use the Direct Method of Lyapunov to carry out the stability analysis, proving that the equilibrium point of the system, representing the target of a point mass, is stable. The paper is organised as follows: In Section 2, we define the kinematic model of the point-mass robot; in Section 3, the APF functions are defined; in Section 4, the Lyapunov function is constructed and the robust nonlinear continuous control laws for the mobile robot are extracted; in Section 5, the stability of the system is discussed; in Section 6, the simulation results are presented to show the robustness and effectiveness of the proposed control inputs, followed by the conclusion in Section 7. Modelling a Point-Mass Robot or Agent in 3D The modelling process of a robotic system involves the conceptualisation of the problem, residing at the abstraction level. Simulation, however, mainly focuses on the implementation and execution of the model to study the behavior and performance of an actual or theoretical system. This section proposes a simple kinematic model for the moving point-mass robot, an abstraction of a simple form of a robotic system. A two-dimensional schematic representation of a point-mass robot with and without rectangular obstacle avoidance is shown in Figure 2. We begin with the following definition. Definition 1. A point mass, P i, is a sphere of radius rp i centred at (x i(t), y i(t), z i(t)) ∈ R 3 for t ≥ 0; that is, it is the set whose form, together with the instantaneous velocity of the point mass at time t ≥ 0, is sketched below.
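The set and velocity expressions of Definition 1 did not survive extraction; the following is a plausible reconstruction based on the surrounding definitions and on the velocity inputs v i(t), w i(t), u i(t) named later in the paper. It is offered as an assumption, not the paper's own typography.

```latex
% Plausible reconstruction of the stripped expressions in Definition 1 and of
% the point-mass kinematics (an assumption; the paper's exact notation may differ).
P_i = \left\{ (Z_1, Z_2, Z_3) \in \mathbb{R}^3 :
      (Z_1 - x_i)^2 + (Z_2 - y_i)^2 + (Z_3 - z_i)^2 \le rp_i^2 \right\},
\qquad
\dot{x}_i = v_i(t), \quad \dot{y}_i = w_i(t), \quad \dot{z}_i = u_i(t)
```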
Assuming the initial conditions, a system of first-order ODEs governing P i follows from these velocity components. Next, we will formulate the components that form the Lyapunov function, essentially the attractive and repulsive potential field functions. Construction of the APF Functions In this section, we construct the components of the Lyapunov function. We assume that P i has a priori knowledge of the entire workspace. The principal objective is to construct the Lyapunov function from which we derive the nonlinear velocity control inputs v i(t), w i(t), and u i(t) for i = 1, . . . , n such that P i navigates and reaches its target configuration, avoiding any obstacle, whether fixed, moving, or artificial, while it is in motion. The design of the nonlinear control inputs is captured in Figure 3, clearly illustrating the roles of the individual components in the design of the control scheme. Attractive Potential Field Functions. We introduce basic mathematical notions to design and construct attractive functions for target attraction for P i. Attraction to Target Function. To initiate movement and ensure convergence, we propose to have a target T i for each of the point-mass mobile robots P i. The convergence of P i to T i will be guaranteed by the Lyapunov function. Definition 2. The assigned target for the point-mass mobile robot P i is a sphere with centre (τ i1, τ i2, τ i3) and radius rτ i; that is, it is the corresponding set of points in R 3. The next function measures the Euclidean distance of P i from its designated target T i at time t ≥ 0 and will be used as an attraction function; a typical form, together with typical auxiliary and boundary functions, is sketched after this paragraph. An illustration of the total potentials for the target attraction function is shown in Figure 4(a), while Figure 4(b) shows the analogous contour plot generated over a workspace 0 < Z 1 < 100 and 0 < Z 2 < 100 for the point-mass mobile robot. For simplicity, we consider the target function in a 2-dimensional environment. The disk-shaped target for the point-mass mobile robot is fixed at (τ 01, τ 02) = (50, 50) with a radius of rp 0 = 1. Auxiliary Function. In the MPC problem, P i starts from an initial position and navigates towards its target. While navigating, the motion of P i is such that it will avoid all obstacles, whether fixed or moving, subject to the kinodynamic constraints of the robotic system, including the constraints on velocity and angles, before reaching its target. Once it has reached the target, the robot has accomplished the task it was given, and hence it needs to stop at the target configuration. Intuitively, this means that the energy of the robotic system needs to be zero at the target configuration; that is, the nonlinear controllers need to vanish at the target. Hence, to achieve this and to ensure the convergence of P i to its target configuration, we consider an auxiliary function of the kind sketched below. Workspace Boundaries. We shall confine the motion of P i to a cuboid of dimensions η 1 × η 2 × η 3. Since the motion is confined within these boundary walls, the walls are treated as fixed obstacles. Therefore, for the avoidance of these walls, boundary avoidance functions are proposed for i = 1, . . . , n, which are positive within the rectangular cuboid (see the sketch below). Rectangular-Plane Obstacle Avoidance. Disks in 2D and spheres in 3D are the simplest models of obstacles. However, they encompass extra space that does not need to be avoided. For example, enclosing a rod-like structure within a sphere introduces spaces that need not be avoided.
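The attraction, auxiliary, and workspace-boundary expressions referenced above were also lost in extraction. The forms below are typical of standard LbCS constructions and are offered purely as a hedged reconstruction; they are assumptions, not the paper's verbatim equations.

```latex
% Typical LbCS forms (assumed, not quoted from the paper): target set, target
% attraction function, auxiliary function, and the six boundary-avoidance
% functions of the cuboidal workspace, all positive inside the workspace.
T_i = \left\{ (Z_1,Z_2,Z_3)\in\mathbb{R}^3 :
      (Z_1-\tau_{i1})^2+(Z_2-\tau_{i2})^2+(Z_3-\tau_{i3})^2 \le r\tau_i^2 \right\},
V_i(\mathbf{x}) = \tfrac{1}{2}\left[(x_i-\tau_{i1})^2+(y_i-\tau_{i2})^2+(z_i-\tau_{i3})^2\right],
\qquad G_i(\mathbf{x}) = V_i(\mathbf{x}),
W_{i1}=x_i-rp_i,\quad W_{i2}=\eta_1-(x_i+rp_i),\quad
W_{i3}=y_i-rp_i,\quad W_{i4}=\eta_2-(y_i+rp_i),\quad
W_{i5}=z_i-rp_i,\quad W_{i6}=\eta_3-(z_i+rp_i)
```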
As an illustration, Figure 5(a) shows the contour plot of the total potentials and the corresponding collision-free path of a point mass over the workspace 0 < Z 1 < 40 and 0 < Z 2 < 40 encompassing a rod-shaped obstacle, while Figure 5(b) showcases the contour plot of the total potentials and the resulting path if the rod is replaced by a disk-shaped obstacle. The initial and final coordinates of the rod are (10, 20) and (30, 20), respectively. The disk portrayal of the rod has a diameter of 10, which matches the length of the rod, and is centred at (20, 20). The path generated in the presence of the rod-shaped obstacle is optimal in terms of the distance traversed since the obstacle space is small in contrast to the disk portrayal of the rod. Therefore, in this article, we introduce rectangular obstacles. To avoid the rectangular obstacles via the MDT, the surface wall of the rectangular plane is classified as a fixed obstacle. Let us fix ℓ = 1, . . . , m, m ∈ N rectangular-plane-shaped obstacles within the workspace. An illustration of a rectangular plane is showcased in Figure 6, which we shall use to derive the new mathematical equations for its avoidance. Three points are sufficient for deriving the saturation functions and hence designing the rectangular-plane avoidance functions. We begin with the following definition. Definition 3. Assume that the three-dimensional ℓth planar obstacle has the three coordinates (a ℓ1, b ℓ1, c ℓ1), (a ℓ2, b ℓ2, c ℓ2), and (a ℓ3, b ℓ3, c ℓ3), ℓ = 1, . . . , m, m ∈ N (see Figure 6). A single point in the plane is defined parametrically from these coordinates; the plane can then be precisely described as the set of such points, and the set of ℓ planes, ℓ ∈ {1, . . . , m}, follows, where the parametric representation holds for 0 ≤ λ iℓ1,2 ≤ 1, ℓ = 1, . . . , m, and i = 1, . . . , n. The MDT necessitates that we identify the closest point on each of the ℓ rectangular planes, measured from the centre of P i. We compute the minimum Euclidean distance from the centre of P i to the surface of the ℓth rectangular plane. The avoidance of the closest point on the surface of the rectangular plane at any time t ≥ 0 results in the avoidance of the entire plane by P i. Minimising the Euclidean distance between the point (x i, y i, z i), which is the centre of P i, and the ℓth rectangular plane yields the minimising parameters, from which the saturation functions λ iℓ1,2 are obtained (a numerical sketch of this computation is given after this paragraph). The new obstacle avoidance functions RP iℓ(x) are then defined for i = 1, . . . , n and ℓ = 1, . . . , m; the function RP iℓ(x) is the measure of the distance between the closest point on the surface of the ℓth rectangular-plane-shaped obstacle and the centre of P i. Moving Obstacles. While in motion, each moving robot itself becomes a moving obstacle to every other mobile robot. For P i to avoid P j, we consider a corresponding pairwise avoidance function for i, j = 1, . . . , n, i ≠ j. In a nutshell, all these components will now be incorporated to form a Lyapunov function, which will eventually lead to the design of the control inputs for the robotic system.
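To make the minimisation concrete, here is a minimal numerical sketch of the closest-point computation that the MDT performs for a single rectangular plane. It assumes the plane is described by one corner and two orthogonal edge vectors (recoverable from the three corners of Definition 3); the function name and all numbers are illustrative and are not taken from the paper.

```python
import numpy as np

def closest_point_on_rectangle(c, a, u, v):
    """Return the point of the rectangle a + l1*u + l2*v (0 <= l1, l2 <= 1)
    nearest to the robot centre c, together with the minimum distance."""
    w = c - a
    # Unconstrained minimisers of the squared distance; clamping them to [0, 1]
    # plays the role of the saturation functions for lambda_1 and lambda_2.
    l1 = np.clip(np.dot(w, u) / np.dot(u, u), 0.0, 1.0)
    l2 = np.clip(np.dot(w, v) / np.dot(v, v), 0.0, 1.0)
    p = a + l1 * u + l2 * v
    return p, np.linalg.norm(c - p)

# Example: a vertical wall spanning x in [0, 10] and z in [0, 5] at y = 3.
a = np.array([0.0, 3.0, 0.0])          # one corner of the plane
u = np.array([10.0, 0.0, 0.0])         # first edge vector
v = np.array([0.0, 0.0, 5.0])          # second edge vector
robot_centre = np.array([4.0, 1.0, 6.0])
point, dist = closest_point_on_rectangle(robot_centre, a, u, v)
print(point, dist)   # avoiding this single point avoids the whole plane
```

Because the avoided point is recomputed at every instant, only one repulsive term per plane is needed, which is what keeps the resulting navigation laws simple.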
Design of the Control Inputs In this section, we will first construct the Lyapunov function, followed by its time derivative, from which we will ultimately extract the nonlinear control inputs for system (2). Lyapunov Function. The Lyapunov function, the total potentials that guarantee target convergence and obstacle and collision avoidance, is the sum of the attractive and repulsive potential fields. We begin by introducing the control/tuning parameters: (i) ℘ is > 0, for s = 1, . . . , 6, for the avoidance of the sth boundary of the workspace (see Section 3.2.1); (ii) ς iℓ > 0, for ℓ = 1, . . . , m, for the avoidance of the surface wall of the ℓth rectangular plane. Suitably combining all the attractive and repulsive potential field functions using these tuning parameters, we define a Lyapunov function L(x) for system (2); its stripped expression, together with the extracted controllers, is reconstructed in the hedged sketch below. Control Inputs. Next, we differentiate the various components of L(x) separately with respect to t to obtain (on suppressing x) the control inputs for system (2) for i = 1, . . . , n. In order to ensure stability in the sense of Lyapunov of system (2), we define the accompanying continuous velocity control laws for i = 1, . . . , n. Our main theorem, given next, uses these laws to prove the stability of our system. Theorem 1. A stable equilibrium point of system (2) is x e ∈ D(L(x)). Furthermore, with the new controllers designed and the stability of the robotic system analysed, the effectiveness of the control scheme is verified using computer simulations.
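The expressions for L(x) and for the velocity control laws did not survive extraction. The following is a hedged sketch of the standard LbCS construction, offered as an assumption about their general form rather than the paper's exact equations; the convergence parameters α and the pairwise tuning parameters β ij are placeholders introduced here, and MO ij denotes the moving-obstacle avoidance function of the preceding section (a name also introduced here for convenience).

```latex
% Hedged sketch of a typical LbCS Lyapunov function and of the velocity
% controllers extracted from it (assumed forms; alpha, beta and MO are
% placeholder symbols, not symbols taken from the paper).
L(\mathbf{x}) \;=\; \sum_{i=1}^{n}\Bigg[\, V_i
   + G_i \Bigg( \sum_{s=1}^{6}\frac{\wp_{is}}{W_{is}}
   + \sum_{\ell=1}^{m}\frac{\varsigma_{i\ell}}{RP_{i\ell}}
   + \sum_{\substack{j=1 \\ j\neq i}}^{n}\frac{\beta_{ij}}{MO_{ij}} \Bigg) \Bigg],
\qquad
v_i = -\alpha_{i1}\,\frac{\partial L}{\partial x_i},\quad
w_i = -\alpha_{i2}\,\frac{\partial L}{\partial y_i},\quad
u_i = -\alpha_{i3}\,\frac{\partial L}{\partial z_i}
```

With positive tuning and convergence parameters, a construction of this kind gives a non-increasing L along the trajectories of system (2), which is the essence of the stability argument invoked in Theorem 1.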
Simulation Results The three situations given in this section capture realistic settings to illustrate the adequacy, effectiveness, and robustness of the velocity-based controllers and the control scheme. In the following scenarios, the data are given in consistent but unit-free terms: the parameters are unitless, while time and distance can be read in any consistent units, for instance seconds or minutes and centimeters or meters. Scenario 1. In this scenario, we consider a simple setup where P i navigates itself from its initial position to its predefined target in the presence of a fixed rectangular-plane obstacle. There are 3 point-mass mobile robots and a rectangular-plane obstacle. Each of the point-mass mobile robots avoids the others as well as the rectangular-plane obstacle while en route to its target. It is very interesting to observe the proximity of the point-mass mobile robots to the wall as they evade it, exerting just enough energy to move above the wall and converge to their targets. The behavior exhibited by P i is quite intriguing as it mimics the behavior that a swarm of birds exhibits as it approaches a wall. Figure 7(a) shows the default 3D view, Figure 7(b) shows the top 3D view, and Figure 7(c) shows the front 3D view of the motion of the point-mass mobile robots. The obstacle is rendered with transparency to allow us to view the position and path of P i. The blue sphere represents the position of P i at t = 0 units of time, the red sphere at t = 700 units of time, the green sphere at t = 3500 units of time, and the purple sphere at t = 15000 units of time. Table 1 provides all the values of the initial conditions, constraints, and different parameters utilised in the simulation. Scenario 2. Here we model rectangular towers, which could represent tall buildings in cities. These towers, constructed with 15 planes, block the path of a swarm of 5 point-mass mobile robots. The agents are observed to start from their initial positions and maneuver themselves to their predefined targets, while ensuring avoidance of the towers as well as inter-individual collision avoidance. Each P i computes the shortest collision-free path to its destination. Split maneuvers are observed while the robots are en route along their paths. Such an example with multiple towers can be used to model the obstacle avoidance capability of UAVs. Table 2 provides all the values of the initial conditions, constraints, and different parameters utilised in the simulation, where these differ from the previous scenario. For the construction of the towers, the reader is referred to the figures for the extraction of the coordinates. Scenario 3. An interesting research domain involves tunnel-passing maneuvers. In this scenario, there are 3 point-mass mobile robots and we design tunnels using rectangular planes. We use 8 rectangular planes to construct the tunnel. In addition, the top and one of the side views have been strategically made transparent to show the trajectory and the position of P i as it maneuvers through the tunnel. The snapshots show the way the point-mass robots strategize their motion to decide which robot will pass through the tunnel first and how they will converge to their respective predefined targets. Drones could be deployed in areas that are deemed to be "dull, dirty, and dangerous" as well as "difficult", such as collapsed tunnel passages, to capture, store, check, and send data for analysis. Figure 8(a) shows the default 3D view, with further views in the remaining panels of Figure 8. Interestingly, the behavior exhibited in this scenario can be seen in nature, namely, the leader-follower strategy, where the leader guides the followers to food sources, safety, and so on. We note that the leader-follower strategy, cooperative hunting, and avoidance in the military are drone-based applications that are common nowadays. Conclusions and Future Work Mathematical modelling and the design of motion planners for robotic systems are a complex, computationally expensive yet fascinating research area. In this paper, the LbCS was applied to derive a set of robust, unique, continuous, time-invariant velocity-based control inputs that effectively handle the problem of MPC of point-mass mobile robots in a dynamic environment that, for the first time, incorporates rectangular-plane obstacles. The convergence of the mobile robots to a neighborhood of a predefined target is ensured by the Lyapunov direct method. The effectiveness and robustness of the control scheme were illustrated via computer simulation of virtual scenarios that depict real-life situations. To the authors' knowledge, this is the first time in the literature that the MDT has been used to derive the mathematical functions for the successful avoidance of rectangular-plane-shaped obstacles. The introduction of the rectangular-plane obstacle into the MPC problem has created new dimensions and potential for research. The advantages of the MDT are numerous: it makes it possible for plane-shaped (and other irregular) obstacles to be treated within the motion planners, helps in simplifying collision-avoidance algorithms, and permits maximum free space for the robots traversing the workspace. This work paves the way for numerous future directions. Our principal objective is to extend the rectangular-plane obstacles in a workspace to the MPC of a flock of quadrotors performing hovering maneuvers and undergoing split-and-rejoin maneuvers when encountering towers and tunnels. Data Availability The research data for this article are purely simulation-based. These are available upon request.
5,643.4
2020-10-26T00:00:00.000
[ "Mathematics" ]
Immunogenicity of plant‐produced African horse sickness virus‐like particles: implications for a novel vaccine Summary African horse sickness (AHS) is a debilitating and often fatal viral disease affecting horses in much of Africa, caused by the dsRNA orbivirus African horse sickness virus (AHSV). Vaccination remains the single most effective weapon in combatting AHS, as there is no treatment for the disease apart from good animal husbandry. However, the only commercially available vaccine is a live‐attenuated version of the virus (LAV). The threat of outbreaks of the disease outside its endemic region and the fact that the LAV is not licensed for use elsewhere in the world, have spurred attempts to develop an alternative safer, yet cost‐effective recombinant vaccine. Here, we report the plant‐based production of a virus‐like particle (VLP) AHSV serotype five candidate vaccine by Agrobacterium tumefaciens‐mediated transient expression of all four capsid proteins in Nicotiana benthamiana using the cowpea mosaic virus‐based HyperTrans (CPMV‐HT) and associated pEAQ plant expression vector system. The production process is fast and simple, scalable, economically viable, and most importantly, guinea pig antiserum raised against the vaccine was shown to neutralize live virus in cell‐based assays. To our knowledge, this is the first report of AHSV VLPs produced in plants, which has important implications for the containment of, and fight against the spread of, this deadly disease. Introduction African horse sickness (AHS) is a devastating illness of horses, which is frequently fatal in susceptible hosts, and has been part of South Africa's veterinary disease landscape for several centuries. It is widely recognized as one of the most lethal viral diseases of horses worldwide (Coetzer and Guthrie, 2004;Weyer et al., 2013). The disease is caused by a number of distinct serotypes of African horse sickness virus (AHSV), a group of nonenveloped isometric dsRNA viruses (genus Orbivirus, family Reoviridae) which are transmitted by biting midges of the Culicoides genus. AHS is infectious but noncontagious, and is endemic to sub-Saharan Africa (Mellor and Hamblin, 2004;Sanchez-Vizcaino, 2004). South Africa is one of the few countries where all nine serotypes of the virus have been isolated (von Teichman et al., 2010). However, the virus has occasionally escaped its geographical limitation and extended further afield to countries in North Africa, the Middle East, the Arabian Peninsula and the Mediterranean region (MacLachlan and Guthrie, 2010). Global climate change is thought to be contributing to the gradual northward migration of the midge vector, which has led to a sobering international awareness that AHS-free countries with milder climate conditions are possibly increasingly at risk for outbreaks of the disease or even the establishment of endemicity (Herholz et al., 2008;Hopley and Toth, 2013;de Vos et al., 2012). The emergence of the generically related bluetongue virus (BTV) in north-western Europe in 2006 (Darpel et al., 2007), as well as the extended AHS outbreak that occurred in western Mediterranean countries between 1987 and 1991 (Rodriguez et al., 1992), has only served to reinforce these concerns. Disease control in South Africa has largely been effected by immunization with live-attenuated vaccines (LAVs) produced by Onderstepoort Biological Products, based on products developed in the 1930s (Alexander, 1934). 
The currently used LAV is supplied in two polyvalent vials containing three and four AHSV serotypes each, but neither AHSV-5 nor AHSV-9 is included in the vaccine (von Teichman and Smit, 2008). Although the LAV is currently the best option in the fight against AHS, its use has raised concerns with regard to reversion to virulence, gene segment reassortment between outbreak and vaccine strains (Weyer et al., 2016), and the absence of DIVA, which is the ability to Differentiate between Infected and Vaccinated Animals. Most importantly, the LAV is not licensed for use outside of the African subcontinent. As is presently the case in South and southern Africa, AHS outbreaks in Europe would result in large economic losses to the equine industry and would have an enormous emotional impact on owners and lovers of horses. There is thus a pressing need to develop new, safe, efficacious and cost-effective DIVA vaccines which would primarily address the concerns of the South African equestrian community, as well as being acceptable prophylactic or rapid response vaccines in the European and other emerging outbreak contexts. The AHSV genome consists of 10 segments of linear doublestranded RNA, encoding seven structural and five nonstructural proteins (Roy et al., 1994). The virion is nonenveloped and is composed of three distinct protein layers (Figure 1a). The inner core is formed by VP3, while the outer core layer is formed by protein VP7, which is also the group-specific antigen used in ELISA-based diagnostic tests (Chuma et al., 1992). VP7 assembles into trimers that attach perpendicularly to the VP3 surface. In native virions, these two layers enclose the subcore, comprising the 10 dsRNA segments together with the transcription complex, to form a stable icosahedral core particle around 78 nm in diameter. The outer capsid is composed firstly of a layer of VP5 and then one of VP2, which is the protein containing the antigenic determinants that induce serotype-specific neutralizing antibodies. Due to raised international awareness and local dissatisfaction with the current vaccine, AHSV research has focussed in recent years on the development of recombinant vaccines based on selected antigenic AHSV proteins, particularly the outer capsid proteins VP2 and VP5. Baculovirus expression systems (Kanai et al., 2014;Roy and Sutton, 1998) and poxvirus vectors (Alberca et al., 2014;Calvo-Pinilla et al., 2014, 2015El Garch et al., 2012;Guthrie et al., 2009) have been used to produce vaccines that induce protective immunity against various AHSV antigens. Disadvantages inherent to these types of vaccines include firstly, that recombinant soluble antigens are generally poorly immunogenic and require potent adjuvants or repeated boost inoculations to enhance immunogenicity; secondly, that pre-existing immunity against the viral vector may compromise vaccine efficacy. Viruslike particles (VLPs) that mimic the structure of intact virions, on the other hand, provide an attractive alternative vaccine platform. Sharing certain key characteristics with live viruses, VLPs are safe nonreplicating protein assemblies with the advantage of being highly immunogenic, as epitopes are displayed in ordered repetitive arrays on the particle surface (Noad and Roy, 2003). 
Such vaccines present no risk of reversion to virulence nor of dsRNA segment reassortment with wild virus strains because they do not contain viral RNA or nonstructural proteins, which also makes it possible to distinguish between vaccinated and infected animals using molecular diagnostic techniques. In addition to expressing the individual AHSV VP2 and VP5 proteins, Roy and Sutton (1998) also used the baculovirus expression system in insect cells to synthesize AHS virus-like particles. More recently, reverse genetics systems have been used to generate replication-deficient AHSVs (Lulla et al., 2016; Vermaak et al., 2015; van de Water et al., 2015), which have the potential to be used as Disabled Infectious Single Animal (DISA) vaccine strains. Although this technology looks promising, the associated cost and upscaling requirements have thus far prevented any of these potential vaccine candidates from being commercialized. Over recent years, the use of plant systems to express recombinant viral structural proteins, with the resulting self-assembly of VLPs, has become increasingly popular as the method is both cost-effective and free from the risk of contaminating animal pathogens (Lomonossoff and D'Aoust, 2016; Rybicki, 2010, 2014; Steele et al., 2017; Topp et al., 2016). Thuenemann et al. (2013) showed that plant-produced BTV VLPs of this kind give good protection against BTV in sheep. Following on from these studies, here we report the expression and complete assembly of AHSV serotype 5 VLPs in N. benthamiana using Agrobacterium-mediated delivery of constructs encoding the four major AHSV-5 capsid proteins. AHSV-5 capsid proteins transiently expressed in N. benthamiana leaves self-assemble into VLPs To investigate whether AHSV-5 capsid proteins could be transiently co-expressed and lead to spontaneous self-assembly of intact VLPs within individual plants, recombinant plasmids containing the VP2, VP3, VP5 and VP7 genes were constructed. A consensus sequence of each gene was obtained by aligning all the known sequences listed in GenBank; these were codon-optimized for Nicotiana spp. translation and synthesized with flanking AgeI and XhoI restriction enzyme sites by GenScript Biotech Corporation, China. The genes were cloned into the multiple cloning site of the pEAQ-HT vector (Sainsbury et al., 2009; obtained from G. Lomonossoff, John Innes Centre, UK) to yield four different constructs, pEAQ-AHS5-VP2, pEAQ-AHS5-VP3, pEAQ-AHS5-VP5 and pEAQ-AHS5-VP7 (Figure 1b). Transient expression of the AHSV proteins in N. benthamiana was tested by small-scale syringe infiltration of five leaves per experiment with Agrobacterium strains carrying individual constructs, or by co-infiltration of the same plant with all four recombinant-carrying strains. All infiltrated leaf tissue exhibited chlorosis, but little if any necrosis was observed (Figure 2a). Agrobacterium suspensions carrying recombinants in two different VP2 : VP3 : VP5 : VP7 ratios were tested, namely 1 : 1 : 1 : 1 and 1 : 1 : 2 : 1, as the latter ratio has previously been shown (van Zyl et al., 2016) to give a better yield of bluetongue virus (BTV) VLPs. Three leaf discs were clipped per expression test using an Eppendorf vial lid, and extracted on 3, 5 and 7 days postinfiltration (dpi), to determine the optimal expression conditions. Western blots showed that crude extracts of leaves infiltrated with Agrobacterium carrying recombinants at an OD 600 of 0.5 each, and prepared 7 days after infiltration, gave good protein expression (Figure 2b).
There was no apparent difference in expression between the two construct mixture ratios used (Figure S1); therefore, plants were infiltrated with each recombinant at OD 600 = 0.5 in all subsequent experiments. Expression of VP2 (123 kDa) and VP7 (37 kDa) as well as the VP7 trimer (135 kDa) was demonstrated, the proteins being visualized as distinct bands of the correct expected molecular weight in SDS-PAGE analyses of crude extracts. Apparent differences in gel loading are sometimes observed as a result of natural leaf-to-leaf and plant-to-plant total soluble protein (TSP) variation. Bands corresponding to VP3 (103 kDa) and VP5 (57 kDa) were not detected due to a peculiarity of the available antiserum, which has been shown to detect only VP2 and VP7. However, fully formed AHSV-5 VLPs were imaged by TEM analysis of these crude extracts, indicating that all four capsid proteins were expressed and had indeed self-assembled into complete particles (Figure 2c). As such, this is the first known report of AHSV VLPs being produced in plants. Figure 2 caption (panels b and c): the AHSV-5-specific horse antiserum (1 : 1000), which was unable to detect either VP3 or VP5, was used as the primary antibody; VP7 trimeric proteins (135 kDa), VP2 (123 kDa) and VP7 monomeric proteins (38 kDa) are indicated by arrowheads; a colour-prestained protein standard, broad range (New England Biolabs, Massachusetts, USA), indicated to the right of the blot, was used as a molecular weight marker. (c) Fully assembled AHSV-5 virus-like particles imaged by TEM analysis of crude extracts from plants co-infiltrated with pEAQ-AHS5-VP2, pEAQ-AHS5-VP3, pEAQ-AHS5-VP5 and pEAQ-AHS5-VP7; scale bar, 100 nm. Density gradient ultracentrifugation is a suitable purification method for plant-produced AHSV-5 VLPs To produce an AHS VLP preparation of sufficient purity and concentration for immunization of guinea pigs, several modifications were made to the small-scale expression protocol. Firstly, the process was upscaled to infiltrate 24 whole plants with the recombinant constructs at an OD 600 of 0.5 each. Secondly, AHSV VP7 is known to form trimers which aggregate into crystalline structures in the cytoplasm of infected cells (Burroughs et al., 1994), and there is evidence to suggest that these crystals impede VLP formation by sequestering available soluble VP7 trimers and preventing them from being incorporated into the core particle (Bekker et al., 2017; Maree et al., 2016). Therefore, a mutated version of the VP7 gene containing seven amino acid substitutions near the 3′ end (Bekker, 2015) was also synthesized and cloned into pEAQ-HT to yield pEAQ-AHS5-VP7mu. Co-infiltration with Agrobacterium strains carrying the VP2, VP3 and VP5 recombinants together with the mutated construct, as opposed to the wild-type VP7 construct, yielded an increased concentration of VLPs (Figure S2). Therefore, the mutated VP7 construct was used in all further experiments. Thirdly, a vacuum infiltrator was used to introduce the Agrobacterium suspension into the leaf intercellular spaces, as this was much less labour-intensive than syringe infiltration and resulted in more uniform infiltration of plant leaves. Clarified leaf extracts were purified by iodixanol density gradient ultracentrifugation. Green-coloured impurities settled in the upper 30% region of the gradient, while a single iridescent band was observed at a higher density, near the 30%-40% interface (Figure 3a).
Fractions were collected from the bottom of the tube, and four distinct bands corresponding to the correct molecular weights of the AHSV capsid proteins were observed following separation of fractions 6-8 by SDS-PAGE and Coomassie blue staining (Figure 3b). These fractions also corresponded to the observed iridescent band. Expression of VP2 and VP7 as well as the VP7 trimer was demonstrated by Western blotting using the available AHSV-5 antiserum (Figure S3). As AHS serotype 5 is one of the serotypes not included in the LAV, we did not have access to a positive control for this study. Therefore, the identity of the four protein species was further confirmed by mass spectrometry (Figure S4). The co-sedimentation of all four proteins was highly suggestive of the presence of VLPs, and this was confirmed by TEM analysis (Figure 3c). An estimated 40%-50% of the viral structures were seen to be complete AHSV VLPs (white arrows in Figure 3c) or contained at least a partial VP2 outer layer, although some particles appear to have been slightly damaged during the purification process. Assembly intermediates representing core-like particles (CLPs), or CLPs in the process of acquiring the two outer coat proteins, were also observed (yellow arrows in Figure 3c). Gel densitometry was used to estimate the VLP concentration (Figure S5). The purification was repeated several times; typically, 70 g of infiltrated leaf material yielded approximately 0.4 mg of highly purified VLPs, which equates to approximately 5.7 mg purified VLPs/kg leaf biomass. Plant-produced AHSV-5 VLPs induce a strong immunogenic response in guinea pigs Guinea pigs were used as a small animal model to test the ability of the plant-produced AHSV-5 VLPs to induce an immune response. On day 0, five guinea pigs (V1-V5) were each vaccinated with AHSV VLPs mixed with 5% Pet Gel A adjuvant (Seppic, Paris, France), while five control animals (C1-C5) were immunized with PBS plus adjuvant. Prior to the boost inoculation, a further purification was carried out which yielded sufficient AHS VLPs to increase the amount of the next inoculum. As this was a first vaccination study, the optimal dose required was unknown, and animals were therefore boosted on day 13 with a greater amount of VLPs than used for the prime inoculation, or with PBS. Due to an accidental traumatic injury, guinea pig V1 was euthanized on day 15 and, together with control guinea pig C1, was excluded from the main study results. Sera from all other animals were collected on day 41, and final and prebleed sera (1 : 10 000) were used to probe a Western blot of the VLPs used in the initial inoculations. Strong signals for VP2, VP5 and VP7 (both monomers and trimers) were detected by final bleed sera but not by the control guinea pig sera (C2-C5) nor by the prebleed sera from any of the VLP-vaccinated guinea pigs (Figure 4). Antiserum from guinea pigs immunized with plant-produced AHSV VLPs neutralized live virus To test the ability of the sera to neutralize live virus, serum samples from all guinea pigs were sent to the Equine Research Centre, Faculty of Veterinary Science, University of Pretoria, for serum neutralization tests. Sera were assayed against AHSV-5 and AHSV-8, as serological cross-protection has been shown in vitro between serotypes 5 and 8 (von Teichman et al., 2010), and against AHSV-4, for which no cross-protection has been shown.
All VLP-vaccinated guinea pig sera showed a high level of neutralization capability against AHSV-5 and neutralized AHSV-8 to a lesser extent, but to a degree similar to that of the AHS-positive control (Table 1). The sera did not neutralize AHSV-4, and control guinea pig sera did not neutralize any of the AHSV serotypes. These results indicate that plant-produced AHSV-5 VLPs stimulate a highly protective immune response in guinea pigs. Figure 4 Immunogenicity of plant-produced AHSV-5 VLPs in guinea pigs. Vaccine and control guinea pig groups (n = 4) were vaccinated with plant-produced AHSV-5 VLPs (guinea pigs V2-V5) or PBS (guinea pigs C2-C5), respectively. Both vaccines were formulated with 5% Pet Gel A adjuvant (Seppic, Paris, France). Guinea pigs V2-V5 were immunized with a dose of 16.5 µg AHSV-5 VLPs on day 0 and boosted with a dose of 50 µg VLPs on day 13, while guinea pigs C2-C5 were vaccinated with PBS as per the same schedule. Serum was collected on day 41 and antisera (1 : 10 000 dilution) from guinea pig V2 (lane 1), V3 (lane 2), V4 (lane 3) and V5 (lane 4) final bleeds were used to detect AHSV-5 VLPs on standard Western blots separately. AHSV proteins were not detected by control guinea pig sera nor in any of the prebleed sera. The location of the AHSV viral proteins VP2, VP5 and VP7mu as well as the VP7mu trimer is indicated to the left of the blots, while the molecular weight marker sizes are shown on the right. No signal was detected for the innermost core protein VP3. Table 1 Virus neutralizing antibody titres of serum samples from vaccinated (V) and control (C) guinea pigs. The guinea pig sera were assayed for neutralization capability against AHSV-5, AHSV-4 and AHSV-8, as serological cross-protection has been shown in vitro between serotypes 5 and 8, but not between serotypes 5 and 4. Horse serum from animals vaccinated with the AHSV live-attenuated vaccine produced by Onderstepoort Biological Products (OBP) was used as a positive control. Discussion The results presented here show that the four AHSV-5 capsid proteins, VP2, VP3, VP5 and VP7mu, spontaneously assemble into virus-like particles (VLPs) following recombinant DNA expression of these proteins within transiently transfected plant cells. The VLPs are noninfectious and inherently safe, as they assemble in the absence of any viral genetic material or other viral proteins. The proteins were each expressed from different recombinant-carrying Agrobacterium strains co-infiltrated in equal concentrations, and particles formed within 7 days. VLPs seen in crude extracts resulting from small-scale expression in a single plant were almost all fully formed, whereas larger-scale expression and subsequent iodixanol gradient purification produced a more heterogeneous mix of particles, including assembly intermediates as well as some partially damaged particles. This phenomenon was observed regardless of whether particles contained the wild-type or mutated VP7 protein, indicating that minor modification of VP7 by seven amino acid substitutions did not impact the nature of viral particle formation. However, modifying VP7 in this way did appear to increase the number of particles produced relative to use of the wild-type VP7. The average number of particles counted over 10 fields of view at a magnification of ×54 000 was 9.2 for particles containing the wild-type VP7 protein compared to 30.7 for particles incorporating the mutated VP7. Furthermore, a comparison of Western blot analyses of gradient fractions containing the two VLP species also seemed to indicate greater VLP formation with VP7mu expression (Figure S2). VP2 was detected to a greater degree when VP7 was more available for incorporation into viral cores, implying increased particle formation. This is in agreement with work performed by others (Maree et al., 2016) who demonstrated an improvement in the efficiency of core-like particle (CLP) production following co-expression of VP3 with VP7 containing a 6-mer peptide insertion mutation in a surface-exposed loop located in the lower part of the top domain. Both their study and ours suggest that AHSV VLP formation can be enhanced by genetic modification of VP7 leading to increased availability of VP7 trimers for incorporation into the AHSV core particle. Density gradient ultracentrifugation is a useful purification strategy for confirming the successful assembly of VLPs within the plants, but it is possible that the high centrifugal force may damage the particles. Indeed, it has been our observation that crude plant extracts contain a much higher percentage of fully formed VLPs, albeit at a much lower concentration, compared to VLPs in iodixanol gradient fractions. This, together with the high cost of iodixanol, the expensive and sensitive equipment required, and the labour-intensive centrifugation and fractionation step, makes it important to consider alternative purification strategies. These could include depth filtration and tangential flow filtration for large-scale centrifuge-free production. However, although some of the purified particles appeared incomplete and thus probably did not contain the full complement of VP2, sufficient VP2 was present to elicit a strong serotype-specific neutralizing antibody response when injected into guinea pigs. The guinea pig has been shown by others to be a useful laboratory model for evaluating the antigenic properties of AHS vaccines (Erasmus, 1978; Lelli et al., 2013; Ronchi et al., 2012). We therefore tested the efficacy of our candidate vaccine in guinea pigs prior to testing it in horses, and have demonstrated that all the animals vaccinated with the plant-produced AHSV-5 VLPs seroconverted after two doses of the vaccine. Vaccination did not cause any adverse reaction in the guinea pigs which, aside from the one that was injured during handling and had to be euthanized, all remained healthy to the end-point of the study. The guinea pigs vaccinated in this study developed virus neutralizing antibody (VNAb) titres ranging between 640 and 5120 (2.8 and 3.7 log10) against homologous AHSV-5 VLPs. These results exceed the VNAb titres obtained in studies by others using poxvirus vectors expressing the outer capsid proteins VP2 and VP5 (Alberca et al., 2014; Guthrie et al., 2009). In the study by Guthrie et al. (2009), horses vaccinated with 10^7.1 TCID50 of a canarypox-based AHSV vaccine (ALVAC®-AHSV) expressing VP2 and VP5 of AHSV-4 developed serum VNAb titres of 20-40 (1.3-1.6 log10). Alberca et al. (2014) vaccinated horses with 10^8 pfu/mL of a modified vaccinia Ankara (MVA) virus expressing VP2 of serotype 9 and reported VNAb titres of 1.6-2.4 log10. In this study, guinea pigs were first each vaccinated with 16.5 µg plant-produced VLPs, as this was the amount of highly purified VLPs available at the start of the study. Subsequent to the initial vaccination, further purifications yielded sufficient VLPs to increase this dosage.
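As a quick arithmetic check of the titre comparisons quoted above, the log-transformed values are simply base-10 logarithms of the reported reciprocal titres; the figures below reproduce the conversions stated in the text.

```latex
% Base-10 logarithms of the neutralizing antibody titres quoted in the text.
\log_{10} 640 \approx 2.8,\qquad \log_{10} 5120 \approx 3.7,\qquad
\log_{10} 20 \approx 1.3,\qquad \log_{10} 40 \approx 1.6
```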
Because no previous studies using plant-produced AHSV VLPs have been described, and as Thuenemann et al. showed that 50 µg of plant-produced BTV VLPs gave good protection against BTV in sheep, we decided to boost-inoculate the guinea pigs with 50 µg VLPs instead of a second dose of 16.5 µg. The unfortunate accidental injury and subsequent euthanization of one of the guinea pigs (V1) 2 days after the boost vaccination (day 15) gave us the opportunity to test the immune response prior to the end of the study. Even at this early stage, serum from this guinea pig had developed a VNAb titre of 224, twice that of the AHS-positive control (112) (Table S1). It is therefore feasible to suggest that 50 µg was probably considerably in excess of the VLP dose required for guinea pigs, and could quite possibly be a sufficient dose for horses even though they are much larger animals. Our results thus demonstrate that fully assembled AHSV-5 VLPs can be produced in plants via a transient expression strategy and that plant-produced AHSV-5 VLPs elicit a strong serotype-specific neutralizing antibody response in guinea pigs. Our study also confirms the suitability of the guinea pig as a small animal model for preliminary testing of recombinant AHSV vaccines. The study will now be extended to include safety and immunogenicity testing in horses, the main target animals, with a view to producing a supplemental vaccine to AHSV-5 to complement the standard live vaccine mixture. Transient expression in plants Expression of the AHSV-5 capsid proteins was achieved by agroinfiltration of 5- to 6-week-old N. benthamiana plants. Agrobacterium transformants, each carrying one of the AHSV-5 capsid protein genes, were subcultured and grown overnight with agitation at 27°C in Luria-Bertani broth (LBB) base supplemented with 50 µg/mL kanamycin, 20 µM acetosyringone and 2 mM MgSO4. The cultures were diluted in resuspension solution (10 mM MES, pH 5.6, 10 mM MgCl2, 100 µM acetosyringone) to the desired optical density and incubated for 1 h at 22°C to allow for expression of the vir genes. For single infiltrations, each AHSV-5 Agrobacterium recombinant suspension was diluted to OD 600 = 0.5 or 1.0, while co-infiltration suspensions contained all four AHSV-5 recombinants in a VP2 : VP3 : VP5 : VP7 ratio of 1 : 1 : 1 : 1 or 1 : 1 : 2 : 1. Plants were grown at 22-25°C under 16-h/8-h light/dark cycles. Agrobacterium suspensions were infiltrated into the leaf intercellular spaces using either a blunt-ended syringe or a vacuum infiltrator, applying a vacuum of 100 kPa. For optimization of the expression, three leaf discs were obtained from each plant, clipped with the lid of a microcentrifuge tube on 3, 5 and 7 days postinfiltration (dpi) and homogenized in three volumes of PI buffer [phosphate-buffered saline (PBS), pH 7.4, containing 1× Complete protease inhibitor cocktail (Roche, Basel, Switzerland)] using a micropestle. The homogenate was incubated on ice for 30 min and then clarified by centrifugation at 13 000 rpm for 15 min in a benchtop microfuge. SDS-PAGE samples were prepared by adding 50 µL of 5× sample application buffer to 200 µL of clarified crude extract and heat-denaturing at 95°C for 5 min before loading 40 µL per gel lane. For large-scale expression, leaf tissue was harvested 7 dpi, as this time span was shown to be optimal for expression of all four capsid proteins. Harvested leaves were immediately homogenized in three volumes of PI buffer using a Moulinex™ juice extractor.
The homogenized leaves were re-incubated with the extracted juice at 4°C for 1 h with gentle shaking. Crude plant extracts were filtered through four layers of Miracloth™ (Merck, Darmstadt, Germany) and the filtrate was clarified by centrifugation at 13 000 rpm for 15 min at 4°C. Purification and Western blot analysis AHSV-5 VLPs were purified by iodixanol density gradient ultracentrifugation. Iodixanol (Optiprep™; Sigma Aldrich, St Louis, MO) solutions (20%-60%), prepared in PBS, were used to create a 12 mL step gradient (2-3 mL of each solution in 10% incrementing steps) under 27 mL of clarified plant extract and centrifuged at 32 000 rpm for 2 h at 4°C in an SW 32 Ti rotor (Beckman, Brea, CA). Fractions of 1 mL were collected from the bottom of the tube, and 30 µL from fractions representing the 30%-40% region of the gradient was electrophoresed on a 10% SDS-polyacrylamide gel, followed by Coomassie blue staining. Particle quantification was achieved by visual comparison of the four capsid protein bands to known amounts of bovine serum albumin (BSA) run in separate lanes on the same SDS-PAGE gel. To further purify and concentrate VLP samples for use in animal studies, VLP-containing fractions were diluted with PBS to 20% iodixanol and subjected to a second round of ultracentrifugation as per the same protocol described above. Both crude plant extracts and gradient-purified VLPs were analysed by Western blot: heat-denatured samples were separated on 10% polyacrylamide gels and then transferred onto HyBond™ C Extra nitrocellulose membranes (AEC-Amersham, Gauteng, South Africa) using a Trans-Blot® SD semi-dry transfer cell (Bio-Rad, Irvine, CA). Membranes were first probed with a 1 : 1000 dilution of AHSV-5-specific horse serum (received from Dr C. Potgieter, Deltamune, Pretoria, South Africa), washed four times with PBS containing 0.05% Tween® 20 (Sigma Aldrich, St Louis, MO) (PBS-T) and then probed with a 1 : 5000 dilution of anti-horse alkaline phosphatase-conjugated secondary antibody (Sigma Aldrich, St Louis, Missouri). After washing again, proteins were detected with 5-bromo-4-chloro-3-indoxyl-phosphate (BCIP) and nitroblue tetrazolium (NBT) phosphatase substrate (BCIP/NBT 1-component, KPL, SeraCare, Milford, MA). Mass spectrometry The identities of the protein species observed on the Coomassie-stained gel were independently determined by the Centre for Proteomic and Genomic Research (CPGR, Cape Town, South Africa). Gel pieces were washed and fragmented by in-gel trypsin digestion as per the protocol described by Shevchenko et al. (2007). The peptide solution was analysed using a Dionex Ultimate 3000 nano-HPLC system (Thermo Fisher Scientific, Waltham, MA) coupled to a Q Exactive™ Hybrid Quadrupole-Orbitrap Mass Spectrometer (Thermo Fisher Scientific). Byonic software (Protein Metrics, San Carlos) was used for comparison of the spectra with sequences retrieved from the UniProt Swiss-Prot protein database. Samples were interrogated against Nicotiana spp., Agrobacterium spp. and African horse sickness virus proteomes. Transmission electron microscopy Glow-discharged copper grids (mesh size 200) were floated on 20 µL of crude plant extract or 20 µL of density gradient fractions for 3 min and then washed successively by floating on five drops of sterile water. Particles were negatively stained for 30 seconds with 2% uranyl acetate and then imaged using a Tecnai G2 transmission electron microscope (TEM).
Immunization of guinea pigs Approval for the immunization experiments was obtained from the Faculty of Health Sciences Animal Ethics Committee, University of Cape Town (FHS AEC ref. no. 016/019). Prior to the study, 100 µL of blood was drawn from each of 10 female guinea pigs (Hartley strain). Guinea pigs (n = 5 per group) were injected subcutaneously with 16.5 µg of purified AHSV-5 VLPs or with 30% iodixanol in PBS, both formulated in 5% Montanide PET Gel A adjuvant (Seppic, Paris, France). Animals were boosted once on day 13 with 50 µg VLPs or PBS, and on day 41 they were euthanized by anaesthesia with ketamine/xylazine and exsanguinated. Serum was tested for antibodies by Western blot analysis, where guinea pig antisera were used at a dilution of 1 : 10 000 as per the protocol described above. Neutralization assays The serum neutralizing antibody titres of individual guinea pig sera were assayed against three different AHSV serotypes, namely serotypes 4, 5 and 8, using a serum neutralization test (SNT) as previously described (House et al., 1990). This was carried out by Ms Carina Lourens at the Equine Research Centre, Faculty of Veterinary Science, University of Pretoria. Supporting information Additional Supporting Information may be found online in the supporting information tab for this article: Figure S1 Optimization of plant-based expression of recombinant AHSV-5 structural proteins. Figure S2 Increased formation of AHSV-5 VLPs incorporating a mutated version of VP7. Figure S3 Purification of AHSV-5 VLPs. Figure S4 Mass spectrometry analysis of the four protein bands recovered from SDS-PAGE separation of density gradient fractions from leaves co-infiltrated with Agrobacterium AGL1 pEAQ recombinants for co-expression of AHSV capsid proteins VP2, VP3, VP5 and VP7mu. Figure S5 Quantification of AHSV-5 VLPs by gel densitometry.
7,113.4
2017-08-01T00:00:00.000
[ "Medicine", "Biology", "Environmental Science" ]
The Impact of 900VA Electricity Tariff Adjustment on Household Consumption The aim of this study was to analyze the impact of the 900 VA electricity tariff adjustment on household consumption patterns in East Borneo. This policy potentially increased poverty, considering that in the last few years East Borneo had experienced a contraction in economic growth. The analysis used the Linear Approximation of the Almost Ideal Demand System (LA/AIDS) and the concept of elasticity, drawing on Susenas data from 2016 and 2017. The results showed that the policy indirectly had more impact on all residential electricity customers than on the 900 VA and above customers. Residential electricity customers in general were more responsive in reducing non-staple consumption in response to the subsidy revocation than 900 VA and above users. This is related to the economic condition of the 900 VA and above residential electricity customers, who are more financially capable, so that food needs are no longer the dominant household expense. Meanwhile, middle-income households would continue to maintain their nutritional status by continuing to consume high-protein food sources (fish/meat/eggs/milk). Based on the type of region, rural households were more responsive to the revocation of the 900 VA subsidies and the increase in non-subsidized household tariffs than urban households. This is understandable since the urban community's dependence on electricity is considerably higher than in rural areas. INTRODUCTION Nowadays, Indonesia is experiencing growth in electricity consumption that tends to be wasteful and unproductive. This is indicated by the ratio of Gross Domestic Product (GDP) per capita to electricity consumption per capita, which is still relatively low. In terms of the relationship between GDP per capita and electricity consumption per capita, Indonesia's position is still below Thailand and Malaysia, slightly below the average of ASEAN countries, and far behind Brunei Darussalam and Singapore in terms of efficient utilization of electricity (Mulyani & Hartono, 2018). Thus, as an effort to implement energy efficiency, the government gradually adjusts electricity tariffs. This is the main problem of this research: the policy potentially has consequences for increasing poverty and decreasing welfare in East Borneo province, considering that in the last few years this province has experienced negative economic growth. The decline phase started in 2014, when growth fell to 1.71% from 2.25% in 2013. Subsequently, East Borneo experienced a contraction in the rate of economic growth of -1.2% in 2015 and -0.36% in 2016. Therefore, the government needs to be aware of the negative economic impact of the increase in electricity tariffs. The basic electricity tariff is the selling price of electricity applied to State Electricity Company (PLN) customers as set by the government. Basic electricity tariffs are also commonly referred to as electricity tariffs or electricity power tariffs (TTL). On January 1, 2017, the government, through the Ministry of Energy and Mineral Resources (ESDM) and PLN, officially revoked electricity subsidies for some groups of 900 VA electricity customers. This policy has been regulated in the Minister of Energy and Mineral Resources Regulation No.
28 of 2016 concerning PT PLN electricity tariffs, which regulates the application of non-subsidized tariffs for 900 VA households, and Minister of Energy and Mineral Resources Regulation No. 29 of 2016 concerning the mechanism for providing subsidized TTL for households. The government imposes a tariff adjustment in which the newly set mechanism adjusts to the cost of electricity supply, as influenced by fuel prices, the rupiah exchange rate and monthly inflation. This policy was issued in order to make the subsidies right on target. In fact, survey results showed that around 18.99 million of the 23.09 million 900 VA electricity customers were not eligible to receive electricity subsidies (economically capable households). This is in line with the Integrated Data of the Program Penanganan Fakir Miskin, which states that only around 4.10 million of the 900 VA customers are eligible to be subsidized (Table 1) (Ministry of Energy and Mineral Resources, 2017). For the government, the tariff adjustment is intended so that the funds coming into PLN can be used to improve electricity distribution in every corner of Indonesia. Nowadays, the electrification ratio of Indonesia has only reached 89.5%. In other words, it is still lagging behind neighboring states such as Singapore, Malaysia, Thailand, and Vietnam, which have reached 98%. The subsidies that were considered not on target will later be diverted to seven million households to increase the national electrification ratio. PT PLN data showed that there are 408,650 900 VA customers in East Borneo and only 44,756 customers are eligible to receive subsidies. These customers are categorized as poor according to the verification results of the National Team for the Acceleration of Poverty Reduction (TNP2K). TNP2K imposes up to 100 criteria for poverty, compared to BPS (Central Bureau of Statistics), which uses only 14 indicators. Therefore, the subsidies of as many as 363,894 900 VA electricity customers have officially been revoked. They are no longer eligible to receive government subsidies of Rp 110 thousand per customer. This means that households must pay higher electricity costs than before, or their spending on electricity will rise because of higher bills. In fact, the household sector (in terms of expenditure) makes a significant contribution to economic growth in the province of East Borneo. This is shown by PDRB data based on current prices, which reveal that the share of household consumption expenditure reached 14.14% in 2013 and increased to 17.90% in 2016 (BPS East Borneo, 2018a). As a result, if there is a change in government policy such as an adjustment (revocation of subsidies) of the 900 VA electricity tariff, and assuming a fixed income level, the community must reduce other non-essential expenditures, which reduces people's purchasing power. This condition potentially increases poverty, especially considering that 900 VA electricity customers are a group of people who are near the poverty line and most of them are informal workers. Further, the problem of poverty is not only related to the number and percentage of poor people. The most important dimension is related to the depth and severity index of poverty. The poverty depth index in the province of East Borneo, which has been increasing in the last few years, shows that poverty will be increasingly difficult to alleviate since the expenditure of the poor is moving further below the poverty line.
Subsequently, the poverty severity index increased from 0.167 in 2015 to 0.197 in 2018. This rising index showed that spending among the poor is becoming increasingly unequal. This imbalance is also shown by the increasing Gini ratio of East Borneo province (Table 2). Based on data on household consumption patterns, the majority of population expenditure in East Borneo in 2017 was used to meet non-food consumption needs, while the rest was for food consumption. The average expenditure per capita per month in East Borneo Province in 2017 was Rp 1,443,928 (BPS of East Borneo, 2017b), of which Rp 663,535 was used for food consumption and Rp 780,393 for non-food consumption. Admittedly, consumption expenditure is one aspect of measuring the level of public welfare. The higher the level of household income, the smaller the proportion of expenditure on food to all household expenses will be. In other words, households are more prosperous if the percentage of expenditure on food is much smaller than the percentage on non-food (Sari, 2016). On the other hand, BPS data indicate that pre-prosperous families in East Borneo Province have increased in recent years (Table 3). A pre-prosperous family is defined as a family that has not been able to meet its minimum basic needs. Incorrect application of policies will make it increasingly difficult to move them from pre-prosperous to prosperous status. Additionally, studies conducted by Isdinarmiati (2011), Akili (2014), Rahmi (2001), Sahara (2003), Vihara (2003), and Komaidi & Rakhmanto (2010) noted a negative impact on consumption caused by increasing electricity tariffs. Hence, a simulation is urgently needed to see the extent of the impact of government policies on household consumption patterns, especially changes in commodity demand. Above all, the contribution and novelty of this study compared to previous research lie in the different object, with the specific scope of one province, namely East Borneo, and in the evaluation of a policy that had just been implemented as of January 1, 2017, using the LA-AIDS method, which had not been done before for this case. An understanding of the impact of the 900 VA tariff adjustment on consumption patterns was expected to be beneficial for policy makers, especially in relation to poverty alleviation and food security. Based on the previous description, the main objective of this study was to analyze the impact of the 900 VA electricity tariff adjustment policy on household consumption patterns in East Borneo province. RESEARCH METHODOLOGY This study used East Borneo Province as the object, with secondary data collected by the Central Statistics Agency, namely the consumption module and core data in the National Socioeconomic Survey (Susenas) of March 2016 and 2017, which were cross-sectional data with household sampling units. In March 2016 the number of households in the research sample was 2,398 and in 2017 there were 2,864 households. Susenas collected the core data and the consumption/expenditure module data as well as household income. The core data included information on household members, health, education, housing, and other socioeconomic matters. Meanwhile, the Susenas consumption module contains the quantity and value of food consumption, covering 215 commodities in 14 commodity sub-groups.
The Marshallian demand function developed by Marshall states that the quantity of consumption or demand for a commodity by a consumer is influenced by the price level of that commodity, the prices of other commodities, and income. Consumers were assumed to be rational, aiming to maximize utility given the limits of the income or budget they own; they tend to choose combinations of items within the budget constraint. Formally, the consumer maximizes U(q_1, ..., q_n) subject to the budget constraint sum_i p_i q_i = y; the first-order conditions dU/dq_i = λ p_i yield the Marshallian demand functions q_i = q_i(p_1, ..., p_n, y), where y is income (held constant), p_i is the price of good i, q_i is the quantity of good i, and λ is the marginal utility of income. The amount of commodity consumption is not only influenced by economic factors (income and prices), but also by social characteristics. Differences in social characteristics can cause differences in preferences for a commodity, which result in differences in consumption patterns. Social characteristics include the level of education of the household head, location of residence, number of household members, and so on. One approach to include these socio-economic variables is to treat them as additional independent variables, which can be stated as q_i = f(P, Y, SE), where P denotes prices, Y income, and SE the social variables. The magnitude of the influence of socioeconomic characteristics translates into the magnitude of differences in preferences for certain types of commodities. One model for analyzing consumption functions with socioeconomic variables is the Linear Approximation-Almost Ideal Demand System (LA-AIDS) model developed by Deaton (1980). The flexible AIDS cost function results in a demand function that is a first-order approximation of consumer behavior in maximizing satisfaction. If satisfaction maximization is not assumed to hold, the LA-AIDS demand function remains a function of income and prices, so even without the homogeneity and symmetry restrictions it is still a first-order approximation of the demand function in general. Moreover, the LA-AIDS model uses a restricted specification that is expected to fulfil several assumptions of the demand function, namely adding-up, homogeneity, and symmetry. Because the inter-group food demand forms a system of econometric equations, the estimation approach used is Seemingly Unrelated Regression (SUR) through Generalized Least Squares (GLS) procedures. The GLS procedure is carried out to increase the efficiency of the estimates and does not require the classical assumption tests. There are two problems in the LA-AIDS demand function model, namely simultaneity bias and selectivity bias. An equation containing simultaneity bias will produce a biased estimator. Simultaneity bias can be overcome by using an instrumented price variable as an independent variable, namely the unit value corrected for the quality of goods purchased (quality effect) and the amount purchased (quantity premium); the corrected-price instrument variables are obtained through a price deviation regression. Furthermore, one way to overcome selectivity bias is by grouping food commodities (Sari, 2016). The formation of commodity groups by researchers is usually based on prior research, study needs, local food, food nutrient content, policy objectives, and other considerations.
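Before the exact specification is given in the next paragraph, a minimal sketch of how the standard LA-AIDS ingredients, the budget shares and the Stone price index used to deflate total expenditure, can be built from household survey data. Column and group names are assumptions, not Susenas field names:

```python
import numpy as np
import pandas as pd

GROUPS = ["grains_tubers", "fish_meat_egg_milk", "veg_fruit", "other_food", "electricity"]

def la_aids_variables(df: pd.DataFrame) -> pd.DataFrame:
    """Build budget shares w_i and the Stone price index from household records.

    Expects hypothetical columns exp_<group> (expenditure) and price_<group>
    (unit value) for each of the five commodity groups.
    """
    out = df.copy()
    total_exp = sum(df[f"exp_{g}"] for g in GROUPS)               # X: total expenditure
    for g in GROUPS:
        out[f"w_{g}"] = df[f"exp_{g}"] / total_exp                # budget shares w_i
    # Stone price index: ln P* = sum_k w_k ln p_k
    out["ln_stone_index"] = sum(out[f"w_{g}"] * np.log(df[f"price_{g}"]) for g in GROUPS)
    # deflated (real) expenditure term ln(X / P*) entering the LA-AIDS equation
    out["ln_real_expenditure"] = np.log(total_exp) - out["ln_stone_index"]
    return out
```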
Meanwhile, in this study, food groups were formed based on the nutrient content of the commodities analyzed and were divided into 5 large groups. The grouping comprised grains/tubers (carbohydrates), fish/meat/eggs/milk (animal protein sources), vegetables/fruit (vegetable protein sources, vitamins and minerals), other foods (fats/legumes, beverage ingredients, spices and cooked foods), and, as the last group, electricity. Further, the LA-AIDS model in this study followed Deaton and previous studies, with budget-share equations of the form w_i = α_i + Σ_j γ_ij ln p_j + β_i ln(X/P) plus socioeconomic terms, where i, j index the commodity groups (1, 2, 3, 4, 5); w_i is the proportion of household expenditure on commodity group i to total household expenditure; p_j is the unobserved price of commodity group j (proxied by the unit value); X is total household expenditure; P is the Stone price index, ln P = Σ_k w_k ln p_k; and Education is household expenditure on the education of household members. This study used descriptive analysis and econometric methods. The data were processed with the STATA application program package, version 13. The econometric model used was the LA-AIDS demand system, which was used to reach the research objectives. Responses of household food consumption to changes in prices and income were measured using elasticity values calculated from the estimated coefficients of the model. The elasticities used in this analysis included income elasticity, own-price elasticity and cross-price elasticity. Elasticity was defined as a measure of the percentage change in a variable caused by a one percent change in another variable. Demand elasticity shows the percentage change in the quantity of goods demanded due to a one percent change in a variable that affects it, while other conditions are assumed to be unchanged (ceteris paribus). Above all, in this study income was proxied by household expenditure, so the elasticity used follows the expenditure elasticity approach. Following Deaton (1980), the elasticities are defined as follows. 1. The own-price elasticity is the percentage change in the quantity of goods demanded due to a change in the price of the goods themselves. The value of elasticity distinguishes goods into several categories: |ε| < 1 (inelastic goods), |ε| = 1 (unitary elastic goods), and |ε| > 1 (elastic goods). 2. The cross-price elasticity shows the percentage change in the quantity of a good demanded due to changes in the prices of other goods. The value of the cross-price elasticity depends on the relationship between the two goods: complementary goods have an elasticity value < 0, substitute goods have an elasticity value > 0, and there is no relationship between the two goods (neutral) when the cross-price elasticity = 0. 3. The income elasticity measures the response of consumer demand for a commodity to changes in consumer income. The value of income elasticity can be used to classify an item as inferior, normal, or luxury: ε < 0 (inferior goods), 0 < ε < 1 (normal or basic goods) and ε > 1 (luxury goods). RESULTS AND DISCUSSION The electricity tariff (TTL) is the selling price of electricity set by the government for the customers of PLN.
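Since the elasticity expressions themselves are not reproduced above, the following sketch uses the commonly cited linear-approximation formulas, which is an assumption about the exact variant applied in the study: e_ij = -δ_ij + γ_ij/w_i - β_i·w_j/w_i for Marshallian price elasticities and η_i = 1 + β_i/w_i for the expenditure (income) elasticity, evaluated at mean budget shares.

```python
import numpy as np

def la_aids_elasticities(gamma, beta, w):
    """Price and expenditure elasticities at mean budget shares (assumed formulas).

    gamma : (n, n) estimated price coefficients gamma_ij
    beta  : (n,)   estimated expenditure coefficients beta_i
    w     : (n,)   mean budget shares w_i
    """
    gamma, beta, w = map(np.asarray, (gamma, beta, w))
    n = len(w)
    price = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            kron = 1.0 if i == j else 0.0
            price[i, j] = -kron + gamma[i, j] / w[i] - beta[i] * w[j] / w[i]
    expenditure = 1.0 + beta / w
    return price, expenditure   # price[i, i]: own-price, price[i, j]: cross-price
```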
PLN has subsidized and non-subsidized customer tariff groups. In 2015 and 2016 there were two groups that received subsidies from the government, namely the R1 450 VA and R1 900 VA groups. Meanwhile, the 1,300 VA and above groups did not receive subsidies and are called non-subsidized groups. On January 1, 2017, the first subsidy revocation was carried out. It applied to the 900 VA group, which was divided into 900 VA subsidized and 900 VA non-subsidized customers. Through the tariff adjustment mechanism, starting from 1 January 2017, the TTL of the 900 VA subsidized group was set at Rp 791 per VA, while the TTL of the 1,300 VA and above groups increased to Rp 1,467.28 per VA. Furthermore, the TTL of the 900 VA non-subsidized group was gradually adjusted to be closer to the other non-subsidized tariff groups, namely Rp 1,304 per VA. In this study, the impacts of the subsidy revocation policy on some 900 VA electricity customers were identified by comparing the situation before and after the policy was applied. The impact of the TTL before the policy was applied was captured using a model containing 2016 data, while the impact after the application of the policy used a model containing 2017 data. Moreover, the LA/AIDS models used in the analysis were divided into two. The first model used all observation units of residential electricity customers. The second model used as observation units the 900 VA and above residential electricity customers. The first model was aimed at capturing the indirect impacts of the subsidy revocation for 900 VA electricity customers and of the non-subsidized TTL increase on all residential electricity customers. Meanwhile, the second model was aimed at capturing the direct impacts on residents whose subsidy was revoked or whose TTL increased. There are many approaches that can be used to estimate the LA-AIDS demand function model, such as OLS, 2SLS, GLS, and Seemingly Unrelated Regression (SUR). Each estimation method has strengths and weaknesses. Considering that each food group has a strong relationship with the others, and the convenience of not requiring classical assumption tests, this study used the SUR estimation approach. SUR consists of a set of equations in which the endogenous variables are interconnected because of the correlation between residuals across the equations. The SUR method uses GLS procedures and can improve estimation efficiency by explicitly accounting for this residual correlation. The GLS (Generalized Least Squares) procedure is used when OLS classical assumptions such as homoscedasticity (constant variance) and non-autocorrelation (uncorrelated residuals) are not met, so there is no need for a classical assumption test. The overall test of the SUR model uses the Chi-square test (χ2). From the output (Table 5, Table 6, Table 7, Table 8) it can be seen that the p-values in both 2016 and 2017 were all less than α = 0.05, so it could be concluded that the null hypothesis was rejected. This means that in all commodity groups, in both 2016 and 2017, the variables of food group prices and the electricity tariff (PredLnP1, PredLnP2, PredLnP3, PredLnP4, PredLnP5), real household expenditure (LnYriil), health expenditure (LnHealth), education expenditure (LnEduct), highest education of the household head (IJASAH), type of region (TYPE), employment of the head of household (LAPEK), number of household members (LNART), proportion of children under five (PROBLT), and proportion of household members attending school (PROSEK) simultaneously affected the budget share (W).
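The SUR/GLS machinery described above can be illustrated with a bare-bones two-step feasible GLS estimator; this is only a sketch of the mechanics, not the STATA routine actually used in the study:

```python
import numpy as np
from scipy.linalg import block_diag

def sur_fgls(X_list, y_list):
    """Two-step feasible GLS for a SUR system (illustrative only).

    X_list : list of (N, k_g) regressor matrices, one per budget-share equation
    y_list : list of (N,) dependent variables (budget shares)
    """
    n_obs = len(y_list[0])
    # Step 1: equation-by-equation OLS to obtain residuals
    residuals = []
    for X, y in zip(X_list, y_list):
        b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
        residuals.append(y - X @ b_ols)
    E = np.column_stack(residuals)          # (N, G) matrix of residuals
    sigma = E.T @ E / n_obs                 # cross-equation residual covariance
    # Step 2: GLS on the stacked system; Var(stacked errors) = sigma (x) I_N
    X_big = block_diag(*X_list)             # block-diagonal regressor matrix
    y_big = np.concatenate(y_list)
    # forming the full inverse is fine for a sketch, but wasteful for large N
    omega_inv = np.kron(np.linalg.inv(sigma), np.eye(n_obs))
    xt_oi = X_big.T @ omega_inv
    return np.linalg.solve(xt_oi @ X_big, xt_oi @ y_big)
```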
Partial test (t-test): the independent variables in the commodity groups, in both 2016 and 2017, mostly had a significant influence (marked *). Total real household expenditure (LnYriil) had a significant influence on the budget share throughout, and almost all food prices had a significant influence on their budget shares. However, the coefficient values of the LA-AIDS model are not easy to interpret directly; they are more informative when interpreted as elasticities. The coefficient sign indicates the direction of the relationship between the independent variable and the dependent variable, and this direction is more easily interpreted through elasticities. In this study, the impacts of the subsidy revocation policy on several 900 VA customers and of the TTL increase were seen by comparing the conditions before and after the policies were implemented, as shown by changes in the price elasticity of electricity (TTL). The price elasticity values can be seen in Table 9. The researchers distinguished the price elasticity into two categories based on the level of electricity dependence, namely urban and rural areas, considering that people in urban areas depend more on electricity than those in rural areas. Further, Table 9 shows that the price elasticity both before and after the subsidy revocation/TTL increase was negative, meaning that if the TTL increases, households respond by reducing electricity usage. This is in line with consumer demand theory, which states an inverse relationship between price and quantity demanded: if the price of a commodity increases, the demand for the commodity decreases. Based on Table 9, the absolute value of the price elasticity increased after the subsidy revocation/TTL increase, from -1.6277 to -1.8704; in other words, all residential electricity customers were more responsive to the 900 VA subsidy revocation and the changes in the non-subsidized TTL (1,300 VA and above) than before the implementation of the policies. The elasticity value of -1.6277 means that when the TTL increases by 1 percent, the community responds by reducing electricity usage by 1.6277 percent. Meanwhile, the subsidy revocation/non-subsidized TTL increase made households more reactive than before: for a 1 percent TTL increase, households would respond by reducing electricity usage by 1.8704 percent. On the other hand, the elasticity values produced by the LA/AIDS model showed that the groups of 900 VA and above customers, who are economically capable, were not very responsive to the subsidy revocation or the non-subsidized TTL increase. This is understandable since electricity has become part of their household needs; as long as the tariff is affordable, they will still pay it. One thing to note is that the 900 VA and above customers were more resistant to TTL changes than residential customers in general. Some of the 900 VA electricity customers who were economically capable were more resilient in dealing with TTL changes, as shown by the elasticity values before and after the revocation/TTL increase, namely -1.8571 and -1.2626. These data show that even though there was an increase in TTL in 2017, the economically capable households were not as responsive as before the policies were implemented.
Based on the area categorization, the responses to the subsidy revocation and TTL increase in rural areas were stronger than in urban areas, both for residential electricity customers in general and for 900 VA and above residential electricity customers. This phenomenon is understandable because the level of electricity dependence of the urban community is quite high compared to that in villages. The urban community is hardly separable from electrical equipment for various purposes, such as lighting, air conditioning, entertainment, cooking, and others. Meanwhile, a TTL increase in rural areas was spontaneously answered by reducing electricity usage, for example by cutting back on electrical equipment. Reducing electricity usage in rural areas is not complicated because of the abundant stock of cooking fuels and supportive environmental conditions. Similar to the own-price elasticity, the cross-price elasticity was also divided by type of area, both for electricity customers in general and for 900 VA and above customers. The data for this price elasticity are presented in Table 10. The positive and negative values of the cross-price elasticity describe two kinds of relations between commodity groups: a positive value indicates a substitution relation, while a negative one shows a complementary relation. Table 10 shows that the elasticity of electricity tariffs with respect to the fish/meat/eggs/milk group, the vegetables/fruit group, and the other foods group was negative, while the grains/tubers group had a positive value in both urban and rural areas. This is understandable because the grains/tubers group consists of staple foods, so an increase in TTL did not make households sacrifice (reduce) its consumption; instead, they preferred to reduce consumption of other needs rather than of the staples group. The consumption of staple foods will keep increasing as the population and energy needs grow. Thus, households responded to the increase and revocation by consuming grains/tubers with a responsiveness level of 0.7627, meaning that for every 1 percent TTL increase, the consumption of grains/tubers also increases by 0.76 percent. Meanwhile, the 900 VA and above residential electricity customers had a higher responsiveness level, namely 1.7633. Table 11 shows that electricity customers in general tended to be more responsive in reducing food consumption outside the staple foods when dealing with the subsidy revocation or TTL increase, compared to 900 VA and above residential electricity customers. If the subsidy was revoked or the TTL increased, general household consumption of vegetables/fruit would increase by 48.62 percent, while for those who used 900 VA and above it would increase by 40.09 percent. Furthermore, the general household response in the other foods group's consumption would increase by 21.09 percent, while for those who subscribed to 900 VA and above it would increase by only 3.66 percent. Further, the general household response in the consumption of fish/meat/eggs/milk would increase by 33.67 percent, while the consumption of households using 900 VA and above decreased by 29.22 percent. These phenomena are surely related to the economic condition of the 900 VA and above customers, who are economically more stable, so food needs are no longer the dominant household expense. Meanwhile, the middle-income households would continue to maintain nutrition by continuing to consume high-protein sources (fish/meat/eggs/milk), although at the expense of other food consumption.
In relation to income elasticity, the analysis in this study used expenditure as a proxy for income, so the income elasticity was analyzed using the expenditure elasticity approach. The values of income elasticity can be seen in Table 12. For income elasticity, items are categorized into two groups, namely inferior and normal. An item is called inferior if its income elasticity is < 0, and if it is ≥ 0 the item is included in the normal category. Normal items are further divided into basic items (necessities) and luxury items: an item belongs to the staple category if the elasticity value is 0-1, while the luxury category has a value of > 1. Based on Table 12, all income elasticities were positive, implying that no food group was categorized as inferior. In other words, all food groups were normal items, both before and after the revocation of the subsidy for 900 VA customers and the TTL increase for non-subsidized customers. A positive value means that if income increases, the quantity demanded also increases. Generally, the elasticity values of the grains/tubers group and the vegetables/fruit group were greater than 1, meaning that these commodity groups were considered luxury items by most East Borneo people. Meanwhile, the fish/meat/eggs/milk group and the other foods group were categorized as necessities. For East Borneo people, fish/meat/eggs/milk may be considered a staple since it is an inseparable part of the menu (an obligatory menu item). Moreover, the grains/tubers group had the highest elasticity value compared to the others. Generally, the impact of the subsidy revocation policy for some 900 VA electricity customers and the TTL increase for non-subsidized residential customers using 900 VA and above did not significantly influence the behavior of allocating income to food consumption, although there were small (insignificant) changes in elasticity after the implementation of the policies. Table 12 also shows that the income elasticity of the grains/tubers, fish/meat/eggs/milk, and other foods groups increased after the implementation of the policies, while that of the vegetables/fruit group tended to decrease. The highest increase was experienced by the grains/tubers group, which rose to 1.2554, meaning that the subsidy revocation for some 900 VA customers and the non-subsidized TTL increase changed the behavior of households in allocating their income: if household income rises by 1 percent, consumption of grains/tubers increases by 1.2554 percent. The next highest increase in income elasticity was in the other foods group, to 0.9337, followed by the fish/meat/eggs/milk group, to 0.8962. Meanwhile, the vegetables/fruit group dropped to 1.1467. The findings of this study are in line with Rahmi (2001), who analyzed the impact of price changes in West Java and found that all own-price elasticities were negative. This is in accordance with consumer demand theory, which states an inverse relationship between the own price and the quantity demanded: whenever the commodity price increases, the demand for the commodity decreases. Moreover, most own-price elasticities were greater than 1 in absolute value, i.e. elastic, meaning that price changes are smaller than the resulting demand changes. Price elasticity is basically the household response in consuming items when there is an increase in the item's price.
Generally, the study findings also indicated that the own-price elasticity of urban households in both models was lower than that of rural households. Besides the high dependence of urban households on electricity, the level of income earned by urban households is relatively high compared to rural areas, so urban households have a higher purchasing power than rural households. The results of this study, based on the cross-price elasticity values, both negative and positive, are also in line with Kahar's study (2010). That study was conducted in Banten by simulating a price increase, which resulted in the domination of negative cross-elasticity values. A negative elasticity value means that the food groups are considered complementary items: whenever there is an increase in price, there will be a decrease in demand, and vice versa. Meanwhile, only a small portion of the food groups had positive values, meaning that those items belonged to the substitution category. One form of complementarity that occurred was the relationship between grains/tubers and education, meaning that an increase in grains/tubers prices caused a decrease in demand for education consumption, and vice versa. This shows that society still prioritized staple foods to meet daily needs. In this study, the TTL increase did not cause households to reduce their staple food consumption; rather, they tended to reduce consumption of items other than staple foods. This confirms that staple food consumption will keep increasing as the population and energy needs increase. The results of this study are also in line with Sari's study (2016) in that the expenditure elasticity values in this study were positive. In theory, a positive value indicates that all food groups are normal, not inferior, items. A positive sign also means that whenever the income allocated for food increases, the demand for the particular foods also increases. The fish/meat/eggs/milk group was categorized as a necessity (staple food) for the East Borneo community because its elasticity value was between 0 and 1, and the same applied to all area typologies, both urban and rural. Moreover, East Borneo is a province whose fish consumption is above the national average, supported by high yields from marine fisheries, public waters and aquaculture. Thus, it is understandable that fish is one of the obligatory menu items for households in East Borneo. CONCLUSION Based on the above analysis, the researchers drew several conclusions. First, the subsidy revocation for some 900 VA residential electricity customers and the non-subsidized TTL increase in January 2017 indirectly affected all residential electricity customers rather than just the 900 VA and above customers. Second, the subsidy revocation for some 900 VA residential electricity customers and the non-subsidized TTL increase in January 2017 did not significantly affect the 900 VA and above residential electricity customers since they are generally economically capable. Third, based on the type of area, the subsidy revocation for 900 VA customers and the non-subsidized TTL increase indicate that rural households tend to be more responsive than urban households. This is understandable since urban customers' dependence on electricity is considerably higher than in rural areas.
Fourth, residential electricity customers in general were more responsive in reducing consumption of foods other than staple foods when dealing with the subsidy revocation or TTL increase, compared to 900 VA and above customers. This is related to the economic condition of the 900 VA and above residential electricity customers, who are more financially capable and no longer prioritize food needs as their basic needs. Meanwhile, middle-class households would continue to maintain their nutritional status by consuming foods with a high level of protein (fish/meat/eggs/milk), even though they need to reduce other food consumption. Fifth, the impact of the subsidy revocation policy and the TTL increase for 900 VA and above residential electricity customers does not significantly influence households' behavior in allocating income for food consumption. Sixth, the impact of the subsidy revocation policy for some 900 VA electricity customers and the non-subsidized TTL increase for 900 VA and above customers does not influence the level of food and electricity consumption of the economically capable households. Referring to the findings and conclusions, the researchers offer some suggestions. The large number of public complaints against the government policy of cutting electricity subsidies for 900 VA customers implies that the revocation and provision of electricity subsidies should be done as selectively as possible so that they are right on target and reduce both inclusion and exclusion errors. One way to do so is by updating the data on the communities that have the right to obtain government subsidies. This is in line with Malawat's study (2016), which found that increases in electricity tariffs should pay attention to the conditions/location of the area and the income per capita of the community. The weaknesses of the analysis in this study are as follows: the impact before and after the TTL adjustment policy was assessed under the assumption that other prices would not change (ceteris paribus), whereas in fact other commodity prices did change. Besides, the observation units before and after the policy were cross-sectional data. It would have been more appropriate to use panel data, in which the households used as observation units in 2016 are the same as those used in 2017, so that changes in the same subjects would be visible.
8,149.4
2019-09-03T00:00:00.000
[ "Economics", "Environmental Science" ]
Enhanced lipase production by mutation-induced Aspergillus japonicus The purpose of the present investigation is to enhance production of the biomedically important enzyme lipase by subjecting the indigenous lipase-producing fungal strain Aspergillus japonicus MTCC 1975 to strain improvement through random mutagenesis (UV irradiation, HNO2 and N-methyl-N'-nitro-N-nitrosoguanidine). The isolation of mutants and the lipolytic activity of selected mutants are described. The best UV selectant (AUV3) showed 127% higher lipase activity than the parent strain. The lipase yield of the best HNO2 mutant (AHN3) was 139% higher than that of the UV mutant (AUV3) and 177% higher than the parent strain. Also, the best NTG mutant (ANT4) showed 156% higher lipase activity than the HNO2 mutant (AHN3), 217% higher than the UV mutant (AUV3) and 276% higher than the parent strain. The results indicated that UV, HNO2 and NTG treatments were effective physical and chemical mutagenic agents for strain improvement of Aspergillus japonicus for enhanced lipase productivity. INTRODUCTION Lipases (triacylglycerol acyl hydrolases, E.C. 3.1.1.3) are versatile catalysts which are used for diverse purposes (Kajandjian et al., 1986). Fungal lipases have received attention because of their potential use in food processing, pharmaceuticals, cosmetics, detergents and the leather industry (Sugiara, 1984). Rao et al. (1993) described new applications, such as the resolution of racemic mixtures to produce optically active compounds, which arise from the stereospecific activity of lipase. The exponential increase in the application of lipases in various fields in the last few decades demands both qualitative improvement and quantitative enhancement. Quantitative enhancement requires strain improvement and medium optimization for overproduction of the enzyme, as the quantities produced by culture strains are low. Strain improvement is an essential part of process development for fermentation products. Developed strains can reduce costs through increased productivity and can possess specialized desirable characteristics. Such improved strains can be obtained by mutation, and their selection depends on the alternating processes of diversification, selection and re-diversification, so that better strains are successively picked out and further improved (Rowlands, 1984). The method used for diversification is mutation. This process involves changes in the nucleus of the organism, which lead to increased productivity. A mutant microbial strain is a strain derived from multiplication of a single haploid cell containing a mutant gene (Bapi Raju et al., 2004). The aim of the present investigation is to enhance the lipase productivity of the fungal strain Aspergillus japonicus by subjecting it to improvement by random mutagenesis (UV irradiation, HNO2 and N-methyl-N'-nitro-N-nitrosoguanidine, NTG). Microorganism Aspergillus japonicus MTCC 1975 was purchased from IMTECH, Institute of Microbial Cultures, Chandigarh, India. This fungal strain shows good lipolytic activity. The strain was grown on malt-agar medium (malt extract, 2 g; agar, 2 g; Triton X-100, 0.1 ml; distilled water, 100 ml) slants at 28°C for 5 days and stored in the refrigerator at 4°C until further use. UV Irradiation A completely sporulated slant of A.
japonicus (a 5-day-old slant) was taken. The spores were scraped off into 5 ml of sterile water. The spore suspension was serially diluted up to a 10^-5 dilution. A 0.1 ml quantity of spore suspension was poured aseptically onto the medium in Petri plates and distributed uniformly using a sterile spreader. The spore suspension was exposed to UV light in a UV illuminator fitted with a TUP 40 W germicidal lamp, which emits about 90% of its radiation at 2540-2550 Å. The exposure was carried out at a distance of 16 cm from the center of the germicidal lamp (UV light source) with occasional shaking. The exposure times were 60, 120, 180, 240 and 300 s. Each UV-exposed spore suspension was stored overnight to avoid photoreactivation. The plates were incubated for 5 days at 28°C and the number of colonies on each plate was counted. A total of 24 colonies were obtained and 6 isolates were selected from the colonies. The 6 isolates were streaked on malt-agar slants and incubated for 5 days at 28°C. Among the selectants, the best UV mutant strain was used for further studies. HNO2 treatment To 9 ml of a 10^-6 dilution of the spore suspension of the best UV-irradiated lipase-producing A. japonicus mutant, 1 ml of a sterile stock solution of 0.01 M sodium nitrate was added. One-ml aliquots were withdrawn at intervals of 10 min up to 60 min. 0.5 ml of phosphate buffer was added to each sample, which was then neutralized with 0.5 ml of 0.1 M NaOH. 0.1 ml of this exposed suspension was plated on malt-agar medium and distributed uniformly using a sterile spreader. Each HNO2-exposed spore suspension was stored overnight to avoid photoreactivation. The plates were incubated for 5 days at 28°C. After 5 days, 17 colonies were obtained, and among them 6 isolates were selected from the plates on the basis of their morphology, size and shape. These isolates were streaked on malt-agar slants and incubated for 5 days at 28°C. N-methyl-N'-nitro-N-nitrosoguanidine treatment The best HNO2 mutant (AHN3) was used for N-methyl-N'-nitro-N-nitrosoguanidine (NTG) treatment. The spore suspension was prepared in the same manner as described earlier. To 9 ml of spore suspension, 1 ml of NTG (3 mg/ml in phosphate buffer) was added and the reaction was allowed to proceed. Samples were withdrawn from the reaction mixture at intervals of 30, 60, 90, 120, 150 and 180 min, immediately centrifuged for 10 min at 5000 rpm, and the supernatant solution was decanted. Cells were washed three times with sterile water and resuspended in 10 ml of sterile phosphate buffer. The samples were serially diluted in the same buffer and plated on malt-agar as mentioned earlier. From a total of 15 colonies, 5 isolates were selected from the plates showing less than 1% survival rate (120 and 150 min NTG-treated spore suspensions) and tested for lipase production. Growth and lipase production The fungus cultivated on malt-agar slants was scraped with 5 ml of sterile distilled water and transferred into 250 ml Erlenmeyer flasks containing 50 ml of production medium (malt extract 20 g, peptone 5 g, yeast extract 3 g and sodium chloride 5 g per liter of distilled water). The flasks were incubated at 28°C for 5 days on a rotary shaker (120 rpm).
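The selection of plates with less than 1% survival can be illustrated with a small calculation from colony counts; all counts and dilution factors below are made-up numbers, not data from the experiment:

```python
# Illustrative only: choose exposure times whose plates show < 1 % survival
# relative to the untreated control (all counts and dilutions are invented).
control_cfu_per_ml = 1.8e6                 # untreated spore suspension

treated_counts = {                         # exposure -> (colonies, dilution plated)
    "UV 60 s":  (90, 1e-2),
    "UV 120 s": (25, 1e-1),
    "UV 180 s": (8, 1e-1),
    "UV 300 s": (1, 1.0),
}

PLATED_ML = 0.1                            # 0.1 ml spread per plate, as in the text

for exposure, (colonies, dilution) in treated_counts.items():
    cfu_per_ml = colonies / (PLATED_ML * dilution)
    survival_pct = 100.0 * cfu_per_ml / control_cfu_per_ml
    flag = "  <- candidate for mutant selection" if survival_pct < 1.0 else ""
    print(f"{exposure}: {survival_pct:.3f} % survival{flag}")
```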
On the fifth day of fermentation, the enzyme was extracted from the broth. The broth was filtered through cotton gauze to remove the cell mass. The filtrate was saturated with ammonium sulphate (40%) and then centrifuged at 10,000 rpm for 20 min at 5°C. The precipitated lipase enzyme was dissolved in phosphate buffer (pH 7). Lipase assay The activity of lipase was determined as described in the literature (Winkler and Stuckman, 1979) with some modifications. 1 ml of isopropanol containing 3 mg of p-nitrophenyl palmitate (pNPP) was mixed with 9 ml of 0.05 M Tris-HCl buffer (pH 8.0) containing 40 mg of Triton X-100 and 10 mg of gum arabic. Liberation of p-nitrophenol at 28°C was detected in a UV spectrophotometer at 410 nm. One enzyme unit was defined as 1 µmol of p-nitrophenol enzymatically released from the substrate per minute (Raman et al., 1998). All fermentations and activity calculations were carried out in duplicate and the mean values are presented. RESULTS AND DISCUSSION Hopwood et al. (1985) suggested that a 99.9% kill is best suited for strain improvement, as the few survivors in the treated sample will have undergone repeated or multiple mutations, which may lead to enhancement of the productivity of the culture. The plates having less than 1% survival rates (120 and 180 s) were used to select mutants. A total of 6 mutants (AUV1-AUV6) were selected and tested for lipase production, and the results are presented in Figure 1. AUV3 showed the maximum lipase activity (7.44 U/ml), higher than the parent strain. Ellaiah et al. (2002) reported a 156% increase in the lipase yield of A. niger by UV mutagenic treatment, whereas in the present investigation the best UV mutant (AUV3) showed 118% higher lipase activity than the parent strain. The UV mutant (AUV3) was selected and subjected to further strain improvement by HNO2 treatment. HNO2 is considered to be a very effective chemical mutagen. The selected HNO2-treated isolates, AHN1 to AHN6, were obtained from plates having less than 1% survival rates (30 and 45 s). AHN3 showed the maximum lipase activity (9.47 U/ml), higher than the parent strain, as depicted in Figure 2. The lipase yield of the best HNO2 mutant (AHN3) was 139% higher than that of the UV strain (AUV3) and 177% higher than the parent strain. The HNO2 mutant (AHN3) was selected and subjected to further strain improvement by NTG treatment, a well-known mutagenic technique (Cerado-Olmedo and Hanawalt, 1968; Adelberg et al., 1965). Plates having less than 1% survival rates (120 and 150 min) were selected for the isolation of mutants and the determination of lipolytic activity. ANT4 showed the maximum lipase activity (13.2 U/ml), higher than the parent strain, as represented in Figure 3. ANT4 had 156% higher lipase activity than the HNO2 mutant (AHN3), 217% higher than the UV strain (AUV3) and 276% higher lipase activity than the parent strain. Cao and Zhang (2000) reported a 3.25-fold increase in lipase production using a Pseudomonas mutant generated by UV, HNO2 and NTG. Also, a 200% increase in lipase yield by an Aspergillus niger mutant from UV and NTG treatments was reported by Ellaiah et al. (2002). In the present investigation, a 276% increase in lipase production was achieved by strain improvement of the indigenous isolate A. japonicus through induced mutations employing UV, HNO2 and NTG.
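As a worked example of the unit definition above (1 U = 1 µmol p-nitrophenol released per minute), the conversion of an absorbance reading into U/ml can be sketched as follows; the extinction coefficient, path length and assay volumes are assumptions, since the exact assay geometry is not stated here:

```python
# Illustrative conversion of a pNPP assay reading into lipase units (U/ml).
# One unit = 1 umol p-nitrophenol released per minute, as defined in the text.
# Extinction coefficient, path length and volumes are assumptions.
EPS_PNP = 1.78e4        # M^-1 cm^-1, p-nitrophenol at 410 nm (strongly pH dependent)
PATH_CM = 1.0           # cuvette path length (cm)
RXN_VOL_ML = 2.5        # assumed total reaction volume
ENZ_VOL_ML = 0.1        # assumed volume of enzyme extract added

def lipase_units_per_ml(delta_a410_per_min: float) -> float:
    conc_change = delta_a410_per_min / (EPS_PNP * PATH_CM)      # mol L^-1 min^-1
    umol_per_min = conc_change * (RXN_VOL_ML / 1000.0) * 1e6    # umol released min^-1
    return umol_per_min / ENZ_VOL_ML                            # units per ml of extract

print(f"{lipase_units_per_ml(0.53):.2f} U/ml")                  # example reading
```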
Improvement of microbial strains for the overproduction of industrial products has been the hallmark of all commercial fermentation processes. Such improved strains can reduce the cost of the processes through increased productivity and may also possess specialized desirable characteristics. The effectiveness of UV irradiation (a physical mutagen) and of HNO2 and NTG treatments (chemical mutagens) in strain improvement for enhanced lipase productivity was demonstrated in the present investigation. It is hoped that the high-yielding mutant strain of the fungal isolate A. japonicus (ANT4) can be exploited commercially for large-scale industrial production of lipase.
2,340.8
2008-06-17T00:00:00.000
[ "Biology", "Chemistry" ]
AUTOMATION OF GEOSPATIAL RASTER DATA ANALYSIS AND METADATA UPDATING: AN IN-DATABASE APPROACH This paper proposes a spatial data infrastructure (SDI) module for management of a continuous flow of geospatial images and related metadata. Examples of such flows are continuously acquired map scans from the digitalization of an old maps collection, or the satellite imagery retrieved through a receiving station. Storage of the raster data in a database is a key feature of the system, which enhances the usual tasks and usability of SDI systems. The analytical procedures deployed within the data store perform automated raster analysis and content-based metadata extraction. This functionality is illustrated with two experiments: improving the display of early map scans, and snow and cloud detection from satellite images. Applications of the proposed approach and utilization of the prototype application by geographers and cartographers are discussed. Introduction At the Faculty of Science of Charles University in Prague, a huge amount of descriptive, statistical and geometric spatial data is used for research and education purposes. A Spatial Data Infrastructure (SDI) implements a complex framework of technology, geographic data, metadata and users in order to use spatial data in an efficient way. Recently, the amount of spatial raster data has grown significantly due to a new receiving station for satellite imagery and the advancement of the digitalization of the old maps collection. The increased pressure on human and technological resources revealed how unsuitable the current approaches and tools designed for vector data are for raster images, which are much bigger in data volume and more variable in storage formats. This implies higher demands on the management and administration of the data store mechanism. Related work Data and metadata records are two crucial components of any SDI. Metadata enable data discovery and access for users and provide information about the purpose, currency and accuracy of spatial data sets (Olfat 2013). However, the manual creation, update and authoring of images' metadata is considered a monotonous, time-consuming, and labor-intensive task (Trilles 2012). That is why challenges arise regarding metadata collection, storage, updating and integration in metadata catalogues (Batcheller 2009; Grill 2009; Olfat 2013). Therefore, there is a need for automated administration of collected raster data and related metadata that is more efficient than current archiving approaches. The key idea for increasing the effectiveness of existing SDI solutions is the application of analytical tools for raster data integrated within the archiving system. This shift of application logic is enabled by the newly introduced support for in-database storage of raster data by several database management system (DBMS) vendors (PostgreSQL 2013; Oracle 2013). The effectiveness of this approach is multiplied by having all available data in one place within the SDI, enabling the extraction of regions in the images or intersection analysis using available vector data. It also performs the vital functions that make spatial data interoperable, i.e., capable of being shared between systems.
To increase the effectiveness of searching for desired raster data, content-based metadata for the images are needed. Their creation and retrieval fills the gap between low-level information that can be processed by computers and high-level semantic information understandable and applicable by humans (Akrivas 2007; Zhang 2012). The application of analytical tools in a raster processing line can automate the generation of such metadata or image annotation. To prove the functionality of the proposed solution and to demonstrate its potential usage, two case studies are presented in this paper. The FLOREO (Demonstration of ESA Environments in support to FLOod Risk Earth Observation monitoring) research project and the existence of the receiving station for satellite data at the Faculty of Science were the motivation for the implementation of a cloud and snow detection procedure. The receiving station continuously provides AVHRR/NOAA images. A variety of snow and cloud detection methods using Advanced Very High Resolution Radiometer (AVHRR) data have already been reported. While it is relatively uncomplicated to separate snow-free land from snow-covered land using spectral characteristics, it is no easy task to discriminate between snow and clouds (Höppner 2002). A variety of works refer to the extraction of snow or clouds from NOAA/AVHRR data (Allen 1990; Gesell 1989; Saunders 1986; Simpson 1998; Voigt 1999). The TEMAP (Technology for discovering of map collections) research project (TEMAP 2014) aims at applying the advancements in geospatial web technologies to facilitate access to early maps for the end-user. The map collection of the Faculty of Science, Charles University in Prague contains tens of thousands of maps, with more than 35,000 already digitized and catalogued. As an example of similar initiatives, the David Rumsey Map Collection (2014) can be mentioned. It contains more than 150,000 maps, of which 42,000 are digitized and georeferenced. Solutions being developed within the TEMAP project adapt and further extend the latest technologies for searching and distribution of digitized maps, like MapRank (2013), enabling geographic searching by map location and coverage in Google Maps, or Georeferencer, designed for crowdsourced georeferencing of map collections (Fleet 2012). SDI module for raster data management - implementation architecture The initial work on the SDI solution was introduced by Hettler (2012) to provide means for automatic management of continuously acquired raster data and metadata. The implementation architecture consists of several components, as depicted in Figure 1. The administration layer provides an environment for the initialization and configuration of the solution for automatic raster data and metadata archiving and publishing. Technically, the administration layer is based on the Java application MtdtRasPub, which, in addition to controlling the raster data flow, constitutes the metadata record for each raster image from all available sources (World Files, raster headers, bibliographic records). Metadata records follow the ISO 19115:2003 standard, the current "best practice" standard defining the geospatial metadata format. The storage layer, which includes the databases used to store data and metadata, is based on the PostgreSQL database platform. Within this layer, appropriate data structures for data and metadata are built in order to perform their automatic publication for the system's users.
The service layer manages the communication of the data store with the metadata catalogue and the map server, employing for this purpose the GeoNetwork opensource metadata catalogue (GeoNetwork 2013) and the GeoServer map server (GeoServer 2013). The web-based graphical user interface presents the data and metadata to the user. This SDI module for raster data administration provides the automation of raster data storage and distribution via the web user interface, together with all available metadata. However, for analytical image processing with such a solution, the data must first be transferred from the data store to an external application. Thus, in order to fully exploit the advantage of storing raster data in the database, an extension of this architecture is needed. Consequently, the next section presents the shift of image processing from an external application onto the storage layer and discusses the requirements for the deployment of image processing functionality within the DBMS. Furthermore, case studies of custom-made image analysis functionality are presented, followed by a discussion of the potential usage of the solution by the broad geographic community. In-database image analysis approach The current usual practice is based on out-of-the-database storage of raster data: in a relational database, only the metadata describing the image are kept. For any processing or analysis, the raster data must first be transferred to separate processing and analytical software applications, as suggested in Figure 2. The analytical task can result in raster editing; in such a case, the new raster representation needs to be transferred back to the data store, causing extra data transfer overhead. The advantage of keeping the data out of the database, i.e., as binary large objects, appears when the publication of stored raster data from the database system (such as map service publishing) is the only objective of an application. The first performance evaluation results presented in Hettler (2012) show that publication from the native PostGIS raster format is slightly slower than from the alternative binary raster storage, whose usage, however, prevents the employment of analytical tools. In-database storage. The in-database strategy (Xie 2013; Obe 2011) employed by the solution for geospatial images proposed in this paper has several features that enhance the storage and analysis of big geospatial images. The first feature is moving the image processing closer to the data, to avoid moving large data sets from the databases to detached analytical software. The second feature is parallel processing provided by the database for the in-database raster storage format. The third feature is concurrent processing, which enables leveraging the power of computer clusters to process numerous images concurrently (Xie 2013). In Figure 3, the retrieval of results of an image analysis performed on the database side is depicted. In-database image processing functions. There are countless possible image analysis functions. Raster data processing and analysis involves a large set of operations, such as radiometric and geometric corrections, image transformation and mosaicking, image enhancement, pattern recognition and raster map algebra, to name a few (Gonzales 2006).
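To make the in-database idea concrete, a minimal client-side sketch is shown below: the per-band statistics are computed by PostGIS next to the data and only the small result set travels to the client. The table and column names and the connection string are assumptions, not part of the described system:

```python
import psycopg2

# Minimal sketch of the in-database approach: PostGIS computes raster band
# statistics inside the database; only the summary rows are transferred.
# "rasters", "rast", "name" and the connection details are assumptions.
conn = psycopg2.connect("dbname=sdi user=sdi_user host=localhost")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT name, (stats).min, (stats).max, (stats).mean, (stats).stddev
        FROM (
            SELECT name, ST_SummaryStats(rast, 1) AS stats   -- statistics of band 1
            FROM rasters
            WHERE name = %s
        ) s;
        """,
        ("noaa_avhrr_2014_05_01",),
    )
    for name, vmin, vmax, mean, stddev in cur.fetchall():
        print(name, vmin, vmax, mean, stddev)
conn.close()
```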
Database platforms with raster data support implement only core functions that are required by database management or that improve the effectiveness of data manipulation, such as image updates, processing or aggregation. These complement traditional GIS applications and can be reused by complex or custom-made analytical procedures deployed for a specific purpose, providing effective raster data manipulation for such procedures. In order to fully utilize the effectiveness of the in-database approach for analytical procedures over spatial raster data, the database platform is supposed to support the following key functionality: raster band accessors, raster pixel accessors and setters, raster band statistics, map algebra over individual pixels, spatial indexing, and datum definition and coordinate system transformation. This allows for basic analysis and moreover supports the development of custom-made functionality, provided a procedural language is available. With respect to the architecture presented above, PostgreSQL with the spatial extension PostGIS (PostGIS 2013) was chosen for the implementation of the proposed SDI enhancement. PostgreSQL offers this key functionality for raster data processing and also provides the PL/pgSQL procedural language. Experiment Ongoing research projects like TEMAP, which aims to develop technologies and procedures for discovering old map collections, or FLOREO, which is concerned with snow detection from satellite images, provided the motivation and data for the tests of the proposed solution. The continuous flow of acquired very large raster data from the satellite image receiving station and from the digitization of tens of thousands of old maps required the greatest possible reduction of raster data movements for the sake of processing and analysis. This requirement was met by employing the in-database approach and the development of specialized analytical functions within the data store. Implementation of this functionality is enabled through procedural languages; the PL/pgSQL procedural language of the PostgreSQL database platform was utilized for these purposes. Cloud and snow detection The algorithm for snow detection introduced within the FLOREO project was designed based on the past work of Romanov (2000). The cloud detection part is adopted from the AVHRR Processing scheme Over cLouds, Land and Ocean (APOLLO), which was developed by Saunders (1986). Fig. 2 Raster analysis in detached data storage and data analysis applications. Fig. 3 Retrieval of the results of an image analysis performed on raster data stored in-database. The implementation of such procedures within the data store aimed at the retrieval of basic information about the snow and cloud coverage in the image. This information is utilized for two purposes: first, the automatic identification of images appropriate for classification; second, the automatic creation of content-based metadata to increase the effectiveness of the search in the metadata catalogue. NOAA-AVHRR data. NOAA is a polar satellite that circles at an altitude of approximately 850 km. The satellite scans each place on Earth at least twice a day, with increasing frequency at places closer to the poles. The Advanced Very High Resolution Radiometer (AVHRR) instrument on board NOAA has 5 (or 6) wavelength channels. The channels are optimized to measure cloud and surface characteristics with minimum contamination from other atmospheric constituents. The channel specifications are presented in Table 1.
Table 1 AVHRR channel specifications (channel number, resolution at nadir, and wavelength in μm). The spectral signatures of snow and clouds can be very similar and depend on various environmental factors. The snow and cloud discrimination relies on threshold value estimates, like the minimum possible surface temperature of snow compared to clouds, as determined by a histogram analysis of temperature and reflectivity. Threshold values are instrument specific. Snow detection. The snow detection procedure is implemented based on a series of tests adopted from the algorithm developed by Romanov (2000). An image pixel is identified as snow by a threshold method, which tests whether the signal in a channel or combination of channels corresponds to defined spectral characteristics in a cloud-free atmosphere. The main daytime tests are threshold comparisons over individual channels and channel combinations. Improving raster display In image processing, normalization is an image enhancement technique that improves the contrast in an image by stretching the range of pixel intensity values into a desired range of values (Gonzales 2006). Implementation of a procedure for the improvement of raster display is motivated by the effort to increase the readability of early map scans. The original documents have faded due to the long period of time since their creation, the character of the materials used and the colouring techniques. The PL/pgSQL procedure transforms an image I with intensity values in the range (Min, Max) into a new image IN with intensity values in the range (newMin, newMax). The linear normalization is performed according to the formula: IN = (I - Min) * (newMax - newMin) / (Max - Min) + newMin. Figure 5 illustrates the outcome of the histogram stretching function. The implementation of such a procedure is straightforward and also computationally efficient. That is due to the optimized functions for raster data manipulation and editing and for yielding image statistics or image histograms. These functions are natively provided by the PostgreSQL raster platform and are re-usable within other developed functions. Metadata The GeoNetwork opensource metadata catalogue is employed by the proposed solution to provide means of description of various types of geographic data (vector or raster layers, map services, statistical data). The metadata document is formed within the administration application from the available sources in accordance with ISO 19115:2003 rules. Only a small portion of elements is used, following the GeoNetwork and INSPIRE (2014) recommendations on required or highly recommended elements to properly describe geographic data. The compliance of metadata with these rules is checked by GeoNetwork when metadata records are imported or updated. For this purpose, GeoNetwork's xml.metadata.insert service is employed. Three groups of metadata fields in a resulting metadata record can be identified based on the source of their origin.
Available descriptive metadata. Depending on the data source, the descriptive metadata can be retrieved from World Files, raster headers or bibliographic records. To carry this out automatically, the format of the source document must be known, i.e., a catalogued description of an old map in an XML document following the MARC 21 standard. The cataloguing procedure itself follows the methodology described in Novotná (2013), facilitating the identification of the key corresponding fields in both standards: the source MARC 21 bibliographical standard and the target ISO 19115:2003 geo-informatical standard. The following fields are acquired by processing the source documents: Title, Date, Date type, Abstract, Purpose, Descriptive Keywords, Language, Topic category, Scale Denominator, Temporal Extent, Geographic Bounding Box, Reference System Info. System generated metadata. Metadata generated automatically by some components of the system, like Online resource linkage (the URL of the source disseminated by GeoServer) or Data quality info, belong to this category. This category also includes metadata fields whose values are set in the system administration application by a person responsible for the dataset, like Presentation form, Organization name, Role, and Maintenance and update frequency. Also, the values of fields like Abstract or Purpose can be determined using the objective of the specific satellite mission and further automatically set by the administration MtdtRasPub application for all images of the dataset. Metadata as an analysis product. In the field Supplemental Information, additional information acquired through raster analysis is encoded. This refers to the percentage of cloud and snow coverage in satellite images and also to the original and new minimum and maximum intensity values of old map scans. The unknown reference system or map scale of old maps can stand as another example. In this case such information cannot be retrieved from the bibliographic document. Cartometric analysis of such an old map scan, however, can provide estimates of these parameters (Bayer 2014). The assignment of metadata fields into the categories above is not strict and depends on the unique characteristics of the datasets. The update of existing metadata records is enabled by the system and carried out on an 'as needed' basis applying the xml.metadata.insert service. The unique identifier prevents the creation of duplicate records. The updating of existing metadata is required to encode the analysis results. Conclusions An SDI module for the management of a continuous flow of raster data and related metadata was proposed. This module addresses the needs for the automation of raster data archiving, analysis and distribution. System evaluation. The functional parts of the prototype system have been developed in cooperation with the researchers and end-users of the provided services. The developers of the metadata solution, along with map archivists and end-users, defined the fields from bibliographic records that would be relevant for both the public and scientists working in the fields of geography and cartography. The selected metadata elements provided sufficient map description and search capabilities within the SDI system. Also, the role of content-based image description was proven to be key for the effective management of satellite imagery. Due to the huge data volumes regularly produced, older or unused images are moved to be archived on backup media like magnetic tape. The image description is then crucial in allowing the searching of archived images, whose display is not available on-line.
The analysis of snow and cloud coverage demonstrated the way of content-based metadata creation. Romanov (2000) presented an evaluation of classifications for satellite-based snow products. Results of the method varied from 75% to 85% correct classification depending on environmental conditions. The automatically acquired results, compared with those obtained by manual processing, fit into this range. Nevertheless, to improve the image search capabilities and to fully answer the needs of end-users, a more complex analysis of snow and cloud characteristics and a more detailed localization of these phenomena are necessary. The presented experiments provided valuable results for the related research projects. The retrieved metadata enhanced the search capabilities and presented information about the suitability of an image for further analyses (like the classification of snow characteristics). The normalization of old map scans improved the readability of such documents. The main contribution, however, lies in proving the functionality of the in-database analysis approach followed by the automatic content-based metadata creation, which is a major open research problem, not only for metadata catalogues or SDI systems but in other fields too. Future development. The introduced approach provides many opportunities for geographers of various specializations to facilitate the retrieval of information from huge rasters, to share it and to effectively search for available data within the SDI. The extension of the automatic processing of old map scans is an example. With a reasonable amount of effort, analytical procedures for determining the level of damage of a historical document, like that in Figure 5, can be implemented. As another example, the enhancement of a geo-referenced mosaic created by historical cartographers from early map series like the military survey (Molnár 2011) can be mentioned. Image histogram equalization would provide a unified appearance by removing differences in contrast between individual sheets caused by different materials or archiving approaches. More advanced procedures may deal with automatic map field extraction for such map series. Another related example of potential future development is the extension of the proposed snow and cloud detection aiming at the automation of snow and cloud typology classification or land use classification within the data store. As shown in Dang (2012), in-database storage is also a promising approach for the effective distribution and visualization of data changing continuously in time and space. Examples of such phenomena are temperature, pressure, precipitation, snow cover, land use or population density. Future steps in application development will aim at the implementation of additional analytical procedures: (a) image histogram equalization for the sake of mosaicking, (b) the embedding of existing procedures for cartometric analysis into the SDI system and (c) the integration of the application with solutions for crowdsourced georeferencing. Fig. 1 Implementation architecture for automated raster data management. Fig. 4 The classification of the original NOAA image (a) into the categories of land, snow and cloud coverage (b). Fig. 5 The segment of an early map with some damage, changes of paper colouring and faded labelling. (a) Before and (b) after application of normalization.
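As a purely numerical illustration of the linear normalization applied to scans like the one in Fig. 5, the following numpy sketch applies the same stretch to an in-memory array. In the described solution this logic runs in-database as a PL/pgSQL procedure over PostGIS rasters; the client-side version below is illustrative only.

```python
# Client-side illustration of the linear histogram stretch described above.
# In the SDI prototype this runs in-database as a PL/pgSQL procedure.
import numpy as np

def normalize(image, new_min=0.0, new_max=255.0):
    """Stretch pixel intensities from (min, max) to (new_min, new_max)."""
    old_min, old_max = float(image.min()), float(image.max())
    if old_max == old_min:                      # flat image: nothing to stretch
        return np.full_like(image, new_min, dtype=np.float64)
    scale = (new_max - new_min) / (old_max - old_min)
    return (image - old_min) * scale + new_min

if __name__ == "__main__":
    faded_scan = np.array([[90, 95], [100, 110]], dtype=np.float64)  # toy "faded" scan
    print(normalize(faded_scan))                # values now span 0..255
```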
4,662
2014-11-21T00:00:00.000
[ "Computer Science", "Geography", "Environmental Science" ]
DATABASE OF EEG / ERP EXPERIMENTS The article deals with the database of EEG/ERP experiments and its developed prototype. Storage, download and interchange of EEG/ERP data and metadata through the web interface is possible, various user roles are defined. The requirements specification including the system context, scope, basic features, data formats and metadata structures is presented. The system architecture, used technologies and the final realization are described. Additional tools and structures as converters of data formats and generated ontology are mentioned. The possible users of the database are specified. INTRODUCTION Our research group at Department of Computer Sciences and Engineering, University of West Bohemia in cooperation with other partner institutions (e.g.Czech Technical University in Prague, University Hospital in Pilsen, Škoda Auto Inc, ... ) specializes in the research of attention, especially attention of drivers and seriously injured people.With regard to our research we widely use the methods of electroencephalography (EEG) and event related potentials (ERP).Within our partner network we are responsible for technical and scientific issues, e.g.EEG/ERP laboratory operation, development of advanced software tools for EEG/ERP research, or analysis and proposal of signal processing methods. EEG and ERP experiments take usually long time and produce a lot of data.With the increasing number of experiments carried out in our laboratory we had to solve their long-term storage and management.Looking for a suitable data store for EEG/ERP data and metadata we encountered series of problems: There is no widely spread and generally used standard for EEG/ERP data files within the community. Results (interpretations) of EEG/ERP experiments are usually more important than obtained data (some researchers even declare that experimental data have a low value when they are interpreted). There is no reasonable and easily extensible tool for long-term EEG/ERP data (metadata) storage and management (the general practice is to organize data and metadata in common file directories). There is no practice to share and interchange data between EEG/ERP laboratories (EEG/ERP data are supposed to be secret or unimportant to share them). System Context Because of hard manual work with large amount of EEG/ERP data and metadata and in face of difficulties mentioned in Introduction part, we decided to design and implement own software tool suitable for EEG/ERP data and metadata storage and management. The developed EEG/ERP data store (called simply the system in the following text) pursues not only our local research but in general it contributes to advancements in human brain understanding.In addition, we believe that such advanced software tools increase both the efficiency and the effectiveness of neuroscientific research. Requirements Specification The specification of requirements originated from experience of our laboratory, co-workers from cooperating institutions, books describing principles of EEG/ERP design and data recording (e.g.Luck, 2005) and numerous scientific papers describing specific EEG/ERP experiments.It also corresponds to the effort of International Neuroinformatics Coordinating facility (INCF) (Pelt, 2007) in the field of development and standardization of databases in neuroinformatics. 
System Users The system prototype is dedicated for department users and collaborative partners as well as for a limited group of researchers interested in EEG/ERP research.The system is supposed to be widely tested to guarantee the safety of personal information, availability of EEG/ERP resources and their usability for people interested in this research field. Project Scope and System Features EEG/ERP database enables clinicians and various community researchers to store, update and download data and metadata from EEG/ERP experiments.System is developed as a standalone product (integration with the software for EEG/ERP experimental design is not a task of this project).The database access is available through a web interface.We need a web server supporting open source (Java and XML) technologies and a database system, which is able to process huge EEG/ERP data.The system is easily extensible and can serve as an open source. The system essentially offers the following set of features (the number of accessible features depends on a specific user role): User authentication Storage, update, and download of EEG/ERP data and metadata Storage, update and download of EEG/ERP experimental design (experimental scenarios) Storage, update and download of data related to testing subjects The crucial user requirement is the possibility to add an additional set of metadata required by a specific EEG/ERP experiment.The complete overview of the system features and user roles (use case diagram) is available in (Pergler, 2009). User Roles Since the system is thought to be finally open to the whole EEG/ERP community there is necessary to protect EEG/ERP data and metadata, and especially personal data of testing subjects stored in the database from an unauthorized access.Then a restricted user policy is applied and user roles are introduced. On the basis of activities that a user can perform within the system the following roles are proposed: Anonymous user has the basic access to the system (it includes essential information available on the system homepage and the possibility to create his/her account by filling the registration form). Reader has already his/her account in the system and can list through and download experimental data, metadata and scenarios from the system, if they are made public by their owner.Reader cannot download any personal data or store his/her experiments into database. Experimenter has the same rights as Reader; in addition he/she can insert his/her own experiments (data and metadata including experimental scenarios) and he/she has the full access to them.This user role cannot be assigned automatically, a user with the role reader has to apply for it and the new role must be accepted by supervisor. Supervisor has an extra privilege to administer user accounts and change their user roles according to the policy. 
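A minimal sketch of how the four roles and their capabilities described above could be expressed is given below. It is illustrative only: the prototype enforces roles with the Spring Security framework, and the capability names used here are hypothetical.

```python
# Illustrative mapping of the user roles described above to coarse capabilities.
# The real system enforces these rules via Spring Security; names are hypothetical.
from enum import Enum

class Role(Enum):
    ANONYMOUS = 0
    READER = 1
    EXPERIMENTER = 2
    SUPERVISOR = 3

CAPABILITIES = {
    Role.ANONYMOUS:    {"view_homepage", "register"},
    Role.READER:       {"view_homepage", "download_public_experiments"},
    Role.EXPERIMENTER: {"view_homepage", "download_public_experiments",
                        "upload_own_experiments", "manage_own_experiments"},
    Role.SUPERVISOR:   {"view_homepage", "download_public_experiments",
                        "upload_own_experiments", "manage_own_experiments",
                        "administer_accounts"},
}

def allowed(role: Role, action: str) -> bool:
    return action in CAPABILITIES[role]

assert not allowed(Role.READER, "upload_own_experiments")
assert allowed(Role.SUPERVISOR, "administer_accounts")
```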
Data Formats There exists a variety of data formats for storing EEG/ERP data.The more spread formats and formats used in our laboratory include European Data Format (EDF and EDF+) ("EDF", n.d.), Vision Data Exchange Format (VDEF) ("VDEF", n.d.), Attribute-Relation File Format (ARFF) ("ARFF", n.d.), and KIV format (Kučera, 2008).European Data Format (EDF) contains an uninterrupted digitized EEG record stored in one file (a header record is followed by data records).The header content has a variable length.It identifies a testing subject and specifies the technical characteristics of recorded EEG signal.The data part contains consecutive fixed-duration epochs of the record.Despite its drawback this data format has been probably the most hopeful attempt to standardize description of EEG data. Vision Data Exchange Format (VDEF) is used by the technical equipment in our laboratory.EEG record is divided into three files: a header file, a marker file and a data file.The header file based on Windows INI format describes recorded data and provides a limited set of corresponding metadata as the attribute-value pairs.The marker file contains information about markers (their types and timing) in EEG signal.The data file contains raw EEG data. Attribute-Relation File Format (ARFF) is used in our laboratory as the interface to WEKA software ("WEKA", n.d.).Data and metadata are stored in one ASCII file consisting of two sections.The header section provides a limited set of metadata and it is followed by the data part. KIV data format is a modification of simple ASCII format of EEG signal, where metadata (file header in ASCII) are stored in XML file and data from electrodes are stored in separate binary files. The users' requirement on the system is to accept at least three formats mentioned above.An optional requirement is to provide users with conversion tools between these formats. Standardization of EEG/ERP data format we are also working on (with INCF support) is out of scope of this article. Definition of Metadata The data obtained from EEG/ERP experiments are senseless if they are not supported by more detailed description of testing subjects, experimental scenarios, laboratory equipment etc. Metadata are also necessary for an interpretation of performed experiment and for data search and manipulation.There is important that only a small predefined set of metadata is optional to fill in.In addition, a user with the role experimenter has the right to define his/her own metadata. System Sustainability The system purpose is not only to serve as a local managing tool for our EEG/ERP research but to serve as a system, which enables sharing and interchange of data between various research groups.Nowadays EEG and ERP data are provided by diverse groups of not only medical communities but scientists or universities as well.The system is therefore developed as open source accepting INCF recommendations.It will be offered as a free managing tool and source of EEG/ERP data within collection of other neuroinformatics data sources. System Security The system database contains personal data, which are necessary for interpretation of experiment or for contact with testing subject.Only experimenter has access to personal data of testing persons who took part in his/her experiment.Collection of personal data and their storage are managed according to law. 
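As an illustration of the EDF structure described in the Data Formats subsection above, the fixed-size ASCII fields of the first 256-byte header record can be read without touching the signal data. The following Python sketch follows the published EDF field widths; the file name is hypothetical and the sketch ignores the per-signal header section that follows.

```python
# Sketch: reading the fixed 256-byte EDF header record (ASCII fields).
# Field widths follow the EDF specification; the file name is hypothetical.
EDF_FIELDS = [           # (name, width in bytes)
    ("version", 8), ("patient_id", 80), ("recording_id", 80),
    ("start_date", 8), ("start_time", 8), ("header_bytes", 8),
    ("reserved", 44), ("n_data_records", 8), ("record_duration_s", 8),
    ("n_signals", 4),
]

def read_edf_header(path):
    with open(path, "rb") as f:
        raw = f.read(256)
    header, offset = {}, 0
    for name, width in EDF_FIELDS:
        header[name] = raw[offset:offset + width].decode("ascii").strip()
        offset += width
    return header

if __name__ == "__main__":
    h = read_edf_header("experiment_001.edf")
    print(h["patient_id"], h["n_signals"], h["record_duration_s"])
```

The variable-length part of the header (256 bytes per signal) follows immediately afterwards and carries the per-channel labels and calibration information.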
System Performance The system database has to work with long EEG/ERP records (usually tens of megabytes) in reasonable time.The main limiting factor is a user internet connection, not the database performance. System Architecture The system is based on three layer architecture.This architectonic style is supported by selection of programming tools and technologies.We used Java and XML technologies to ensure a high level of abstraction (system extensibility) as well as a long term existence of the system as open source. Persistence Layer Persistence layer uses Hibernate framework.It means that relational database and object -relational mapping are supported.Oracle 11g database server is used to ensure the processing of large data files.ERA model of relational database is available in Figure 1; all tables describing metadata extension are omitted to keep the model understandable. Application and Presentation Layer Application and presentation layers are designed and implemented using Spring technology.This framework supports MVC architecture, Dependency injection and Aspect Oriented Programming.Integration of both frameworks, Hibernate and Spring MVC, was without difficulties.Spring Security framework is used to ensure management of authentication and user roles.User access to the relational database is realized through the web interface.Majority of users are familiarized with web applications and they do not need any additional software except a web browser. User interface is divided into several parts (main menu, second level menu, header, footer, and content part).The main menu includes e.g. the following sections: Home -system introduction, registration, login Experiments -management of EEG/ERP experiments Scenarios -management of experimental EEG/ERP designs People -management of people in the system Figure 2 presents a user interface preview.Input data are validated.Error messages are presented using special marks in JSP views and by definition of CSS styles for corresponding input fields. Storage/download of raw EEG/ERP files is universal; there is possible to store/download any allowed file type. Semantic Web Technologies Registration of the system as a recognized data source occasionally requires providing data and metadata structures in the form of ontology in accordance with ideas of semantic web.We also started to work on the representation of data and metadata structures using semantic web technologies.Nowadays there is possible to generate and provide data and metadata structures using Ontology Web Language (OWL).The details will be presented in a separate paper. Conversions Between Data Formats Converters between data formats mentioned in Section 2.2.4 were implemented.These converters can be downloaded and used locally; no conversion is performed during data upload/download. CONCLUSIONS The presented system combines research in EEG/ERP and informatics fields as well as application of informatics in neuroscience.Our research group designed and implemented the prototype of experimental EEG/ERP database for storage, download and interchange of EEG/ERP experiments.The database preserves EEG/ERP raw data together with the corresponding metadata.The currently developed prototype is prepared for extensive testing carried out by our department, cooperating institutions and a limited number of people interested in EEG/ERP research and its applications. 
Advanced Java technologies (the Hibernate, Spring, and Spring Security frameworks) were used to ensure a high level of abstraction and the further maintenance and extensibility of the system as open source software. In addition, converters between various data formats and a database ontology in OWL are provided for experienced users. We hope that the EEG/ERP database can also provide useful data and metadata to research groups that do not perform their own experiments but are interested, e.g., in signal processing or data mining. As the next big step we are preparing a progressive change of the EEG/ERP experimental database into an EEG/ERP portal offering, e.g., advanced software tools that can help researchers with the difficulties of EEG/ERP experimental design, and a set of methods for signal processing. We also plan to register our system as a data source within large, internationally known projects in neuroinformatics, e.g., the Neuroscience Information Framework ("NIF", n.d.).
2,881.2
2010-01-01T00:00:00.000
[ "Computer Science" ]
Unraveling the Molecular Tumor-Promoting Regulation of Cofilin-1 in Pancreatic Cancer Simple Summary Unraveling the mechanistic regulations that influence tumor behavior is an important step towards treatment. However, in vitro studies capture only small parts of the complex signaling cascades leading to tumor development. Mechanistic modeling, instead, allows a more holistic view of complex signaling pathways and their crosstalk. These models are able to suggest mechanistic regulations that can be validated by targeted and thus more cost-effective experiments. This article presents a logical model of pancreatic cancer cells with high cofilin-1 expression. The model includes migratory, proliferative, and apoptotic pathways as well as their crosstalk. Based on this model, mechanistic regulations affecting tumor promotion could be unraveled. Moreover, it was applied to screen for new therapeutic targets. The development of resistance mechanisms is a common limitation of cancer therapies. Therefore, new approaches are needed to identify optimal treatments. One is suggested in this article, indicating the surface protein CD44 as a promising target. Abstract Cofilin-1 (CFL1) overexpression in pancreatic cancer correlates with high invasiveness and shorter survival. Besides a well-documented role in actin remodeling, additional cellular functions of CFL1 remain poorly understood. Here, we unraveled molecular tumor-promoting functions of CFL1 in pancreatic cancer. For this purpose, we first show that a knockdown of CFL1 results in reduced growth and proliferation rates in vitro and in vivo, while apoptosis is not induced. By mechanistic modeling we were able to predict the underlying regulation. Model simulations indicate that an imbalance in actin remodeling induces overexpression and activation of CFL1 by acting on transcription factor 7-like 2 (TCF7L2) and aurora kinase A (AURKA). Moreover, we could predict that CFL1 impacts proliferation and apoptosis via the signal transducer and activator of transcription 3 (STAT3). These initial model-based regulations could be substantiated by studying protein levels in pancreatic cancer cell lines and human datasets. Finally, we identified the surface protein CD44 as a promising therapeutic target for pancreatic cancer patients with high CFL1 expression. Introduction Pancreatic cancer is one of the most lethal cancers in developed countries [1,2]. With an estimated incidence of 458,918 new cases in 2018, it is the eleventh most common cancer. Doxycycline was administered to the treatment group via sucrose-containing drinking water, whereas control animals received normal water. Unspecific effects of sucrose addition on tumor growth were excluded in previous studies using the same experimental setup [24,25]. The health status of the animals, including possible signs of dehydration, was monitored daily. Six animals per arm (12 in total) were implanted, and all developed tumors. Tumor sizes were measured twice weekly using a caliper. Upon sacrificing the mice, resected tumors were frozen in liquid nitrogen and stored for RNA and protein analyses. Animal experiments were approved by the relevant Ethics Committee at the Regierungspräsidium Giessen, Germany (ethics approval number V54 19c 20 15 (1) MR 20/11 Nr. 50/2011). Western Blot Proteins were isolated by centrifugation of cells in culture medium at 4 °C for 5 min and 16,000× g. If the total amount of protein was extracted, pellets were resuspended in proteinase inhibitor (G-Biosciences, Maryland Heights, MO, USA) supplemented PBS before being sonicated with a Labsonic U (B. Braun, Melsungen, Germany).
Instead, cell fractions were obtained by resuspending initial centrifugated cells again in ice-cold PBS before centrifugation for another round of 3 min at 6000× g and 4 • C. The obtained pellet for the nuclear fraction was then resolved in 200 µL of proteinase inhibitor supplemented fractionation buffer (20 mM HEPES-KOH, pH 7.5, 10 mM KCl, 1 mM EGTA, 1 mM DTT, 0.5 mM PMSF, supplemented with proteinase inhibitor) and kept on ice for 30 min. Afterwards, it was squeezed five times through a 26G needle and centrifuged for 10 min at 1000× g. Next, supernatant and pellet were separated. The separated pellet was resuspended again in 50 µL fraction buffer and designated as nuclear fraction. Instead, the separated supernatant was centrifugated for 10 min at 16,000× g and 4 • C before designating it as cytosolic fraction. Bradford assay measured with the Multiskan FC photometer (Thermo Scientific, Langenselbold, Germany) was used to determine protein concentrations. For the SDS-PAGE, 10 µg proteins were loaded on 10 or 15% gel in SDS buffer and separated at 120V voltage. Afterwards, proteins were transferred onto a nitrocellulose membrane (Whatman GmbH, Dassel, Germany) at 300 mA current using a semi-dry blotting system (Biozym, Hessisch Oldendorf, Germany), and blocked for 4-6 h at 4 • C in 1×TBS, 0.1% Tween 20 (TBS-T) and 5% powdered milk. Primary antibodies (anti-CFL1 #ab42824, Abcam, Cambridge, UK; anti-Caspase-3 #9662, Cell Signaling Technology, Danvers, MA, USA; anti-Cleaved Caspase-3 #9661, Cell Signaling Technology; anti-PARP #9542, Cell Signaling Technology; anti-STAT3 #9139, Cell Signaling Technology; anti-phospho-STAT3 (Tyr705) #9145, Cell Signaling Technology; anti-Cyclin D1 #ab16663, Abcam; and anti-Myc #sc-481, Santa Cruz Biotechnology, Santa Cruz, CA, USA) were diluted 1:1,000 in blocking buffer and incubated overnight at 4 • C and washed afterwards in 0.1% TBS-T. To ensure equal loading, actin (anti-β-actin #sc-1616 Santa Cruz Biotechnology) or Lamin A/C (Lamin A/C #2032, Cell Signaling Technology) were used. The secondary antibody (anti-rabbit #7074, Cell Signaling Technology) was 1: 10,000 diluted in blocking buffer and incubated for 1-2 h at 4 • C. Proteins were detected using the ECL immunoblot kit (GE Healthcare Europe GmbH, Freiburg, Germany). Quantification of CFL1 bands in the western blots have been performed by densiometric analyses via Image-J. Band intensity was normalized to β-actin for all samples. Flow Cytometry Cells were trypsinized and centrifuged for 3 min at 1000× g. Following washing with PBS, the pellet was resuspended in 50-100 µL PBS and the suspension was transferred in a drop-wise manner into ice-cold ethanol (70%) under continuous vortexing. After centrifugation for 5 min at 1000× g and discarding of the supernatant, the remaining pellet was washed with 500 µL ice-cold PBS and resuspended in a propidium-iodide mixture (1 mg/mL Propidium-iodide (Sigma Aldrich, St. Louis, MO, USA) + 500 µg/mL DNAsefree RNAse (Roche Diagnostics, Mannheim, Germany) in PBS). Following incubation in darkness for 30 to 45 min at room temperature (RT), the sample was measured subsequently with the flow cytometer LSR II (BD Biosciences, Heidelberg, Germany). Recorded data were evaluated using the software ModFit LT (Verity Software House, Topsham, ME, USA). Cell Tracking To measure undirected cell migration, S2-007 cells (20,000 cells per well) or Panc-1 cells (30,000 cells per well) were seeded in collagen-coated 6-well culture plates and recorded under the microscope. 
We used a microscope from the Zeiss Cell Observer system (Carl Zeiss GmbH, Jena, Germany) with temperature as well as CO2 control. Pictures were recorded at 10 min intervals. The resulting time-lapse video files were analyzed with the Time Lapse Analyzer software [26] by extracting paths for each cell and calculating the average velocity of migration in µm/min. Wound Healing A 200 µL Diamond Tip (Gilson Inc., Middleton, WI, USA) was used to scratch a wound of 1-2 mm into a confluent layer of transfected cells in a 6-well plate, and the medium was replaced with fresh complete medium (10% FBS). Cells were seeded at different densities to achieve confluence within similar timelines. Pictures of regions of interest were recorded at intervals of 10 min and the resulting time-lapse video files were analyzed using the Time Lapse Analyzer software [26]. Wound closure rates were expressed as the decrease in wound area over the duration of the experiment (µm²/h). Boolean Network Mechanistic models are inferred from experimental observations. However, many of these mechanistic models require kinetic parameters, thus limiting their use. Logical models instead can be conceptualized on qualitative knowledge. Here, regulatory interactions are formalized by logical connectives. Boolean network models are a popular modeling paradigm among these gene regulatory networks [27][28][29][30][31][32]. The simplicity of this model arises from the assumption that genes and proteins are considered either expressed/active or not expressed/inactive and that regulatory interactions are formalized by logical connectives [33,34]. Information regarding regulatory interactions is derived from qualitative literature statements summarizing information from multiple sources, e.g., publications (in vitro and in vivo studies), databases, or clinical trials. This causes the process of modeling to be iterative, to ensure that any enlargement of the model still recapitulates the expected behavior. While molecular and biochemical experiments are preferable to build logical connections, validation of the long-term behavior is made through phenotypical studies (e.g., proliferation assays, apoptosis measures, or cell cycle analyses). This validation is possible only by studying the dynamic behavior of the established model. In particular, steady states, so-called attractors, describe the long-term behavior of the model and have been associated with biological phenotypes [33,35]. These attractors can be either single states or cyclic sequences of states. Studying the long-term behavior together with the paths leading to attractors (steady states) allows one to unravel mechanistic regulations [33] and cell fates [35] of different weights [36,37]. Mechanistic models also allow the study of perturbations of the dynamic behavior. These in silico experiments on the mathematical model are similar to in vivo or in vitro knock-out or knock-in experiments on model organisms. For this reason, they are of crucial interest in predicting intervention targets and further guiding future laboratory experiments [37,38]. Altogether, this approach reduces both time and costs for experimental setups by suggesting mechanisms and intervention targets. Model Construction For the presented model, regulatory interactions of network components were manually extracted from the literature. In general, we started by summarizing reviews on PDAC and the molecular features of CFL1 overexpression. Whenever possible, information from the PDAC context was considered.
Here we integrated studies from various mouse models and pancreatic cancer cell lines. When possible, human studies were also considered with special focus on CFL1 expressing tumors. Moreover, data providing evidence for direct interactions (phosphorylation, binding, transcription) were weighted with major priority. In further refinements, data from indirect interactions was considered (expression correlations). The aim of the presented model is to investigate mechanisms involved in CFL1 overexpression in PDACs and its role in cancer progression. In particular, a special focus of the model was to unravel the tumor promoting function of CFL1 with respect to cell survival and proliferation. This information was integrated with CFL1's most studied role in cellular motility. Based on our initial in vitro and in vivo studies, we started the model setup by integrating regulators of CFL1, the cell cycle, as well as apoptosis inducers. For this purpose, we applied the search terms "CFL1 pancreatic cancer", "CFL1 regulation", "CFL1 cell cycle", or "apoptosis pancreatic cancer" in Google. Afterward, we focused on studies describing CFL1 interventions and the described effect on other proteins (on expression or activity levels). In this perspective, our Google search terms were "CFL1 knockout" or "CFL1 overexpression". Finally, the identified proteins within the model were analyzed concerning their mutual regulation. All identified regulations were summarized by logical connectives into a mechanistical model. Please note again, that throughout the whole modeling process, studies or reviews of PDAC were preferred. Only in cases where nothing related to PDAC could be found or to confirm a link, information from other cancers was considered. Model Simulation Attractor search and knockout simulations were performed using the R-package BoolNet [39,40]. Here, we utilized an exhaustive attractor search with a synchronous updating scheme. For the estimation of the basin of attraction of the attractors from the intervention screening performed with a SAT solver we also performed an exhaustive attractor search with limited to 1,000,000 initial start states. Perturbation Screening Systematic perturbation screenings were performed with the Java-based framework ViSiBooL [41,42] taking account of up to two combinations of interventions and that all proteins/genes in the network can change except caspases (Table S1). Based on the established Boolean network of CFL1 in PDAC a requirement to induce apoptosis is the activity of caspases. Thus, automatic screening was performed to search for combinations that active caspases in the attractor. Binarization of Gene Expression Data We considered three microarray studies comparing normal pancreatic tissues with pancreatic cancer tissues (GSE15471, GSE32676, GSE16515). Please note, that the dataset GSE15471 contains matched tissues taken at the time of resection. All these microarrays were performed on Affymetrix U133 Plus 2.0 whole-genome chips. Transcriptional regulated proteins that were included in the model were mapped according to their Entrez ID to the hgu133plus2 SYMBOLs. Their expression data was robust multi-array average (RMA) normalized [43] and binarized via a threshold defined by a ROC curve using the R-package pROC [44] ( Figure S1). According to this threshold expression values above the threshold were considered as active and expression values below the threshold as inactive. Three samples in the GSE15471 dataset contain replicates. 
These replicates were averaged and considered as a single data point. CFL1 Is Overexpressed in Pancreatic Cancer As a first step in the in-depth characterization of CFL1 in pancreatic cancer, we analyzed its expression in primary human pancreatic cancer and control tissues. Quantitative Real-Time PCR analyses demonstrated strong, significant overexpression of CFL1 mRNA in pancreatic cancer tissue (n = 12) both in comparison to healthy pancreas tissue (n = 8) from organ donors (p = 0.0005) as well as to chronic pancreatitis (C.P., n = 9) cases (p = 0.00008), while expression in chronic pancreatitis and healthy pancreas was not significantly different (p = 0.54) (Figure 1a). As expected, CFL1 expression was also high in all pancreatic cancer cell lines examined. To examine the functional role of CFL1 in pancreatic cancer cells, knockdown of endogenous CFL1 with three independent siRNAs was performed. As a normal CFL1 expressing, non-tumoral control, we used the well-characterized HEK-293 cell line. Thereby we achieved a CFL1 knockdown efficiency of 60-90% on the RNA level (Figure S2a). CFL1 Knockdown Inhibits Proliferation and Tumor Growth BrdU incorporation assays revealed inhibition of proliferation in CFL1-silenced pancreatic cancer cell lines (Figure 1c). In line with this observation, a decrease in colony formation and size was observed after treatment with siRNA against CFL1 (Figure S2c,d). Next, we studied the cell cycle phase impacted by CFL1 silencing with flow cytometric analyses (Figure 1d). Similar to the findings of Wang et al. [11] in bladder cancer, we observed a reduced number of actively proliferating cells (S-phase) accompanied by an increase of cells in the G0/G1-phase, indicating an attenuation of the G1/S-phase transition. Figure 1 caption (continued): (e) S2-007 cells with doxycycline-inducible CFL1 knockdown were injected into nude mice. One-half of the mice (n = 6) were treated with doxycycline via drinking water to induce CFL1 repression. After the removal of the tumors, RNA and protein were extracted from tissues and analyzed for the CFL1 level. Quantitative RT-PCR as well as western blots confirmed repression of CFL1 in the treatment group. β-Actin served as a loading control and mean values of expression were normalized to the RPLP0 housekeeping gene. Statistics were performed using the Wilcoxon test. (f) Tumor size measurements revealed a significant decrease in tumor growth in doxycycline-treated animals (n = 6) compared to controls (n = 6). All mice developed tumors. Displayed are data points with a smoothing line and the corresponding confidence interval. Boxplots depict the median with the first and third quartiles.
To assess the importance of CFL1 expression for the growth of pancreatic cancer cells in vivo, xenograft tumors were induced in nude mice by subcutaneous injection of S2-007 cells stably transfected with an inducible CFL1 shRNA. Induction of shRNA CFL1 expression led to a significant reduction of CFL1 on the mRNA as well as on the protein level (Figure 1e and Figure S4). Moreover, CFL1 repression resulted in reduced tumor volume (Figure 1f). These results demonstrated that CFL1 has a growth-supporting role in pancreatic cancer. CFL1 Knockdown Does Not Induce Apoptosis Studies in bladder and vulvar squamous carcinoma describe an increase in the apoptosis rate after CFL1 knockdown [11,45]. In contrast to this, we did not observe induction of apoptosis after silencing of CFL1 in pancreatic cancer cells (Figure 2a and Figure S5). None of the pancreatic cancer cell lines treated with CFL1 siRNA showed cleaved caspase-3 or cleaved PARP activity, both indications of apoptosis, nor did any of the cells treated with non-silencing siRNA or untreated cells. These results indicate distinct differences in the response of pancreatic cancer cells to CFL1 silencing, and by extension distinct differences in CFL1-associated signaling networks, compared to other types of cancers. CFL1 Deficiency Leads to Distinct Defects in Cell Migration Since CFL1 has previously been described to influence the migratory potential of cancer cells, we measured the influence of CFL1 knockdown on the migratory potential of cells by cell tracking and wound healing experiments (Figure 2b,c, and Figure S6). Here, we observed reduced cell velocity as well as a decreased potential for wound closing.
The effect was more pronounced for S2-007 cells compared to Panc-1 cells, which may be explained by the fact that the liver metastasis-derived S2-007 cells show a more aggressive growth behavior overall compared to Panc-1 cells, which originate from a primary tumor. Hence, similar to other cancers, CFL1 silencing decreases the infiltration and migration potential of pancreatic cancer cells. Establishing a Model to Uncover Mechanistic Regulation of CFL1 Besides its well-described role in actin remodeling, the regulatory mechanisms inducing CFL1 overexpression or affecting apoptosis and proliferation downstream of CFL1 are unknown. From the results above (Sections 3.1-3.4), we were able to show that CFL1 silencing affects these processes. Nevertheless, these results provide only phenotypical descriptions. To uncover tumor-promoting mechanisms of CFL1, we built a gene regulatory network. This model was based on the functional data described above as well as an intensive literature search. Moreover, CFL1 regulators and proteins involved in migration, proliferation, and apoptosis were included. A detailed description of the considered regulatory interactions included in the final mechanistic model can be found in Table 1. The final model consisted of 33 nodes and 130 interactions capturing actin remodeling, CFL1 regulation as well as cell cycle regulation and apoptosis induction (Figure 3a). Systematic evaluation of the global network dynamics over time showed that this model is able to recapitulate our previous observations in pancreatic cancers as well as for CFL1-silenced cells (Figure 3b,c). The gene regulatory network shows in its stable state (attractor) active CFL1 in combination with a proliferative and infiltrating phenotype (active S-phase and F-actin_new) but no induction of apoptosis (inactive caspases). In contrast, the analyses of the CFL1 knockout revealed an attractor representing inactive proliferation and infiltration but still without active apoptosis. Thus, our newly established gene regulatory model might be suitable to uncover the mechanistic regulations behind the dynamics leading to the more severe phenotype of active CFL1 in pancreatic cancer. To do so, we analyzed the network progression from healthy (unstimulated state) towards cancer. Since activating KRAS mutations can be found as one of the earliest mutations in approximately 90% of pancreatic cancer patients [46], we started from a state with active KRAS. The resulting cascades (Figure 3b,c) were analyzed in detail to uncover cancer-driving mechanisms. Model-based predicted activities of proteins were compared to the literature or supported by laboratory experiments and human dataset analyses. While detailed analyses of the progression are described in the following paragraphs, a summary of this comparison can be found in Table 2. Table 1. Boolean functions of the CFL1 model. Depicted are the Boolean functions of the analyzed model. Interactions are described by the logical connectives AND (∧), OR (∨), and NOT (¬). Linear interactions have been simplified by time delays ((-2) or (-3)). For example, the entry for CYCS reads: CYCS = Pro-apoptotic proteins ∧ ¬Anti-apoptotic proteins ∧ CFL1; an imbalance between pro- and anti-apoptotic proteins induces the release of CYCS by activating BAX, and unphosphorylated CFL1 translocates to the mitochondrion after induction of apoptosis [122-125], acting as a carrier for BAX [126].
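To make the attractor-based reasoning concrete, the following Python sketch simulates a toy Boolean network under synchronous updating and detects its attractor. It is illustrative only: the analyses in this work were performed with the R package BoolNet on the full 33-node model, and the three-node toy functions below are hypothetical.

```python
# Toy illustration of synchronous Boolean updating and attractor detection.
# The real analyses used the R package BoolNet on the full CFL1 model;
# the three-node network below is hypothetical.
def synchronous_step(state, functions):
    """Apply every Boolean function to the current state simultaneously."""
    return tuple(int(f(state)) for f in functions)

def find_attractor(initial_state, functions):
    """Iterate until a previously seen state recurs; return the cycle (attractor)."""
    seen, trajectory, state = {}, [], initial_state
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = synchronous_step(state, functions)
    return trajectory[seen[state]:]          # single state or cyclic attractor

# Hypothetical toy functions for the nodes (KRAS, CFL1, Apoptosis):
functions = [
    lambda s: s[0],                  # KRAS keeps its value (input node)
    lambda s: s[0],                  # CFL1 is switched on downstream of KRAS
    lambda s: int(not s[1]),         # apoptosis is blocked while CFL1 is active
]

print(find_attractor((1, 0, 1), functions))   # -> [(1, 1, 0)]: CFL1 on, apoptosis off
```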
Figure 3 caption (continued): The simulation of a signaling cascade with an in-silico CFL1 knockout yields another single-state attractor representing cell cycle arrest but no induction of apoptosis. Both signaling cascades start from an initial state with only active KRAS, as present in 90% of pancreatic cancer patients, and proceed in distinct time steps towards the attractors. Note that these two attractors are also the only ones that will be obtained by an exhaustive network evaluation. The network components are listed on the left, while the state of each protein is represented by blue (=active) and red (=inactive) rectangles. Table 2 (excerpt): TCF7L2 (active), GSE15471, GSE16515, GSE32676 (see Figure 4b); AURKA (active), GSE15471, GSE16515, GSE32676 (see Figure 4a); SSH1L (active). First, we concentrated on processes leading to overexpression and activation of CFL1. Following the progression towards cancer, our model shows an imbalance between the two opponents ras homolog family member A (RHOA) and Ras-related C3 botulinum toxin substrate 1 (RAC1) in favor of RAC1 (Figure 3b). Thus, protein kinase D1 (PRKD1) downstream of RHOA is rendered inactive and enables the expression of TCF7L2, an inducer of CFL1 expression. On the other hand, by acting on p21-activated kinase 1 (PAK1), RAC1 activates aurora kinase A (AURKA). AURKA phosphorylates and thus activates slingshot-1L (SSH1L), one of the activators of CFL1. Based on the network evolution over time, we assume that an imbalance in actin remodeling induced by KRAS acting on PI3K results in overexpression and activation of CFL1. Our mechanistic hypothesis is supported by gene expression data, showing that both AURKA and TCF7L2 are significantly overexpressed in pancreatic tumor tissues in comparison to healthy donors (Figure 4a,b). Here, binarization of the expression data classified tumor samples as active in contrast to healthy samples. As a further support, the results were compared to literature findings (Table 2). The Synergy between CFL1 and Arp2/3 Is Important for Migration Pancreatic cancer cells are characterized by a high level of early and aggressive metastasis [3]. We have already been able to show that CFL1 influences this migratory potential (Figure 2). In general, it is assumed that non-phosphorylated CFL1 severs filamentous actin fibers (F-actin) preferentially at old adenosine diphosphate (ADP)-F-actin ends, thus providing globular actin (G-actin) for the generation of newly polymerized F-actin [58,68,77]. Although this mechanism is certainly applicable to a large number of cell types, some recent studies have shown that CFL1 is also involved in lamellipodia formation [58,68,71], which is the driving force of cancer cell migration [133,139]. Insights into the dynamic nature of this regulatory network support this assumption (Figure 3b,c).
Here, actin-related protein 2/3 complex (ARP2/3) downstream of RAC1 is unable to synthesize new actin filaments unless CFL1 is activated. Based on this, we suggest that CFL1 and ARP2/3 work in synergy for cell spreading. In literature, we found that this synergy might work through the severing capabilities of CLF1 providing short actin fibers that are preferentially used by ARP2/3 to create new branched actin fibers [134]. CFL1 Influences the Cell Cycle via STAT3 Next, we concentrated on signaling pathways describing how CFL1 might influence the cell cycle. Tsai et al. [140] suggest a cell cycle inhibitor dependent regulation. That would also be possible in pancreatic cancer if these inhibitors are excluded from the nucleus by enhanced AKT activity [109]. However, we could show that neither p21 and p27 protein levels nor their cellular localization changed in response to variations in CFL1 levels (Figure 4c and Figure S7). In the presented gene regulatory network, the next protein being activated in the transition towards pancreatic cancer (Figure 3b) after CFL1 is STAT3 (time step 8). However, STAT3 alone is not able to promote cell cycle progression through activation of cyclin D1 (CCND1), because CCND1 is not activated before time step 12. After activation, STAT3 activates protein kinase B (AKT) further enabling β-catenin (CTNNB1) to induce MYC expression by inhibiting glycogen synthase 3β (GSK3B) (time step 9-11). Consequently, the MYC proto-oncogene induces the expression of CCND1. Due to already active AKT, CCND1 can associate with its cyclin-dependent kinase (considered together with CCND1) and translocate into the nucleus. Here, it phosphorylates retinoblastoma (RB), thereby freeing the E2F transcription factor (E2F) which further induces the expression of Cyclin E2 (CCNE1) and thus the progression from G1 to S-phase. Conversely, all these proteins stay inactive in the systemic CFL1 knockout simulation and also cyclins stay inactive. This model behavior can be understood as attenuation of G1-to S-phase transition (Figure 3c) which we could show to take place in CFL1 silenced pancreatic cancer cells (Figure 1d). To confirm the model-based assumption that CFL1 regulates the cell cycle via STAT3, we investigated the impact of active CFL1 and an in-silico STAT3 knockout on the dynamic network behavior by further simulations ( Figure S8). Although this simulation shows active CFL1 and indicates migratory potential of the cells (active F-actin new ), the in-silico knockout of STAT3 inhibits proliferation (inactive cyclins). Contrary to CFL1 knockout simulation, however, in-silico STAT3 knockout was able to induce apoptosis. Based on these observations, we concluded that CFL1 influences the cell cycle via STAT3. In order to support this theory, we checked protein levels of STAT3, MYC, and CCND1 in CFL1 knockdown and control cells by western blots (Figure 4d and Figure S9). Thereby we observed that both total STAT3 as well as active STAT3 protein levels decreased after CFL1 silencing. Similar results were found for MYC and CCND1 thus strongly supporting model simulations. Mitochondrial CFL1 and Its Downstream Targets Influence Apoptosis Regulation Pancreatic cancer is characterized by a high degree of apoptosis resistance. One reason for this is might be that pancreatic cancer cells require death-receptor signals as well as the mitochondrial enhancing signal to induce apoptosis [119,120]. 
Although our simulation indicates activation of cytochrome C (CYCS) and thus its release into the cytoplasm in the transition towards cancer (Figure 3b, time step 9), there is no induction of apoptosis, as evidenced by inactive caspases. The simulation shows active STAT3 and AKT shortly before CYCS is activated. Both are known to influence apoptosis by inducing expression of anti-apoptotic factors or phosphorylation of caspases [119,120]. Based on these findings, we assume that apoptosis of pancreatic cancer cells is inhibited by unbalanced expression of anti-apoptotic proteins and the activity of AKT. To support this model-based hypothesis, we studied the expression of BCL2L1 in human gene expression datasets (Figure 4e). This anti-apoptotic protein is known to be regulated downstream of STAT3. We observed significant overexpression of BCL2L1 in pancreatic tumor tissues. Besides, binarization of the expression data classified BCL2L1 as active (Figure 4e). Contrary to model analyses with active CFL1, AKT and STAT3 are both inactive in CFL1 knockout simulations (confirmed for STAT3 by molecular data, see Figure 4d), and CYCS remains inactive throughout all time steps (Figure 3c). This simulation outcome points to an additional role of CFL1 in apoptosis induction. Interestingly, we found no difference in BAX expression levels between samples from pancreatic tumors or normal controls (Figure 4f). Independent studies describe a translocation of activated CFL1 to the mitochondrion after apoptosis induction. In this context, CFL1 acts as a carrier for the pro-apoptotic BAX protein [122,126]. Combining our findings with previously described CFL1-dependent mechanisms, we postulate that although inactivation of AKT and STAT3 in response to CFL1 knockdown would normally lead to CYCS release and subsequent activation of caspases in pancreatic cancer cells, BAX activation is prevented by the lack of availability of unphosphorylated CFL1, thus counteracting apoptosis induction in this case. Systematic Perturbation Screening Identified Targets to Induce Apoptosis in the Model Even though our mechanistic model necessarily represents an oversimplification of complex cellular networks, we could show that it is able to reproduce the behavior of CFL1 in pancreatic cancer. Consequently, we tested if we could apply it to screen for therapeutic targets that may trigger the induction of apoptosis in pancreatic cancer. Based on our model, this would be the case if caspases become active. Manually screening for promising intervention targets in larger networks is time-consuming, if not even impossible when considering combinatorial approaches [42]. Based on the presented CFL1 model, we had to test 2048 possible combinations to screen for perturbation of at least two proteins [42]. Thus, we used the Java-based framework ViSiBooL, which uses SAT solvers for fast exhaustive attractor search, to screen systematically for promising intervention targets. A single-target intervention screening based on the established model identified three proteins (CD44, STAT3, and TWIST1) which were predicted to induce apoptosis to 100% (Figure 5 and Figure S10). Besides, we searched for combinations of targets to induce apoptosis in the model. Here, the model-based perturbation screening proposed a list of 14 different combinations of interventions (Table S1). Please note that one limitation of the applied SAT algorithm is that it does not return state transitions.
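A systematic screen of this kind can be pictured as a brute-force loop over knockout combinations, each checked for whether the resulting attractor activates a caspase readout node. The sketch below is only an illustrative stand-in for the SAT-based exhaustive search in ViSiBooL; it reuses the toy simulate() helper and the hypothetical node names from the earlier sketch, and the added CASP3 rule is a placeholder, not the paper's logic.

```python
from itertools import combinations

def screen_knockouts(rules, initial, readout, max_size=2):
    """Brute-force in-silico perturbation screen: try every knockout set of up
    to max_size nodes and report those whose attractor activates `readout`."""
    hits = []
    nodes = list(rules)
    for size in range(1, max_size + 1):
        for combo in combinations(nodes, size):
            _, attractor = simulate(rules, initial, knockouts=combo)
            if attractor.get(readout, False):      # e.g. readout = "CASP3"
                hits.append(combo)
    return hits

# Hypothetical usage, extending the toy network by a caspase node whose rule
# loosely mirrors the mechanism discussed above (CFL1 needed, STAT3 inhibitory):
rules["CASP3"] = lambda s: s["CFL1"] and not s["STAT3"]   # placeholder rule
init = {"KRAS": True, "CFL1": False, "STAT3": False, "CASP3": False}
print(screen_knockouts(rules, init, readout="CASP3"))
```

Note that, like the SAT-based attractor search, such a brute-force attractor check reports only the long-term outcome of each perturbation and does not by itself expose the intermediate state transitions.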
For this purpose, further detailed simulations with the previously suggested targets were performed. By taking into account the biological importance of the individual proteins and their dynamic impact on the final phenotype, we finally identified AURKA and PAK1 as further intervention candidates. According to the intervention screening, both induce apoptosis in combination with active CFL1 or active actin (which are both present in pancreatic cancer). Furthermore, simulations of a single intervention of AURKA or PAK1 lead in >99% of cases to attractors representing apoptosis (Figure 5 and Figure S10). Figure 5. In-silico screening for therapeutic targets. Automated model perturbation screening identified a list of proteins that might induce apoptosis. Displayed is the long-term behavior of the model after introducing the model-suggested interventions. An in-silico knockout of CD44, STAT3, or TWIST1 induces apoptosis. Similar results were obtained for AURKA or PAK1 knockouts. These knockouts mainly induce apoptosis and, in a minority of cases, cell cycle arrest combined with inhibited migration. Discussion Various studies in different types of cancers describe a correlation of CFL1 expression with an aggressive phenotype and worse prognosis for patients. High-throughput screenings of pancreatic cancers already suggested CFL1 as a biomarker [7][8][9][10]. Similar to Satoh et al. [10], we could show that CFL1 expression is specifically increased in pancreatic cancer but not in chronic pancreatitis (Figure 1) and that silencing of CFL1 is associated with a reduction of migration (Figure 2). However, our in-depth characterization of CFL1 in pancreatic cancer further supports a central and much more complex role of CFL1, including regulatory functions in proliferation and apoptosis inhibition. Although several of our applied pancreatic cancer cell lines were originally derived from liver metastases of pancreatic tumors, this does not invalidate their use as in vitro models of PDAC. In this perspective, a number of studies have established that metastases share the phenotypic traits of the primary tumor from which they derive [141,142]. Our selected cell lines represent a spectrum of different grades of differentiation and invasiveness of PDACs, thereby avoiding falsely identifying effects which are in reality artefacts of a single cell line. Although an association of CFL1 overexpression with cancer progression has previously been established, the molecular interactions regulating its behavior in cancer are far from understood. Here, we used a mechanistic model to unravel molecular tumor-promoting regulations of CFL1 in pancreatic cancer. Systems biology is an interdisciplinary approach that studies signaling crosstalk holistically instead of studying small parts or single interactions within a signaling cascade. Thus, to the best of our knowledge, our model is the first that captures holistically the regulation of CFL1 and its impact on cancer progression. Since existing knowledge about regulatory interactions in biology is mostly qualitative and kinetic parameters are often not available, gene regulatory network models are an appropriate tool to initially uncover pathway regulation and their crosstalk. Despite their simplicity, they are able to reproduce complex behavior and help to guide biological research. Likewise, the CFL1 model is capable of reproducing the previously observed functional and molecular events in pancreatic cancer cells with and without RNAi-mediated knockdown of CFL1 expression. Furthermore, model-based assumptions on regulations could be corroborated by additional in vitro experiments. In biology, it is well known that several regulatory interactions are cell type specific. For instance, for some cell entities, death receptor stimulation is sufficient to induce apoptosis. In contrast, pancreatic cancer cells belong to a group of cells that require an additional mitochondrial enhancing signal to induce apoptosis ("Type 2 cells") [119]. This may explain why CFL1 knockdown already triggers apoptosis in other cancers [45] but not in pancreatic cancer (Figure 2a). Within our model, the lack of apoptosis induction is explained by a dependency of BAX on the availability of CFL1 for efficient translocation to the mitochondria. While CFL1 knockdown thus generates an initial pro-apoptotic stimulus, it simultaneously interrupts the apoptotic cascade on the level of BAX mitochondrial translocation. The regulation of cell cycle progression by CFL1 shows similar cell-specific aspects. While Tsai et al. [140] suggested regulation of proliferation by attenuation of cell cycle inhibitors in human non-small cell lung cancer cells, this regulation can be excluded for pancreatic cancer cells. We found neither a change in the protein levels of the prominent cell cycle inhibitors p21 and p27 nor a change in their localization after CFL1 depletion (Figure 4c). Conversely, our simulations supported the conclusion of Wu et al. [45] that the effect of CFL1 on the cell cycle is mediated by STAT3. An obvious advantage of having access to mathematical models that faithfully reproduce complex molecular interactions is the possibility to simulate pharmacological inhibition of single targets or combinations of targets within the network at practically no cost. Since the regulation of apoptosis is prominently featured in our model, this was an obvious choice as a "functional readout" for screening for potential intervention targets. The model simulations proposed the proteins AURKA, CD44, PAK1, STAT3, and TWIST1 as promising therapeutic targets for pancreatic cancer. Interestingly, there are already ongoing clinical trials with several AURKA (NCT00249301, NCT01924260) and STAT3 (NCT02983578, NCT03382340) inhibitors in pancreatic cancer patients (https://clinicaltrials.gov (accessed on 5 January 2021)), further supporting the relevance of the model's conclusions. However, while ongoing clinical trials are performed with several CD44 inhibitors (e.g., NCT02046928, NCT03078400), none is applied to pancreatic cancer patients. In contrast, no compounds are available yet which are suitable for PAK1 or TWIST1 inhibition in humans [87]. The most promising compound for PAK1 inhibition, PF-3758309, failed due to its low oral bioavailability in humans, while other inhibitors like IPA-3 or G-555 reveal either cellular toxicity or cardiovascular toxicity, respectively [63,87]. The same is true for the first TWIST1 inhibitor. Recently, Yochum et al.
[143] published harmine as a TWIST1 inhibitor. However, while they did not observe toxicity in their in vivo model, harmine is associated with neurotoxicity in humans [144]. According to our model, CD44 may be a particularly attractive novel target in pancreatic cancer. This is further supported by in vitro and in vivo preclinical studies showing decreased migration and growth in pancreatic cancer cells after CD44 knockdown [145][146][147][148]. In this perspective, it should be highlighted that some of these experiments were performed in Panc-1 cell lines that have a high CFL1 expression (Figure S2). Moreover, it could be shown that a decrease in CD44 levels leads to a reduction in the activity of STAT3 and AKT [118,146], which is also replicated in our attractor (Figure S10). The simultaneous loss of AKT and STAT3 activity may prove particularly effective because both pathways are described to contribute to chemotherapy resistance [149] (as also described for CD44 [146,150]). However, the efficacy of PAK1, TWIST1, or CD44 inhibition for pancreatic cancer treatment will have to be determined in further pre-clinical and clinical studies. Taken together, our results provide compelling evidence for an important, multifaceted pro-oncogenic role of CFL1 in pancreatic cancer cells in vitro and in vivo. Conclusions We present a Boolean network model which accurately reflects functional and molecular observations of pancreatic cancer with high cofilin-1 (CFL1) expression. This mechanistic model is able to predict the behavior of CFL1 and its effect on downstream targets in pancreatic cancer cells. Analyses of the dynamic behavior allowed us to hypothesize about molecular mechanisms sustaining CFL1 overexpression and its impact on the cell cycle, invasion, and apoptosis. Moreover, the model allows simulating pharmacological interventions in order to identify potential novel drug targets. Thereby, we identified CD44 as a promising drug target for pancreatic cancer patients with high CFL1 expression. Conflicts of Interest: The authors declare no conflict of interest.
9,758
2021-02-01T00:00:00.000
[ "Medicine", "Biology" ]
Evaluation of New PCM/PV Configurations for Electrical Energy Efficiency Improvement through Thermal Management of PV Systems Photovoltaic modules during sunny days can reach temperatures 35 °C above the ambient temperature, which strongly influences their performance and electrical efficiency as power losses can be up to −0.65%/°C. To minimize and control the PV panel temperature, the scientific community has proposed different strategies and innovative approaches, one of them through passive cooling with phase change materials (PCM). However, further investigation, including the effects of geometric shape, insulation, phase change temperature, ambient temperature, and solar radiation on the PV module power output and efficiency, needs further optimization and research. Therefore, the current work aims to investigate several system configurations and different PCMs (RT42, RT31, and RT25) and compare the system with and without insulation through computational fluid dynamic (CFD) tools. The final goal is to optimise and control the temperature of PV modules and evaluate their system efficiency and energy generation. The results showed that compared with a rectangular shape of the PCM container, the trapezoidal one exhibits a considerably better cooling performance with a negligible variation of the PV temperature, even when the melting temperature of the PCM was lower than the average ambient temperature. Moreover, the study showed that having insulation in the PCM container increases the amount of PCM needed, compared with the no insulation case, and the increased amount depends on the PCM type. The newly proposed PV/PCM system configuration shows an efficiency and power generation enhancement of 17% and 14.6%, respectively, at peak times. Introduction Solar PV, together with wind energy, is fast becoming a mainstream and competitive source of power production. Although accounting for only 4.5% of total electricity generation in 2015, they are expected to represent 58% of total electricity production by 2050 [1]. Electricity generation from solar radiation is achieved through photovoltaic (PV) cells or concentrated solar power plants (CSP). This solar radiation can be used for electricity generation or heat production (space heating, hot water supply). PV cells absorb 80% of the incident solar radiation and, depending on the PV module material, a small part of this solar radiation (only 15 to 20%) is converted into electrical energy while the remaining part is converted into heat [2]. Manufacturers claim that the available photovoltaic modules have an efficiency from 6 to 16% [3]. However, this claimed efficiency is measured at 25 °C, and they have not considered the PV module temperature rise during their working conditions. The overheating temperature of the module is due mainly to high solar radiation and high ambient temperatures [4]. PV modules during sunny days can reach temperatures of 35 °C above ambient temperature. This temperature increment strongly influences the performance and electrical efficiency of the PV system, which can lead to power losses from 0.40%/°C at standard test conditions [5] to 0.65%/°C [6], and increase the ageing of the module. Typical efficiencies for different PV module materials can be found in Table 1. Table 1. Efficiencies of PV modules vs. PV material [7,8] (values in %, at 25 °C and a spectrum of 1000 W/m²): Amorphous/microcrystalline Si, 11.7-9.9; Dye-sensitized, 12.3-8.5; Organic, 11.3-9.2.
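To get a feel for the loss coefficients quoted above, a one-line derating estimate is useful. The sketch below only applies the stated percentage loss per degree; the module rating and operating temperature are illustrative placeholders, not data from the paper.

```python
# Back-of-envelope PV power derating using the loss coefficients quoted in
# the introduction (0.40-0.65 %/°C above 25 °C). Numbers are illustrative.
def power_at_temperature(p_stc_w, t_module_c, loss_pct_per_c=0.65, t_ref_c=25.0):
    """Linear derating of PV power output with module temperature."""
    return p_stc_w * (1.0 - loss_pct_per_c / 100.0 * (t_module_c - t_ref_c))

# A 300 W (STC) module running 35 °C above a 30 °C ambient:
print(power_at_temperature(300.0, t_module_c=30.0 + 35.0))   # -> 222.0 W, ~26% loss
```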
To inhibit the temperature rise in PV modules, several authors have proposed different cooling techniques using air (natural or forced circulation), water (water cooling systems or heat pipes), thermoelectric systems, or phase change materials; some of these methods are passive while others are active. PV modules can also be combined with solar thermal (PV/T) to deliver heat and electricity in a single module. Some studies have shown an increase in electrical efficiency by 5% [9]. PV/T technology is mainly used in domestic and industrial applications for heating air or water as well as electricity generation [10]. In those cases, water or air are mainly used as the heat transfer fluid. Air-type PV/T collectors are used for drying, space heating, and ventilation, whereas water types are used to remove the heat from the PV module. Water types are more effective than air types because the fluid temperature variation is narrower. Some researchers have already pointed out higher thermal efficiencies, 50 to 70% for water heating and 17-51% for air heating [11]. These types of PV/T collectors are mainly used in thermal/heat pump systems, water desalination, solar cooling, or solar greenhouses [2]. However, since 2010 the study of PCMs and nanofluids to increase PV module efficiency has increased [12]. PCMs are materials that store thermal energy through a phase change, the solid/liquid phase change being the most used. These materials are used for thermal energy storage and also thermal management applications, as they can charge/discharge at an almost constant temperature and have a high energy density (small footprint) [13], despite suffering from poor thermal conductivity. In recent years, innovative passive cooling methods have been presented; compared to PV/T, they do not require additional power consumption, whereas PV/T systems work at a higher operating temperature to supply useful heat and are more complex systems with a higher initial investment [14]. Abd-Elhady et al. [15] proposed drilling through-holes in the PV module to allow the hot layer of air under the module to rise, creating natural flows that cool down the module. The temperature of the PV module decreased with the increased number of through-holes until an optimum number of holes was reached. Increasing the through-hole diameter increased the cooling effect on the PV modules up to a maximum, above which less cooling occurred. Also, PCM has been proposed as a potential solution, although further cost-effectiveness studies need to be conducted [16][17][18]. Several researchers have proposed the use of thin PCM layers attached to the PV modules, similar to the research carried out by Stropnik et al. [17], which achieved an increase of the electrical power by 9.2% under experimental conditions. Su et al. [19] introduced a PCM layer to an air-cooled system, improving its efficiency by 10.7% compared with the PV module with no PCM. Also, other researchers proposed the use of microencapsulated phase change material (MEPCM) [20]. A MEPCM layer attached to a water-surface PV module resulted in a 2.1% relative efficiency improvement compared with the one without MEPCM [21]. Hasan et al. [19] used PCMs with different melting temperatures to evaluate the performance of each PCM in four different systems. They found that the salt hydrate PCM (CaCl2) achieved the highest temperature reduction in most of the insulations. The results showed that the thermal conductivity of the PCM container had a strong impact on the performance of low-thermal-conductivity PCMs.
Most PCMs have low thermal conductivity, which strongly affects their heat transfer rate during the charging/discharging process and limits their application, as several researchers have stated in their work. Different strategies to overcome this challenge are currently under study. Huang et al. [22] studied the thermal behavior of PV modules with and without PCM experimentally and by simulation. The system consists of a vertical southeast-oriented PV/PCM system using real ambient temperature and insolation conditions in South East England. The improvement in the thermal performance achieved using metal fins in the PCM container was significant, as they enabled a more uniform temperature distribution within the PV/PCM module. The PCM and fins delayed the temperature increment, maintaining the operating temperature of the PV cell at a much lower level for extended hours. It was observed that after the PCM melting process, the rate of PV heat extraction decreased, which produced a rapid increase in the module temperature. Khanna et al. [16] focused on optimizing a finned PV/PCM module to achieve the required cooling under different solar radiation; different lengths, thicknesses, and spacings between fins were used. An alternative to increase the thermal conductivity is the use of metallic foams, which was evaluated by Klemm et al. [23]. According to the simulation results, a storage unit consisting of a PCM-filled metallic fibre structure represents an adequate means for passive thermal management of PV modules in given ambient conditions. The system was able to decrease the temperature of the PCM storage module by 20 K. However, the configuration has to be validated experimentally under real conditions, and the volume reduction has to be considered. Other researchers used a PV/PCM system with form-stable paraffin/expanded graphite (EG) to improve the uniformity of the temperature distribution of the PV modules and thus improve their power output [18]. The PCM/EG helped to control the temperature and the temperature distribution of the PV modules. The output power achieved was above that of the conventional PV module for 230 min, with a maximum increment of 11.50% and an average increment of 7.28% under the experimental conditions. Others, such as Kumar et al. [24], used nanoPCMs to increase the efficiency. The authors achieved a PV panel electrical performance enhancement of up to 4.3%. The prototype studied consisted of a combined PCM mixture of calcium carbonate, copper nanoparticles, and SiC in a ratio of 7:2:1. Among the cooling techniques mentioned above, PCMs are the most promising and effective cooling technique for photovoltaics due to their higher energy density per unit volume [25,26]. The use of PCM for PV module cooling shows higher heat transfer rates than both forced air circulation and forced water circulation, a higher heat absorption due to the latent heat, and an isothermal heat removal [27]. Moreover, there is no electricity consumption, no noise, and no maintenance cost. However, PCM has a higher cost than natural and forced air circulation; some PCMs are toxic, have fire safety issues, are strongly corrosive, and are considered disposable after their life cycle is complete. The research regarding this technology needs to move forward, offer solutions to unresolved problems, and understand the potential barriers to practical application.
Additionally, the geographic location of the PV modules, no matter the system, has a direct impact on the intensity of solar radiation and wind speed, together with humidity conditions, dust in the air and/or pollution, factors that determine the PV module performance and output fluctuation [12]. Although the reported studies showed a considerable enhancement of the PV module's performance, the experiments were mainly conducted in lab conditions, where the solar radiation and ambient temperature were fixed at values of 1000 W/m² and 25 °C, respectively. These tests make it difficult to predict the actual amount of PCM needed for real applications. Therefore, systems must be investigated at a designated location [28]. Studies have shown that common assumptions about the UK, such as not receiving enough sunshine and PV not being viable to install, were wrong; some findings have shown that a significant proportion of a house's electrical needs, more than 40% on average, could be covered [29]. Another aspect that is sometimes overlooked is the container dimensions. Typically, a rectangular-shaped PCM container is considered both in modeling and experimental systems. Novel PCM container shapes, different from the usual rectangular solid container filled with the phase change material at the backside of the PV panel, should be considered. Nizetic et al. [30] proposed a new configuration, where several small containers filled with the PCM material were attached to the PV panel. The amount of PCM material was approximately 47% less and the amount of container material, aluminium, was 36% less when compared with a full PCM container. Both configurations performed better than the PV panel without PCM. Although there were periods where the full PCM configuration had the highest power output, the overall performance over long periods of time was better for the small-container configuration. The authors attribute that outcome to the more effective thermal management of the small containers, owing to the less effective heat transfer of the full PCM container strategy. In this study, passive cooling of PV systems using PCMs was investigated, where three different PCM candidates were selected (RT42, RT31, and RT25) based on the average ambient temperature, and a polycrystalline PV module was used. The optimization of the PV module considered different parameters such as ambient temperature, daily solar radiation, PCM type and its melting temperature, and PCM container shape and size. The parameters were assessed and compared with the system without PCM. This work aims not only to assess the performance of the novel PV/PCM system but also to determine the optimum PCM container parameters (shape/geometry, depth, length, and insulation), the PCM type, and the combined effect on the PV module surface temperature, efficiency and power output using real solar radiation and ambient temperature data. Computational Fluid Dynamics CFD was implemented using Ansys Fluent V18.2 [31], and the dynamic heat transfer, fluid flow, melting/solidification, and other PV/PCM system parameters were studied. System Design A passive cooling PV module system was considered in this study, consisting of a PCM container attached to the bottom of a polycrystalline PV module, as shown in Figure 1. The PV module assembly is structured in five layers, and the physical properties of each layer are presented in Table 2 [32].
The PCM container material was made of 4 mm thick aluminium, with dimensions of 1000 mm in length and a variable depth from 20 to 120 mm. A convective heat boundary was applied on the top surface of the PV module, whereas two different boundary conditions were applied on the bottom and the side walls (adiabatic wall and convective heat) to determine the effect of insulation on the dynamics of the system. The PCM density change during the melting process leads to accumulated heat at the topmost part of the PCM container, which causes a nonuniform distribution of the PV module temperature. This difference in temperature across different rows of cells leads to mismatch losses in the PV module. Each cell produces a different power based on its temperature and, since cells in the PV module are connected in series, the cell subjected to the highest temperature will produce the lowest power. According to Equation (10), the cell current increases with the increase in temperature, so the cooler cells will produce a lesser current. As, in a series connection, the lowest-current-producing cell governs the current of the whole string of cells in the module, the higher current generated by the other cells will be dissipated as heat across the diode, which is parallel to the light source in the single-diode model of the solar cell [33]. To address this issue and achieve a uniform temperature distribution, four different PCM container geometries were considered, as shown in Figure 2. To validate the model, measured solar radiation and ambient temperature data from a study conducted by Savvakis et al. [34] were used as input, and the predicted output power was compared to the measured value. The PV module orientation was considered as follows: 30° from horizontal with an azimuth angle of 0°, which were the experimental work conditions; the average ambient temperature for the selected day was approximately 30 °C. Therefore, three different PCMs with higher, similar, and lower melting temperatures (RT42, RT31, and RT25) were selected to determine the relationship between the location's average ambient temperature and the PCM phase change temperature. The physical properties of these PCMs are shown in Table 3. The PCM density in the model was set as a function of the PCM temperature, while the specific heat capacity was assumed to be constant. The data from the PCM supplier show that approximately 90% of the phase change occurs within a temperature range of 5 °C, as shown in Figure 3 [35]. Thus, a narrow temperature range has been used for the PCM simulation; this was partially done to reduce the computational time [32,34,36,37]. Table 3. Properties of the studied PCMs (RT42, RT31, and RT25). PV/PCM System Model The fraction of the incident solar radiation that passes through the top glass layer and is absorbed by the PV cells can be found in Equation (1), which considers the reflectivity of the PV module and the solar radiation losses [34], where (τα)_eff is the effective glass layer transmissivity and absorptivity of the PV cell. A small portion of the absorbed solar radiation can be converted into electricity, and the other, major part will be converted into heat; this heat is expressed in Equation (2), where η_c is the cell conversion efficiency. A computational fluid dynamic (CFD) tool was used to predict the operating temperature of the PV module considering the experimental ambient temperature and solar radiation of the selected day. The main objective is to assess its performance.
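The module performance is then evaluated from this predicted operating temperature via the temperature-dependent efficiency and power relations (Equations (8)-(13) in the following paragraphs, whose bodies did not survive text extraction). The sketch below uses the commonly used linear form, which is consistent with the symbol definitions given there (efficiency dropping to zero at T0 = 270 °C); the reference efficiency and the operating point are placeholders, not the paper's module data.

```python
# Linear efficiency-temperature correlation of the kind referenced as
# Equations (8)-(9): eta_c = eta_ref * (1 - beta_ref * (T_c - T_ref)),
# with beta_ref = 1 / (T0 - T_ref) and T0 = 270 °C for crystalline silicon.
T_REF_C = 25.0      # reference cell temperature (°C)
T0_C = 270.0        # temperature at which the efficiency drops to zero (°C)
ETA_REF = 0.155     # module efficiency at standard conditions (assumed)

BETA_REF = 1.0 / (T0_C - T_REF_C)   # temperature coefficient (1/°C)

def efficiency(t_cell_c):
    return ETA_REF * (1.0 - BETA_REF * (t_cell_c - T_REF_C))

def power_output_w(t_cell_c, irradiance_w_m2, area_m2=1.0):
    """Module electrical power at the operating temperature."""
    return efficiency(t_cell_c) * irradiance_w_m2 * area_m2

print(efficiency(65.0), power_output_w(65.0, 800.0))   # ~0.13 and ~104 W per m^2
```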
The assumptions made to reduce the complexity of the problem and the computational time are the following: 1. The thermal resistance between the PV layers is negligible. 2. There is a uniform heat flux distribution on the PV surface. 3. Heat leaks/gains through the insulation are negligible. Ansys Fluent V18.2 software [31] was used in the current study, and a melting and solidification model was chosen to simulate the melting/solidification processes of the different PCMs [31,38,39]. The model can solve thermal and fluid flow problems involving melting/solidification at a specific temperature, such as pure substances, or over a wide temperature range, such as mixtures or alloys. The enthalpy-porosity formulation method was used in Ansys Fluent to track the liquid/solid front explicitly. The liquid/solid interface is denoted by a mushy zone and treated as a porous zone with a porosity equal to the fluid liquid fraction, which changes from 0 to 1 during the melting process [31,38,39]. To solve the energy equation, the model uses Equation (3): ∂(ρH)/∂t + ∇·(ρvH) = ∇·(k∇T) + S, where ρ is the fluid density, v is the fluid velocity, S is the source term, and H is the material's enthalpy, which is the summation of the sensible enthalpy (h) and the latent heat (∆H). Enthalpies are written in the manner of Equation (4): H = h + ∆H, with h = h_ref + ∫ C_p dT (integrated from T_ref to T), where h_ref is the reference enthalpy, T_ref is the reference temperature, and C_p is the specific heat capacity at constant pressure. ∆H represents the latent heat component, expressed through the liquid fraction, which varies from 0 (for the solid phase) at the initial state of the material to 1 (for the liquid phase) at the end of the phase change. Therefore, ∆H of a material with latent heat L during the melting process (mushy zone) can be written as ∆H = βL, with β the liquid fraction. More details on the melting and solidification model can be found in [31]. Regarding the boundary conditions, a convective heat transfer coefficient of 10 W/m²·K and radiation heat were applied on the top wall surface of the PV module. The same boundary conditions with a convective heat transfer coefficient of 7 W/m²·K were applied on the side and bottom walls in cases without insulation, and adiabatic walls were used in cases with insulation. The model used the measured ambient temperature to predict the heat transfer rate through convection and radiation. Regarding the convective heat transfer coefficients, these values are not constant in practice; they are functions of many parameters such as ambient temperature, wind speed, and even the cleanliness of the module surfaces; however, these values were selected because they demonstrated good agreement with the experiment. Regarding radiation, it is mainly due to the temperature difference; therefore, considering the measured ambient temperature will lead to an acceptable prediction of the radiation heat. The performance of the PV module was assessed in terms of conversion efficiency (η_c), short circuit current (I_sc), open-circuit voltage (V_oc), and power output (P) based on the predicted operating temperature, as defined in Equations (8)-(13) [40], where η_Tref is the cell/module electrical efficiency at standard operating conditions (SOC) and β_ref is the temperature coefficient (TC), which is defined in Equation (9) [40], where T_0 is the PV temperature at which the module electrical efficiency drops to zero; this temperature is equal to 270 °C for crystalline silicon cells [40]. The PV module current and voltage at the operating temperature were calculated using Equations (10) and (11), respectively [40].
Here, α, β_c and δ are the current, voltage, and solar radiation correction coefficients for the operating temperature. G_T is the solar irradiance on the PV surface (W/m²). G_T,SRC is the solar irradiance at standard reporting conditions. The maximum power was calculated using Equation (12). The module power output at the operating temperature is defined by the following equation, where A is the module surface area. The power losses generated due to the nonuniformity of the temperature distribution (mismatch loss fraction, L_MLF) on the PV module were calculated using Equation (1) [33], where P_mp is the PV module output power, P_i is the cell power generation, n is the number of cells in the module, n_r is the number of cells in one row, P_row is the power output of one row, V_row is the open voltage of one row, and I_lowest is the lowest current in the module. Model Validation Savvakis and Tsoutsos [34] experimentally tested the PV/PCM system and compared its performance with a conventional PV system. Their measured PV surface temperature with and without the PCM cooling system was used to validate the developed CFD modelling. The measured solar radiation and ambient temperature were used in the CFD modelling. The measured PV temperature was compared against the predicted values, and the results are shown in Figure 4. For both cases (with and without PCM attached), the predicted PV temperatures demonstrated good agreement with the experimental work, with R-squared values of 0.78 and 0.94, respectively, and a maximum temperature difference of only ±2 °C. Results and Discussion The current work investigates the potential of using PCMs to enhance the performance of PV modules, where a PCM container is attached to the bottom surface of the PV module. The study aims to optimise the system variables (PCM container shape, height/depth and length, insulation, and PCM type) so that the container holds a sufficient amount of PCM to meet the cooling load, by studying their effect on the operating temperature of the PV module. The PV module temperature was predicted using CFD modelling with Ansys Fluent V18.2 software to assess its performance. A published experimental work was used to validate the developed model. The dynamic power output from the PV module and its conversion efficiency with respect to the operating temperature were calculated using well-known empirical equations. Three different PCMs (RT42, RT31 and RT25) were chosen based on the average ambient temperature (~30 °C) of the studied case. The melting temperatures of these PCMs were approximately 5 °C less than, equal to, and 10 °C higher than the average ambient temperature, to determine the effect of the selected phase change temperature on the container size, insulation, PV temperature, conversion efficiency, and power output. Figure 5a,b show the PV temperature of the RT42 PV/PCM system with different container heights, with and without insulation, versus the daytime. In both cases, the temperature of the PV-only system is included for comparison purposes. The results showed that insufficient container heights (20, 40, 50, and 60 mm) led to an increase in the PV/PCM module temperature, even higher than that of the conventional system (PV-only) at certain times. The increase was higher with insulation, as shown in Figure 5b; this is because, once the PCM has completely melted, its temperature keeps increasing, becomes higher than the ambient temperature, and the PCM releases its heat back to the module.
Thus, the conventional PV system showed a lower temperature because the heat could be easily dissipated from the back of the PV module. Without insulation, the optimum tank height was 70 mm, while it was 80 mm with insulation; this means that having insulation in the PCM container when RT42 is used increases the required PCM amount by 14%, in addition to its cost. RT31 and RT25 showed a similar trend to RT42, with and without insulation, as shown in Figures 6 and 7. However, the average PV module temperature using RT31 and RT25 with the optimal PCM height was around 37 °C and 32 °C, respectively, which was lower than that obtained using RT42 (43 °C). Thus, RT31 and RT25 provide a significant reduction in the PV temperature at peak times, by 23 °C and 28 °C, respectively, compared with the PV-only system, whereas the reduction was 17 °C when RT42 was used. The optimum tank height for RT31 was 110 mm when no insulation was used and 120 mm with insulation. When RT25 was used, the optimal heights were 120 and 125 mm, respectively. Figure 8 shows the comparison of RT42, RT31, and RT25 in terms of the PCM container size. The figure demonstrates that when RT31 and RT25 were used, the required amount of PCM was 56% and 72% higher than that of RT42. Regarding the effect of the tank shape on the PV module temperature, four different tank geometries (cases) were considered, as shown in Figure 2. In all these cases, no insulation was used, and RT42 was selected as the PCM material. When the PV module temperature becomes higher than the melting temperature of the PCM, the melting process starts, and the density change occurs. This density change forces the liquid phase to move to the top side of the PCM container, leading to a nonuniform temperature distribution in the PV module. As mentioned above, this temperature gradient is highly dependent on the tank depth, shape, and length. In the first case (Case 1) of the four configurations, the container had a rectangular-shaped cross-section with a height value of 70 mm, while the second, third, and fourth cases (Case 2, Case 3 and Case 4) had container cross-sections shaped like trapezoids, with a variation in height from bottom to top. The bottom heights of Cases 2, 3, and 4 were 50, 40 and 30 mm, whereas 90, 100 and 110 mm were the heights of the top side, respectively. In all cases, the tank length was fixed at 1000 mm. For the first configuration (Case 1), Figure 9 shows the PV surface temperature gradient along its length at different day times. After 14:50, most of the PCM had melted, and the top part's temperature started increasing and reached its peak at 15:40, with a difference higher than 4 °C. Figure 9b,c show the temperature and the mass fraction contours of the PV/PCM system at 15:40. Figures 10a, 11a and 12a show the PV temperature gradients along the trapezoid-shaped containers of Cases 2, 3, and 4, respectively. The PV surface temperature is almost constant in Cases 2 and 3, with a variation of less than 0.5 °C. However, Case 4 shows a considerable reduction in the surface temperature at the bottom side of the container. This container part was affected by the ambient temperature due to the thinness of the PCM layer. Figures 10b, 11b and 12b show the mass fraction contours of the PV/PCM system at 15:40. It can be seen that the solid part of the PCM in Cases 3 and 4 did not move to the bottom side of the container due to its high viscosity and the low container slope.
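A rough way to see why the optimum container depths land in this range is a simple energy balance: the PCM layer must store the day's rejected heat, mostly as latent heat. The sketch below is only a back-of-envelope estimate with placeholder numbers for the daily heat load and PCM properties; it is not the CFD-derived optimum reported above.

```python
# Rough energy-balance sizing of the PCM layer behind 1 m^2 of module:
# latent (plus some sensible) storage must absorb the heat rejected over
# the day. All numbers below are illustrative placeholders.
Q_DAY = 12.0e6           # heat to absorb per m^2 of module per day (J)
LATENT = 165.0e3         # latent heat of fusion of the PCM (J/kg)
CP = 2000.0              # PCM specific heat (J/kg K)
DT_SENSIBLE = 10.0       # usable sensible temperature swing (K)
RHO = 800.0              # PCM density (kg/m^3)

storage_per_kg = LATENT + CP * DT_SENSIBLE          # J stored per kg of PCM
mass_per_m2 = Q_DAY / storage_per_kg                 # kg of PCM per m^2
depth_mm = 1000.0 * mass_per_m2 / RHO                # flat-container depth

print(f"{mass_per_m2:.0f} kg/m^2 of PCM, i.e. a layer roughly {depth_mm:.0f} mm deep")
```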
Reducing the tank length leads to a lower temperature variation and vice versa. Considering both the PV module temperature and the movement of the PCM inside the container, Case 2 was the best configuration; however, this result is subject to the PV tilt angle and the ambient temperature. The PV module efficiency at the operating temperature was calculated using Equation (8). Figure 13 shows the variation of the PV module efficiency at the operating temperature during the studied daytime for both the PV/PCM and the PV-only system. Comparing the two systems, unlike the conventional system, the PV/PCM system showed no significant variation in the PV efficiency during the daytime. The lowest melting temperature PCM (RT25) showed the highest PV module efficiency. The PV/PCM systems reached an efficiency increase of 10%, 13% and 17% at 13:00 when RT42, RT31, and RT25 were used, respectively, as shown in Figure 14. This considerable enhancement of the PV/PCM system efficiency resulted in a great increase in the hourly power output, as shown in Figure 15, where the power output of the 1 m² module system is presented. Figure 16 shows the percentage enhancement of the power output of the PV/PCM system compared to the PV-only system. This enhancement reached around 9%, 11.5% and 14.6% at the maximum solar radiation when RT42, RT31, and RT25, respectively, were used. RT42 showed the lowest PV efficiency and power output enhancement compared with RT31 and RT25. However, the output power when using RT31 and RT25 as PCM showed a maximum increase of only 3% and 5.5%, respectively, compared with RT42, as shown in Figure 17. These results indicate that using a PCM with a melting temperature higher than the average ambient temperature significantly reduces the PCM amount without a significant reduction in the total power output. The rectangular PCM container shows the most inhomogeneous temperature distribution, as seen in Figure 9a. This inhomogeneous temperature distribution leads to a mismatch loss, an outcome previously mentioned in Section 2. A PV module with 72 cells was used to estimate the mismatch losses for Case 1 at the daytime hour of 15:40. The module specification is shown in Table 4, with a landscape orientation. The cell array consists of 10 cells in each row and 6 in each column. The CFD simulation results were used to feed the mathematical model to calculate the output voltage and power of each row and of the whole module. The solar radiation was assumed to be 800 W/m², and the results are shown in Table 5. The results show a mismatch loss fraction of 0.42%, which seems insignificant, but considering a large PV plant consisting of several modules, these losses will significantly contribute to reducing the power generation. Conclusions Researchers have already reported passive cooling systems for PV modules using phase change materials as a promising and effective cooling technique due to their higher energy density per unit volume and high heat transfer rates compared with air circulation. These systems do not require electricity consumption or moving parts and have a low maintenance cost. This work investigated the effects of different design parameters of PV/PCM systems, including PCM container shape, depth, length, insulation, and PCM type, on the PV module surface temperature, efficiency, and power output. Three different PCMs were selected (RT42, RT31, and RT25), and experimental hourly solar radiation and ambient temperature data available in the literature were used.
A CFD model using Ansys Fluent was developed to simulate the melting/solidification processes of the PCM and to predict the temperature variation and the dynamics of the PV/PCM system during the daytime. The results showed that: 1. The CFD model demonstrated good agreement with the experimental work found in the literature, with a maximum temperature difference of less than 2 °C. 2. Insulation in the PCM container will increase the required amount of PCM, no matter the melting temperature of the PCM. 3. For the rectangular shape (Case 1), the optimum depths/heights of the PCM container with a sufficient amount of PCM to meet the cooling load during the daytime were 70 mm, 110 mm, and 120 mm when RT42, RT31, and RT25, respectively, were used in cases without insulation. With insulation, the optimum depths/heights of the PCM container were 80 mm, 120 mm, and 125 mm, respectively. 4. PCMs with a lower melting temperature require larger amounts of PCM when there is no significant difference in the latent heat. Compared to RT42, RT31 and RT25 showed an increase in the PCM amount by 56% and 72%, respectively. 5. Regarding the PCM container geometry, trapezoid container configurations (Cases 2, 3, and 4) showed a considerably better cooling performance due to their lower variation of PV temperature. This enhances the performance of the PV systems by reducing mismatch losses. 6. For all investigated PCMs, the PV/PCM system showed a considerable enhancement of the PV module efficiency and maintained it at an almost constant level over the daytime. Compared with the PV-only system, the efficiency enhancement at the peak times reached 10%, 13% and 17% when RT42, RT31, and RT25 were used, respectively. 7. PV/PCM systems showed a considerable power output enhancement; at the solar peak time, the power output increased by 9%, 11.5% and 14.6% when RT42, RT31, and RT25 were used, respectively, compared with the PV-only system. 8. Although RT42 showed the lowest efficiency and power enhancement, it showed a significant reduction in the amount of PCM, by 36% and 14.6% compared with RT31 and RT25, respectively. Moreover, the power output from the RT31 and RT25 cases showed a maximum increase of only 3% and 5.5%, respectively, compared with RT42, indicating that using a PCM with a melting temperature higher than the average ambient temperature will lead to a cost-effective system without a significant reduction in the power output.
7,630
2021-07-08T00:00:00.000
[ "Engineering" ]
Hesse Pencils and 3-Torsion Structures This paper intends to focus on the universal property of this Hesse pencil and of its twists. The main goal is to do this as explicit and elementary as possible, and moreover to do it in such a way that it works in every characteristic different from three. Introduction In a paper with Noriko Yui [14], explicit equations for all elliptic modular surfaces corresponding to genus zero torsion-free subgroups of PSL_2(Z) were presented. Arguably the most famous and classical one of these surfaces is the Hesse pencil, usually described as the family of plane cubics x^3 + y^3 + z^3 + 6txyz = 0 (see, e.g., [14, Table 2] and references given there). The current paper intends to focus on the universal property of this Hesse pencil and of its twists (see Theorem 1.1). The main goal is to do this as explicit and elementary as possible, and moreover to do it in such a way that it works in every characteristic different from three. We are not aware of any earlier publication of these results in the special case of characteristic two, so although admittedly not very difficult, those appear to be new. The first author of this paper worked on the present results as a (small) part of his Ph.D. Thesis [1] and the third author did the same as part of her Bachelor's Thesis [15]. Both were supervised by the second author. Let k be a perfect field of characteristic different from three. Denote the absolute Galois group of k by G_k. Given an elliptic curve E defined over k, one obtains a Galois representation on the 3-torsion group E[3] of E. This paper describes the family of all elliptic curves that have equivalent Galois representations on E[3]. Recall that elliptic curves E and E' over k yield equivalent Galois representations on their 3-torsion if and only if E[3] and E'[3] are isomorphic as G_k-modules. To be more specific, we demand that the equivalence is symplectic: a symplectic homomorphism φ : E[3] → E'[3] is defined as in [10], as follows. If e_3(S, T) = e'_3(φ(S), φ(T)) for all S, T ∈ E[3], where e_3 and e'_3 are the Weil-pairings on the 3-torsion of E and E' respectively, then φ is called a symplectic homomorphism; otherwise φ is called an anti-symplectic homomorphism. Next, recall the definition of the Hessian of a polynomial. Let F ∈ k[X, Y, Z] be a homogeneous polynomial of degree n. The Hessian Hess(F) of F is the determinant of the Hessian matrix of F, that is, of the 3 × 3 matrix of second-order partial derivatives of F with respect to X, Y, Z; it is either a homogeneous polynomial of degree 3n − 6 or zero. Given a curve C = Z(F) with F ∈ k[X, Y, Z] homogeneous of degree three, the Hesse pencil of C is defined as the curve Z(tF + Hess(F)) over k(t). Recall that the discrete valuations on k(t) correspond to the points in P^1(k), where we usually write (t_0 : 1) as t_0 and (1 : 0) as ∞. We denote the reduced curve of the pencil at t_0 ∈ P^1(k) by C_{t_0}. Notice that C_∞ = C and, for t_0 ≠ ∞, C_{t_0} = Z(t_0 F + Hess(F)). In the special case that C = E is an elliptic curve given by a Weierstrass equation, we have (see Section 2) that the point O at infinity is a point on E_{t_0} for every t_0 ∈ P^1(k). If E_{t_0} is a smooth curve, then this makes it an elliptic curve with unit element O. In the case of characteristic two, the standard definition of the Hessian does not lead to a satisfactory theory. In Sections 9.2 and 9.3 a modified Hessian is introduced for this case; in fact this modification was already used by Dickson [5] in 1915. The goal of this paper is to provide an elementary proof of the following theorem: Theorem 1.1.
If E and E' are elliptic curves over k, with E given by some Weierstrass equation over k, then there exists a symplectic isomorphism E[3] → E'[3] if and only if E' appears in the Hesse pencil of E, i.e., E_{t_0} ≅_k E' for some t_0 ∈ P^1(k). We note that both Fisher's paper [6] and Kuwata's paper [10] discuss, apart from the result above (although not in characteristic two), also the case of anti-symplectic isomorphisms between the 3-torsion groups of elliptic curves. In Sections 2, 3 and 4 we show that the 3-torsion groups of an elliptic curve in Weierstrass form and of its Hesse pencil are identical not only as sets, but also have the same group structure and Weil-pairings. Using the Weierstrass form of the Hesse pencil computed in Section 5 and the relation between a linear change of coordinates and its restriction to the 3-torsion group described in Section 6, we prove in Section 7, essentially by a counting argument, that an isomorphism of the 3-torsion groups respecting the Weil-pairings is the restriction of a linear change of coordinates. The proof of the theorem is completed in Section 8. After this, we adapt the argument in order to conclude the same result in characteristic 2 (where a slightly adapted notion of Hesse pencil is required). We compare our results with existing literature in Section 10. The flex points Let C = Z(F) be a plane curve with F ∈ k[X, Y, Z] homogeneous of degree n and irreducible. A point P on C is called a flex point if there exists a line L such that the intersection number of C and L at P is at least three. Notice that in our definition P is allowed to be a singular point on C. The Hessian curve of C is defined as Hess(C) = Z(Hess(F)). Proposition 2.1. If P is a point on C and char(k) does not divide n − 1, then P is a flex point if and only if P ∈ C ∩ Hess(C). From now on we will only work with curves of degree three, so the proposition above is only usable for fields k of characteristic different from two. This is the reason why we exclude characteristic two for now; see Section 9 for the excluded case. This is a well-known and old result in the case of F = X^3 + Y^3 + Z^3; see for example [11, Section VII.1]. The 3-torsion group Let E = Z(F) be an elliptic curve with unit element O and F ∈ k[X, Y, Z] homogeneous of degree 3. Recall the following well-known fact. Proof. Let L_S and L_T be the tangent lines to E at S and T respectively. Assume that T is also a flex point. Consider the function L_S/L_T on E, which has divisor 3(S) − 3(T). From [13, Corollary III.3.5] it follows that 3S − 3T = O. Hence S − T ∈ E[3]. Assume that T is not a flex point. Now the divisor of the function L_S This result tells us that if O is a flex point on E, then the concepts of flex point and 3-torsion point coincide. In the previous section we learned that a flex point on E is also a flex point on the Hesse pencil Z(tF + Hess(F)). Hence, if we combine these statements, we obtain that E[3] is contained in the 3-torsion of the Hesse pencil. Since the characteristic of k is different from three, these sets are equal in size, thus the same. Moreover, suppose that E_{t_0} for some t_0 ∈ P^1(k) is non-singular. Provide E and E_{t_0} with a group structure by taking O as the unit element. Since the flex points of the Hesse pencil (considered as a plane cubic over k(t)) and of E_{t_0} (a cubic curve over k) are the same, and a line that intersects an elliptic curve at two flex points will also intersect the curve at a third flex point, the group structures on E[3] and E_{t_0}[3] are equal as well.
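As a concrete illustration of the Hessian construction and of the fact that the point O = (0 : 1 : 0) of a Weierstrass cubic lies on both the curve and its Hessian curve (hence is a flex, cf. Proposition 2.1), here is a small symbolic sketch. It uses sympy with generic symbols a, b; this is only a plausibility check, not part of the paper's argument.

```python
import sympy as sp

X, Y, Z, a, b = sp.symbols("X Y Z a b")

def hessian(F):
    """Determinant of the 3x3 matrix of second partial derivatives of F."""
    v = (X, Y, Z)
    return sp.expand(sp.Matrix(3, 3, lambda i, j: sp.diff(F, v[i], v[j])).det())

# Classical example: Hess(X^3 + Y^3 + Z^3) = 216*X*Y*Z, of degree 3*3 - 6 = 3,
# so t*F + Hess(F) recovers (up to rescaling t) the family x^3+y^3+z^3+6t*xyz.
print(hessian(X**3 + Y**3 + Z**3))

# A Weierstrass cubic F = Y^2*Z - X^3 - a*X*Z^2 - b*Z^3 and its Hessian both
# vanish at O = (0 : 1 : 0), so O is a flex point.
F = Y**2*Z - X**3 - a*X*Z**2 - b*Z**3
H = hessian(F)
print(F.subs({X: 0, Y: 1, Z: 0}), H.subs({X: 0, Y: 1, Z: 0}))   # both 0
```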
Recall that if the unit element O is a flex point on E, then we can find a projective linear transformation in PGL 3 (k) such that E is given by a Weierstrass equation in the new coordinates. Moreover since the characteristic of k is different from two and three, we may even assume that E : The Weil-pairing In the previous section we saw that Denote the Weil-pairing on the 3-torsion of E by e 3 and on the 3-torsion of E t 0 by e t 0 3 . An introduction to Weil-pairings can be found in [ Let s ∈ k(S, T )(t) be a local coordinate at t 0 . Choose the equations of the tangent lines such that they are also defined over k(S, T )[[s]] and are non-zero modulo s. Notice that L O , L S , L T and L −T modulo s are tangent lines to E t 0 at O, S, T and −T respectively. Follow the construction above to obtain the Weil-pairing The Weierstrass form Proposition 5.1. Let E be an elliptic curve given by the Weierstrass equation y 2 z = x 3 + axz 2 + bz 3 with a, b ∈ k. Then the Hesse pencil E can be given by Observe that the t in the proposition is equal to 8t in the previous sections. Proof . The proof boils down to computing the map A, which can be found in three steps. First map the tangent line to E at O to the line at infinity. Next scale the z-coordinate so that the coefficient in front of x 3 and y 2 z are equal up to minus sign. Finally shift the y -resp. the x-coordinate so that the xyz, yz 2 resp. the x 2 z terms vanish. This proposition shows that 6 Linear change of coordinates I Proposition 6.1. Let P i ∈ P 2 (k) for i = 1, . . . , 4 be points such that no three of them are collinear. If Q i ∈ P 2 (k) for i = 1, . . . , 4 is another such set of points, then there exists a unique This is a well-known result which is easily proved using some elementary linear algebra. Observe that an analogous result holds for two sets of n + 2 points in P n (k) such that no n + 1 of them lie on a hyperplane. Proof . Suppose that L is a line in P 2 k containing three of the points O, S, T and S + T . Denote these by P 1 , P 2 and P 3 . Since E is given by a Weierstrass equation, O is a flex point, thus P 1 + P 2 + P 3 = O. However this is impossible for the points mentioned above. Hence such a line L does not exist. Suppose that we are given two elliptic curves E and E as in the proposition above with E[3] = S, T and E [3] = S , T , then Propositions 6.1 and 6.2 imply that there exists an A ∈ PGL 3 k such that O → O , S → S , T → T and S + T → S + T and that this A is unique. 7 Linear change of coordinates II Proposition 7.1. Let E and E be elliptic curves given by a Weierstrass equation defined over k. is an isomorphism which respects the Weil-pairings, then there exists some t 0 ∈ P 1 k such that the fiber E t 0 of the Hesse pencil of E admits a linear change of coordinates Φ : The essence of the proof of this proposition is the following: We determine the t i ∈ P 1 k for which the j-invariant of E t i is equal to the j-invariant of E . For each of these t i 's we obtain a number of linear changes of coordinates E t i → E. A counting argument shows that φ is the restriction of one of those maps. The following observation is used in the counting argument: Notice that Next we prove the proposition. So φ Proof of Proposition 7.1. Let j 0 and j 0 be the j-invariants of E and E respectively. Denote the specialization of E W at t 0 ∈ P 1 k by be the isomorphism induced by the linear change of coordinates A from Proposition 5.1 at t 0 . Assume that j 0 = j 0 , 0, 1728 and take a t as defined in Proposition 5.1. 
Consider the polynomial in k[t], whose roots give E W t 0 's with j-invariant equal to j 0 . The polynomial G has degree 12 and its discriminant is which is non-zero, so G has distinct roots t 1 , . . . , t 12 in k. Since the j-invariant of E W t i is equal to j 0 , there exists an isomorphism Ψ i : [3] as groups with identical Weil-pairings. Therefore for every i = 1, . . . , 12 and σ ∈ Aut (E ) ∼ = Z/2Z is an isomorphism respecting the Weil-pairings. Notice that σ•Ψ i •A t i is an element of PGL 3 k , because E W t i and E are in Weierstrass form and A is a linear change of coordinates. All 24 isomorphisms φ i,σ are distinct as the following argument shows. Suppose that φ i,σ = φ j,τ , then so Corollary 2.3 implies that t i = t j , that is i = j. Since Ψ i and A t i are isomorphisms, σ = τ . Thus φ i,σ = φ j,τ if and only if i = j and σ = τ . Since the φ i,σ 's respect the Weil-pairings, Lemma 7.2 implies that these are all the possible isomorphisms E[3] → E [3] that respect the Weil-pairings. Hence φ = φ i,σ for some i = 1, . . . , 12 and σ ∈ Aut (E ), which proves the proposition in this case. Suppose that j 0 = j 0 and j 0 = 0, 1728, then the G above has degree 11 and the discriminant of G is which is again non-zero, so G has distinct roots t 1 , . . . , t 11 in k. In this case the j-invariant of E ∞ is also equal to j 0 , so let t 12 = ∞. The argument presented before now finishes the proof in this case. Assume that j 0 = 0. This case is the same as before with the exception of the polynomial G, which in this case should be replaced by a t . The four distinct t i 's and the six elements in Aut (E ) again give 24 isomorphisms φ i,σ . Finally, if j 0 = 1728, then replace G by b t (defined in Proposition 5.1) and proceed as before. Proof of the theorem In the proof of Theorem 1.1 we need a result from Galois cohomology, namely: Proof . Consider the short exact sequence of G k -groups which induces the exact sequence in the first row of the diagram The second row is the definition of PGL 3 (k) and the vertical maps are the inclusion maps. Hilbert's Theorem 90 gives that Proof of Theorem 1.1. Assume that Φ : E t 0 → E for some t 0 ∈ P 1 (k) is an isomorphism defined over k. This map respects the Weil-pairings according to [ Suppose that there exists a symplectic isomorphism φ : E[3] → E [3], then Proposition 7.1 implies that there exists a Φ ∈ PGL 3 k and a t 0 ∈ P 1 k such that Φ : E t 0 → E and φ = Φ| E [3] . Characteristic two So far we assumed k to be a perfect field of characteristic different from two and three. There is a natural idea how to adapt the proof of Theorem 1.1 to characteristic two: replace the explicitly given Hesse pencil by what it actually describes, namely the pencil of cubics with the nine points of order 3 on the initial elliptic curve as base points. This was done by one of us in her bachelor's project [15], and we briefly describe the results here. Elliptic curves in characteristic two Any elliptic curve E over a field of characteristic 2 can be given as follows, see [13, p. 409]: If k is a field of characteristic 2, then the Hessian of any homogeneous polynomial F ∈ k[X, Y, Z] of degree 3 equals zero, as is easily verified. We will show that given an elliptic curve E over k, say by a special equation as above, the curves in the pencil with base points E [3] all have E [3] as flex points, compare Corollary 2.2 for the classical situation. In [9], Glynn defines a Hessian for any curve C = Z(F ) with F ∈ k[X, Y, Z] homogeneous of degree 3 (characteristic two). 
In fact our construction coincides with his, although we put more emphasis on how it is obtained from considering 3-division polynomials. Note that the subject of flex points on cubic curves in characteristic two is in fact very classical: compare with, e.g., Dickson's paper [5] published in 1915. The case j(E) = 0 We may and will assume that E is given by Define Hess(E) as the plane curve defined by y 2 + xy 2 + x 2 y + xy + a 2 x 3 + a 2 x 2 + a 6 x = 0. In the next paragraphs, we will show that the Hessian and Hesse pencil have the desired properties. Firstly, the analog of Proposition 2.1 holds: Proposition 9.1. If P is a point on an elliptic curve E with equation y 2 + xy = x 3 + a 2 x 2 + a 6 , then P is a flex point of E if and only if P ∈ E ∩ Hess(E). Proof . The point O is a flex point on both E and Hess(E) hence the result holds for O. Next take any other flex point of E, i.e., a point P = O with 3P = O. Put P = (x, y). Note that x = 0, because any point (0, y) ∈ E has order two. A small calculations (compare x-coordinates of −P and 2P ) shows P is a flex point precisely when Using the equation defining E, one rewrites (E1) as 0 = y 2 + xy 2 + x 2 y + xy + a 2 x 3 + a 2 x 2 + a 6 x. The proposition now follows from a straightforward calculation. Note that in fact P = (x, y) is a flex point on E if and only if P ∈ E satisfies equation (9.1). Now we show the analog of Corollary 2.2. Proposition 9.2. If P is a flex point on an elliptic curve E given by y 2 + xy = x 3 + a 2 x 2 + a 6 , then P is also a flex point on the Hesse pencil E. Proof . It follows directly from the construction of E that P is indeed a point on it. To prove that P is also a flex point on E, one shows that the tangent line to E at P intersects E at P with multiplicity 3. This is a straightforward calculation for which we refer to [15]. Clearly the point O is also a point on the Hesse pencil and it is also a flex point, as can be shown in the same way. The case j(E) = 0 In the remaining case j(E) = 0 we may and will assume that E is given as Now define Hess(E) by xy 2 + a 3 xy + a 4 x 2 + a 2 3 + a 6 x + a 2 4 = 0, so the Hesse pencil E becomes t y 2 + a 3 y + x 3 + a 4 x + a 6 + xy 2 + a 3 xy + a 4 x 2 + a 2 3 + a 6 x + a 2 4 = 0. In this case as well, the analogs of Proposition 2.1 and Corollary 2.2 hold: Proposition 9.3. If P is a point on the elliptic curve E given by y 2 + a 3 y = x 3 + a 4 x + a 6 , then P is a flex point if and only if P ∈ E ∩ Hess(E). Proof . For O the exact same argument holds as when j(E) = 0. A calculation shows that P = (x, y) ∈ E is a flex point precisely when Using the equation of the elliptic curve this is rewritten as 0 = a 2 3 x + x y 2 + a 3 y + a 4 x + a 6 + a 2 4 = xy 2 + a 2 3 + a 6 x + a 4 x 2 + a 3 xy + a 2 4 . Proposition 9.4. If P is a flex point on the elliptic curve E given by y 2 + a 3 y = x 3 + a 4 x + a 6 , then it is also a flex point on the Hesse pencil E. Proof . The reasoning is the same as for the case j(E) = 0. The straightforward calculation is presented in detail in [15]. Using the properties shown above of our Hesse pencil in characteristic two, we can now almost completely follow the reasoning of the earlier sections since most arguments do not involve the characteristic of k. Only for the analog of Proposition 7.1 the proof needs to be adjusted in characteristic two, because here actual calculations are done with the Hesse pencil. We state it in the present situation. Proposition 9.5. 
Let E and E be elliptic curves given by a Weierstrass equation defined over k. is an isomorphism which respects the Weil-pairings, then there exists a linear change of coordinates Φ : The remainder of this section consists of proving Proposition 9.5. The case j(E) = 0 We first determine the Weierstrass form of the Hesse pencil in the present case. Denote this family of curves by E W and let an individual curve in the pencil be denoted by E W t , then If t = 1, the transformations needed to obtain the above Weierstrass form do not work. In this case one transforms the fiber of given pencil, so the curve E 1 with equation into the other Weierstrass form in characteristic 2: Explicitly, this results in the equation η 2 ζ + a 6 ηζ 2 + ξ 3 + a 2 6 ξζ 2 + a 2 6 (1 + a 2 )ζ 3 = 0. In this way one obtains for every t a projective, linear transformation E t → E W t . Let us denote this transformation by A t . Proof of Proposition 9.5 for the case j(E) = 0. Given another elliptic curve E with jinvariant j 0 , we want to determine t for which our Hesse pencil has the same j-invariant. First, let us assume that j 0 is nonzero and not equal to j 0 . Then j 0 = j E W t ⇔ (t + 1) 12 = j 0 a 6 t 4 + t 3 + t 2 + t + a 6 3 . The zeros of this polynomial are precisely all t 0 such that j E W t 0 = j 0 . The discriminant of G equals a 44 6 j 14 0 , which is nonzero, because j 0 and a 6 are nonzero. We conclude that precisely 12 values t 0 ∈k exist which give the desired j-invariant. For every t 0 , there is an isomorphism A t 0 between E t 0 and E W t 0 , induced by the change of coordinates seen above. For every t 0 which is moreover a zero of G, there is an isomorphism Ψ t 0 between E W t 0 and E , because these curves have equal j-invariants. Lastly, there exist 2 automorphisms σ of E [13, p. 410]. Taking the composition of these three isomorphisms and restricting to the 3-torsion group E t 0 [3], which equals E [3], we obtain 12 × 2 = 24 isomorphisms φ t 0 ,σ ; they are described as These 24 isomorphisms are pairwise distinct and respect the Weil-pairing (see Section 7, observe that this argument is independent of the characteristic of k). Now consider the case j 0 = j 0 = 0. Then j 0 a 6 = 1 since j 0 = 1/a 6 . Our polynomial G therefore has degree 11 and discriminant a 30 6 = 0. So this gives us 11 pairwise distinct t ∈k such that j E W t = j 0 . Another curve with this j-invariant is E ∞ = E. So again we find 12 distinct t's and in the same way as above, we find 24 isomorphisms respecting the Weil-pairing. If j 0 = 0, the only t-value with j(E t ) = 0 is t = 1. Because E has j-invariant zero and k has characteristic 2, its automorphism group has 24 elements [13, p. 410]. So again we find 24 isomorphisms respecting the Weil-pairing. We now complete the proof of Proposition 9.5 for the case j(E) = 0 by the exact same argument as presented in the proof of Proposition 7.1. The case j(E) = 0 For j(E) = 0 the calculations are slightly more involved. Bringing the Hesse pencil t y 2 z + a 3 yz 2 + x 3 + a 4 xz 2 + a 6 z 3 = xy 2 + a 3 xyz + a 4 x 2 z + a 2 3 + a 6 xz 2 + a 2 4 z 3 in Weierstrass form, one obtains E W of the form If t = 0, so if E t = E 0 is the Hessian curve, the transformations needed here are not valid. Therefore we treat this case separately. The Hessian here is given by xy 2 + a 3 xyz + a 4 x 2 z + a 2 3 + a 6 xz 2 + a 2 4 z 3 = 0. 
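The link with 3-division polynomials mentioned above can be made explicit; the following check is ours and is not spelled out in this extract. Reducing each of the two Hessians of Section 9 modulo the corresponding Weierstrass equation, all terms involving y cancel in characteristic two and one is left with the 3-division polynomial of E:

\[
y^2 + xy^2 + x^2y + xy + a_2x^3 + a_2x^2 + a_6x \;\equiv\; x^4 + x^3 + a_6 \;=\; \psi_3(x) \pmod{y^2 + xy + x^3 + a_2x^2 + a_6},
\]
\[
xy^2 + a_3xy + a_4x^2 + (a_3^2 + a_6)x + a_4^2 \;\equiv\; x^4 + a_3^2x + a_4^2 \;=\; \psi_3(x) \pmod{y^2 + a_3y + x^3 + a_4x + a_6}.
\]

So in both cases the affine points of E ∩ Hess(E) are exactly the points whose x-coordinate is a root of ψ_3, i.e., the nontrivial 3-torsion, in agreement with Propositions 9.1 and 9.3.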
10 Comparison with the literature Theorem 1.1 is part of a more general problem: Given an elliptic curve E over a field k and an integer n, describe the universal family of elliptic curves E such that for each member E t 0 the Galois representations on E[n] and E t 0 [n] are isomorphic and the isomorphism is symplectic. For various n explicit families are known in the literature. In [12] Rubin and Silverberg construct for any elliptic curve over Q such an explicit family for n = 3 and n = 5. Their proofs are motivated by the theory of modular curves. Our Theorem 1.1 corresponds roughly to [12,Theorem 4.1] and [12,Remark 4.2]. Using invariant theory and a generalization of the classical Hesse pencil, Fisher in [6] describes such families for elliptic curves defined over a perfect field of characteristic not dividing 6n with n = 2, 3, 4, 5. Theorem 1.1 is a special case of [6,Theorem 13.2]. It is unclear whether Fisher's proof of [6,Theorem 13.2] can be adapted to the case of characteristic two. In [7] Fisher moreover treats the cases n = 7 and n = 11. The Hesse pencil is used by Kuwata in [10]. For any elliptic curve E over a number field he constructs two families of elliptic curves such that for each member the Galois representation on its 3-torsion is equivalent to the one on E [3]. In the first family the isomorphism of the 3-torsion groups is symplectic, whereas in the second family the isomorphism is anti-symplectic. The proofs use classical projective geometry and the classification of rational elliptic surfaces. Theorem 1.1 is essentially [10, Theorem 4.2] (although our proof is more detailed and totally elementary, and moreover we extend the result to characteristic two). Notice that the Weierstrass form of the Hesse pencil in [10,Remark 4.4] is the same as the one in Proposition 5.1 with t replaced by t −1 and the x and y coordinates scaled by some power of t. An overview of results on the classical Hesse pencil is given by Artebani and Dolgachev in [2].
Nonlinear modal interactions in parity-time (PT) symmetric lasers Parity-time symmetric lasers have attracted considerable attention lately due to their promising applications and intriguing properties, such as free spectral range doubling and single-mode lasing. In this work we discuss nonlinear modal interactions in these laser systems under steady state conditions, and we demonstrate that several gain clamping scenarios can occur for lasing operation in the -symmetric and -broken phases. In particular, we show that, depending on the system’s design and the external pump profile, its operation in the nonlinear regime falls into two different categories: in one the system is frozen in the phase space as the applied gain increases, while in the other the system is pulled towards its exceptional point. These features are first illustrated by a coupled mode formalism and later verified by employing the Steady-state Ab-initio Laser Theory (SALT). Our findings shine light on the robustness of single-mode operation against saturation nonlinearity in -symmetric lasers. Recently such lasers have been demonstrated using a micro-ring resonator with azimuthal complex index modulation 18 and two coupled micro-ring resonators 19 , respectively. Both of them exhibit single-mode lasing behavior, which had not been anticipated before. While linear threshold analysis (without considering nonlinearity) has revealed some important features of PT -symmetric lasers 19 , to which the single-mode lasing behavior was attributed, the laser is an intrinsically nonlinear system due to gain saturation, without which the system would not be stable. Therefore, it is important to consider nonlinear modal interactions in the analysis of such novel lasers, which is the goal of the present paper. We investigate two different PT -symmetric laser configurations that represent essentially the setups in refs 18 and 19. This is first done by using a coupled mode formalism in Section "Coupled mode analysis. " We focus on a pair of supermodes that lie closest to the gain center ω g that presumably lead to the lowest threshold. In the first configuration [see Fig. 1(a)], we consider two identical cavities with the gain applied to only one of them (cavity a) 19 . In this configuration a standard gain clamping behavior [23][24][25] takes place once the laser is above its threshold, where the saturated gain maintains its threshold value independent of whether lasing occurs in the PT -symmetric phase or PT -broken phase. As a result, the system is frozen in the PT phase space, at a constant distance from its exceptional point (EP) [26][27][28][29][30][31][32][33][34][35] , and the second supermode cannot reach its own threshold. In the second configuration [see Fig. 1(b)], we consider equally applied gain to the two cavities, with the loss in cavity b stronger than that in cavity a (see the Discussion section for its connection with the setup in ref. 18). Unlike the first configuration, here the gain clamping does not take place immediately above threshold if lasing occurs in the PT -broken phase. Instead, the saturation effect takes place gradually as the applied gain increases. This gain saturation has a back action on the lasing mode and pulls the system towards its EP 36 . While the modal gain of the second supermode is higher than its value in configuration 1, this mode is still suppressed even when the applied gain is high above its threshold value. 
In Section "SALT analysis," we examine these predictions using the Steady-state Ab-initio Laser Theory (SALT) [37][38][39][40][41] , and we show that they hold qualitatively despite a weaker suppression of the second supermode. Furthermore, we extend the discussion of modal interactions by including other supermodes close to the gain center, which is beyond the scope of the simple coupled mode theory mentioned above. This extension is important to determine the range of single-mode operation in PT -symmetric lasers, and one key question is whether the different gain clamping scenarios mentioned above can prevent all other supermodes from lasing, which would lead to an intrinsically single-mode laser. While we found the answer to be negative, the modal interactions via gain saturation still lead to a wider range of single-mode operation (in terms of the applied gain) than previously expected from a linear threshold analysis. Results Coupled mode analysis. We first discuss modal interactions in PT -symmetric lasers using a coupled mode formalism, where the gain saturation is incorporated under steady state conditions. The coupled mode approach is attractive due to its simple form that provides a physical insight into the role of coupling and non-Hermiticity (gain and loss) and how they affect the operation of PT -symmetric lasers. In fact, this insight has broadened the definition of PT -symmetric lasers to those without physically balanced gain and loss 34 , such as the ones considered in refs 33, 35 and 42. The coupled mode theory we employ takes the following form ( ) in cavity a and b (e.g., waveguides, microdisks, and microrings). Here "T" denotes the matrix transpose. ω 0 is the identical resonant frequency of the two cavities in the absence of coupling g, which is the closest one to the gain center ω g and presumably corresponds to the lasing mode with the lowest threshold. κ a,b , γ a,b are the loss and saturated gain in the two cavities respectively, and we take g to be a positive real quantity without loss of generality. In configuration 1 mentioned in the introduction, we have κ a = κ b , γ b = 0, and γ γ ( ) , where γ is the applied gain and ψ ≡ µ µ I a a ( ) ( ) 2 is the intensity of mode μ in cavity a. µ I a ( ) is measured in its natural units and dimensionless (see the discussion in Section "SALT analysis"). This form of saturation is derived in steady state operation, with the fast dynamics of the polarization in the gain medium eliminated adiabatically. In config- ( ) ] and nonzero, together with κ a < κ b . We note that the summations in γ a,b are only over the lasing supermodes, i.e., the ones with a nonzero intensity. To differentiate a lasing and non-lasing mode in our coupled mode theory, we note that the dynamics of the supermode μ here is given by ϕ (μ) (t) = ϕ (μ) (0) exp(− iλ (μ) t) in steady state operation, where λ (μ) is one of the two eigenvalues of the effective Hamiltonian: Here κ γ , are the averages of the losses and saturated gains of the two cavities, and Δ ,δ are their half differences, i.e., Δ = (κ b − κ a )/2, δ = (γ b − γ a )/2. A non-lasing mode does not exhibit a sustained laser oscillation with a finite amplitude, which indicates that the corresponding λ has a negative imaginary part. A lasing mode, in contrast, features a real λ in steady state that gives the lasing frequency. The lasing threshold γ µ TH ( ) of mode μ can then be defined as the value of the applied gain γ at which the corresponding λ becomes real. 
For convenience, we will refer to Im[λ] as the modal gain, which is negative for a mode below its threshold and becomes zero at and above its threshold. Our coupled mode theory allows single-mode and two-mode operations, where one or both λ given by Equation (2) are real. From the nonlinear optics point of view, this constraint on µ I a b , ( ) for a given γ is very different from other models that have been applied to study steady states in PT -symmetric systems [43][44][45] , where one imposes the constraint directly on nonlinearity, e.g., with a fixed total intensity + µ µ . The nonlinearity reflected by γ a,b here represents modal interactions through gain saturation, including self saturation in the single-mode case and cross saturation as well in the two-mode case. It should be noted that the effective Hamiltonian given by Equation (1) is PT -symmetric without requiring physically balanced gain and loss, i.e., with a net gain cavity and a net loss this balance holds with respect to the average gain and loss: the non-Hermitian part of H is ± i(Δ − δ) on the diagonal after pulling out the common factor 34 0 where 1 is the identity matrix. Clearly it leads to an EP at δ ∆ − = . g (4) Below we refer to the radicand in Equation (2) as the PT parameter τ: The PT -symmetric phase is defined by a negative τ, where the modal gain of both supermodes are given by γ κ − ( ) . The PT -broken phase is defined by a positive τ, the square root of which differentiates the modal gains of the two supermodes. Configuration 1. We start with the discussion of nonlinear modal interactions in configuration 1, where Δ = 0 (κ a = κ b ≡ κ) and δ γ γ = − = − /2 a . We first investigate the PT -broken phase (which we denote as case 1a), based on which single-mode lasing was demonstrated in ref. 19. This case requires 34 κ > g, and the constraint of a real λ becomes The "± " signs represent the two supermodes, and it is easy to check that only the "− " sign leads to a physical (real-valued) threshold given by TH (1) 2 2 in terms of the applied gain γ. To maintain a real λ above threshold, it is straightforward to show that γ γ = a TH (1) must hold, i.e., the saturated gain (in cavity a) is clamped at its threshold value. Consequently, the system is frozen in the PT -broken phase, with a constant [see Fig. 2(a)] and a constant intensity ratio above threshold: [see Fig. 2(b)]. The intensity of mode 1 can be directly calculated from the clamped gain, Therefore, the second mode is suppressed and cannot reach its threshold. We note that the PT -symmetric laser in this case does not have physical balance of gain and loss above threshold, because the net gain in cavity a (given by γ a − κ = g 2 /κ) is smaller than the net loss in cavity b (given by κ). This imbalance increases with τ and becomes significant deep in the PT -broken phase. In contrast, lasing in the PT -symmetric phase, defined by κ < g and denoted by case 1b, does feature physically balanced gain and loss as we discuss below. In case 1b the PT parameter τ is negative and the modal gains of mode 1 and 2 are the same, given by γ a /2 − κ. Therefore, the constraint of a real λ is given by a i.e., the saturated gain is clamped at its threshold value 2κ, and above threshold the net gain in cavity a (given by γ a − κ = κ) equals the net loss in cavity b (given by κ). 
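The displayed equations for configuration 1 are garbled in this extract; with the reconstruction of λ above, the clamping values quoted in the two cases follow from a short computation. In case 1a (κ > g, Δ = 0, δ = −γ_a/2), a lasing supermode requires

\[
\operatorname{Im}[\lambda] \;=\; \frac{\gamma_a}{2} - \kappa \pm \sqrt{\frac{\gamma_a^2}{4} - g^2} \;=\; 0
\;\Longrightarrow\;
\Bigl(\frac{\gamma_a}{2} - \kappa\Bigr)^2 = \frac{\gamma_a^2}{4} - g^2
\;\Longrightarrow\;
\gamma_a \;=\; \kappa + \frac{g^2}{\kappa},
\]

independently of the intensity, so γ_a is pinned at this value for all pump strengths above threshold and τ = γ_a^2/4 − g^2 is frozen; this is the net gain γ_a − κ = g^2/κ quoted above, and setting the intensity to zero identifies the threshold applied gain. In case 1b (κ < g) the same condition reads Im[λ] = γ_a/2 − κ = 0 for both supermodes, pinning γ_a at its threshold value 2κ, as stated above.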
As a consequence of the gain clamping, the system is frozen in the PT -symmetric phase, with a constant These behaviors (i.e., gain clamping at threshold and a frozen PT parameter) are similar to those in case 1a, but the supermode symmetries here are different from those in case 1a. In particular, both supermodes here have a symmetric intensity profile = µ µ ( ) ( ) and the same threshold. In reality only one of them lases, for example, due to a slight difference of the resonant frequencies in the two cavities. With this additional consideration and assuming mode 1 is the lasing mode, we find above its threshold [see Fig. 3(b)], and the other supermode has a negative modal gain at the threshold of mode 1 [see Fig. 3(a)]. Since the saturated gain is clamped, the modal gain of this mode is also clamped. As a result, this mode is suppressed and cannot reach its threshold. As a final remark for configuration 1, we note that lasing in the PT -broken phase (case 1a) is more favorable than lasing in the PT -symmetric phase (case 1b): the threshold given by Equation (7) is lower than that given by Equation (12) for the same loss κ, which also leads to a stronger total intensity For evanescently coupled cavities, the coupling g depends strongly on the inter-cavity distance s. Therefore, if s is tuned and κ − g changes sign as a result, one can imagine a transition between lasing in these two phases. For example, if the cavities undergo mechanically oscillations ("oscillating photonic molecule"), the laser output does not vary when the system stays in the PT -symmetric phase, and it spikes periodically if max[s] is large enough to push the system into the PT -broken phase. Configuration 2. In configuration 2 cavity b has a higher loss than cavity a (Δ > 0) and the gain is applied equally to both cavities. Note that the latter does not necessarily imply that δ = (γ b − γ a )/2 is zero in the nonlinear regime, as we shall see below. The laser at threshold is PT Lasing in the PT -symmetric phase (case 2b) is similar to that in case 1b: the two supermodes have the same threshold now given by TH (1) but in reality only one of them lases with equal intensities in the two cavities. Hence the applied gain γ is saturated symmetrically (γ a = γ b and δ = 0) as the applied gain increases, which indicates that the system is again frozen in the PT -symmetric phase, with a constant PT parameter 2 2 In addition, we find that γ using the definition of γ and the constraint of a real λ, which shows that the saturated gains in both cavities are clamped at their threshold values. Thus mode 2 is prevented from lasing, with its modal gain staying below threshold. Meanwhile, we note that the laser features physically balanced gain and loss above threshold as in case 1a, because the net gain in cavity a is given by γ a − κ a = Δ and equals the net loss in cavity b (given by κ b − γ b = Δ ). Finally, we find above threshold using γ κ = a b , . All these behaviors are qualitatively the same as those shown in Fig. 3 and are hence not shown. Lasing in the PT -broken phase (case 2a) here is qualitatively different from the three cases (1a, 1b and 2b) discussed so far: the onset of the first lasing mode here does not lead to an immediate clamping of the gain as we show below. The constraint of a real λ in this case is and the laser threshold of the first supermode is given by at which gain saturation just kicks in and δ = 0. 
The intensity of the first mode is higher in cavity a than in cavity b above threshold: Therefore, as the applied gain increases above γ TH , it is saturated more in cavity a than in cavity b, which leads to a positive and increasing δ. As a result, the PT parameter τ = (Δ − δ) 2 − g 2 decreases towards zero [see Fig. 4(a)]. In other words, this saturation has a back action on the lasing mode itself and the system is pulled towards its EP (where τ = 0) as a result: the intensity ratio I I / a b (1) (1) reduces towards unity as γ increases [see Fig. 4(b)], or more precisely, (1) 2 2 2 2 the right hand side of which is approximate 1 when Δ ≈ g. In addition, the saturated gains in both cavities approach their clamped values in the large γ limit: a a This gain saturation then leads to an asymptotic value of the PT parameter: In Fig. 4(a) we have taken g 2 to be much smaller than κ 2 , and the above asymptotic value is very close to zero when measured by κ 2 . We also note that the system does not have physically balanced gain and loss even in the large γ limit: the net gain in cavity a and the net loss in cavity b are always reciprocal of each other, i.e., they are equal only when the fractions in Equations (23) and (24) become 1, or equivalently, Δ = g. Similar to case 1a, it's easy to show that the modal gain of mode 2 here is given by [see Fig. 4(a)], meaning that mode 2 is also suppressed no matter how strong the applied gain is. SALT analysis. In the previous section we considered a pair of supermodes closest to the gain center ω g , one of which presumably is the first lasing mode when all the modes of the laser are considered. For other supermodes that are further away from the gain center, they typically have higher thresholds and lower modal gains. To understand the range of single-mode operation in PT -symmetric lasers, it is important to take these additional supermodes into consideration. One key question we ask is whether the different gain clamping scenarios mentioned above can prevent other supermodes from lasing, which would lead to an intrinsically single-mode laser. We probe this question using SALT 37-41 , a semiclassical theory framework that addresses several key issues in the standard modal description of lasers 23,24 when applied to micro-and nano-systems. Most pertinent here is the inclusion of modal interactions to infinite order in SALT, without which artificial multimode lasing may appear shortly above the laser threshold 46 . The first PT -symmetric laser we consider consists of two coupled 1D ridge cavities [see Fig. 5(a); left inset]. The background dielectric constant of the cavities is taken as ε c = (3 + 0.007i) 2 , the imaginary part of which represents parasitic losses (material absorption, scattering loss, etc.) while the outcoupling loss is taken into consideration by an outgoing/radiation boundary condition 38 . The gain is applied only to cavity a, which has a center frequency ω g L/c = 19.84 and a width of γ ⊥ L/c = 1. Here L is the length of one ridge cavity and c is the speed of light in vacuum. This laser operates in the PT -broken phase, which corresponds to configuration 1a discussed in the previous section. We consider 6 supermodes closest to the gain center, each given by a quasi-bound (QB) mode of complex eigenvalue k (μ) before the gain is applied [see Fig. 5(a)], and the pair closest to the gain center (mode 1 and 2) have 19 intensity peaks in each cavity. 
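Before going further into the SALT analysis, the configuration-2a behaviour derived above (gradual saturation pulling the system toward its EP) can be illustrated numerically. The sketch below is ours and rests on the reconstructed coupled-mode expressions given earlier, with saturable gains γ_{a,b} = γ/(1 + I_{a,b}) and the single-mode lasing condition Im[λ] = 0; the parameter values are invented for illustration and are not taken from the paper. For each value of the gain asymmetry δ, the consistent applied gain γ is obtained in closed form, and the printed table shows τ shrinking toward zero and I_a/I_b approaching unity as γ grows.

import numpy as np

# Illustrative parameters only (not from the paper): cavity losses and coupling.
kappa_a, kappa_b, g = 1.0, 1.4, 0.1
Delta = (kappa_b - kappa_a) / 2.0          # loss half-difference
kbar = (kappa_a + kappa_b) / 2.0           # average loss

print(" applied gain        tau      I_a/I_b")
for delta in np.linspace(1e-3, Delta - g - 1e-3, 8):   # gain half-difference (0 at threshold)
    u = Delta - delta
    root = np.sqrt(u**2 - g**2)            # square root of the PT parameter tau (broken phase)
    gbar = kbar - root                     # Im[lambda] = 0 fixes the average saturated gain
    gamma_a, gamma_b = gbar - delta, gbar + delta
    R = ((u - root) / g) ** 2              # |psi_b/psi_a|^2 = I_b/I_a from the lasing eigenvector
    gamma = (1.0 - R) / (1.0 / gamma_b - R / gamma_a)   # applied gain consistent with saturation
    I_a, I_b = gamma / gamma_a - 1.0, gamma / gamma_b - 1.0
    print(f"   {gamma:9.3f}   {u**2 - g**2:8.5f}   {I_a / I_b:8.2f}")

The qualitative trend (τ → 0 and I_a/I_b → 1 as the applied gain increases) is the pull toward the exceptional point described above; the specific numbers depend on our reconstructed formulas and should only be read qualitatively.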
The applied gain is increased via the atomic inversion D 0 and results in a total dielectric constant given by 40 where F(x) is the spatial profile of the pump and has the value of 1 (0) in regions with (without) gain. We note that ε(x), as defined above, is mode-dependent due to the different eigenvalues k (μ) of the supermodes. As soon as mode μ starts lasing, its QB eigenvalue k (μ) becomes real and gives the lasing frequency (once multiplied by c). Hence the modal gain here can be defined as µ k L Im[ ] ( ) , which is dimensionless and increases with D 0 in general before the first laser threshold D TH (1) . The applied gain saturates above threshold with a spatial hole burning denominator ϕ is the Lorentzian gain curve [see Fig. 5(a)] and ϕ (μ) (x) is the dimensionless magnitude of the electric field measured in its natural units 37 . We note that the summation over μ in the spatial hole burning denominator is again only over the lasing modes. Although mode 1 has a symmetric intensity profile before the gain is applied [see Fig. 5(a); right inset], the lack of gain in cavity b leads the system to the PT -broken phase, resulting in (1) above threshold [see Fig. 5(c); inset]. Here the intensities in the two cavities are defined by , which are also dimensionless. In Fig. 5(b) we see that the modal gain of mode 2 has a minute increase above the threshold of the first mode, which agrees qualitatively with the prediction of gain clamping given by the coupled mode theory shown in Fig. 2(a). To verify that the system is frozen in the PT phase space (i.e., with a fixed τ and intensity ratio I I / While these features agree well with the results of the coupled mode theory, the gain clamping does not hold for other supermodes, especially for mode 3 and 5 whose modal gains continue to increase above the first threshold with the applied gain. This behavior is common in microlasers 39 and caused by non-uniform saturation of the gain: it is depleted more at the intensity peaks of mode 1, with "holes" burnt in its spatial gain profile. Mode 3 and 5 have different numbers of intensity peaks (20 and 18 in one cavity) from mode 1, hence they can utilize the increased gain where the intensity of mode 1 is weak. Nevertheless, their interactions with mode 1 still extend the range of single mode operation significantly: mode 3 would have started lasing at just 29% above the first threshold without considering gain saturation [see dotted line in Fig. 5(b)], while this fraction is in fact 110% due to gain saturation. Similar agreement with the coupled mode theory is observed in configuration 1b and 2b, and we show an example of the former in Fig. 6. To make the system lase in the PT -symmetric phase, we increase the coupling between the two cavities by shortening the gap between them (by a factor of 4) and reduce the cavity loss by having a smaller ε = . Im[ ] 0 001 c . We note that the difference of the modal gains for the supermode pair closest to the gain center is much smaller than in configuration 1a [see Fig. 6(a)], which indicates that the system is in the PT -symmetric phase. The modal gain of mode 2 is still semi-clamped above threshold, but its minute increase beyond D TH (1) , similar to what we have seen in Fig. 5(b), now pushes mode 2 above threshold shortly after mode 1 becomes lasing. This behavior eliminates configuration 1b (and 2b) as a candidate for single-mode operation. 
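For reference, the SALT expressions appealed to above (the pumped dielectric function and the spatial-hole-burning denominator) are garbled in this extract. In standard SALT notation they take, up to sign and normalization conventions, the form

\[
\varepsilon(x) \;=\; \varepsilon_c \;+\; \frac{\gamma_\perp\, D_0\, F(x)}{k^{(\mu)} - k_g + i\gamma_\perp}\,
\frac{1}{1 + \sum_{\nu\,\in\,\text{lasing}} \Gamma_\nu\, |\phi^{(\nu)}(x)|^2},
\qquad
\Gamma_\nu \;=\; \frac{\gamma_\perp^2}{(k^{(\nu)} - k_g)^2 + \gamma_\perp^2},
\]

where Γ_ν is the Lorentzian gain curve evaluated at the lasing frequency of mode ν. This is our reconstruction and should be checked against the SALT references [37–41] cited in the text.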
Another deviation from the result of the coupled mode theory lies in the spatial profile of mode 1 and 2: they are not necessarily symmetric above the first threshold, and in fact they have a similar intensity ratio I a /I b < 1 in this example [see Fig. 6(b)]. This is due to the outcoupling loss that is not considered in the coupled mode theory. To be exact, the time reversal of a lasing mode at threshold is a coherent perfect absorption mode ("time reversed lasing mode") 21,22 with purely incoming waves outside the system, hence a lasing mode itself does not satisfy PTϕ ϕ = ) even in the PT -symmetric phase and with physically balanced gain and loss 42 . When the outcoupling/radiation loss is weak compared with the parasitic loss in a high-Q cavity, for example, in coupled photonic crystal (PhC) defect cavities, we do recover |ϕ (μ) (x)| 2 ≈ |ϕ (μ) (− x)| 2 and . In (b) the dotted section shows the modal gain of mode 3 if its modal interaction with mode 1 is neglected. In (c) the intensity ratio I I / a b (1) (1) is also shown, and the spatial profile of mode 1 at threshold is given by the inset. The cavity refractive index is ε = + . i 3 0 007 c , and gap between the two cavities is L/10. supermode 1, which is the lasing mode with an intensity ratio = . immediately above its threshold [see Fig. 7(c)]. The other supermode 2 formed due to the coupling of these fundamental modes features = . (1) (1) at the same pump power, which indicates that lasing indeed occurs in the PT -broken phase. The band edge modes in fact have a smaller loss (|Im[k (μ) L]|) before the gain is applied, but they are more extended and have a much weaker overlap with the applied gain in the two cavities. As a result, their modal gains increase much slower than those of the band gap modes [see Fig. 7(b)], and they are suppressed even when the applied gain becomes very high. Meanwhile, the modal gain of mode 2 shows a clear saturation shortly above the threshold of the first mode and stays negative, which agrees well with the finding in the coupled mode theory. Finally, the system is pulled rapidly towards its EP, which is manifested by the dramatic change of the intensity ratio I a /I b of the first mode: it reduces quickly to 1.22 at = D D 2 0 T H (1) [see Fig. 7(c)]. We mention in passing that the same system can be used to demonstrate lasing in the PT -symmetric phase (configuration 2b), if the gain is uniformly applied to both cavities and the DBRs. The band edge modes now become the lasing modes; their more extended spatial profiles lead to a stronger coupling between the two cavities, which overcomes the different losses of the cavities and leads the system to the PT -symmetric phase. As a result, they have a more or less symmetric intensity profile with I a /I b ≈ 1. Discussion In summary, we have discussed nonlinear modal interactions in the steady state of PT -symmetric lasers, and we have shown different gain clamping scenarios that depend on (1) whether the loss or gain is uniform in the system and (2) whether lasing occurs in the PT -symmetric or PT -broken phase. As a consequence, the PT -symmetric lasers can be separated into two categories: in one (I) the system is frozen in the PT phase space, while in the other (II) the system is pulled towards its exceptional point as the applied gain increases. 
While the answer to the question imposed at the beginning of Section "SALT analysis" (i.e., whether the PT -symmetric lasers considered here are intrinsically single mode) is negative, the modal interactions via gain saturation in the PT -broken phase do seem to indicate a robust single-mode operation even after all possible lasing modes are considered, and the range of applied gain for single-mode operation is significantly wider than previously expected from a linear threshold analysis 19 . We emphasize that this behavior holds in high quality cavities too, as exemplified using a PhC defect laser, which is in contrast to the usual belief that PT symmetry related phenomena require considerable loss (and gain). The two categories I and II cover many other gain and loss configurations that we haven't discussed. For example, if the two cavities have the same loss and the applied gains maintain a fixed ratio α ≠ 1 in them (i.e., γ in cavity a and αγ in cavity b), lasing in its PT -broken phase (which requires κ 2 /g 2 > 4α/(α − 1) 2 ) falls into category II with a weaker pulling effect, and lasing in its PT -symmetric phase falls into category I. In all these cases, the lasing intensity is a monotonic function of the applied gain (at least in the single-mode regime), which is different from laser self-termination (LST) 33-35 that requires a variable α (i.e., two independent pumps) as the applied gain increases. In fact, since the net gains (losses) in the two cavities discussed here also vary above threshold if the gain is not clamped immediately, we show below that LST can take place with a fixed α also (i.e., a single asymmetric pump) when the two cavities have different losses (κ b = βκ a ). Our PT -symmetric laser can have at most three thresholds, one in the PT -symmetric phase given by γ κ = , or equivalently, s a TH ( ) and two in the PT -broken phase given by γ κ δ = ± ∆ − − g ( ) 2 2 , or equivalently, where the radicand is positive. It is easy to check that we recover Equations (7) and (20) from Equation (29) after taking β = 1,α → 0 and α = 1, respectively. Similarly, we recover Equations (12) and (16) from Equation (28) (1) of the lasing mode in the two cavities. Note that they are very different before the laser is terminated, indicating lasing in the PT -broken phase, and they are very similar after the second onset threshold, indicating lasing in the PT -symmetric phase. Here g/κ a = 1/3.6, and κ b /κ a = β = 4. ω 0 /κ is 10 4 in cavity a and slightly lower (by 10 −3 ) in cavity b. The slight detuning suppresses mode 2 beyond the threshold γ s TH ( ) in the PT -symmetric phase, similar to the situation in Fig. 3. It also causes the slight imbalance between the PT -symmetric phase (γ s TH ( ) ) as the applied gain increases (see Fig. 8(b) for example). In other words, LST requires γ s TH ( ) given by Equation (28) to be higher than both γ ± TH ( ) given by Equation (29), which leads to with the constraint 1 < α < β or β < α < 1. Following these criteria, we show one example of LST with α = 3, β = 4, and κ a /g = 3.6 in Fig. 8 using the coupled mode theory. Finally, we note that the single-mode lasing demonstrated in ref. 18 is essentially configuration 2a we have discussed: the microring was patterned with strong and weak loss regions, and the gain is applied uniformly to the ring. Although in ref. 18 the PT transition would be "thresholdless" due to a Hermitican degeneracy, which was first reported by Ge and Stone in ref. 
17 and later extended to a flat-band system 47 , the laser remains in the PT -broken phase and does not differentiate whether the PT transition originates from an EP or a Hermitian degeneracy. We do note that the microring structure used in ref. 18 does not allow lasing in the PT -symmetric phase (configuration 2b), which does not actually exist when the non-Hermiticity of the system is nonzero.
Explainable Security in SDN-Based IoT Networks The significant advances in wireless networks in the past decade have made a variety of Internet of Things (IoT) use cases possible, greatly facilitating many operations in our daily lives. IoT is only expected to grow with 5G and beyond networks, which will primarily rely on software-defined networking (SDN) and network functions virtualization for achieving the promised quality of service. The prevalence of IoT and the large attack surface that it has created calls for SDN-based intelligent security solutions that achieve real-time, automated intrusion detection and mitigation. In this paper, we propose a real-time intrusion detection and mitigation solution for SDN, which aims to provide autonomous security in the high-traffic IoT networks of the 5G and beyond era, while achieving a high degree of interpretability by human experts. The proposed approach is built upon automated flow feature extraction and classification of flows while using random forest classifiers at the SDN application layer. We present an SDN-specific dataset that we generated for IoT and provide results on the accuracy of intrusion detection in addition to performance results in the presence and absence of our proposed security mechanism. The experimental results demonstrate that the proposed security approach is promising for achieving real-time, highly accurate detection and mitigation of attacks in SDN-managed IoT networks. Introduction The number of connected devices and Internet of Things (IoT) use cases have been continuously increasing, thanks to the developments in the fields of mobile networks, big data, and cloud computing. IoT use cases that significantly facilitate our daily lives include smart homes, autonomous cars, security systems, smart cities, and remote healthcare, among many others. When the large volumes of data generated by IoT are considered, it is obvious that the quality of service (QoS) requirements of these various use cases will not be satisfiable by legacy wireless networks. 5G and beyond networks that rely on software-defined networking (SDN) and network function virtualization (NFV) for resource management will be a key enabler for the future's ubiquitous IoT. IoT has already resulted in a large attack surface, due to limited processing power and battery life, as well as the lack of security standards, which make a large number of IoT devices incapable of implementing even basic security mechanisms, like encryption. New use cases, protocols, and technologies add new attack surfaces to the existing ones. It is of utmost importance to develop intrusion detection and prevention systems for IoT networks that address new and existing vulnerabilities in order to ensure the healthy operation of these systems. It is also essential to ensure the compliance of the developed security techniques with SDN-based network architectures and benefit from the network programmability that is provided by SDN to ensure fast detection and mitigation of attacks, as well as a quick reconfiguration of the networks in order to prevent QoS degradation and failures. Machine learning (ML) techniques have become popular tools for network intrusion detection tasks in the past two decades, especially due to the increasing accuracy that is achieved by a variety of models, and the superiority that they have over rule-based systems in detecting previously unseen attacks. 
The developments in the field of deep learning have made them indispensable parts of any classification task, including intrusion detection. Although deep learning models have been shown to be quite successful in intrusion detection, they are usually used as blackboxes, and their decision-making processes are not readily explainable to human experts [1]. The explainability problem is especially important in the security domain [2] in order to correctly interpret the results produced by these models. Despite the importance of realistic network traffic data for effective model building, most of the existing research in IoT intrusion detection has used datasets that were generated for legacy networks without IoT traffic. This is mostly due to the lack of publicly available datasets that include IoT traffic, except the recently released Bot-IoT dataset [3]. To the best of our knowledge, there is no publicy available dataset specifically for SDN-based IoT environments. The network traffic characteristics of IoT and SDN are quite different from those of legacy networks; therefore, using models that were trained with legacy network data might lead to inaccurate classification results. Furthermore, it is crucial for intrusion detection systems to retrieve features in real time for effective attack detection and mitigation. Existing public datasets have been created by processing pcap files and there is no guarantee that all of the features that they include can be retrieved in real time. In an effort to address the abovementioned shortcomings of existing security approaches for SDN-based next generation mobile networks, this paper presents a real-time intrusion detection and mitigation solution for SDN, which aims to provide autonomous security in the high-traffic IoT networks of the 5G and beyond era, while achieving a high degree of interpretability by human experts. The proposed approach is built upon automated flow feature extraction and classification of flows using random forest classifiers at the SDN application layer. This allows for the detection of various classes of attacks and it takes appropriate actions by installing new flow rules. We present a SDN-specific dataset that we generated for an IoT environment and provide the results on the accuracy of intrusion detection as well as performance results in the presence and absence of our proposed security mechanism. The rest of this paper is organized as follows: Section 2 reviews related work in intrusion detection for SDN-based networks and existing ML datasets for network intrusion detection. Section 3 provides a brief background on SDN and classification using random forest. Section 4 describes our proposed end-to-end intrusion detection and mitigation approach for SDN-based networks. Section 5 describes our public intrusion detection dataset for SDN-based IoT. Section 6 provides a detailed performance evaluation of the proposed security approach with the generated SDN dataset. Section 7 concludes the paper with future work directions. Related Work Intrusion detection and mitigation in networks has become an ever more important topic of research with the increasing cyber security incidents, caused by the large attack surfaces that are created by IoT. Rathore and Park [4] proposed a fog-based semi-supervised learning approach for distributed attack detection in IoT networks. The authors used the NSL-KDD dataset and showed their distributed approach performed better than centralized solutions in terms of detection time and accuracy. 
Evmorfos et al. [5] proposed an architecture that uses Random Neural Networks and LSTM in order to detect SYN flooding attacks in IoT networks. The authors generated their dataset by creating a virtual network and recording the traffic into pcap files. Soe et al. [6] proposed a sequential attack detection architecture that uses three machine learning models for IoT networks. The authors used the N-BaIoT dataset and achieved 99% accuracy. Alqahtani et al. [7] proposed a genetic-based extreme gradient boosting (GXGBoost) model that uses Fisher-score in order to select features in IoT networks. The authors also used the N-BaIoT dataset and achieved 99.96% accuracy. Even though these approaches were shown to successfully detect attacks in IoT networks, their design was performed according to legacy network infrastructures that do not utilize SDN. SDN-based networks have important differences both in terms of operation and the packet flow features that can be extracted in real time, requiring compatible models to be built, as will be explained in Section 3.1. With the increasing adoption of SDN-based network architectures in the past decade, SDN security has become one of the centers of attention for the cyber security research community. The majority of the solutions that have been proposed for SDN-based networks so far have focused on techniques for the detection and mitigation of denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks. In [8], a semi-supervised model was used to detect DDoS attacks in SDN-based IoT networks. The model achieved above 96% accuracy on the UNB-ISCX dataset and the authors' own dataset, which only included UDP flooding attacks. In [9], an entropy-based solution was proposed for the detection and mitigation of DoS and DDoS attacks in software-defined IoT networks. The approach achieved high accuracy on the Bot-IoT dataset and the authors' dataset containing TCP SYN flooding attacks. Yin et al. [10] proposed using the cosine similarity of packet_in rates received by the controller and dropping packets if a predefined threshold is reached. Their approach only mitigated DDoS attacks. Ahmed and Kim [11] proposed an inter-domain information exchange approach that uses statistics collected from switches across different domains to mitigate DDoS attacks. Bhunia and Gurusamy [12] used Support Vector Machine (SVM) in order to detect and mitigate DoS attacks in SDN-based IoT networks. The authors created their own data; however, the dataset is not publicly available. Sharma et al. [13] proposed using deep belief networks to mitigate DDoS attacks in SDN-based cloud IoT. Bull et al. [14] used an SDN gateway to detect and block anomalous flows in IoT networks. Their approach managed to successfully detect and mitigate TCP and ICMP flooding attacks. In spite of the fact that most of these approaches have accomplished successful detection and mitigation, they only work against DoS and DDoS attacks. Other works have targeted coverage of additional attacks, but used datasets that are not specific to SDN for evaluation. Li et al. [15] proposed using the BAT algorithm for feature selection and then used the random forest algorithm on the KDD CUP'99 dataset, achieving 96% accuracy. In [16], the CART decision tree algorithm was proposed in order to detect anomalies in IoT networks using SDN. The authors used the CICIDS'2017 dataset and achieved a 99% detection rate. Dawoud et al.
[17] proposed an SDN-based framework for IoT that uses Restricted Boltzmann Machines to detect attacks. The authors achieved a higher detection rate than existing works on the KDD CUP'99 dataset. Al Hayajneh et al. [18] proposed a solution for detecting man-in-the-middle attacks against IoT in SDN. Their solution only works for IoT devices that use HTTP for communication. Shafi et al. [19] proposed a fog-assisted SDN-based intrusion detection system for IoT that uses Alternate Decision Tree. The authors used the UNSW-NB15 dataset and achieved high detection rates. Derhab et al. [20] proposed an intrusion detection system, which uses Random Subspace Learning, K-Nearest Neighbor and blockchain against attacks that target industrial control processes. The authors used the Industrial Control System Cyber attack dataset and demonstrated their solution achieves high accuracy. Work in explainable intrusion detection systems has been rather limited so far. One example is the work of Wang et al., who proposed an explainable machine learning framework for intrusion detection systems that are based on Shapley Additive Explanations [21]. The framework was evaluated on the NSL-KDD dataset and it achieved promising results. Network intrusion detection using ML techniques has been a popular approach of network security, especially for the past two decades, for which researchers have created a number of extensive network trace datasets. These datasets, even if they are old, are still in use today by security researchers as benchmarks. Among existing publicly available network intrusion detection datasets are the following: • KDD CUP'99 [22] dataset. However, it is not without some drawbacks. Firstly, the distribution of the records in the training and test sets are widely different, because the test set includes some attack types that are not in the training set [24]. Secondly, around 75% of the data in the training and test sets are duplicates [24], which could lead to biased classification models. Most importantly, the dataset was not generated in an IoT environment and it does not include SDN-specific features. • NSL-KDD [24] was created to improve the KDD CUP'99 dataset. Duplicate records were eliminated and the number of records was reduced. Also classes were balanced. Still, this dataset does not represent the behavior of current networks. • UNB-ISCX [25] was created by the Canadian Institute of Cybersecurity in 2012. Real network traces were analyzed to create realistic profiles. The dataset consists of seven days of network traffic containing three types of attacks: DDoS, brute force SSH, and infiltrating the network from inside. • CAIDA [26] contains anonymized network traces. Records were created by removing the payloads of the packets and anonymizing the headers. This dataset only contains DoS attacks and features are the header fields. Additional features using the header fields were not generated. • UNSW-NB15 [27] was created in 2015. The IXIA tool was used to generate the network traffic. UNSW-NB15 has 49 features and two of them are labels for binary and multi-class classification. The dataset consists of normal traffic and nine types of attack traffic, namely DoS, DDoS, fuzzing, backdoor, analysis, exploit, generic, worm, and shellcode. The main problem of the dataset is the lack of sufficiently many samples for some attack types. • CICIDS2017 [28] is another dataset that was created by the Canadian Institute of Cybersecurity. 
Realistic benign traffic was created using their B-Profile system. The dataset includes normal traffic and six types of attack traffic, namely DoS, botnet, port scanning, brute force, infiltration, and web attack. • Bot-IoT [3] was introduced in 2018. The most important feature of the dataset is that it includes IoT traffic, unlike most of the existing intrusion detection datasets. The dataset has 46 features and two of them are labels for binary and multi-class classification. The dataset consist of normal traffic and six different attack types, namely DoS, DDoS, service scanning, OS fingerprinting, data theft, and keylogging. The main problem of the dataset is the lack of sufficiently many samples for some attack types. The number of records for normal traffic is also low. Most of the existing work on intrusion detection systems for IoT and SDN environments used the datasets that are mentioned above. However, these datasets were not created in networks managed by SDN. Furthermore, these datasets do not contain IoT traffic, except for the BoT-IoT dataset. Most of the existing datasets were created by recording and processing pcap files with different tools. Therefore, an SDN controller may not be able to obtain all of the features in real time. To the best of our knowledge, there is no other publicly available SDN dataset that includes IoT traffic. Preliminaries This section provides an overview of SDN and the random forest classifier, which are key components of the proposed solution. Software-Defined Networks (SDN) Software-defined networking (SDN) emerged as a novel networking paradigm in the past decade, supporting the need for programmatically managing networks, the operational costs of which were increasing sharply with the widespread use and new technologies that are needed to accommodate various IoT use cases. SDN differs from traditional networks by separating the data and control planes, where routers/switches are now responsible for forwarding functionality, where routing decisions are taken by the controller (control plane). The SDN architecture mainly consists of three layers: applications, control, and infrastructure (data plane), as seen in Figure 1. All of the applications, such as load balancing and intrusion detection systems, run on the application layer and communication with the controller takes place through the north-bound API. Communication between the controller and switches takes place through the south-bound API, mainly using the OpenFlow [29] protocol. The logically centralized controller is responsible for managing the network. The controller maintains a global view of the network and installs forwarding rules, called "flow rules", into the corresponding switches based on the routing decisions it makes. Switches store flow rules in their flow tables and forward network packets based on matches with existing flow rules. Figure 2 shows the structure of a flow rule. It is mainly composed of three parts: match fields, counters, and actions. Unlike traditional networks that perform forwarding based on the destination addresses, match fields are determined by the configuration of the forwarding application and might be ingress port, VLAN ID, source and/or destination MAC addresses, IP addresses, and/or port numbers. Counters keep track of the duration of the flow and byte and packet counts that matched the flow. The action can be forwarding the packet to the specified port or dropping it, among others. 
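To make the match/counters/actions structure concrete, the fragment below sketches how a controller application might install such a flow rule. The paper does not name a controller framework, so the sketch uses the Ryu OpenFlow 1.3 API purely for illustration; the addresses, output port, priority, and timeout are hypothetical.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class FlowRuleExample(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match fields: IPv4 source/destination addresses (hypothetical values).
        match = parser.OFPMatch(eth_type=0x0800,
                                ipv4_src='10.0.0.1', ipv4_dst='10.0.0.2')

        # Action: forward out of port 2; an empty action list would mean "drop".
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]

        # flow_mod installs the rule; idle_timeout evicts it after 10 s without matches,
        # and the switch keeps per-rule packet/byte counters that the controller can poll.
        mod = parser.OFPFlowMod(datapath=datapath, priority=10, match=match,
                                instructions=inst, idle_timeout=10)
        datapath.send_msg(mod)

An analogous flow_mod with an empty instruction/action list is the kind of rule an intrusion-mitigation application would push to drop a flow that has been classified as malicious.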
The header fields of incoming packets are compared with the match fields of flow rules in the switch. All of the required header fields of the packet should match with the match fields of a rule in the flow table for the packet to be forwarded immediately. Otherwise, the switch will buffer the packet and send what is called a packet_in message, containing the header fields of the packet, to the controller. The controller then examines the packet_in message and generates a routing decision for the packet, which is sent back to the switch in a packet_out message. The necessary action is taken for the packet and the corresponding flow rules are installed into the switches by sending flow_mod messages. Even when the controller decides to install flow rules into the corresponding switches, a packet_out message is always sent before the flow_mod message. Therefore, unlike in traditional networks, the statistics of the first packet that triggered flow rule installation cannot be seen in the installed flow rule. For TCP connections, statistics of the SYN and SYN ACK packets are lost, because they are the first packets sent from source to destination and from destination to source, respectively. During DoS and DDoS attacks with spoofed addresses, all of the incoming packets may have different source addresses; therefore, every incoming packet from the attacker may trigger a new flow rule installation. SDN in 5G Networks While early adoptions of SDN mostly took place in wired enterprise networks, its flexibility, programmability, speed, and cost advantages have recently made it a promising tool for other networks, including wireless sensor networks (WSNs) [30] and next generation wireless networking infrastructures. SDN will be one of the greatest enablers of 5G and beyond networks by providing the network virtualization capabilities that are needed to remotely and dynamically manage the networks. The fast failover and autonomous management capabilities to be achieved with SDN applications will help meet the high bandwidth and low delay requirements of 5G networks, enabling them to support a variety of IoT use cases. SDN, together with network functions virtualization (NFV), will especially form the basis of network slicing in 5G core and radio access networks, which will be a significant enabler for operators to efficiently utilize their infrastructure in order to provide the required quality of service and security guarantees to their customers [31]. A number of SDN-based architectures for 5G networks have been proposed [32]. One of the early proposals is SoftAir by Akyildiz et al. [33], where the data plane is a programmable network forwarding infrastructure that consists of a software-defined core network (SD-CN) and a software-defined radio access network (SD-RAN). While SD-RAN contains software-defined base stations, including small cells (microcells, femtocells, and picocells) in addition to traditional macro cells, SD-CN contains software-defined switches that form the 5G core network, as seen in Figure 3. User equipment and other devices are connected to the software-defined base stations or wireless access points, which are connected to the software-defined core network through backhaul links. As proposed in SoftAir, SD-CN and SD-RAN can both use OpenFlow as the southbound API, which provides a uniform interface to the controller for routing traffic from the base stations over optimal paths in the core network. 
This architecture enables the application of many of the same principles of network control from wired SDN to SDN-based 5G networks. One of the biggest promises of, and reasons for, the introduction of SDN is the provisioning of improved security in the network through global visibility and fast, automated reconfiguration of flow rules. This will enable real-time detection and mitigation of malicious traffic in the network. As seen in Figure 1, an SDN can be attacked at various surfaces (the attack surface is demonstrated by red arrows pointing out from the devices, applications, or interfaces that could be attacked in SDN). These attacks can target not only the data plane devices, but also the controller and applications, causing disruptions in network operation. In this work, we focus on attacks that affect the data and control planes and propose an intrusion detection and mitigation solution that provides automated responses to detected attacks while using the highly interpretable ML algorithms described in the next section. Random Forest Classifier Random forest (RF) is a machine learning model that constructs an ensemble of decision trees, named a forest, such that each decision tree is constructed using an independently and identically distributed random vector [34]. For classifying a particular data instance, a random forest uses the outputs of all trees in the forest to pick the majority decision. The utilization of the outputs of multiple trees makes the classifier more robust than a single decision tree, which suffers from the overfitting problem in many cases. At a high level, the RF algorithm works as follows:
1. The complete training set S, consisting of n data instances with class labels {c_i, i = 1, ..., n} from a set of classes C, is split into k random subsets $S_1, S_2, \ldots, S_k$ using bootstrap sampling.
2. A random feature vector θ_i is created and used to build a decision tree from each S_i. All {θ_i, i = 1, 2, ..., k} are independent and identically distributed.
3. Each tree r(S_i, θ_i) is grown without pruning to form the forest R.
4. The classification of a test data instance x is calculated as $H(x) = \arg\max_{c \in C} \sum_{i=1}^{k} I(h_i(x) = c)$, where I is the indicator function and h_i(x) is the result of classification by r(S_i, θ_i).
Figure 4 shows a simplified view of classification by random forests. Here, the child branches of the root show the different trees in the random forest. When a data item X needs to be classified, its probability of belonging to class c is calculated as the sum of the class probabilities for each decision tree θ_i (θ_1 ... θ_n in the figure), averaged over all trees. The item is then assigned to the class with the highest probability. The nodes in each decision tree use binary splits based on a specific feature value (e.g., is the number of bytes ≤ 118?), and the branches of the tree are followed down to the leaves by checking the values of those features in data item X, as depicted by the red arrows pointing towards child nodes from the internal nodes of the trees. Information gain is a commonly used metric for deciding the splitting criteria for the various nodes in the decision trees. The information gain from the split of a node S based on a random variable a is calculated as $IG(S, a) = E(S) - E(S \mid a)$. Here, E(S) is the entropy of the parent node before the split and E(S|a) is the weighted average of the entropies of the child nodes after the split. 
E(S) is calculated as $E(S) = -\sum_{c_i \in C} p(c_i) \log_2 p(c_i)$, where p(c_i) is the probability of a data instance in node S having class label c_i. Figure 5 shows a partial view of a decision tree from the random forest constructed for a sample network intrusion detection task on the IoT network dataset that we have generated. As seen in the figure, the entropy of the nodes decreases while approaching the leaves, as nodes that are higher up in the tree are split based on specific thresholds of feature values discovered by the algorithm. A random forest contains a multitude of such decision trees, each constructed from a different, randomly sampled subset of the whole training data. RF is among the ML algorithms with the highest degree of explainability/interpretability, due to its reliance on decision trees, which construct models based on splits of the training data along feature values that are easily readable by human domain experts. The effectiveness of RF for a variety of classification tasks has been shown in many studies [35]. Despite the success of deep learning algorithms in particular in various classification tasks in recent years, RF continues to outperform many state-of-the-art ML algorithms, especially in tasks that involve structured data. Proposed Security Approach The proposed intrusion detection and mitigation approach, the overall operation of which is depicted in Figure 6, provides security in SDN-based networks through automated, intelligent analysis of network flows, followed by mitigation actions taken in accordance with the decision of the intrusion detection component. The end-to-end intrusion detection and mitigation process relies on three main applications in the application layer, namely the Feature Creator, the RF classifier, and the Attack Mitigator. The Feature Creator collects network flows from the switches at regular intervals and calculates the values of the features that are required by the RF classifier for each flow. The RF classifier applies its pre-built intrusion detection model to the flow instance and passes the result to the Attack Mitigator. The Attack Mitigator then determines the action to take based on the classification result and installs flow rules into the corresponding switches to mitigate the attack if necessary. Algorithm 1 summarizes the end-to-end operation of the proposed security solution. The controller periodically collects network flow entries from the switches, which are retrieved by the Feature Creator at regular intervals. Upon retrieval, features are created for every flow, as summarized in Algorithm 2. Common features for every flow, e.g., the average duration of flows and the total number of packets in a transaction, are generated in an initial pass over every flow entry in the switch using Algorithm 3. Subsequently, flow-specific features, e.g., the duration of a flow and its source-to-destination packet count, are retrieved in a second pass over all of the flow entries. While looping over flow entries, the created feature vector for a flow is immediately sent to the RF classifier, without waiting for feature creation to finish for the other flow entries. The Feature Creator also retrieves flow match fields, such as the source IP and MAC addresses, and the physical port of the switch where the packet is coming from; the Attack Mitigator uses these fields. Algorithm 2 Feature Creation The common features include Mean, Stddev, Sum, TnP_PSrcIP, TnP_PDstIP, and TnP_Per_Dport. Their detailed descriptions can be found in Table 1. 
Hash sets are used to store unique source IPs, destination IPs, and destination port numbers. A list is used to store the duration of flow entries. While looping over the flow entries, packet counts of the flow entries are added to the total packet count. The duration of the flow entries are added to the duration list. Source IPs, destination IPs, and destination port numbers are added to the corresponding hash sets. Byte counts of the flows are added to a hash map. Keys of this map are made of source IP, source port, destination IP, and destination port for TCP and UDP packets. For ICMP packets, the keys are made of source IP and destination IP, since they do not have port numbers. This map is later used for retrieving reverse flow statistics. After looping over all of the flow entries, common features are calculated using the total packet count, hash sets, and duration list. Flow-specific features, i.e., Dur, Spkts, Sbytes, and Dbytes, are calculated within the second pass over the flow entries. Duration, packet count, and byte count of the flow entries are extracted. The hash map that was created in the common feature creation is used to retrieve destination-to-source byte count. After creating the feature vector for a flow entry, it is sent for classification without waiting for the creation of other feature vectors. Algorithm 3 Calculation of common statistics and features The RF classifier, which works as explained in Section 3, gets feature vectors from the Feature Creator one-by-one and classifies them using its pre-built intrusion detection model. If the outcome of the classification is any attack type, the Attack Mitigator is sent the detected attack type and source identifiers, i.e., source IP, source MAC address, and the physical switch port that the packet is coming from. The used machine learning model should be updated dynamically by the inclusion of new training data for existing attack types or adding new attack types as they are discovered. The RF model built is a multi-class classification model that is formed using training data that consists of various attack types in addition to normal traffic. We advocate using multi-class attack classification rather than binary classification/anomaly detection, as the former provides more informed decision-making capability in terms of the action to take/the specific flow rule to install. As discussed previously, the RF classifier creates results that are highly explainable to human experts, as opposed to blackbox ML models, whose results are not easily interpretable. For instance, when a specific flow is classified as a DoS attack, it is possible to trace the trees in the forest that voted as a DoS and which feature values caused them to make that decision. This provides the ability for a human network expert to judge the quality of the model, provide recommendations, update the model, or take additional actions if necessary. The Attack Mitigator is informed by the RF classifier upon attack detection. This component creates a flow rule update recommendation, depending on the attack type. The created rule update is sent to the controller, which installs the flow entries into the corresponding switches. The installed flow entries have higher priority than normal flow entries in the switch. The corresponding action can be dropping the matching packets or redirecting matching flows to a honeypot. Packet blocking and redirection can be based on the source MAC address, source IP, or the physical switch port. 
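As a concrete illustration of the two-pass feature creation described above, the sketch below derives a few common features (mean/stddev/sum of flow durations and per-entity packet totals) and per-flow features (Dur, Spkts, Sbytes, Dbytes) from a list of flow-entry dictionaries. It is a minimal, hypothetical reconstruction of the spirit of Algorithms 2 and 3, not the authors' actual ONOS application; the field names are placeholders, and the per-source-IP/per-destination-IP/per-port normalizations are guesses based on the feature names in Table 1.

```python
from statistics import mean, pstdev

def reverse_key(f):
    """Key of the reverse flow (dst->src), used to look up reverse-flow statistics."""
    return (f["dst_ip"], f["dst_port"], f["src_ip"], f["src_port"])

def create_features(flow_entries):
    """Two passes over the switch's flow entries, mirroring Algorithms 2 and 3."""
    # Pass 1: common statistics shared by all feature vectors.
    durations = [f["duration"] for f in flow_entries]
    total_packets = sum(f["packet_count"] for f in flow_entries)
    src_ips = {f["src_ip"] for f in flow_entries}
    dst_ips = {f["dst_ip"] for f in flow_entries}
    dst_ports = {f["dst_port"] for f in flow_entries}
    # Byte counts keyed by (src_ip, src_port, dst_ip, dst_port) to find reverse flows.
    bytes_by_key = {
        (f["src_ip"], f["src_port"], f["dst_ip"], f["dst_port"]): f["byte_count"]
        for f in flow_entries
    }
    common = {
        "Mean": mean(durations) if durations else 0.0,
        "Stddev": pstdev(durations) if durations else 0.0,
        "Sum": sum(durations),
        "TnP_PSrcIP": total_packets / max(len(src_ips), 1),
        "TnP_PDstIP": total_packets / max(len(dst_ips), 1),
        "TnP_Per_Dport": total_packets / max(len(dst_ports), 1),
    }
    # Pass 2: per-flow features; each vector could be sent to the classifier immediately.
    for f in flow_entries:
        vector = dict(common)
        vector.update({
            "Dur": f["duration"],
            "Spkts": f["packet_count"],
            "Sbytes": f["byte_count"],
            "Dbytes": bytes_by_key.get(reverse_key(f), 0),
        })
        yield f, vector

flows = [
    {"src_ip": "10.0.0.9", "src_port": 5050, "dst_ip": "10.0.0.5", "dst_port": 80,
     "duration": 2.0, "packet_count": 12, "byte_count": 900},
    {"src_ip": "10.0.0.5", "src_port": 80, "dst_ip": "10.0.0.9", "dst_port": 5050,
     "duration": 2.0, "packet_count": 10, "byte_count": 15000},
]
for _, vec in create_features(flows):
    print(vec)
```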
SDN Datasets In this section, we provide details of our SDN-based IoT network datasets that were generated based on the packet sending rates and packet sizes from an IoT dataset generated in a real testbed. All of our features are SDN-specific and they can be retrieved using an SDN application in real time. The accuracy of the RF classifier was evaluated with the two publicly available SDN datasets we generated and compared with the accuracies of state-of-the-art ML algorithms. Feature selection was used to identify important features for detecting attacks in SDN-based IoT networks. The performance of the model was also evaluated under network changes. We have created 2 SDN datasets [36] and made them available online [37]. Their only difference is the number of IoT devices. In IoT networks, the number of IoT devices may change over time. Our second dataset has more IoT devices and the number of active IoT devices also changed during the traffic recording. The second dataset enables us to evaluate the performance of the models trained with the first dataset. That way we can have an idea about how our model will be affected when the number of IoT devices changes and how often we should update our model. Our datasets contain normal traffic and five different attack types, namely DoS, DDoS, port scanning, OS fingerprinting, and fuzzing. Testbed Overview We used a similar network topology to the Bot-IoT dataset [3]. Mininet [38] was used to virtualize our network and an ONOS controller [39] managed the network. An Open vSwitch [40] was used to connect the controller and simulated devices. Benign Traffic Similar packet sizes and sending rates to the BoT-IoT dataset [3] were used for benign traffic. Our IoT devices simulated IoT services that send small amounts of data to a server periodically, e.g., a smart fridge or weather station. IoT devices sent one or two packets to the server at a time using TCP. We used five simulated IoT devices in our first dataset. In our second dataset, we initially had 10 IoT devices and two of them were turned off after some time during every recording. Two benign hosts in our network sent large amounts of data to the server. One of them used UDP and the other one used TCP. We generated and recorded the benign traffic both with and without the presence of malicious traffic. Malicious Traffic Up to four attacker hosts performed different types of attacks targeting the server or IoT devices, depending on the attack type. We performed five types of attacks, namely DoS, DDoS, port scanning, OS fingerprinting, and fuzzing. • DoS: the Hping3 tool [41] was used for DoS attacks. One malicious host launched the attacks with and without spoofed IP addresses targeting the server or one of the IoT devices. Using spoofed IP addresses causes every attack packet to trigger a new flow rule installation and wastes resources of both the controller and switches. We performed both SYN flood and UDP flood attacks. All of the combinations of four packet sending rates (4000, 6000, 8000, and 10,000 packets per second) and payloads (0, 100, 500, and 1000 bytes) were used. • DDoS: all of the four malicious hosts participated in this attack. The same scenarios as DoS were performed. • Port scanning: the Nmap tool [42] was used for port scanning attacks. One malicious host launched the attack targeting the server or one of the IoT devices. Nmap has two options for port scanning: by default, the first 1024 ports are scanned and users can also specify the range of ports to scan. 
We scanned the first 1024 ports and all of the port numbers (0 to 65,535). • OS fingerprinting: Nmap was used for the OS fingerprinting attack. During this attack, the attacker first performs a simple port scanning to detect open ports. Subsequently, the attacker uses these ports to proceed with the attack. Therefore, we used one malicious host to launch the attack only targeting the server. • Fuzzing: Boofuzz [43] was used for fuzzing attacks. The aim of this attack is to detect vulnerabilities of the target by sending random data until the target crashes. We performed both ftp fuzzing and http fuzzing attacks using one of the malicious hosts and targeted the server. Our fuzzers know the expected input format for http and ftp connections and generated random values for input fields. For example, for http fuzzing, http methods like get, head, post, put, delete, connect, options, and trace were fuzzed with random request URI and http version fields. Flow Collection and Feature Generation Our goal was to create a dataset that can be used in the real-time detection and mitigation of malicious traffic. Therefore, unlike most of the existing datasets that are generated by recording and processing pcap files, we used an SDN application to retrieve flow entries and create our features. We configured ONOS to pull flow entries from the switches every second. Our SDN application periodically retrieved flow entries from the ONOS controller and generated our features for each flow in the switch. The SDN application waited for one second after every feature generation period and then continued to create features by retrieving new flow rules from ONOS. Our datasets contain 33 features and Table 2 shows our features and their descriptions. Attack and category features are our labels. The attack label can be used for binary classification and the category label can be used for multi-class classification. Every match field of the incoming packet must match with a flow rule; otherwise, a new flow rule is installed, as mentioned in the SDN section. Performing DoS and DDoS attacks using spoofed IP addresses triggered the installation of lots of duplicate flows into the switch. Therefore, we limited the number of recorded packets to 100 at each iteration of feature generation for these attack scenarios. Our first SDN dataset has 27.9 million records and the second one has 30.2 million records. Tables 3 and 4 show the distributions of records in our datasets. Feature retrieval time is very important, as there is no point in detecting attacks after they are over or have caused severe damage. Features should be retrieved quickly for efficient attack detection and prevention. Additionally, the feature retrieval process should not consume a lot of controller resources, otherwise network performance would be adversely affected. Figure 7 shows the flow entry collection and feature creation time up to 1000 flow entries in the switch, which corresponds to the normal traffic. When the switch had 1000 flow entries, flow collection and feature creation time for all of the flows was around 22.8 milliseconds, which is quite low. Figure 8 shows the flow entry collection and feature creation time up to 20,000 flow entries in the switch, which corresponds to the attack traffic. Even though there were 20,000 flow entries in the switch, our SDN application collected flow entries and created features for all of the flows in 411.3 milliseconds, which does not cause much overhead for our controller. 
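A polling loop of the kind described above could be sketched as follows: it queries the controller's REST interface for flow entries once per second and times the feature-creation work. The endpoint path and the onos/rocks credentials shown are assumptions based on common ONOS defaults, not details taken from the paper, and `create_features` stands in for the application's actual feature-creation code.

```python
import time
import requests  # third-party HTTP client

ONOS_URL = "http://127.0.0.1:8181/onos/v1/flows"   # assumed default ONOS REST endpoint
AUTH = ("onos", "rocks")                           # assumed default credentials

def poll_flows_forever(create_features, interval=1.0):
    """Pull flow entries from the controller every `interval` seconds and time the work."""
    while True:
        start = time.perf_counter()
        resp = requests.get(ONOS_URL, auth=AUTH, timeout=5)
        resp.raise_for_status()
        flow_entries = resp.json().get("flows", [])
        vectors = list(create_features(flow_entries))   # placeholder feature creation
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{len(flow_entries)} flows -> {len(vectors)} vectors in {elapsed_ms:.1f} ms")
        time.sleep(interval)

# Example invocation (runs indefinitely):
# poll_flows_forever(lambda flows: [f for f in flows])
```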
We observe that the feature retrieval time increases linearly with the number of flow entries in the switch. Pre-Processing Our datasets contain millions of records, and the record counts of the different categories are not the same, as shown in Tables 3 and 4. Processing millions of data records is not feasible and it may lead to overfitting. Additionally, imbalanced datasets might cause biased models. Around 74% of the records belong to the port scanning attack. Therefore, we wanted to take an equal number of records from every category for model training. We also wanted to take an equal number of records from each recording of a category, because, depending on the configuration and target, the record counts differed a lot. For example, one of the DoS attacks without spoofing had the lowest record count of 3251, while DoS attacks with spoofing had up to 137,000 records. We recorded DoS traffic 12 times, so the maximum number of records that we could get was 39,012. Therefore, we took 35,000 records from every attack category, taken equally from every scenario of that attack type, which resulted in a total of 175,000 attack records. For multi-class classification, we also took 35,000 normal records. Normal records were taken equally from the DoS, DDoS, port scanning, OS fingerprinting, fuzzing, and attack-free traffic files, 5834 each. The constructed dataset had 35,000 records from every category, with a total of 210,000 records. We split this dataset into training and test sets. The training set had 25,000 records from every category, with a total of 150,000 records. The test set had 10,000 records from every category, with a total of 60,000 records. The same procedure was followed for both of the datasets, and training and test sets were created for both. Multi-Class Classification The constructed training and test sets were used to evaluate the performance of different machine learning algorithms. We used all of the features except host identifiers: srcMac, dstMac, srcIP, dstIP, srcPort, dstPort, last_seen, and proto_number. Different machine learning algorithms were trained and tested using the first SDN dataset's training and test sets. Figure 9 shows the results of multi-class classification for the different algorithms: naive Bayes (NB), logistic regression (LR), k-nearest neighbour (K-NN), support vector machines (SVM), kernel support vector machines (K-SVM), random forest (RF), and XGBoost (XGB). RF and XGB performed better than the other algorithms. Our goal in creating two datasets was to perform tests on the second dataset using the models trained with the first dataset and see how the system would be affected by network changes. We applied feature selection based on the feature importance attribute of the random forest and XGBoost algorithms. The feature importance attribute returns the impurity-based importance of each feature in the training set. We used the features that had higher feature importance than the average of the feature importance values. We also added one feature whose importance was close to the average and ended up with 10 features. Table 5 shows the selected features and their descriptions. The overall F1 score of the RF model trained with the selected features was 97.86% on the first dataset. Performance was still close to that of the model trained with all of the training data and 24 features, even though we reduced both the training data and the number of features by more than half. 
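The feature-selection step described above can be reproduced with scikit-learn's impurity-based feature importances; the sketch below trains a random forest on a class-balanced training table and keeps the features whose importance exceeds the average. The CSV paths and column handling are placeholders, and the hyperparameters are scikit-learn defaults rather than the ones listed in Table 8, so this is an illustrative sketch rather than the exact experimental pipeline.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

# Placeholder paths; the real datasets use the features listed in Table 2.
train = pd.read_csv("sdn_train.csv")
test = pd.read_csv("sdn_test.csv")
drop_cols = ["srcMac", "dstMac", "srcIP", "dstIP", "srcPort", "dstPort",
             "last_seen", "proto_number", "attack", "category"]

X_train, y_train = train.drop(columns=drop_cols), train["category"]
X_test, y_test = test.drop(columns=drop_cols), test["category"]

rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)

# Keep features whose impurity-based importance is above the average importance.
importances = pd.Series(rf.feature_importances_, index=X_train.columns)
selected = importances[importances > importances.mean()].index.tolist()

rf_small = RandomForestClassifier(n_estimators=100, random_state=42)
rf_small.fit(X_train[selected], y_train)
pred = rf_small.predict(X_test[selected])
print("selected features:", selected)
print("macro F1:", f1_score(y_test, pred, average="macro"))
```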
Using fewer features also allows our SDN application to retrieve features much more quickly. Table 6 shows the performance metrics for all classes in the first dataset. Normal traffic had the lowest F1 score, which is not desirable, as we do not want to classify normal packets as malicious packets and block legitimate traffic. Five out of six of the normal records in our test set belonged to the normal traffic recorded during attack scenarios, and distinguishing normal traffic from attack traffic during an attack is not an easy task. Therefore, we must be sure before taking action upon detecting an attack. In the absence of any attack traffic, our model's accuracy in detecting normal traffic was 99.67%, which is quite high. The overall F1 score on the second dataset was 84.48% using the initially selected 10 features. Two features were replaced and the overall performance increased to 91% using the features listed in Table 7 and the hyperparameters listed in Table 8. Our model's performance for normal traffic on the second dataset was similar to its performance on the first dataset. However, the overall performance was lower than on the first dataset because, as expected, our model classified some of the DoS attacks as DDoS attacks on the second dataset, due to the increased number of IoT devices. Because the mitigation action taken is the same, network performance is not affected. F1 scores for every class in the first dataset are shown in Table 9; the results are similar to those obtained with the initially selected features. On the other hand, the features in Table 7 performed well on both the first and second datasets. Experimental Evaluation In this section, we provide an experimental evaluation of the proposed security approach using an SDN-managed IoT network simulation environment. We performed experiments to evaluate the end-to-end intrusion detection and mitigation model in terms of its effect on the network parameters during DoS attacks of different types. The experiments were conducted on a machine with an Intel Core i7-8750H @ 2.20 GHz processor and 16 GB RAM. Experiment Setup For the deployment of the proposed intrusion detection and mitigation system, the testbed setup in Figure 10 was used. Mininet was used to create a virtual network. The maximum bandwidth of each link in the network was limited to 100 Mb per second. An ONOS controller managed the network. Simulated IoT devices, benign hosts, and the server transmitted data, as explained in the SDN Datasets section. Some attack types affect the performance of the network as well as the target: DoS and DDoS attacks decrease the available bandwidth and consume resources of the controller and switches. The other attack types in our dataset do not have a significant effect on the network; their purpose is to find vulnerabilities of the target and crash it if possible. Therefore, we focused on DoS and DDoS attacks in our network performance experiments. One malicious host was used to perform DoS attacks, and the effects of the attacks on the network were measured. The ONOS controller was configured to pull flow entries from the switch every second. 
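A testbed along the lines just described can be sketched with the Mininet Python API; the snippet below builds a single-switch topology with bandwidth-limited links and points it at a remote ONOS controller. The host counts, names, and controller address are illustrative assumptions rather than the exact topology used in the paper, and Mininet itself must be run with root privileges.

```python
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.link import TCLink

def build_testbed():
    # Remote ONOS controller assumed to listen on the default OpenFlow port 6653.
    net = Mininet(controller=RemoteController, switch=OVSSwitch, link=TCLink)
    net.addController("c0", ip="127.0.0.1", port=6653)
    s1 = net.addSwitch("s1")
    server = net.addHost("server")
    hosts = [net.addHost(f"iot{i}") for i in range(1, 6)]      # simulated IoT devices
    hosts += [net.addHost("benign1"), net.addHost("benign2")]  # bulk TCP/UDP senders
    hosts += [net.addHost(f"attacker{i}") for i in range(1, 5)]
    for h in hosts + [server]:
        net.addLink(h, s1, bw=100)  # 100 Mb/s links, as in the experiments
    return net

if __name__ == "__main__":
    net = build_testbed()
    net.start()
    net.pingAll()
    net.stop()
```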
Our SDN application retrieved flow entries from the controller and generated the 10 best features that were required by our random forest classifier for each flow entry. The SDN application waited for one second after creating features for all of the flow entries in the switch and then continued to create features by retrieving new flow entries from the controller. Our model classified every flow entry. When our application detected the third attack flow coming from a switch port, the mitigation process started. If an attack was detected, then the attacker was blocked based on the port through which it was connected to the switch through installation of a new flow rule. The installed flow rule had a priority of 1000, which is higher than the default flow rule priority (10). Figure 11 shows a flow rule installed by our application to drop the packets coming from port 1, and Figure 12 shows a flow rule for a packet classified as normal. Here, "Selector" shows the packet match fields and their values. The "Immediate" field of the "treatment" shows the action upon matched packets. If "OUTPUT" is specified, then packets are forwarded to the specified switch port. "NOACTION" means dropping the packet. Network Performance Results In the following subsections, performance measurements of our intrusion detection and mitigation system are reported. Time Measurements We measured the feature retrieval time and also feature retrieval and classification time using our SDN application. The counter was started before our application pulled flow entries from the switch and stopped when feature calculation and classification was over for all of the flow entries in the switch. Figures 13 and 14 show the feature retrieval times of our 10 best features used by the RF model. Figure 13 corresponds to the network without presence of attacks. Figure 14 corresponds to the network under a DoS attack. The feature retrieval time of all features for 20,000 flow entries was 411 milliseconds, whereas it was 327 milliseconds for retrieving the 10 best features. When our application calculates the common features, it also creates a hash map that uses source IP, source port, destination IP, and destination port as the key and byte count, packet count, and packet rate as values. This map is later used for obtaining reverse flow statistics, i.e., destination-to-source packet count, byte count, and packet rate. Converting source and destination IP to a string for the key of the map takes a long time. Our model uses destination-to-source byte count (Dbytes) as a feature, as shown in Table 7. This is the reason why improvement on the feature retrieval time was not much. Figure 15 shows the feature retrieval times of the 10 best features and classification time for up to 1000 flow entries in the switch. It is fairly low and it does not affect the performance of the network. Figure 16 shows the feature retrieval times of the 10 best features and classification time for up to 20,000 flow entries in the switch, which corresponds to the DoS attack with spoofed addresses. Feature retrieval and classification take around 900 milliseconds for 20,000 flow entries. However, the SDN application does not wait to finish classifying every flow entry in the switch before taking action. The attackers are blocked immediately when they reach the detection threshold. Therefore, most of the time, attacks are mitigated before a huge number of attack flows are installed into the switch. 
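The port-based blocking logic described above can be summarized as in the following sketch: a per-port counter of flows classified as attacks is kept, and once the third attack flow is seen on a port, a high-priority drop rule for that port is requested. The threshold of three flows and the priority of 1000 follow the description in the text; `install_drop_rule` is only a stand-in for the actual call to the controller, and the device identifier format is illustrative.

```python
from collections import defaultdict

ATTACK_THRESHOLD = 3      # block after the third attack flow from a switch port
BLOCK_PRIORITY = 1000     # higher than the default flow rule priority (10)

attack_counts = defaultdict(int)
blocked = set()

def install_drop_rule(device_id, in_port):
    """Stand-in for the controller call that installs a drop rule matching in_port."""
    print(f"install drop rule: device={device_id} in_port={in_port} "
          f"priority={BLOCK_PRIORITY} action=NOACTION")

def handle_classification(device_id, in_port, predicted_class):
    """Called for every classified flow; mitigates once the threshold is reached."""
    if predicted_class == "Normal" or (device_id, in_port) in blocked:
        return
    attack_counts[(device_id, in_port)] += 1
    if attack_counts[(device_id, in_port)] >= ATTACK_THRESHOLD:
        install_drop_rule(device_id, in_port)
        blocked.add((device_id, in_port))

# Example: three DoS classifications on port 1 trigger the block.
for _ in range(3):
    handle_classification("of:0000000000000001", 1, "DoS")
```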
Our application calculated the feature vectors and classified them in nine milliseconds when the switch had 100 flow entries. This procedure takes 49 milliseconds when the switch has 1000 flow entries. We believe that these times are fairly low and they do not affect normal operation of the controller and the network. Under a DoS attack, feature vector calculation and classification take less than a second for all 20,000 flow entries. The flow entries are classified one-by-one and mitigation is performed immediately upon attack detection. Therefore, attacks are swiftly mitigated before they can cause serious damage to the target and the network. Bandwidth Measurements The maximum available bandwidth of all the links between the switch and hosts in our network were set to 100 Mb per second. The iPerf3 tool [44] was used to measure the available bandwidth between one of the IoT devices and the server with and without the presence of DoS attacks. One malicious host was used to perform a DoS attack targeting the server. The packet sending rate was 1000 packets per second and the payload of the packets was 1000 bytes. Attacks started after five seconds. Figure 17 shows the available bandwidth under TCP SYN flood attack without spoofing. All of the packets coming from the attacker passed over the same flow entry in the case of no spoofing. Therefore, it took three detection processes to exceed the threshold. Available bandwidth between one of the IoT devices and the server was around 95 Mb per second during the normal operation of the network. Without protection, the bandwidth decreased to 37 Mb per second. When our protection was active, the attacker was blocked based on the physical port after exceeding the threshold. Bandwidth returned back to normal after a couple of seconds. Figure 18 shows the available bandwidth under TCP SYN flood attack with spoofing. Every packet coming from the attacker that missed the flow rules in the switch caused a new flow rule installation. This process slowed the forwarding of malicious packets. The available bandwidth under attack decreased to 45 Mb per second. When our protection was active, bandwidth decreased to 82 Mb per second only for a second and then returned back to normal. The threshold was exceeded in the first detection process and the attacker was blocked immediately. Overall, the available bandwidth returned back to normal within one to three seconds, depending on the attack properties when our protection was active. Our protection quickly prevents attackers from causing damage to the target and networks. For the DoS attacks with spoofed addresses, attackers are detected within a second and the network recovers immediately. For the DoS attacks without spoofed addresses, our application waits until attackers reach the threshold (around three seconds) and then blocks the attackers. The network recovers in 1-2 s after detection. CPU Measurements DoS and DDoS attacks with spoofed addresses waste the resources of both the controller and switches. Spoofed match fields of the attack packets cause "table miss" events for each packet. Switches buffer these packets and send a packet_in message to the controller for every attack packet. The controller processes these packets and decides the route. The controller sends a packet_out message to the switch, which contains the determined action for the packet. The controller also installs flow rules for every attack packet. The ONOS controller and switch were running on the same machine in these experiments. 
Therefore, we used the Linux top command to measure the CPU usage of their processes. Our machine had six cores and two threads per core, which makes the maximum CPU utilization 1200%. One malicious host was used to perform the DoS attack with spoofed IP addresses. The packet sending rate was 1000 packets per second and the payload of the packets was 0 bytes. Attacks started after five seconds. Under normal conditions, the CPU utilization of the hosts was 0% most of the time; occasionally, the CPU utilization of the two benign hosts that sent data to the server was 5.9-6.1%. During DoS attacks, the CPU utilization of the attacker was around 30%. Figure 21 shows the CPU usage of the controller under the TCP SYN flood attack. CPU usage was around 2% for our normal network traffic. During the attack without protection, CPU utilization reached 500% within two seconds and stayed there for 7-8 s. Subsequently, it dropped to 400%. When our protection was active, CPU utilization reached 180% for a second and then dropped to around 35% for the next 15 s. Afterwards, CPU utilization returned to normal. Even though the attacker was blocked in the first detection process, attack flows were installed into the switch until the controller installed the block rule. Our classification model kept classifying them in the following detection processes until these flow entries timed out; this is why CPU utilization remained around 35% for a short time. Figure 22 shows the CPU usage of the switch under the TCP SYN flood attack. The CPU usage was around 1% for our normal network traffic. During the attack without protection, the CPU utilization of the switch process reached around 370%. When our protection was active, CPU utilization increased to 135% for a second and then returned to normal within a couple of seconds. The controller installed the flow block rule in the first detection process and all of the packets coming from the attacker were dropped by matching the installed flow rule. Figures 23 and 24 show the CPU utilization of the controller and the switch under the UDP flood attack. The results are similar to the TCP SYN flood attack experiments. Overall, without our protection, both the controller and switch consumed around 400% of the CPU each, 800% in total. The DoS attack wasted a huge part of the CPU of the switch and the controller, considering that the maximum CPU utilization of our machine was 1200%; one attacker caused the network to use 2/3 of its available CPU. When we performed the DDoS attack with four attackers, CPU utilization reached its maximum in a short time and the SDN controller crashed after some time. When our protection was active, attacks were detected within a second, the attackers were blocked immediately, and the switch's CPU utilization went back to normal, which is close to 2%. All of the packets coming from the attacker matched the block rule and were dropped. Our application kept classifying the attack rules remaining in the switch until they timed out; therefore, the CPU utilization of the controller was around 40% for 15-20 s after attack detection. Subsequently, CPU utilization returned to normal. 
Figure 24. Switch CPU under UDP flood. Conclusions In this work, we proposed an automated, intelligent intrusion detection and mitigation approach for SDN, which aims to provide explainable security in the IoT networks of the 5G era. The proposed approach relies on automated flow feature extraction and highly accurate classification of network flows by a random forest classifier in the SDN application layer, for detecting various classes of attacks and taking remedial action through the installation of new flow rules with high priority at the data plane. We presented our SDN-specific dataset modeling a realistic IoT environment, which includes flow data for common network attacks as well as normal traffic, and provided results on the accuracy of intrusion detection as well as performance results in the presence and absence of our proposed security mechanism. The proposed security approach is promising for achieving real-time, highly accurate detection and mitigation of attacks in SDN-managed networks, which will be in widespread use in the 5G and beyond era. We believe that the created dataset will also be a useful resource for further research in ML-based intrusion detection in SDN-managed IoT networks. Our future work will include an extension of the created dataset with more attack types and network topologies, as well as an evaluation of the proposed security approach with these additional network conditions. 
We also aim to integrate an interface for interpretability by human experts to further enhance the explainability of the security model. While the proposed approach has achieved successful results in the network environment it was trained for, applicability to different networks will require training the model actively through online learning. This will provide the capability not only to detect previously encountered attack types, but also to correctly classify newly arising attacks through continuous learning. While transfer learning approaches are successful to a certain extent, in many cases their performance cannot compete with that of ML models trained on datasets obtained in the real operating environment. Therefore, our future work will also focus on building a continuous-learning, human-in-the-loop system, which is expected to achieve high performance in a variety of network structures.
13,473.6
2020-12-01T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
A Bus Reservation System On Smartphone – The use of buses for traveling is a large and growing business worldwide. A bus reservation system therefore maintains records of each passenger who has reserved a seat for a journey, and the ticketing system includes maintenance of the schedule, fare, and details of each bus trip. This paper presents a web-based application that manages the scheduling of buses across all bus terminals of a transport company; it also analyses the basic needs of passengers, the design requirements of the transfer algorithm, and the factors that influence effective operation. The software can be used by any transport company, as it was not designed for a particular bus station/company. The scheduling of buses, which was the major addition to the bus booking application, was implemented using round robin for proper bus assignment in a way that improves operational efficiency. The system thus designed provides a scenario in which the customers/passengers and the bus company attain a win-win situation. It is an adaptation of the speed-up technique, and this aim is achieved through the use of object-oriented methodology. I. INTRODUCTION Transportation plays a crucial role in urban areas, as it significantly affects people's quality of life and is often the key means of access to education, work, and essential services. Moreover, over the years most people have preferred to travel by bus, since it is more convenient and generally more affordable than other means of road transport. The essential commitment of a transport organization is to satisfy its customers by reducing the amount of time each customer spends waiting. On this note, the use of reservation has become a subject of great interest. The purpose of booking is to organize an approach that improves the utilization of a fleet of vehicles [1], i.e., determining the route a vehicle should be allotted by considering the activities of those vehicles. Bus operating companies depend on some major factors, such as population, culture, climate, and economy. As mentioned earlier, buses are by far the most used means of road transport owing to their adaptability, high availability, and accessibility, which naturally motivated this paper [2]. Bus reservation done manually is a method of making a schedule for each bus in the different depots by drafting a plan that contains their daily route and sequence of trips. When a given route has few or no passengers, buses assigned to this route are reassigned to busier routes, even if that means another departure station of the transport company. Bus scheduling, also called bus booking, is one of the characteristic activities of the planning cycle in a transport company, in that it deals with the best possible assignment of bus tasks to serve the expected traveler demand [3]. A bus reservation system helps travelers know when they are scheduled to travel, which in turn helps to avoid considerable delays and complaints at the bus terminal. For example, if a traveler shows up at the bus stop and there is no bus, or there are no other passengers for his destination, this traveler may need to wait the entire day and still be unable to travel. 
Furthermore, a bus reservation system serves as an administrative manager to control and monitor the movement of buses and the daily assignment of buses to the various routes. It also serves as an advisory framework to help companies make sound judgments and maximize profit. Over the years, people/travelers have found it hard to engage with a bus reservation system for their journeys, either because they do not plan properly for their trip or because they are not technologically inclined. These pitfalls have been considered in this work; the system is therefore made easy to use, and companies/firms using the bus reservation system may have staff trained for this purpose to encourage its use. II. THEORETICAL BACKGROUND The theoretical background gives a synopsis of the technologies used in developing the "Bus Reservation System" and the general concept of the research topic as seen by other researchers. The technology was chosen in order to present a more user-friendly system. The site developed comprises various web contents written in HTML, PHP, and JAVASCRIPT, with a MYSQL server used for the development of the system's database. Scheduling Algorithm Scheduling is the act of following a schedule, while a schedule is an outline of things to be done and the times when they will be done. The concept of scheduling is mostly used in operating systems, where processes are scheduled to run within a particular time after which the CPU may be preempted from them. Scheduling may be: 1. Preemptive Scheduling: the CPU can be preempted from a process even while it has not exhausted its CPU burst. 2. Non-preemptive Scheduling: once a process is assigned the CPU, it cannot be taken away until the process completes its CPU burst. This brings us to the types of scheduling algorithms: First Come First Served – processes are served in arrival order; average waiting time is high, and it leads to starvation especially when long jobs arrive first in the queue. Shortest Job First – associates with each process the length of its next CPU burst and uses these lengths to schedule the process with the shortest time, using e.g. FCFS to break ties; average waiting time is low, but it leads to the starvation of long jobs. Round Robin – a preemptive scheduling scheme for time-sharing systems; average waiting time is low, and it solves the problem of starvation as each job is allocated a time slice. Bus Reservation Techniques Round robin scheduling (RRS) is one of the oldest, simplest, fairest, and most widely used scheduling algorithms [4]. It is a preemptive form of scheduling and also a job scheduling algorithm that is considered fair, as it uses time slices that are allotted to each process in the queue. Each process is permitted to use the CPU for a given amount of time, and if it is not finished within the assigned time, it is preempted and moved to the back of the queue so that the next process in line can use the CPU for a similar amount of time. Round robin scheduling is predominantly used by operating systems and by systems that serve multiple customers or clients expecting shared resources. 
It handles all requests in a circular first-in-first-out (FIFO) order and avoids priorities, so that all processes/applications can use the same resources for the same amount of time and have the same amount of waiting time per cycle; hence it is also regarded as a cyclic executive. How Round Robin Is Applied This project makes use of round robin in assigning buses to a particular route. Here, buses for the same route are assigned different departure times, but if a bus's departure time has elapsed and the bus has not left the bus station within the time slice allocated to it as a result of insufficient passengers, then the passengers are reassigned to the next bus plying that route whose departure time has not elapsed. The bus is then returned to the back of the queue, where it waits for its turn to be loaded again. This process continues as long as the bus has not yet reached its maximum seating capacity. Moreover, the last bus will depart once the time slice assigned to it has elapsed (i.e., run to completion); this will happen regardless of the number of passengers on the bus. Advantages Of Round Robin Over Other Scheduling Algorithms 1. It is very simple to achieve because there are no complex timings or priorities to consider; simply put, it is a FIFO with time allocated to each job or process to ensure equal distribution of the CPU across all jobs. 2. It helps to solve the problem of starvation (a situation in which a job is not able to use the CPU because it is always preempted by other jobs that are usually considered more important). In the case of the bus station, it helps to avoid overstressing some buses while others are left less busy. 3. It helps in a win-win situation, in which resources are properly harnessed by the bus station while ensuring customer satisfaction. III. REVIEW OF RELATED LITERATURE A bus reservation system is not a new concept, as it has been implemented for different bus stations around the globe. However, in Nigeria and most developing countries, bus reservation is basically done manually, i.e., the manager picks which bus to include on the traveling queue and the bus is then assigned to passengers. In turn, a passenger goes to a bus station, books a ticket, and is manually issued a ticket, which is a slip containing his name, seat number, destination, and amount paid. This may be done online or offline. Different publications have also been produced on this subject as a consequence of comprehensive studies over the past decades. Several model approaches, as well as specific solving strategies, have been provided for the issue and its extensions. These are discussed below based on the benefits of public bus transport, the need for customer satisfaction, bus reservation, and other online bus systems. [5] Recognized issues with the present form of the Russian urban transport scheme, since the existing modes and strategies of transport growth may not always be relevant in certain conditions. Afterward, a solution or feasible way to improve their transport system was found. The analysis was done by using the successfully nominated six cities as demonstrator cities to develop a roadmap for sustainable mobility together with the city government and related stakeholders. [6] Proposed an intelligent transport system composed of three parts: a sensor system, a surveillance system, and a display system. 
A sensor system gets information from a global positioning system (GPS), near-field communication (NFC), and temperature and moisture sensors. The surveillance system extracts significant information from the raw data collected by the sensor system and gives it to the bus driver. The display system shows transportation and travel-related information to commuters at the bus stop. A paper by [7] acknowledged that the increase in public transport passenger loads in the USA is decreasing fuel consumption by around 11 million gallons yearly - the equivalent benefit of removing 23,813 vehicles from the road. The benefits of public bus transport include but are not limited to: it is more economical for commuters; it helps to decongest the roads, as it reduces the number of vehicles that would have filled the road if commuters had travelled with private vehicles; and it helps to reduce noise pollution. [8] Conducted a survey to determine the reasons for traffic congestion in Lebanon and discovered that the reasons are simply a high number of private cars and the absence of a good transportation code. While the latter can be solved only by the government, the former can be reduced by providing a good public transport system. The study tried to investigate the problems associated with the transport system in Lebanon. The problems were highlighted as: accidents, traffic congestion, noise, and air pollution. [4] Noted that there was only one functional public bus service enterprise that provides transport services in and around the city. The enterprise uses a fixed bus schedule system to serve passengers on 110 routes. However, this type of bus assignment system created a problem in the company's operational and financial performance. Hence, the researchers studied the operation pattern of the enterprise and developed a model to best schedule buses for the day-to-day activities of the bus company. The number of passengers at each period of the day was noted, and these time periods were referred to as shifts. Since the enterprise uses a fixed number of buses scheduled per route in its day-to-day operations, optimal bus/resource utilization was not ensured. [9] Addresses schedule design for a bus route with one intermediate bus stop, also known as a time point. The authors tried to minimize passenger waiting time, the delay time for through passengers, delay/early penalties, and total operation cost, and used a schedule-based holding control strategy to achieve this. A schedule-based holding control strategy involves holding a bus that is ready to depart earlier than the scheduled departure time until that scheduled time; if it is delayed beyond the scheduled departure time, it departs once it has completed all necessary requirements for departure. [10] Designed a schedule to minimize waiting time at bus stops, using the time control point strategy, which is based on the bus arrival time at each of its time control points (i.e., bus stops) on the bus route. The strategy was chosen because it is the type used by most bus companies in China and Singapore. This type of control involves using the expected bus arrival time at bus stops to determine when a bus will arrive at its final destination. [11] Stressed the need for a bus information system that offers a range of helpful data for customers in towns and particularly distant regions where bus transport is the only type of transportation accessible. 
Again, because these remote areas contain fewer commuters, the time spent waiting for buses at bus stops is high. [12] Proposed the implementation of a crowd-participation bus arrival time forecast scheme using cellular signals. The scheme bridges the gap between customers asking about the arrival time of the bus and customers ready to share data, providing them with real-time bus data independently of any bus company. A querying user sends the server the bus stop and path of concern. A sharing customer sends the server a sequence of cell towers. The server then matches the sequence of cell towers to the bus route and predicts the arrival time of the bus. [13] Suggested a wireless sensor network with which the bus information system can provide customers with the current bus position and estimated bus arrival times. Bus nodes, router nodes, bus stop nodes, and concentrators are part of the network. [14] Studied GPS, Remote Sensing (RS) and Geographic Information System (GIS) methods and suggested using them all to depict the real-time status of each bus and bus arrival times on maps. [15] Introduced an intelligent public transport system composed of bus modules fitted with a GPS receiver, digital speedometer, and telecommunications modem, together with server modules, bus stop modules, and client applications. The system supplied customers with data about the current position of buses approaching the bus stop. [16] Suggested using a genetic algorithm to discover the shortest driving time under various situations of actual traffic environments and variable car speeds. [17] Proposed a web-based system that allows a customer to check ticket availability and search for the best possible prices. The system is always available online, but its basic benefit is its ability to allow customers to search for and choose their seat position and ticket payment procedure. They collected data to define the new application's demands. IV. ANALYSIS OF PROPOSED SYSTEM The main aim of this software is to help bus transport companies schedule their buses to ensure maximum resource utilization in their day-to-day operation. The software can be used by any transport company, as it was not designed for a particular bus station/company. The scheduling of buses, which was the major addition to the bus booking application, was implemented using round robin for proper bus assignment in a way that improves operational efficiency. The system thus designed provides a scenario in which the customers/passengers and the bus company attain a win-win situation. Hence, timers were assigned to each bus; before the designated time is exhausted, the bus is checked, and if it is not at least half-filled, the passengers in the bus are moved to the next bus in line on the same route. This continues until the last bus to ply the route cannot accommodate the spillover passengers. Nevertheless, if there is only one bus assigned to a particular route, it will leave the bus station once its allotted time has elapsed. The system to be designed is an enhanced bus reservation application: booking on the passengers' side, and management of the daily activities and resources of the bus station on the administration's side, to guarantee customer satisfaction and reduce operational cost. 
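The bus-assignment rule just described (shift passengers forward when a bus's time slice elapses while it is less than half full, and dispatch the last bus regardless of its load) can be sketched as follows. This is only an illustrative Python rendering of the scheduling logic, assuming a single route processed in departure-time order; the actual system was implemented in PHP with a MySQL-backed queue, and all names used here are hypothetical.

```python
from collections import deque

class Bus:
    def __init__(self, bus_id, capacity):
        self.bus_id = bus_id
        self.capacity = capacity
        self.passengers = []

def dispatch_route(buses):
    """Process buses for one route in departure-time order (their round-robin time slices)."""
    waiting = deque(buses)          # ready queue, in scheduled departure order
    departed, held_back = [], []
    while waiting:
        bus = waiting.popleft()     # this bus's time slice (departure time) has elapsed
        is_last = not waiting
        if is_last or len(bus.passengers) >= bus.capacity / 2:
            departed.append(bus)    # last bus runs to completion regardless of load
        else:
            nxt = waiting[0]        # shift passengers to the next bus on the route
            space = nxt.capacity - len(nxt.passengers)
            nxt.passengers.extend(bus.passengers[:space])
            bus.passengers = bus.passengers[space:]
            held_back.append(bus)   # bus rejoins the back of the queue for a later slice

    return departed, held_back

buses = [Bus("A", 14), Bus("B", 14), Bus("C", 14)]
buses[0].passengers = ["p1", "p2"]                  # under half capacity at its slice
buses[1].passengers = [f"q{i}" for i in range(9)]
departed, held = dispatch_route(buses)
print([b.bus_id for b in departed], [b.bus_id for b in held])
```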
The framework to be designed is an upgraded bus reservation application: booking on behalf of travellers, and management of the everyday activities and assets of the bus station, to guarantee customer satisfaction and reduce operational costs. Research has shown that most people prefer to travel by bus because of its proximity and affordability, yet the poor services provided by these bus companies tend to drive away passengers who wish to travel by bus. Consequently, the system has been designed using the round robin algorithm. The round robin algorithm involves allotting time slices to each bus that has been assigned a route. The system automatically checks the buses at intervals and monitors whether each bus is filled to half its capacity an hour before the departure time. If it is up to half of its capacity, the bus is dispatched from the queue once its time slice (departure time) passes; otherwise, the passengers are moved to the next bus in the ready queue and a message is sent to all passengers in the bus informing them that their bus has been delayed. The system also sends a message informing them of what to do in the event of any emergency. The software design process model used in this work is the waterfall model, because it allows for proper planning and helps to include all the design elicitations of the clients. The design phase started with communication, that is, visiting the bus company to get an overview of how the existing system runs and to determine what other functionality they wished the system had. The project uses a relational database and a web architecture, since it runs on the web. The relational database was chosen because of its flexibility and its ability to meet all kinds of data requirements. The design tool used in this work is the Unified Modeling Language (UML). UML is a standard graphical notation for describing software analysis and design. It has symbols to help describe and document every aspect of the application development process. The class diagram contains 8 classes, in which the user is a superclass of admin, customer, and agent; these subclasses inherit the attributes and functions of the superclass, user. The relationships between them are specified through multiplicities. System implementation has to do with the transformation of the system's conceptual and logical designs into an actual working system. Implementation activities include selection and installation of the chosen language, coding, debugging, testing, documentation, training, and the user manual. In this project work, a XAMPP server was installed on a machine, the software was coded in the IDE, and it was deployed on localhost. Coding involves a systematic representation of the system's model in an automated form under a chosen development environment. Debugging and testing the software involve removing the errors of the system at the various phases of its development and running the system repeatedly as each error is repaired. The software runs well on a local server, following the architecture specified in the design. Development Environment The development environment (IDE) used in this project is the NetBeans 8.1 IDE together with Notepad++. Although NetBeans is generally used as a Java IDE, it can also be used for many other programming languages, including PHP.
The significant advantages of NetBeans over simpler editors such as Notepad, which was also used in this work, include: 1. The ability to create test classes and to run and view code coverage directly from the IDE interface. 2. A built-in local history that allows you to compare code changes and revert to a particular revision, which is helpful when a source code file is accidentally overwritten. 3. The ability to jump easily to a function implementation from a function call by pressing (Ctrl + click); this feature makes it easier to explore and modify functions. 4. The way it manages source code as packages, loading all files related to the project in its file manager section instead of going back and forth in the Windows file explorer. Choice Of The Programming Language Used The choice of programming language depends on the suitability of the language attributes to the scope and usage of the system developed. The languages used are PHP, HTML, and CSS. PHP is a scripting language that facilitates the development of web-based programs and the creation of web pages. The WAMP server provides sets of scripts, logs, an SQL manager and PHP code that enable communication between the MySQL database and HTML. The system developed is an online system that allows multiple users; the WAMP server enables data to be shared among users online and secures the data of the various users. The cascading style sheet formats the presentation of a web page to the end user, creating a suitable and user-friendly look for the user interface. Hence, these attributes informed the choice of the languages used. VI. RESULT AND DISCUSSION The bus reservation system is a new platform for travellers to book tickets using the application through their mobile phones anywhere and at any time. The system was designed to help with route planning and to manage excess buses and bus shortages at terminals. The BRS has been developed to assist travellers and give them a simple option to book tickets, as well as to check their tickets anywhere and at any time using their mobile phone over the Internet. In addition, the BRS will help the admin and the drivers in their day-to-day work, making their work more organised and easier to handle, and it also makes it quicker and simpler for travellers to know the available buses and the departure times from the comfort of their homes. The method of the system's use is given in the user's manual. This project was carried out in a bid to reduce waiting time at bus stations while simplifying administrative utilities. It is an improvement to the existing reservation system and electronic information storage system for bus stations. The BRS was developed specifically to serve as a tool for effective bus station management. Below are screenshots of the BRS, including a test page created with the PayPal sandbox to test-run the booking app, showing the result once the user clicks on "pay now" and a successful payment has been made. VII. CONCLUSION The Bus Reservation System was developed to help passengers book their tickets on various devices such as mobile phones or laptops, and also to help the admin and drivers in their daily work. In a bid to reduce waiting time in bus stations while maximizing organizational utilities, this project was carried out. It is an improvement to the existing booking system and electronic information storage system for bus stations.
This project work was named BRS and was developed specifically to serve as a tool for the effective management of bus reservations. The software was tested using Mozilla Firefox alongside Google Chrome. Notepad++ was used as the editor, alongside the NetBeans IDE, owing to its ability to display line numbers.
5,493.6
2021-02-23T00:00:00.000
[ "Computer Science" ]
Numerical simulation of the passive gas mixture flow. The aim of this paper is the numerical solution of the equations describing the non-stationary compressible turbulent multicomponent flow in a gravitational field. A mixture of perfect inert gases is assumed. We work with the RANS equations equipped with the k-omega and the EARSM turbulence models. For the simulation of the wall roughness we use a modification of the specific turbulent dissipation. The finite volume method is used, with the thermodynamic constants being functions of time and space. In order to compute the fluxes through the boundary faces we use a modification of the Riemann solver, which is an original result. We present the computational results, computed with our own code (C, FORTRAN, multiprocessor, unstructured meshes in general). Introduction The aim of this work is to simulate the complicated behaviour of a perfect gas mixture. In this contribution we consider the Reynolds-Averaged Navier-Stokes equations with the k-ω model of turbulence. This system is equipped with the equation of state in a more general form, and with the mass conservation of the additional gas specie. We choose the well-known finite volume method to discretize the analytical problem, represented by the system of equations in generalized (integral) form. In order to apply this method we split the area of interest into elements, and we construct a piecewise constant solution in time. The crucial problem of this method lies in the evaluation of the so-called fluxes through the edges/faces of the particular elements. We use the analysis of the exact solution of the Riemann problem for the discretization of the fluxes through the boundary edges/faces. Our own algorithms for the solution of the boundary problem were coded and used in the numerical examples. Formulation of the Equations We consider the conservation laws for viscous compressible turbulent flow of an ideal gas with zero heat sources in a domain Ω ⊂ R^N and a time interval (0, T), with T > 0. The system of the Reynolds-Averaged Navier-Stokes equations in 3D has the form

∂w/∂t + Σ_{s=1}^{3} ∂f_s(w)/∂x_s = Σ_{s=1}^{3} ∂R_s(w, ∇w)/∂x_s + S   in Q_T = Ω × (0, T).   (1)
Here x_1, x_2, x_3 are the space coordinates, t the time, w = w(x, t) = (ρ, ρv_1, ρv_2, ρv_3, E)^T is the state vector, f_s = (ρv_s, ρv_s v_1 + δ_{s1} p, ρv_s v_2 + δ_{s2} p, ρv_s v_3 + δ_{s3} p, (E + p)v_s)^T are the inviscid fluxes, R_s = (0, τ_{s1}, τ_{s2}, τ_{s3}, Σ_{r=1}^{3} τ_{sr} v_r + C_k ∂θ/∂x_s)^T are the viscous fluxes, and S are additional sources. Here u = (v_1, v_2, v_3)^T denotes the velocity vector, ρ the density, p the pressure, θ the absolute temperature, and E = ρ(e + |u|²/2 + k) the total energy. Further, τ_{ij} = (μ + μ_T) S_{ij} for i ≠ j and τ_{ij} = (μ + μ_T) S_{ij} − (2/3)ρk for i = j, with S_11 = (2/3)(2 ∂v_1/∂x_1 − ∂v_2/∂x_2 − ∂v_3/∂x_3), S_12 = ∂v_1/∂x_2 + ∂v_2/∂x_1, S_13 = ∂v_1/∂x_3 + ∂v_3/∂x_1, S_21 = S_12, S_22 = (2/3)(−∂v_1/∂x_1 + 2 ∂v_2/∂x_2 − ∂v_3/∂x_3), S_23 = ∂v_2/∂x_3 + ∂v_3/∂x_2, S_31 = S_13, S_32 = S_23, S_33 = (2/3)(−∂v_1/∂x_1 − ∂v_2/∂x_2 + 2 ∂v_3/∂x_3), where μ is the dynamic viscosity coefficient, dependent on temperature, and μ_T is the eddy-viscosity coefficient. For the specific internal energy e = c_v θ we assume the caloric equation of state e = p/(ρ(γ − 1)); c_v is the specific heat at constant volume and γ > 1 is called the Poisson adiabatic constant. The constant C_k denotes the heat conduction coefficient, C_k = (μ/Pr + μ_T/Pr_T) c_v γ, where Pr is the laminar and Pr_T the turbulent Prandtl number. In our application of flow in the gravitational field we set the source terms to S = (0, ρg_1, ρg_2, ρg_3, ρ g·u), where g = (g_1, g_2, g_3) is the gravity vector. For a gas mixture with two species we use Dalton's law for the total mixture pressure, p = p_1 + p_2, where p_1 and p_2 are the partial pressures of the first and the second component gas. Let ρ_1 and ρ_2 be the mass densities of these components. Then the total mass density of the mixture is ρ = ρ_1 + ρ_2. The temperature θ is the same for all gases in the mixture, and for each specie the equation of state p_i = ρ_i (R_g/m_i) θ holds, where R_g = 8.3144621 J K^-1 mol^-1 is the universal gas constant and m_i denotes the molar mass of the i-th specie. We can introduce the mass fractions Y_i = ρ_i/ρ. The thermodynamic constants of the mixture satisfy (using the decomposition of the internal specific energy and enthalpy) c_v = Y_1 c_{v1} + Y_2 c_{v2} and c_p = Y_1 c_{p1} + Y_2 c_{p2}; the adiabatic constant γ, needed in the solution of (1), can then be written as γ = c_p/c_v. The system (1) is then extended with the conservation law of mass for one gas component (specie),

∂ρ_1/∂t + Σ_{s=1}^{3} ∂(ρ_1 v_s)/∂x_s = Σ_{s=1}^{3} ∂/∂x_s (σ_C ∂Y_1/∂x_s).   (2)

Here σ_C is a diffusion coefficient. The mass conservation for the second specie is automatically satisfied via the system (1).
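As a small numerical illustration of these mixture relations, the sketch below evaluates the mixture gas constant, specific heats and adiabatic constant from the mass fractions. Python and the function name are used only for illustration (the paper's own code is written in C and FORTRAN), and the example values for the second specie simply echo the R = 461 J kg^-1 K^-1, c_v = 1450 J kg^-1 K^-1 gas used later in the paper.

```python
# Thermodynamic properties of a two-species perfect-gas mixture from mass fractions.
R_UNIVERSAL = 8.3144621  # universal gas constant [J K^-1 mol^-1]

def mixture_properties(Y1, m1, cv1, m2, cv2):
    """Return (R, cv, cp, gamma) of the mixture for mass fraction Y1 of specie 1.
    m_i are molar masses [kg/mol], cv_i specific heats at constant volume [J/(kg K)]."""
    Y2 = 1.0 - Y1
    R1, R2 = R_UNIVERSAL / m1, R_UNIVERSAL / m2       # specific gas constants of the species
    R  = Y1 * R1 + Y2 * R2                            # mixture gas constant
    cv = Y1 * cv1 + Y2 * cv2                          # decomposition of the internal energy
    cp = Y1 * (cv1 + R1) + Y2 * (cv2 + R2)            # decomposition of the enthalpy
    return R, cv, cp, cp / cv

# Example: air-like main gas plus an additional specie with R ~ 461 J/(kg K), cv = 1450 J/(kg K).
print(mixture_properties(Y1=0.9, m1=0.028964, cv1=717.5, m2=0.018, cv2=1450.0))
```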
Here we assume the system (1), (2) equipped with the two-equation turbulence model k−ω (Kok), described in [1]. The effective turbulent viscosity is μ_T = ρk/ω, where k, the specific turbulent kinetic energy, and ω, the specific turbulent dissipation, are functions of the time t and the space coordinates x_1, x_2, x_3. The production terms P_k and P_ω are given by formulas in which the functions τ are defined as in (1), and σ_d = 0.5 is a constant. Numerical Method For the discretization of the system we proceed as described in [6]. We use either an explicit or an implicit finite volume method to solve the systems sequentially. Here we present the discretization of the system (1). By Ω_h let us denote the polyhedral approximation of Ω. The system of closed polyhedrons with mutually disjoint interiors D_h = {D_i}_{i∈J}, where J ⊂ Z^+ = {0, 1, ...} is an index set and h > 0, will be called a finite volume mesh. This system D_h approximates the domain Ω; we write Ω_h = ∪_{i∈J} D_i. The elements D_i ∈ D_h are called the finite volumes. For two neighbouring elements D_i, D_j we set Γ_ij = ∂D_i ∩ ∂D_j = Γ_ji. Similarly, using a negative index j we may denote the boundary faces. Here we work with so-called regular meshes, i.e. the intersection of two arbitrary (different) elements is either empty or it consists of a common vertex, a common edge or a common face (in 3D). The boundary ∂D_i of each element D_i is formed by the set of its faces Γ_ij. By n_ij let us denote the unit outer normal to ∂D_i on Γ_ij. Let us construct a partition 0 = t_0 < t_1 < ... of the time interval [0, T] and denote the time steps τ_k = t_{k+1} − t_k. We integrate the system (1) over the set D_i × (t_k, t_{k+1}). With the integral form of the equations we can also study a flow with discontinuities, such as shock waves. Using Green's theorem on D_i, the volume integrals of the flux terms are rewritten as surface integrals over ∂D_i; here n = (n_1, n_2, n_3) is the unit outer normal to ∂D_i. We define a finite volume approximate solution of the studied system (1) as a piecewise constant vector-valued function w_h^k, k = 0, 1, ..., where w_h^k is constant on each element D_i and t_k is the time instant. By w_i^k we denote the value of the approximate solution on D_i at time t_k. We approximate the integral over the element by the cell value times the cell volume |D_i|. Further we proceed with the approximation of the fluxes. Usually the flux through the face Γ_ij is approximated by a numerical flux H(w_i^l, w_j^l, n_ij), with w_i^l, w_j^l denoting the approximate solution on the elements adjacent to the face Γ_ij at the time instant t_l. In the case of a boundary face the vector w_j^l has to be specified. Here we show the numerical flux based on the solution of the Riemann problem for the split Euler equations. By w_Γij^l let us denote the state vector w at the centre of the face Γ_ij at the time instant t_l, and let us suppose w_Γij^l is known. The evaluation of these values will be the subject of the further analysis; here we use them to approximate the integrals with the one-point rule (10), where ∇w_Γij^l denotes ∇w at the centre of the face Γ_ij at the time instant t_l. Now it is possible to approximate the system (8) by an explicit finite volume scheme. With this finite volume formula one computes the values of the approximate solution at the time instant t_{k+1}, using the values from the time instant t_k and evaluating the values w_Γij^k at the faces Γ_ij. In order to achieve the stability of the method, the time step τ_k must be restricted by the so-called CFL condition, see [4]. The crucial problem of this discretization lies in the evaluation of the face values w_Γij^k, or, equivalently, in the problem of finding the face fluxes H(w_i^k, w_j^k, n_ij). It is also possible to use an implicit scheme, whose crucial problem lies in the evaluation of the face fluxes H(w_i^{k+1}, w_j^{k+1}, n_ij). One possibility is to use the linearization via the Taylor expansion of the vector function H(w_I, w_J, n); this was shown in [6]. Another possibility of the face flux evaluation is to approximate the face values w_Γij^k and then compute the numerical flux from them. To approximate the face values w_Γij^k at the time instant t_k we solve the simplified system (14) in the vicinity of the face Γ_ij in time, with the initial condition formed by the state vectors w_i^k and w_j^k. Using the rotational invariance of the Euler equations, the system is expressed in a new Cartesian coordinate system x̃_1, x̃_2, x̃_3 with the origin at the centre of gravity of Γ_ij and with the new axis x̃_1 in the direction of n = (n_1, n_2, n_3), given by the face normal n = n_ij.
Then the derivatives with respect to x̃_2, x̃_3 are neglected and we get the so-called split 3D Euler equations, see [4, page 138]. The values w_i^k and w_j^k adjacent to the face Γ_ij are known, forming the initial conditions of this local problem; the transformation matrix Q between the global and the local coordinates is defined in [6]. The problem (14), (15), (16) has a unique "solution" in (−∞, ∞) × (0, ∞); the analysis can be found in [4, page 138]. Let q_RS(q_L, q_R, x̃_1, t) denote the solution of this problem at the point (x̃_1, t). We are interested in the solution of this local problem at the time axis, which is the sought solution in the local coordinates, q_Γij = q_RS(q_L, q_R, 0, t). The state vector q_Γij is then transformed back into the global coordinates, and the numerical flux is constructed from this face state. Let Γ_ij be a face of the element D_i lying at the boundary of the computational area. To approximate the face values w_Γij^k at the time instant t_k we solve the incomplete simplified system (19) in the vicinity of the face Γ_ij in time, with the initial condition (20) determined by the state vector w_i^k. By adding properly chosen equations to the system (19), (20) it is possible to reconstruct the boundary state q_B such that the system (19), (20) has a unique solution at the boundary, see [3]. We will refer to these added equations as complementary conditions. Several choices of the complementary conditions were discussed in [3], [5]. The Riemann Problem for the Split Euler Equations For many numerical methods dealing with the two- or three-dimensional equations describing compressible flow, it is useful to solve the Riemann problem for the 3D split Euler equations. We search for the solution of the system of partial differential equations

∂q/∂t + ∂g(q)/∂x = 0,   q = (ρ, ρu, ρv, ρw, E)^T,   g(q) = (ρu, ρu² + p, ρuv, ρuw, (E + p)u)^T,   (19)

in time t and space (x, y, z), equipped with the initial conditions q(x, 0) = q_L for x < 0 and q(x, 0) = q_R for x > 0. The vector u = (u, v, w) denotes the velocity, ρ the density, p the pressure, and E = ρ(e + |u|²/2) the total energy, with e denoting the specific internal energy. We assume the equation of state for the calorically ideal gas, e = p/(ρ(γ − 1)). 'Split' means here that we still have 5 equations in 3D, but the dependence on the space coordinates y, z is neglected, and we deal with a system in one space variable x. The system (19) is considered in the set Q_∞ = (−∞, ∞) × (0, ∞). The solution of this problem is fundamentally the same as the solution of the Riemann problem for the 1D Euler equations, see [4, page 138]. In fact, the solution for the pressure, the first component of the velocity, and the density is exactly the same as in the one-dimensional case. It is a characteristic feature of hyperbolic equations that discontinuities may arise in the solutions, even in the case when the initial conditions are smooth, see [8, page 390]. Here the concept of the classical solution is too restrictive, and therefore we seek a weak solution of this problem. To distinguish physically admissible solutions from nonphysical ones, an entropy condition must be introduced, see [8, page 396]. By the solution of the problem (19), (20), (21) we mean the weak entropy solution of this problem in Q_∞. The analysis of the solution of this problem can be found in many books, e.g. [4], [8], [9]. The general theorem on the solvability of the Riemann problem can be found in [4, page 88]. The solution is piecewise smooth and its general form can be seen in Fig. 2, where a system of half lines is drawn. These half lines define regions where the solution is either constant or given by a smooth function. Let us define the open sets called wedges (see Fig. 2).
Using the theory in [4], [8], [9], we can write the solution for the primitive variables in each of these wedges. The following relations hold for these variables; here a_L = sqrt(γ p_L/ρ_L), a_R = sqrt(γ p_R/ρ_R), and γ denotes the adiabatic constant. Further, s_{1TL} denotes the (unknown) left wave speed and s_{3TR} the (unknown) right wave speed. Note that the system (22)-(27) is a system of 6 equations for the 6 unknowns p*, u*, ρ*_L, ρ*_R, s_{1TL}, s_{3TR}. Solution for the Pressure p* The combination of the equations (22), (25) gives an implicit equation for the unknown pressure p*. This is a nonlinear algebraic equation, and one cannot express its solution in a closed form. The solution p* can be found as the root of a function F(p); the analysis of this function can be found in [4]. Here we state that F(p) is monotone and concave, and an analytic expression for its derivative is available. For a positive solution for the pressure, F(0) < 0 is required. This gives the pressure positivity condition u_R − u_L < 2a_L/(γ − 1) + 2a_R/(γ − 1). The Newton method can be used to find the root of F(p) = 0. With the pressure p* known, we use the equations (22)-(27) to obtain the whole solution. Remarks - Once the pressure p* is known, the solution on the left-hand side of the contact discontinuity depends only on the left-hand side initial condition (20). Similarly, with p* known, only the right-hand side initial condition (21) is used to compute the solution on the right-hand side of the contact discontinuity. - The solution in Ω_L ∪ Ω_{HTL} ∪ Ω*_L (across the 1-wave; solvability in the general case): the solution components in the Ω_L ∪ Ω_{HTL} ∪ Ω*_L region must satisfy the system of equations (22)-(24). It is a system of three equations for four unknowns (ρ*_L, p*, u*, s_{1TL}). We have to add another equation in order to get a uniquely solvable system in Ω_L ∪ Ω_{HTL} ∪ Ω*_L. This is the key problem for the outlet boundary condition. - The equation (22) can be written as an equation for the pressure, see [3], p* = E_1(u*). (30) In our work we use the analysis of this problem also for the solution of the initial-boundary value problem at the boundary faces. Boundary Condition by Preference of Pressure Using this boundary condition, we try to prescribe a given static pressure at the face. This boundary condition corresponds to the real-world problem in which we deal with an experimentally obtained pressure distribution at the boundary. The conservation laws must be satisfied in the close vicinity of the boundary face. We use the analysis of the incomplete Riemann problem to construct the values for the density and the pressure at each boundary face. Here p* denotes the pressure in the Ω*_L ∪ Ω*_R part of the solution of the Riemann problem for the split Euler equations, shown in Section 4. This way we prescribe the desired pressure value whenever it is possible (see the general form of the solution in Section 4). Now it is possible to use (22), (23), (24). Further, we must discuss all the possibilities of the left (shock, rarefaction) and the middle (contact discontinuity) wave in the solution, which is shown in figure 3. In the case of an outlet (u* ≥ 0), the velocity is transformed back into the global coordinates; here u_B denotes the computed velocity in the normal direction. In the case of an inlet (u* < 0) we must prescribe additional conditions in order to obtain a uniquely solvable system. Here we may choose (for example) an arbitrary density ρ_GIVEN and the direction of the velocity d. The analysis of this problem was shown also in [3].
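A sketch of this Newton iteration for F(p) = 0 is given below. It follows the classical exact-Riemann-solver formulation (as in Toro's textbook) and is only an illustration under that assumption: the authors' own function F(p), starting guess and stopping criterion may differ in detail, and the paper's code is written in C/FORTRAN rather than Python.

```python
import math

def star_pressure(rhoL, uL, pL, rhoR, uR, pR, gamma=1.4, tol=1e-10):
    """Newton iteration for the pressure p* in the star region of the Riemann problem."""
    aL, aR = math.sqrt(gamma * pL / rhoL), math.sqrt(gamma * pR / rhoR)

    def f_side(p, rho, p_side, a):
        # Shock branch (p > p_side) or rarefaction branch (p <= p_side), with derivative.
        if p > p_side:
            A = 2.0 / ((gamma + 1.0) * rho)
            B = (gamma - 1.0) / (gamma + 1.0) * p_side
            f = (p - p_side) * math.sqrt(A / (p + B))
            df = math.sqrt(A / (p + B)) * (1.0 - 0.5 * (p - p_side) / (p + B))
        else:
            f = 2.0 * a / (gamma - 1.0) * ((p / p_side) ** ((gamma - 1.0) / (2.0 * gamma)) - 1.0)
            df = 1.0 / (rho * a) * (p / p_side) ** (-(gamma + 1.0) / (2.0 * gamma))
        return f, df

    # Positivity of p* requires uR - uL < 2*aL/(gamma-1) + 2*aR/(gamma-1).
    p = max(tol, 0.5 * (pL + pR))               # simple initial guess
    for _ in range(50):
        fL, dfL = f_side(p, rhoL, pL, aL)
        fR, dfR = f_side(p, rhoR, pR, aR)
        F = fL + fR + (uR - uL)                 # F(p) = f_L(p) + f_R(p) + (uR - uL)
        p_new = p - F / (dfL + dfR)             # Newton step
        if abs(p_new - p) < tol * p:
            return p_new
        p = max(tol, p_new)
    return p

# With p* known, the star velocity follows as u* = 0.5*(uL + uR) + 0.5*(fR - fL).
```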
Boundary Condition by Preference of Velocity Here we show the boundary condition preferring a prescribed given velocity at the face. The conservation laws must be satisfied in the close vicinity of the boundary face. Here u* denotes the normal component of the velocity in the Ω*_L ∪ Ω*_R part of the solution of the Riemann problem for the split Euler equations, shown in Section 4. This way we prescribe the desired normal component of the velocity whenever it is possible (see the general form of the solution in Section 4). Now it is possible to use (30), (23), (24). Further, we must discuss all the possibilities of the left (shock, rarefaction) and the middle (contact discontinuity) wave in the solution, which is shown in figure 4. The tangential velocities are conserved (the same as the "left" values) in the case of an outlet (u* ≥ 0), and the velocity vector is transformed back into the global coordinates; here u_B denotes the computed velocity in the normal direction. In the case of an inlet (u* < 0) we must prescribe additional conditions in order to obtain a uniquely solvable system. Here we may choose (for example) an arbitrary density ρ_GIVEN and the whole vector of velocity. For example, it is possible to choose the density ρ_R using a given entropy index p_o/ρ_o^γ (p_o, ρ_o are some reference values for the pressure and the density). Another possibility is to conserve (choose) the total temperature θ_0 for the inlet case; the corresponding relations then follow from thermodynamics (the energy equation for a perfect gas). The algorithm for the construction of the primitive variables ρ_B, u_B, p_B at the boundary face is shown in Fig. 4. The complete analysis of this problem is shown in [3]. For u_L ≤ 0 (1-rarefaction wave) the velocity in local coordinates is (0, v_L, w_L). The backward transformation of the computed values into the global coordinates (transformation of the velocity components) gives the sought velocity vector at the boundary. Remark. It is possible to solve this boundary problem by prescribing a special right-hand side initial condition to the original Riemann problem described in Section 4. Boundary Condition for the Impermeable Wall Slip wall (Eulerian, inviscid, symmetric boundary): using this boundary condition, we set the normal component of the velocity at the wall to zero. The conservation laws must be satisfied in the close vicinity of the boundary face. We use the analysis of the incomplete Riemann problem to construct the values for the density and the pressure at each boundary face. In the vicinity of the boundary we solve a modified Riemann problem for the split Euler equations, with the left-hand side initial condition and the complementary condition of zero normal velocity at the boundary. Non-slip wall For the non-slip wall (viscous flow) we set the velocity vector u_w = 0 at the boundary. The turbulent kinetic energy at such a non-slip wall is k = 0. For the evaluation of the specific dissipation ω at the boundary we use the statements from [1]. Let us use the following notation and relations: θ_w, the static temperature at the wall; μ_w, the dynamic viscosity at the wall, computed using Sutherland's law; the unit outer normal to the face; the force (vector) acting on the surface; the surface shear stress, i.e. the friction tension (the force magnitude in the surface direction); and the friction velocity (velocity scale). In a real application it is necessary to prescribe the static temperature of the surface θ_w. The law of the wall, [1, pages 13-18], is an empirically determined relation between the streamwise velocity U and the distance from the surface y. The coefficient S_R is defined with the use of the roughness height k_S = z_S e^{8.5κ}. Here k_S represents the roughness height, and S_R is a continuous function of k_S^+.
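The rough-wall value of the specific dissipation can be evaluated along the following lines. This is only a sketch: the piecewise coefficient S_R written here follows Wilcox's rough-wall correction and is an assumption on our part, since the paper uses the formulation of Kok [1], whose constants may differ; the numerical values in the example call are arbitrary.

```python
import math

def omega_wall_rough(u_tau, nu_wall, k_s):
    """Specific dissipation omega prescribed at a rough wall.
    u_tau: friction velocity, nu_wall: kinematic viscosity at the wall,
    k_s: equivalent roughness height. S_R follows Wilcox's correction (assumed here)."""
    k_s_plus = max(u_tau * k_s / nu_wall, 1e-12)   # non-dimensional roughness height k_S^+
    if k_s_plus < 25.0:
        S_R = (50.0 / k_s_plus) ** 2
    else:
        S_R = 100.0 / k_s_plus
    return u_tau ** 2 * S_R / nu_wall

# Relation between the hydrodynamic roughness z_S and k_S quoted in the text:
# k_S = z_S * exp(8.5 * kappa), with kappa the von Karman constant (~0.41).
kappa = 0.41
print(omega_wall_rough(u_tau=0.5, nu_wall=1.5e-5, k_s=0.0002 * math.exp(8.5 * kappa)))
```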
The Flat Plate Example Here we show an example of the compressible viscous flow along a flat plate. The velocity regime is u = 15 m s^-1. The total pressure is p_o = 101325 Pa, the total temperature θ_o = 273.15 K, and the gas properties are R = 287.04 J kg^-1 K^-1, γ = 1.4; the gas flows from the left to the right side. The initial condition for the computation is given by the isentropic approximation (and the use of the energy equation for a perfect gas). The initial conditions for the free stream turbulent kinetic energy k and the specific turbulent dissipation ω are given by the turbulent intensity I = 0.1 and μ_T/μ = 0.01. Numerical Tests on Shortened Domains In order to achieve faster computations it is desirable to use smaller computational areas. In such a case, the boundary conditions must be chosen appropriately. Here we present numerical experiments with the boundary conditions on smaller domains. At first we assume the compressible viscous flow in a long channel (x-coordinate from -13 to 4), with only a small area of interest (x-coordinate from -1 to 2). The velocity regime is u = 15 m s^-1. The total pressure is p_o = 101325 Pa, the total temperature θ_o = 273.15 K, and the gas constants are set to R = 287.04 J kg^-1 K^-1, γ = 1.4; the gas flows from the left to the right side. The initial condition for the computation is given by the isentropic approximation (and the use of the energy equation for a perfect gas). The initial conditions for the free stream turbulent kinetic energy k and the specific turbulent dissipation ω are given by the turbulent intensity I = 0.1 and μ_T/μ = 0.01, see also section 7.1. The results are depicted in figure 7. The computed results were used in another series of computations on the shortened domain (x-coordinate from -1 to 2), in order to test various boundary conditions. The aim was to test various boundary and initial conditions in order to match the previously computed data on the original (longer) channel. Here we present the results using the boundary conditions given by the preference of pressure at the outlet, and by the preference of the total quantities and the direction of the velocity at the inlet. The input values for these boundary conditions were taken from the previous computation on the larger domain (different values at each boundary face). The results are shown in figures 8-10. Further, we show the simulation of the emission of another gas specie into the area. We choose an additional gas with the properties R = 461 J kg^-1 K^-1, c_V = 1450 J kg^-1 K^-1, which enters the area. This emission is realized by introducing a fixed source located at a given cell (with cell centre coordinates [0.092, 0.0997, 0.5]). The mass fraction of the additional gas component at this cell was fixed to Y_1 = 1, and the density was set according to the equation of state, using the computed pressure and temperature. Figure 11 shows the propagation of the mixture in time. Conclusion This paper shows the formulation of the equations describing the mixture of two inert perfect gases in 3D; it is further focused on the boundary conditions and the simulation of the wall roughness. The numerical method (finite volume method) is applied for the solution of these equations. Our own software was programmed. A modification of the Riemann problem is used at the boundaries. Fig. 3. Algorithm for the solution of the problem (19), (20), (31), (32), (33) at the half line S_B = {(0, t); t > 0}. The value of the pressure p* is prescribed. Possible situations are illustrated by the pictures showing the Ω_L ∪ Ω_{HTL} ∪ Ω*_L region. The sought boundary state is located at the time axis, marked red.
Fig. 5. Computational simulation: the flow along the flat plate and along the rough wall (z_S = 0.0002), regime 15 m s^-1, isolines of the turbulent kinetic energy k. The Y^+ − U^+ graph (log scale) at chosen line cuts (marked by the lines in the central picture); the dotted line shows U^+ given by (39), the full line shows the computed results. Fig. 6. Computational simulation: the flow along the flat plate and along the rough wall (z_S = 0.0002), regime 15 m s^-1, isolines of the turbulent kinetic energy k and the dissipation ω. The wall-distance graphs of k (green) and ω (red) at the chosen line cuts (marked by the lines in the central picture). Fig. 7. The flow inside the simple channel with a bump, regime 15 m s^-1, isolines of the u velocity component and the turbulent kinetic energy k. The y, u, p graphs at chosen line cuts (shown in red).
6,472.2
2015-01-01T00:00:00.000
[ "Physics", "Engineering" ]
Conduction of DNA molecules attached to a disconnected array of metallic Ga nanoparticles We have investigated the conduction, over a wide range of temperature, of λ DNA molecules deposited across slits etched through a few nanometers thick platinum film. The slits are insulating before DNA deposition but contain metallic Ga nanoparticles, a result of focused ion beam etching. When these nanoparticles are superconducting we find that they can induce superconductivity through the DNA molecules, even though the main electrodes are non superconducting. These results indicate that minute metallic particles can easily transfer charge carriers to attached DNA molecules and provide a possible reconciliation between apparently contradictory previous experimental results concerning the length over which DNA molecules can conduct electricity. The conductivity of DNA is a long-standing debate. Following the initial predictions that DNA molecules should conduct electricity, several types of experiments were attempted to probe the conduction mechanisms, ranging from emission and absorption spectroscopies [1] to microwave absorption [2]. Several groups have also attempted direct measurements of DNA conductivity by attaching DNA molecules to metallic electrodes [3,4]. The contradictory experimental results, with behaviors ranging from insulating to coherent quantum transport over distances in the hundred nanometer range, led to a strong controversy [5]. The picture emerging in the past few years has been that DNA can conduct over distances of tens of nanometers: this was shown by STM and local probe techniques [7-10], as well as in a spectacular experiment [6]: a 3 nanometer long DNA molecule was inserted in a cut carbon nanotube, increasing its initial resistance only twofold, and was subjected to biological manipulations that altered and then restored the conductivity. The importance of the environment of the molecules in order to have reproducible results was pointed out in [14]. Conduction over hundreds of nanometers, and up to several microns, was also reported by different groups [3,11-13], including ours. In our previous experiments DNA was found to be conductive between platinum-carbon electrodes [16] and between rhenium-carbon electrodes [15]. In this last case, as the samples were cooled below the superconducting critical temperature of the electrodes (rhenium is a superconductor with T_c = 1.7 K), the sample resistance decreased, indicating coherent quantum transport through the DNA molecules. These results are both technologically and fundamentally important since long range transport in DNA molecules may lead to the creation of new nanoscale self-assembled electronic devices.
From the fundamental point of view, DNA is one of the rare one-dimensional molecular wires that can be obtained in monodispersed form with known chemical structure and chirality. It is thus important to understand the ingredients that lead to conduction over long distances. In this Letter we reconcile previous findings by showing that conduction over distances greater than hundreds of nanometers can occur if the DNA molecules are attached to a disconnected array of nanoparticles (typically 10 to 20 nm apart) that locally dopes the molecules, enhancing conduction. In addition, in our case the nanoparticles are superconducting, which induces superconducting correlations in DNA at low temperatures. All our samples, including the previous ones, are fabricated with unconventional techniques: without electron beam lithography and with functionalization of the sample surface by a pentylamine plasma. Pentylamine was used because it is known to promote attachment of DNA to amorphous carbon films (such as those used in transmission electron microscopy, see [18]). The samples we describe hereafter are also fabricated using focused ion beam etching of a thin platinum-carbon film deposited on mica, with subsequent pentylamine plasma treatment before deposition of DNA. Compared to our previous experiments we have gained a better understanding of this functionalization technique, establishing that pentylamine adheres only to carbonated surfaces and not directly to mica or metals. Thus fabrication begins with a mica substrate covered by an e-gun-deposited platinum-carbon film a few nanometers thick (5 nm thick, sheet resistance 1 kΩ). Although the carbon concentration is not known exactly, we checked that the concentration was high enough to anchor the pentylamine, since no DNA attached to a platinum surface without co-deposited carbon. We deposit thick gold contact pads through a mechanical mask and divide the centimeter square mica substrate into roughly twelve sample regions using a UV laser with a 30 micron diameter laser beam, see Fig. 1. We then proceed to etch away the metal over a thin, 50 µm long, region using a focused ion beam (magnification ×3000 and current 3.5 pA). In order to obtain narrow insulating regions we monitor the resistance of a first slit as we etch the platinum film one line-scan at a time. We stop the etching as soon as the resistance diverges, see Fig. 1. The other slits are etched using a slightly larger (15%) number of scans than was necessary to open the first slit. We then check electrically with a probe station that all slits have a resistance above a few GΩ. The width of the slits fabricated with this technique ranges between 70 and 150 nm. The next step is pentylamine deposition, in a DC plasma discharge with a pentylamine vapor pressure P = 0.1 Torr and a current I = 3-5 mA for a few minutes. A drop of λ-DNA [17] solution was incubated on the substrate surface for a few minutes and then rinsed away using a water flow created by a peristaltic pump (flow of a few cm/s). Out of eight mica substrates on which DNA deposition was attempted [19], five were covered by DNA molecules, as established by atomic force microscopy. These five substrates contained around 30 slits. All samples on two of these substrates were completely insulating. On the other three substrates 11 out of 15 samples were conducting.
We have also prepared a control substrate, incubated with the same buffer but without DNA molecules, and rinsed like the other samples. We found that all 14 slits etched on these samples remained insulating. Room temperature conductance was measured in a probe station, using an ac voltage in the mV range at frequencies ranging from 1 to 30 Hz. The resistance of conducting samples was found to vary, depending on the slit, between 5 kΩ and 50 kΩ. These values are consistent with previous findings [15,16], given that the number of deposited molecules across each slit varies between 10 and 100. The pentylamine plasma creates a positively charged organic layer that allows DNA molecules to bind to the carbonated hydrophobic electrodes. Conducting AFM characterization of this pentylamine layer on a smooth Pt/C film indicates that the pentylamine film forms a smooth insulating layer. This is not the case along the edge of the slits, where FIB etching as well as unavoidable carbon contamination introduce roughness, leading to defects and holes in the pentylamine coverage. As a result the edges of the slit remain metallic, as is needed to establish electrical contact to the DNA on both sides of the slit. We have used both atomic force microscopy and high resolution scanning electron microscopy to characterize the structure of the FIB-etched slits. We find that the insides of the slits are rather rough, for two reasons: the incomplete etching of the platinum film leaves disconnected metallic islands of typical size 10 nanometers, and, in addition, some slits contain a disordered array of roughly spherical nanoparticles (see Fig. 2). The regular shape of these spheres contrasts with the irregular shape of the etching residues of PtC. As confirmed by the transport experiments presented below, these spherical nanoparticles result from condensed gallium drops generated by the FIB. Their size varies between 3 and 10 nm, their separation between 5 and 20 nm. Even if these nanoparticles do not directly contribute to electronic transport through the slits, which were insulating before deposition of DNA and remained so after a flow of saline buffer solution without molecules, we will see that they certainly modify the electronic properties of DNA molecules deposited across the slit. In the following we present low temperature transport measurements of DNA molecules deposited across slits decorated with gallium nanoparticles. The samples investigated have resistances ranging from 5 to 20 kΩ at room temperature, with roughly 10 to 30 connected molecules, as deduced from the density of molecules on the substrate far from the slit. The samples were electrically and mechanically connected by gold plated spring contacts [20] on the gold pads on the Pt/C film, and mounted in a dilution refrigerator operating down to 50 mK. The resistance was measured via lines with room temperature low pass filters. Measurements were performed in a current biased configuration using an ac current source of 1 nA operating at 27 Hz and a lock-in detector with a low noise voltage pre-amplifier. Whereas the resistance was nearly independent of temperature between room temperature and 4 K, it dropped as T decreased further, with a broad transition to a value of the order of 4 kΩ (which corresponds to the resistance of the normal Pt/C electrodes in series with the DNA molecules), see Fig. 3. This transition to partial proximity-induced superconductivity is shifted to lower temperatures in a magnetic field.
It is broadest for the most resistive sample, which also exhibits the smallest magnetic field dependence. Another superconducting-like feature is the non-linear IV curves at low temperature, see Fig. 4: the dc-current-dependent differential resistance is lowest at small dc current and increases with increasing dc current. The increase is non-monotonous, presenting several peaks up to a current of the order of 1 µA, a sort of critical current, above which the resistance is constant and independent of dc current. The many peaks in the differential resistance curves are typical of non-homogeneous superconductivity. For instance, the differential resistance jumps seen in narrow superconducting wires (diameter smaller than the coherence length) are associated with the weak spots of the wire. Since neither the Pt/C electrodes nor the DNA molecules are superconducting (as shown in previous experiments), these results suggest that the gallium nanoparticles, which are superconducting, induce superconductivity through the DNA molecules. The superconducting transition temperature of pure gallium is T_c = 1 K, but it is reasonable to expect that the gallium nanoparticles, because of their small size and their probable large carbon content, have a higher T_c [22]. It is interesting to note that the low intrinsic carrier density in the DNA molecules may prevent the inverse proximity effect, i.e., the destruction of the superconductivity of the gallium nanoparticles. Those same nanoparticles could not induce any proximity effect in metallic wires because of the high density of carriers in metals. (Fig. 4: the black curve represents the differential resistance dV/dI as a function of dc current through the 10 kΩ sample at 100 mK. The color inset in the background shows the evolution of the differential resistance encoded as a color scale, with yellow/violet representing maximal/minimal differential resistance. The x axis represents the dc current as in the main figure, and the y axis indicates the magnetic field ranging from 0 to 5 Tesla.) This possibility of inducing long range superconductivity with superconducting nanoparticles was investigated recently in the context of graphene [21]. In the present case, it is also possible that the gallium nanoparticles contribute to carrier doping of the DNA molecules in the normal state. The difference between the transitions of the various samples is probably related to the existence of nanoparticles of different sizes, leading to superconducting transitions that are more rounded and with a weaker T_c(H) dependence for small particles than for large ones. The radius R of the nanoparticles inducing superconductivity in DNA can be estimated from the critical field H_c = Φ_0/(πR²) at which the transition temperature extrapolates to 0. This field (see Fig. 3) is of the order of 10 T, corresponding to a radius between 5 and 7 nm. A rough estimate of the number of nanoparticles bound to DNA molecules participating in transport can also be deduced from the number of peaks in the differential resistance, which varies from 3 to 6 depending on the sample (the largest number of peaks is observed in the lowest resistance samples). This corresponds to a typical distance between nanoparticles attached to a DNA molecule of the order of 10 to 20 nm, which is thus the length over which we probe electronic transport along the DNA molecules, rather than the total width of the slit.
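This size estimate is straightforward to reproduce numerically; the sketch below simply inverts H_c = Φ_0/(πR²) for a few representative fields on the 10 T scale quoted above (the field values are illustrative, not fitted values from the data).

```python
import math

PHI_0 = 2.067833848e-15  # magnetic flux quantum h/2e [Wb]

def particle_radius(H_c):
    """Radius R [m] from the critical field H_c [T] at which T_c extrapolates to zero,
    using H_c = Phi_0 / (pi * R**2)."""
    return math.sqrt(PHI_0 / (math.pi * H_c))

for H_c in (10.0, 15.0, 20.0, 25.0):
    print(f"H_c = {H_c:4.1f} T  ->  R = {particle_radius(H_c) * 1e9:.1f} nm")
```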
The relatively low values of the measured resistances, as well as the appearance of proximity-induced superconductivity, indicate a strong electronic coupling between the DNA molecules and both the Pt/C residues and the Ga nanoparticles. This contrasts with previous measurements of DNA molecules linking gold nanoparticles [23], where the conductivity did not exceed 10^-4 S cm^-1 for a distance between metallic nanoparticles of 10 nm, whereas the conductivity in the present case can be estimated to be of the order of unity in the same units. Accordingly, transport experiments on completely metallised DNA molecules [24] did not seem to indicate any intrinsic contribution of the DNA molecules to the measured conduction. These differences may originate in the nature of the binding between the metallic nanoparticles and the DNA, which in ref. [23] was of covalent nature (involving alkanethiol molecules of low conductivity), whereas in the present case we believe that a good electrical contact between DNA molecules and the metallic nanoparticles is provided by the discontinuities and defects in the pentylamine film. Our results indicate that minute metallic particles can transfer charge carriers to attached DNA molecules and confirm that DNA molecules can be conducting over lengths of the order of 10 nm, but we cannot conclude from these experiments on the conduction over longer length scales. Since in our previous experiments [15,16] the DNA molecules were connected across similarly etched slits in thin metallic films, the existence of metallic residues cannot be excluded, and the conduction of DNA molecules could thus also have been probed over distances no greater than 10 nm. These results invite a systematic investigation of the possible carrier doping of DNA by metallic nanoparticles. Sample statistics: number of substrates, 12; number of FIB slits, 100; substrates with visible λ DNA, 5; substrates with conducting slits after λ DNA deposition, 3; slits on these 3 substrates, 15; conducting slits after λ DNA deposition, 11; slits on the control sample, 14; conducting slits after buffer only, 0. We thank F. Livolant, A. Leforestier, D. Vuillaume and D. Deresmes for fruitful discussions and acknowledge ANR QuantADN and DGA for support.
3,523.8
2010-11-01T00:00:00.000
[ "Physics" ]
Observation of Dirac Charge-Density Waves in Bi2Te2Se While parallel segments in the Fermi level contours, often found at the surfaces of topological insulators (TIs), would imply "strong" nesting conditions, the existence of charge-density waves (CDWs), periodic modulations of the electron density, has not been verified up to now. Here, we report the observation of a CDW at the surface of the TI Bi2Te2Se(111), below ≈350 K, by helium-atom scattering and, thus, experimental evidence for a CDW involving Dirac topological electrons. Deviations of the order parameter observed below 180 K, and a low-temperature break of time-reversal symmetry, suggest the onset of a spin-density wave with the same period as the CDW in the presence of a prominent electron-phonon interaction, originating from Rashba spin-orbit coupling. Introduction Charge-density waves (CDWs), periodic modulations of the electron density, are a ubiquitous phenomenon in crystalline metals and are often observed in layered or low-dimensional materials [1-6]. CDWs are commonly described by a Peierls transition in a one-dimensional chain of atoms, in which Fermi-surface nesting allows for the opening of an electronic gap at the nesting wavevector. However, it has been questioned whether the concept of nesting is essential for the understanding of CDW formation [1,7]. Instead, CDWs are often driven by strong electronic correlations and wavevector-dependent electron-phonon (e-ph) coupling [8]. Similarly, a nesting of sections of the Fermi surface can induce a periodic spin-density modulation, a spin-density wave (SDW) [9]. The possibility of a simultaneous appearance of both CDW and SDW order has been studied theoretically in earlier works [10], and an SDW was recently predicted for Weyl semi-metals [11]. The material class of topological insulators (TIs) has recently attracted extensive attention in a different context [12-17] due to their unique electronic surface states, which involve a Dirac cone with spin-momentum locking [18,19]. Here, we report, for the first time, experimental evidence for a Dirac CDW on the surface of the TI Bi2Te2Se, i.e., a CDW involving Dirac topological electrons. Atom-scattering experiments reveal a CDW transition temperature T_CDW = 350 K of the surface Dirac electrons, corresponding to a hexastar Fermi contour. The break of time-reversal symmetry of the CDW diffraction peaks observed at low temperature suggests a prominent role of Rashba spin-orbit coupling, with the possible onset of an SDW below 180 K. Archetypal TIs, such as the bismuth chalcogenides, share many similarities with common CDW materials, such as a layered structure (see Figure 1a) [20]. The hexagonal contours at the Fermi level often found in TIs also imply strong nesting, which has led to speculations about the existence of CDWs in TIs [21]. Figure 1. (b) The (111) cleavage plane, according to rhombohedral notation, with the red rhombus highlighting the hexagonal surface unit cell of the (0001) plane in hexagonal notation. (c) Surface band structure of Bi2Te2Se(111) calculated by Nurmamat et al. [25] (reproduced with permission, copyright 2013 by the American Physical Society) along the symmetry directions ΓK and ΓM. TSS labels the topological surface states forming the Dirac cone, while internal (quantum-well) surface states (ISS) are also found in the gap above the conduction band minimum.
In the present sample, the Fermi level E_F (purple dash-dotted line) is located about 0.43 eV above the Dirac point E_D and 0.37 eV above the Fermi energy of the calculation, i.e., the zero ordinate in [25]. Electronic Structure and Electron-Phonon Coupling Among the bismuth chalcogenides, Bi2Te2Se is much less studied. The surface-dominated electronic transport [29-33], as well as the surface electronic band structure [24,34-37], have been the subject of several investigations. Moreover, in terms of the electronic band structure, it was shown that, for different Bi_{2-x}Sb_xTe_{3-y}Se_y compositions, the Dirac point (E_D in Figure 1c) moves up in energy with increasing x [34]. Tuning these stoichiometric properties and the doping of the material may give rise to nesting conditions between electron-pocket or hole-pocket states at the Fermi surface (E_F in Figure 1c). Figure 1c depicts the electronic surface band structure calculated by Nurmamat et al. [25] along the symmetry directions ΓK and ΓM, revealing the TSS which form the Dirac cone. The Fermi level E_F (horizontal purple line) in our present sample is located about 0.37 eV above the Fermi energy with respect to the calculations of Nurmamat et al. [25], and 0.43 eV above the Dirac point, giving rise to the formation of quantum-well states at the Fermi surface. Moreover, a near-surface, two-dimensional electron gas (2DEG) with pronounced spin-orbit splitting can be induced on Bi2Te2Se by adsorption of rubidium [38]. Surface oxidation may occur at step-edge defects after cleaving [39], but Bi2Te2Se seems to be less prone to the formation of a 2DEG from rest-gas adsorption compared to other TIs [40], as shown in angle-resolved photoemission measurements of the present samples [31]. Dirac fermion dynamics in Bi2Te2Se were the subject of a recent study by Papalazarou et al. [41]. One reason for Bi2Te2Se being less studied than the binary bismuth chalcogenides might be the difficulty in synthesising high-quality single crystals, which originates from the internal features of the specific solid-state composition and phase separation in Bi2Te2Se [42]. In this work, we present a helium-atom scattering (HAS) study of Bi2Te2Se, which is actually phase II of Bi2Te_{3-x}Se_x(111) with x = 1 according to [42], as derived from the surface lattice constant a = 4.31 Å measured by HAS for the structure shown in Figure 1. Since helium atoms are scattered off the surface electronic charge distribution, HAS [43,44] provides access to the surface electron density [45,46] and is, therefore, a perfect probe for experimental studies of TIs, since the TSS properties are often mixed up with those of bulk states [47-49]. As the surface electronic transport properties of TIs at finite temperature, as well as the appearance of CDWs, are influenced by the interaction of electrons with phonons, the e-ph coupling, described in terms of the mass-enhancement factor λ, has been the subject of several studies [47,50-56]. For Bi2Te2Se, it was reported that the electron-disorder interaction dominates the scattering processes, with λ = 0.12 [57], in good agreement with the value found from HAS [55]. Experimental Details The experimental data for this work were obtained using a HAS apparatus, in which a nearly monochromatic beam of 4He is scattered off the sample surface in a fixed source-sample-detector geometry (for further experimental details, see [58] and Appendix A).
The scattered intensity of a He beam in the range of 10-15 meV is then recorded as a function of the incident angle ϑ_i with respect to the surface normal, which can be modified by rotating the sample in the scattering chamber. The momentum transfer parallel to the surface, ∆K, upon elastic scattering is given by ∆K = k_i (sin ϑ_f − sin ϑ_i), with k_i the incident wavevector and ϑ_i and ϑ_f the incident and final angle, respectively. The Bi2Te2Se sample was grown by the Bridgman-Stockbarger method, as detailed in [42], and further characterised using powder X-ray diffraction (PXRD), Seebeck microprobe measurements and inductively coupled plasma atomic emission spectroscopy. While the chemical composition varies along the grown crystal rod, with the Se content y in the formula Bi2Te_xSe_y changing with the distance from the growth starting point, as illustrated in [42], experimental PXRD shows that smaller sections of the entire crystal boule correspond to a single phase. The relation between the Se content y and the cell parameters a and c allows us to assign the section of the crystal rod used here to phase II of Bi2Te_{3-x}Se_x(111) with x ≈ 1, according to the above-mentioned surface lattice constant a = 4.31 Å from HAS diffraction scans. As also outlined in [42], the composition along individual sections of the crystal rod is uniform, with the cell parameters a and c changing abruptly at specific points along the rod. The single-crystallinity of the present sample is further supported by HAS and low-energy electron diffraction measurements in the present work. Finally, the Bi2Te2Se sample was cleaved in situ in a load-lock chamber [59] prior to the experiments. Due to the weak bonding between the quintuple layers in Figure 1a, cleaving gives access to the (111) cleavage plane with a Te termination at the surface (Figure 1b, see also Appendix A). The fact that these satellite peaks appear at the same momentum transfer ∆K ≈ ±0.18 Å^-1 with respect to the specular as well as to the first-order diffraction peaks, independently of the incident energy, shows that they are neither caused by bound-state resonances [45] nor by other artifacts (see the section on CDW satellite peaks in Appendix B, as well as additional scans in Figure A1), but have, necessarily, to be ascribed to a long-period surface superstructure of the electron density, i.e., a surface CDW. These observations are consistent with the theoretical surface band structure calculated by Nurmamat et al. [25] (Figure 1c) and, more specifically, with the distribution in parallel wavevector space of the spin polarisation perpendicular to the surface for the states at the Fermi level, when re-scaled for a Fermi energy E_F = 0.43 eV above the Dirac point. As appears in Figure 3a, the re-scaled spin distribution (red and blue for spin-up and spin-down, respectively) exhibits extended parallel segments of equal spin separated by the nesting wavevector g_CDW = 0.18 Å^-1. The position of the Fermi level which accounts for the present data falls near the bottom of the band of surface quantum-well states (grey circular region in Figure 3a), allowing for a continuum of small-wavevector phonon-induced transitions across the Fermi level, which may possibly be associated with the observed diffraction structure around the HAS specular peak. Figure 3 shows that the hexastar shape provides a spin-allowed nesting which corresponds to the observed g_CDW periodicity.
In contrast to the hexastar, for a hexagonal shape, as found in several TIs or as also observed on Bi(111), opposite sides of the Fermi contour exhibit opposite spins, a situation which forbids the pairing needed for CDW formation but leaves the possibility of an SDW [60,61]. Finally, the transmission of momentum and energy to the lattice for the spin-allowed transition across the hexastar occurs via the excitation of virtual electron-hole pairs, if one assumes an Esbjerg-Nørskov form of the atom-surface potential based on a conducting surface. CDW Temperature Dependence In the following, we consider the temperature dependence of the CDW diffraction peaks and the CDW critical temperature T CDW . Upon measuring the scattered intensities as a function of surface temperature, it turns out that the intensity of the satellite peaks decreases much faster than the intensity of the specular peak. As shown in Appendix B (Figure A2), when plotting both peaks in a Debye-Waller plot, the slope of the satellite peak is clearly steeper than that of the specular peak. Based on the theory of classical CDW systems, the square root of the integrated peak intensity can be viewed as the order parameter of a CDW [62,63]. Figure 4b shows the temperature dependence of the square root of the integrated intensity for the −g CDW peak on the left-hand side of the specular peak (see right panel of Figure 4b for several scans). In order to access the intensity change relevant to the CDW system, as opposed to the intensity changes due to the Debye-Waller factor [64], the integrated intensity I(T) has been normalised to that of the specular beam [65], a correction which is necessary in view of the low surface Debye temperature of Bi 2 Te 2 Se(111) [55]. The temperature dependence of the order parameter I(T) can be used to determine the CDW transition temperature T CDW and the critical exponent β belonging to the phase transition by fitting the power law I(T) = I(0)(1 − T/T CDW ) β to the data points in Figure 4b, where I(0) is the extrapolated intensity at 0 K. The fit is represented by the green dashed line in Figure 4b, resulting in T CDW = (350 ± 10) K and β = (0.34 ± 0.02). The exact peak position and width of the satellite peak were determined by fitting a single Gaussian to the experimental data. The right panel of Figure 4a shows a shift in the satellite peak position to the right with increasing surface temperature, i.e., g CDW decreases with increasing temperature, as illustrated by the grey line. Such a temperature dependence confirms the connection of the satellite peaks with the surface electronic structure. A shift of the Dirac point to lower binding energies with increasing temperature and, thus, a decrease in k F has been observed both for Bi 2 Te 2 Se(111) [67] and Bi 2 Se 3 (111) [68]. As reported by Nayak et al. [67], the temperature-dependent changes in the electronic structure at E F occur due to the shift of the chemical potential in the case of n-type Bi 2 Te 2 Se(111). Moreover, a strong temperature dependence of the chemical potential has also been observed for other CDW systems [69,70] and semiconductors [71]. It is noted, however, that, in the present case, the changes in g CDW with temperature, as well as in the peak area, are concentrated in a region around 170 K, which, by itself, suggests another phase transition. 
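As a concrete illustration of this fitting procedure, the short sketch below performs the same power-law fit with SciPy. The temperature and intensity arrays are placeholders standing in for the normalised, integrated −g CDW intensities; they are not the measured Bi 2 Te 2 Se data.

```python
# Sketch: critical-exponent fit of the CDW order parameter, I(T) = I(0)*(1 - T/T_CDW)**beta.
# The arrays below are placeholders, not the measured data.
import numpy as np
from scipy.optimize import curve_fit

def order_parameter(T, I0, T_cdw, beta):
    """Power law for the CDW order parameter below the transition."""
    reduced = np.clip(1.0 - T / T_cdw, 0.0, None)   # no CDW signal above T_CDW
    return I0 * reduced**beta

T = np.array([110, 150, 190, 230, 270, 310, 340], dtype=float)   # K (placeholder)
I = np.array([0.95, 0.88, 0.80, 0.70, 0.57, 0.40, 0.20])         # normalised (placeholder)

popt, pcov = curve_fit(order_parameter, T, I, p0=(1.0, 350.0, 0.33))
perr = np.sqrt(np.diag(pcov))
print(f"T_CDW = {popt[1]:.0f} +/- {perr[1]:.0f} K, beta = {popt[2]:.2f} +/- {perr[2]:.2f}")
```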
Diffraction and the Role of Spin-Orbit Coupling The surprising disappearance of the +g CDW diffraction peak observed at low temperature (118 K) can indeed be related to the apparent phase transition occurring at about 170-180 K (illustrated by the orange arrow in Figure 4), possibly a spin ordering within the CDW, i.e., an SDW with the same period. The latter is indicated by a rapid shift in the −g CDW diffraction peak position around 170 K, corresponding to a slight contraction of the CDW period (right panel of Figure 4a) as a possible effect of spin ordering. The CDW order parameter, expressed by (1 − T/T CDW ) β , actually shows a small deviation from this law below about 180 K (T SDW in Figure 4b). As explained above, the He-atom diffraction process from the CDW occurs via parallel momentum transfer to the surface electron gas through an electron-hole excitation between a filled and an empty state of equal spin, well nested at the Fermi level. Thus, the unidirectionality of the process at low temperature suggests a prominent role of the Rashba term in the presence of spin ordering, with strong implications for the e-ph contribution. The latter is in line with Guan et al. [72], who reported a large enhancement of the e-ph coupling in the Rashba-split state of the Bi/Ag(111) surface alloy. The larger overlap of He atoms with CDW maxima also selects electrons with the same spin, because the SDW and CDW exhibit the same period. Consider, in the present case, a free-electron Hamiltonian with a Rashba spin-orbit term [73], where −E D is the energy of the Dirac point below the Fermi energy E F = 0, p is the surface electron momentum and m * its effective mass, σ is the spin operator, ẑ is the unit vector normal to the surface and α R is the Rashba constant. The modulation of the Rashba term produced by a phonon of momentum q, branch index s and normal mode coordinate A qs is viewed as the main source of e-ph interaction [74], causing inter-pocket coupling (∆p = q = ±g CDW ) and CDW gap opening. In a diffraction process, the exchange of parallel momentum between the scattered atom and the solid centre-of-mass is mediated by a virtual electron inter-pocket transition |k, n⟩ → |k + q, n⟩, weighted by the difference in Fermi-Dirac occupation numbers f k,n − f k+q,n . While the process q = −g CDW virtually casts the electron from a pocket ground state at the Fermi level to an empty excited state across that gap, the process q = +g CDW would virtually send the electron, because of the Rashba term, to a lower-energy state and is, therefore, forbidden at low temperature. Such a scenario is equivalent to saying that the SDW-CDW entanglement makes HAS sensitive to the spin orientation via its temperature dependence. The inter-pocket electron transition across the gap accompanying a CDW diffraction of He atoms via the modulation of the Rashba term may only occur in one direction. Since the gap energy is of the order of room temperature, and at this temperature the spin ordering is removed, the above selection rule is relaxed and the diffraction peaks are observed in both directions (on both sides of the specular peak in Figure 3b). We note that the nesting condition and the changes between different Fermi-level contours (e.g., hexagonal vs. hexastar shapes) depend strongly on the position of the Fermi level [40,75,76] and may, thus, be highly sensitive to the doping situation of the specific sample [77]. 
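For reference, the free-electron Hamiltonian with a Rashba spin-orbit term referred to above can be written in its standard textbook form, using the symbols defined in the text; the sign and ħ conventions of [73] may differ, so this is an assumed form rather than the paper's own expression.

```latex
% Standard free-electron Hamiltonian with a Rashba spin-orbit term
% (assumed textbook form; sign and hbar conventions in [73] may differ).
H \;=\; \frac{p^{2}}{2m^{*}} \;-\; E_{D}
      \;+\; \frac{\alpha_{R}}{\hbar}\,\hat{z}\cdot\left(\boldsymbol{\sigma}\times\mathbf{p}\right)
```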
While the size of the CDW gap, as inferred from the slight asymmetry between +g CDW and −g CDW , does not seem to be large enough to be resolved in ARPES [21,70,77], HAS satellite diffraction peaks clearly indicate an additional long-period component of the surface charge-density corrugation. Summary and Conclusions In summary, we have provided evidence by means of helium-atom scattering, of a surface charge-density wave in Bi 2 Te 2 Se(111) occurring below 345 K and involving Dirac topological electrons. The CDW diffraction pattern is found to reflect a spin-allowed nesting across the hexastar contour at the appropriate Fermi level, re-scaled from previously reported ab initio calculations [25]. The CDW order parameter has been measured down to 108 K and found to have a critical exponent of 1/3. The observation of a time-reversal symmetry break at low temperature, together with deviations from the critical behaviour below about 180 K, are interpreted as being due to the onset of a spin-density wave with the same period as the CDW in the presence of a prominent electron-phonon interaction originating from the Rashba spin-orbit coupling. While it is difficult to make definitive statements about the generality of our observations, we anticipate that, by tuning the stoichiometric properties and doping level of topological insulators, thus changing the position of the Dirac point and possible nesting conditions, the condition for the CDW order may be changed or shifted to a different periodicity. It is, thus, expected that, from further experiments and validation, one may be able to evolve phase diagrams for Dirac CDWs as a function of stoichiometry, doping and Fermi-level position. Taken together the results promise also to shed light on previous experimental and theoretical investigations of related systems and how the CDW order affects lattice dynamics and stability. These also include possible connections between the CDW order and superconductivity [78][79][80], as well as the influence of certain energy dissipation channels on molecular transport [81,82]. Data Availability Statement: Experimental data supporting the results are available from the corresponding author upon reasonable request. Acknowledgments: We thank Marco Bianchi for his advice and help in terms of the sample preparation and additional characterisation of the samples. We would also like to thank Philip Hofmann (Aarhus University), Evgueni Chulkov (DIPC, San Sebastian, Spain) and D. Campi and M. Bernasconi (Unimib, Milano, Italy) for many helpful discussions. Conflicts of Interest: The authors declare no conflict of interest. Appendix A. Experimental Details The experimental data for this work were obtained at the HAS apparatus in Graz, whereby a nearly monochromatic beam of 4 He is scattered off the sample surface in a fixed source-sample-detector geometry with an angle of 91.5 • (for further details about the apparatus, see [58]). The scattered intensity of a He beam in the range of 10-20 meV is then monitored as a function of the incident angle ϑ i with respect to the surface normal, which can be modified by rotating the sample in the main chamber (base pressure p ≤ 2 · 10 −10 mbar). The momentum transfer parallel to the surface ∆K, upon elastic scat-tering, is given by ∆K = |k i | sin ϑ f − sin ϑ i , with k i , the incident wavevector, and ϑ i and ϑ f , the incident and final angle, respectively. Details about the sample growth procedure can be found in Mi et al. [42]. 
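To make the scattering kinematics explicit, the sketch below evaluates ∆K = |k i |(sin ϑ f − sin ϑ i ) for the fixed total scattering angle ϑ i + ϑ f = 91.5° quoted above. The 15 meV beam energy is an illustrative value within the stated range, not a specific measurement.

```python
# Sketch: elastic parallel momentum transfer in a fixed-angle HAS geometry,
# Delta_K = |k_i| * (sin(theta_f) - sin(theta_i)) with theta_i + theta_f = 91.5 deg.
import numpy as np
from scipy import constants as c

def incident_wavevector(E_meV):
    """|k_i| in 1/Angstrom for a 4He beam of energy E (meV)."""
    E = E_meV * 1e-3 * c.e                      # J
    m_he = 4.002602 * c.atomic_mass             # kg
    k = np.sqrt(2.0 * m_he * E) / c.hbar        # 1/m
    return k * 1e-10                            # 1/Angstrom

def delta_K(E_meV, theta_i_deg, total_angle_deg=91.5):
    theta_i = np.radians(theta_i_deg)
    theta_f = np.radians(total_angle_deg - theta_i_deg)
    return incident_wavevector(E_meV) * (np.sin(theta_f) - np.sin(theta_i))

# Example: a 15 meV beam (illustrative), incident angles around the specular position.
for th in (40.0, 45.75, 50.0):
    print(f"theta_i = {th:5.2f} deg  ->  Delta_K = {delta_K(15.0, th):+.3f} 1/A")
```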
Bi 2 Te 2 Se exhibits a rhombohedral crystal structure, which is in accordance with other bismuth chalcogenides built of quintuple layers (QL, see Figure 1 in the main text) which are bound to each other via weak forces of van der Waals character [42]. The weak bonding between the QLs gives access to the (111) cleavage plane in terms of the primitive rhombohedral unit cell. However, it is more common to illustrate the conventional hexagonal unit cell, which consists of 3 QLs with each QL layer being terminated by a Te layer (Figure 1a in the main text). The Te termination is parallel to the (0001) plane of the conventional hexagonal unit cell and exhibits a surface lattice constant of a = 4.31 Å (Figure 1b in the main text). The Bi 2 Te 2 Se sample was fixed on a sample holder using thermally and electrically conductive epoxy and cleaved in situ in a load-lock chamber [59]. The sample temperature can be varied by cooling via a thermal connection to a liquid nitrogen reservoir and heating based on a button heater. After cleavage, the cleanliness and purity of the sample can be further checked using low-energy electron diffraction (LEED) and Auger electron spectroscopy (AES). (a) (b) Figure A1. (a) Additional full diffraction scans taken at room temperature and with varying incident beam energy E i further illustrate the spin-allowed CDW periodicity of Bi 2 Te 2 Se(111) along ΓM. The corresponding satellite peaks (grey dashed vertical lines) appear next to the specular and first-order diffraction peaks (grey dash-dotted vertical lines). (b) An additional full diffraction scan (logarithmic scale), with the sample cooled down, illustrates that the asymmetry between +g CDW and −g CDW , as described in the main text, is also evident for the satellite peaks next to the first-order diffraction peaks. Appendix B. CDW Satellite Peaks and Anomalous Debye Attenuation Both the position of the peaks assigned as CDW and the described temperature dependence cannot be attributed to a lack of cleanliness or to a superstructure formed by adsorbates at the surface. While we cannot exclude that the structure very close to the specular may be caused by twinning/different domains, which can occur on layered crystals such as the binary TIs [23], such an effect cannot cause the appearance of the ±g CDW satellite peaks next to both the specular and the diffraction peaks. Figure A2. Debye-Waller plot of the temperature dependence for both the specular, as well as the satellite, peak at the left-hand-side of the specular peak. Therefore, Figure A1 shows a number of additional full diffraction scans with varying incident-beam energies. The full scan in Figure A1b, illustrates that the asymmetry between +g CDW and −g CDW for the cooled sample, as described in the main text, is also evident for the satellite peaks next to the first-order diffraction peaks. Small intensity variations occurring between the right-and left-hand side of the specular may be present due to misalignment effects; however, Figure A1b shows that such artefacts cannot explain the asymmetry between +g CDW and −g CDW . It should further be noted that the CDW peaks are very sensitive to the azimuthal orientation of the crystal, with the spin-allowed CDW periodicity of Bi 2 Te 2 Se(111) occurring only along ΓM, which clearly speaks against specular scattering. The temperature dependence of the peak intensities with the typical critical exponent is further strong evidence for the satellite peaks originating from a CDW. 
Finally, the semi-logarithmic plot ( Figure A2) of the obtained temperature dependence for the diffraction intensities of the specular reflection (blue) and the satellite peak (red) clearly indicates the anomalous attenuation of the CDW satellite peak compared to the Debye attenuation of the specular peak. The data plotted in Figure A2 are further used for the derivation of the order parameter in Figure 4b in the main text.
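A minimal way to quantify the anomalous attenuation described here is to compare the semi-logarithmic slopes of the two peaks by linear regression and to normalise the satellite intensity to the specular one, as done in the main text. The sketch below uses two placeholder intensity-versus-temperature series rather than the measured ones.

```python
# Sketch: comparing Debye-Waller slopes, d ln(I)/dT, of the specular and CDW satellite peaks.
# Placeholder data; a steeper (more negative) satellite slope signals anomalous attenuation.
import numpy as np

T = np.array([110, 150, 190, 230, 270, 310], dtype=float)            # K (placeholder)
I_specular  = np.array([1.00, 0.83, 0.69, 0.57, 0.47, 0.39])          # arb. units (placeholder)
I_satellite = np.array([0.20, 0.13, 0.085, 0.055, 0.034, 0.020])      # arb. units (placeholder)

slope_spec, _ = np.polyfit(T, np.log(I_specular), 1)
slope_sat, _  = np.polyfit(T, np.log(I_satellite), 1)

print(f"specular : d ln(I)/dT = {slope_spec:.2e} 1/K")
print(f"satellite: d ln(I)/dT = {slope_sat:.2e} 1/K")
# Normalising the satellite to the specular removes the common Debye-Waller part:
print(np.round(I_satellite / I_specular, 3))
```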
5,679.8
2021-11-03T00:00:00.000
[ "Physics" ]
Stable One-Dimensional Periodic Wave in Kerr-Type and Quadratic Nonlinear Media We present the propagation of optical beams and the properties of one-dimensional (1D) "bright" and "dark" spatial solitons in saturated Kerr-type and quadratic nonlinear media. Special attention is paid to recent advances in the theory of soliton stability. We show that the stabilization of bright periodic waves occurs above a certain threshold power level and that the dark periodic waves can be destabilized by the saturation of the nonlinear response, while the dark quadratic waves turn out to be metastable in a broad range of material parameters. The propagation of a (1+1)-dimensional optical field in saturated Kerr media is described using the nonlinear Schrödinger equation. A model for the one-dimensional envelope evolution equation is built up using the Laplace transform. Introduction Discrete spatial optical solitons have been introduced and studied theoretically as spatially localized modes of periodic optical structures [1]. A standard theoretical approach in the study of discrete spatial optical solitons is based on the derivation of an effective discrete nonlinear Schrödinger equation and the analysis of its stationary localized solitons, the discrete localized modes [1, 2]. Spatial solitons may exist in a broad range of nonlinear materials, such as cubic Kerr, saturable, thermal, reorientational, photorefractive, and quadratic media, as well as in periodic systems. Furthermore, the existence of solitons varies with topology and dimension [3]. The theory of spatial optical solitons has been based on the nonlinear Schrödinger (NLS) equation with a cubic nonlinearity, which is exactly integrable by means of the inverse scattering transform (IST) technique. From the physical point of view, the integrable NLS equation describes (1+1)-dimensional beams in a cubic Kerr nonlinear medium in the framework of the so-called paraxial approximation [4]. Bright solitons are formed when diffraction or dispersion is compensated by a self-focusing nonlinearity and appear as an intensity hump on a zero background. Solitons which appear as intensity dips on a CW background are called dark solitons [3]. Kerr solitons rely primarily on a physical effect which produces an intensity-dependent change in the refractive index [3]. Periodic wave structures play an important role in the nonlinear wave domain: they are at the core of modulational instability and optical chaos in continuous nonlinear media, and of the modes of quasi-discrete and discrete systems in the mechanical and electrical domains. Periodic wave structures are thus often unstable in the propagation process. For example, photorefractive crystals exhibit a relatively strong nonlinearity of saturable character at intensities readily reached with a continuous-wave He-Ne laser. 
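For orientation, the cubic NLS equation referred to here, together with its well-known bright and dark soliton solutions, can be written in dimensionless form as follows. These are textbook normalisations; the scaling adopted later in this paper may differ.

```latex
% Cubic NLS in dimensionless form (textbook normalisation; the paper's own scaling may differ).
% Focusing case and its bright soliton:
i\,\frac{\partial u}{\partial z} + \frac{1}{2}\frac{\partial^{2} u}{\partial x^{2}} + |u|^{2}u = 0,
\qquad u(x,z) = \eta\,\operatorname{sech}(\eta x)\,e^{\,i\eta^{2}z/2},
% Defocusing case and its dark soliton on a CW background:
i\,\frac{\partial u}{\partial z} + \frac{1}{2}\frac{\partial^{2} u}{\partial x^{2}} - |u|^{2}u = 0,
\qquad u(x,z) = u_{0}\tanh(u_{0}x)\,e^{-\,i u_{0}^{2} z}.
```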
Methodology The propagation of optical radiation in (1+1) dimensions in a saturable Kerr-type medium is described by the nonlinear Schrödinger equation for the varying field amplitude Φ(ς, ρ) [5]. The transverse ς and longitudinal ρ coordinates are scaled in terms of the characteristic pulse/beam width and the dispersion/diffraction length, respectively; S is the saturation parameter; σ = ∓1 stands for focusing/defocusing media [5]. The simplest periodic stationary solutions of (2.1) are written in terms of a propagation constant h. By substituting the field in this form into (2.1), one obtains the stationary equation. To perform the linear stability analysis of periodic waves in the saturable medium, we use the mathematical formalism initially developed for periodic waves in cubic nonlinear media [5]. With the boundary conditions imposed, from (2.5) we obtain the Laplace transform of the field in (i) direct form and (ii) inverse transformation form, where u is a finite number. For the integration over the real (h > 0) and imaginary (h < 0) poles, we calculated the complex amplitude of the nonlinear equation. For the harmonic case h < 0, the integration form of the complex amplitude is given by Equation (2.10); by carrying out the integration, we obtain Equation (2.12). The total phase of the optical field envelope follows from this result. We assume a frequency ω defined as the rate of variation of the total phase, which gives the complex amplitude of the envelope field. The hyperbolic secant entering this expression results in a conservative effect. The longitudinal component is given by Equation (2.16). Some numerical simulations of the complex amplitude of the nonlinear equation and of the total phase of the optical field, as functions of the propagation constant and an integer number n, are illustrated in Figure 1. Figure 1 represents the amplitude and phase functions of the model, which illustrate the theoretical treatment presented here. Thanks to the complex model, the initial solution includes the hyperbolic secant and its complex-conjugate part (Equation (2.17)). Conclusions We have described the propagation of periodic waves in saturated Kerr-type and quadratic nonlinear media. An analytic solution for one-dimensional bright and dark spatial solitons was found. To describe spatial optical solitons in saturated Kerr-type and quadratic nonlinear media, we propose an analytical model based on the Laplace transform. The theoretical model consists in solving the Schrödinger equation for the photonic network analytically by means of the Laplace transform. The propagation properties were found by using different forms of saturable nonlinearity. Moreover, the exact analytic solution of the propagation problem presented herein creates possibilities for further theoretical investigation. As a result, it provides a useful framework which yields one-dimensional "bright" and "dark" solitons with transversal structure and transversal one-dimensional periodic waves. Figure 1: Numerical simulations of complex amplitude and phase.
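Since the display equations of this section are not reproduced here, the sketch below assumes the standard dimensionless saturable NLS implied by the text, i Φ_ρ + (1/2) Φ_ςς + σ |Φ|² Φ / (1 + S |Φ|²) = 0, and propagates a sech-shaped beam with a split-step Fourier scheme. It is an illustrative numerical check under that assumption, not the paper's Laplace-transform model.

```python
# Sketch: split-step Fourier propagation of an assumed saturable NLS,
#   i dPhi/drho + 0.5 d^2Phi/dzeta^2 + sigma*|Phi|^2*Phi/(1 + S*|Phi|^2) = 0
# (dimensionless form assumed from the text; not the Laplace-transform model of the paper).
import numpy as np

N, L = 1024, 40.0                                  # grid points, transverse window
zeta = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)       # transverse wavenumbers

sigma, S = +1.0, 0.2                               # focusing medium, saturation parameter
drho, steps = 1e-3, 5000                           # propagation step and number of steps

phi = 1.0 / np.cosh(zeta)                          # sech-shaped input beam
half_linear = np.exp(-0.5j * k**2 * (drho / 2.0))  # half-step of diffraction in Fourier space

for _ in range(steps):
    phi = np.fft.ifft(half_linear * np.fft.fft(phi))                          # linear half step
    intensity = np.abs(phi) ** 2
    phi *= np.exp(1j * sigma * intensity / (1.0 + S * intensity) * drho)      # nonlinear step
    phi = np.fft.ifft(half_linear * np.fft.fft(phi))                          # linear half step

print("peak intensity after propagation:", np.abs(phi).max() ** 2)
```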
1,229.2
2012-04-11T00:00:00.000
[ "Physics" ]
Synthesis and Photocatalytic Degradation of Water to Produce Hydrogen from Novel Cerium Dioxide and Silver-Doped Cerium Dioxide Fiber Membranes by the Electrospinning Method The sol-gel method combined with the electrospinning technique were used to synthesize CeO2 nanofiber membranes and CeO2 fiber membranes doped with different contents of nano-silver. The thermal degradation behavior, phase structure, morphology, and optical and photocatalytic hydrogen production efficiency of CeO2 nanofiber membranes and CeO2 fiber membranes doped with different contents of nano-silver were studied. X-ray diffraction (XRD) results indicate that the increase of silver concentration can inhibit the formation of CeO2 crystal. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) observations show that in the prepared CeO2 with a diameter of about 100 nm and fiber membrane material doped with nano-silver, the fiber is made of a large number of accumulating grains. Analysis of optical properties found that the doped nano–silver CeO2 fiber membranes enhance the absorption of visible light and reduce the band gap of the material. Photocatalytic experiments show that the cerium dioxide nanofibers doped with nano-silver can greatly improve the photocatalytic performance of materials than that of pure CeO2. The Ag/CeO2 fiber membrane with the Ag/CeO2 molar ratio of 3:50 possesses the highest photocatalytic hydrogen production efficiency because of its high electron hole transfer and separation efficiency. This novel synthesis strategy can be used to prepare other broad band gap semiconductor oxides and enhance their photocatalytic activity. INTRODUCTION With the rapid development of the global economy, the demand for energy continues to grow, while the concern of greenhouse gas and aerosol emissions is increasing, the development of clean, renewable new energy has become the most urgent task for countries all over the world . As a secondary energy source, hydrogen energy has the advantages of being abundant, economical, clean, efficient, storable, and transportable, and is generally regarded as one of the most ideal pollution-free green energy sources in the 21st century (Prekob et al., 2020). At present, one of the main means of hydrogen production is photocatalytic decomposition of water to produce hydrogen, and the key of this is to choose a good photocatalyst (Bashiri et al., 2020;Abd-Rabboh et al., 2021;Hamdy et al., 2021). Therefore, it is interesting to develop new photocatalysts to construct specific defect structures and to study their photocatalytic activity. As the one of the most abundant rare elements in nature, cerium has a unique 4f electronic structure which makes its compounds widely used in optical, electrical, and magnetic fields (Rajesh et al., 2020;Abd-Rabboh et al., 2021;Wang et al., 2021a;Wang et al., 2021b;Syed et al., 2021). Cerium dioxide is a promising semiconductor photocatalyst, because it has the properties of n-type semiconductors such as good light-resistant corrosivity and excellent storage and release of oxygen, and its unique Ce 3+ /Ce 4+ valence activity makes it highly oxidative and gives it a reducing ability (Mishra et al., 2018;Wen et al., 2018;Xing et al., 2020;Wang et al., 2021c;Wang et al., 2021d). Nano-cerium dioxide possesses more special properties and applications than cerium dioxide, so researchers have more stringent requirements about morphology, size, and others (Gao et al., 2018;Gong et al., 2019;Li et al., 2021). 
However, research on cerium dioxide nanofibers is relatively rare. Generally, the single component CeO 2 photocatalyst has a large band gap and is difficult to respond to visible light, which greatly limits its application in the field of photocatalytic decomposition of water to produce hydrogen (Li et al., 2021). Up to now, there are three methods to improve the photocatalytic activity of CeO 2 photocatalysts (Gao et al., 2018;Malyukin et al., 2018;Mohammadiyan et al., 2018;Wen et al., 2018;Xing et al., 2020;Wang et al., 2021d;Mikheeva et al., 2021): 1) using special preparation methods to synthesize CeO 2 photocatalysts with special defect structures, 2) combining other metal oxides with a small band gap value to construct special heterojunction structure composites to enhance their light response ability, and 3) a CeO 2 photocatalyst was modified by noble metal particles to enhance the charge transfer and migration ability of the system, thus improving the photocatalytic activity of CeO 2 . Noble metal particles-doped cerium dioxide is expected to show excellent physical and chemical properties (Mikheeva et al., 2021). Therefore, the preparation of cerium dioxide and noble metal particles-doped cerium dioxide by special technology and the study of their photocatalytic decomposition of water to produce hydrogen has important research significance. In this paper, cerium dioxide composite fiber membranes and CeO 2 fiber membranes doped with different contents of nanosilver were prepared by the sol-gel method combined with electrospinning technology. The thermal decomposition behavior, phase structure, morphology, and optical and photocatalytic decomposition of water to produce hydrogen of cerium dioxide composite fiber membranes and CeO 2 fiber membranes doped with different contents of nano-silver were studied by various characterization methods. Based on the energy band theory and photocatalytic experiment results, a photocatalytic mechanism is proposed. Materials All reagents were analytical grade and were used without further treatment. Hexahydrate nitrate and polyvinylpyrrolidone (PVP) were purchased from Aladdin Reagent (China) Co., Ltd. Anhydrous ethanol, silver nitrate, and acetic acid were purchased from Sinopharm Group Chemical Reagent Co., Ltd. Preparation of Cerium Dioxide Fiber Membranes A total of 3 g of polyvinylpyrrolidone (PVP) was weighed and added to 50 ml of ethanol, then the mixture, known as solution A, was stirred for 3 h until completely dissolved. In total, 2.171 g of cerium nitrate hexahydrate was dissolved in 10 ml of ethanol. After stirring, the solution was slowly added dropwise to solution A and stirred for about 6 h. The solution was spray-coated by the electrospinning method with a temperature of 20°C and a relative humidity of 50%. The film was dried in a vacuum oven at 40°C for 12 h. The dried film was then calcined in a muffle furnace at 550°C and incubated for 0.5 h. And finally a pale yellow cerium dioxide film was obtained. Preparation of Cerium Dioxide Fiber Membrane Doped With Nano-Silver Solution A and the cerium source solution were prepared under the same conditions and the two solutions were mixed under stirring. A total of 0.017 g of AgNO 3 was added to 5 ml of deionized water and stirred for 20 min. The silver source solution was slowly added dropwise to the above mixed solution and stirred for 2 h. 
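As a quick consistency check of the reagent masses quoted in this preparation (and of the variants described next), the sketch below converts them to an Ag:Ce molar ratio. The molar masses are standard reference values and are not taken from the paper.

```python
# Sketch: Ag/Ce molar ratios implied by the quoted reagent masses.
# Molar masses are standard reference values (g/mol), not taken from the paper.
M_CE_NITRATE_HEXAHYDRATE = 434.22   # Ce(NO3)3 . 6H2O
M_AGNO3 = 169.87

n_ce = 2.171 / M_CE_NITRATE_HEXAHYDRATE           # ~5.0 mmol of Ce(NO3)3.6H2O
for m_ag in (0.017, 0.051, 0.085):                # AgNO3 masses used for the spinning solutions
    n_ag = m_ag / M_AGNO3
    print(f"{m_ag:.3f} g AgNO3 -> Ag:Ce = {50 * n_ag / n_ce:.1f}:50")
```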
In addition, by changing the content of AgNO 3 in the spinning solution, the morphological characteristics and properties of the spinning film under different silver contents were investigated. Solutions containing 0.051 and 0.085 g of silver nitrate were prepared by the same method, so that the molar ratios of AgNO 3 /Ce (NO 3 ) 3 in the spinning solution were 1:50, 3:50, and 5:50. The film was prepared by the electrospinning method. After drying under the same conditions, the film was calcined in a muffle furnace at 600°C for 1 h. Finally, cerium dioxide nanofiber membranes doped with nano-silver were obtained. Materials Characterization The products were characterized by thermogravimetric analysis (TG) and differential scanning calorimetry (DSC). The heating rate, the air flow rate, the injection volume, and the temperature range were 20 K/min, 100 ml/min, 2 mg, and 25-800°C, respectively. The composition of the membranes was characterized by a Bruker D8 Advance X-ray diffractometer with a scanning angle of 20-80°, a scanning step length of 0.02°, and Cu target Kα (λ = 0.154056 nm) radiation with a working voltage of 40 kV and a current of 40 mA. The microstructures of the membranes were observed by scanning electron microscopy, while the fibers and particles constituting the membranes were characterized by scanning electron microscopy (SEM) and transmission electron microscopy (TEM). UV-visible absorption spectra of the prepared samples were measured by an ultraviolet and visible spectrophotometer. Photocatalytic Experiments In order to investigate the photocatalytic properties of the prepared cerium dioxide fiber membranes and the cerium dioxide fibers doped with different proportions of nano-silver, they were applied to the reaction of photocatalytic degradation of water to produce hydrogen and compared with the bulk pure cerium oxide. The Labsolar H 2 photolysis system was developed by Beijing Prefectlight Technology Co., Ltd., and the detection device was a Shanghai Tianmei GC7900 Gas Chromatograph, with a Microsolar300 high-performance analog daylight xenon lamp used as the simulated light source. A 100-mg sample was added to 100 ml of deionized water, and sodium sulfite was added as the sacrificial agent to carry out photocatalytic hydrogen production. The hydrogen production of each material was compared after 6 h of illumination. RESULTS AND DISCUSSION Thermogravimetric Analysis-Differential Scanning Calorimetry Analysis Figure 1A shows the TG-DSC curves of PVP/Ce (NO 3 ) 3 composite membranes prepared by the electrospinning method. It can be seen from the figure that the sample has a large weight loss process from room temperature to 100°C, and this weight loss corresponds to the elimination of the moisture absorbed by the sample and of possible residual solvent (Wang et al., 2013). This also shows that the membranes have a certain water absorption capacity. After that, there are two large weight loss processes which correspond to two distinct exothermic peaks, especially the second exothermic peak. The first exothermic peak appears at 327°C, which indicates that from 100 to 400°C the sample first absorbs heat, the outer PVP layer of the composite fiber begins to decompose, and then the cerium nitrate in the fiber is decomposed into cerium dioxide; the weight loss of the system during this decomposition is 15% (Mohammadiyan et al., 2018). 
After 550°C, the reaction is basically completed, the sample weight and heat flow curve are stable, and the final quality is about 10% of the original quality. Thus, the calcination temperature of the PVP/Ce (NO 3 ) 3 composite film can be set at 550°C. Figure 1B shows the TG-DSC analysis of PVP/Ce(NO 3 ) 3 / AgNO 3 for composite membrane materials. It can be seen from the figure that the final mass of the sample is about 14% of the original mass throughout the reaction. The weight loss phase from room temperature to 100°C is the residual solvent and moisture contained in the sample. There are three exothermic peaks since then, with two adjacent exothermic peaks from 100 to 450°C, then the process undergoes a more complex reaction. As can be seen from Figure 1B, the exothermic peak at 368°C corresponds to the initial decomposition of the polymer template, then cerium nitrate decomposes and oxidizes to cerium oxide. and the weight loss is about 30%. The exothermic peak at 420°C is a sign that the silver nitrate is thermally decomposed into silver nanoparticles with a weight loss ratio of 15%. The complete decomposition of PVP occurs between 450 and 600°C. From the TG-DSC curve, it can be seen that the thermal decomposition has been basically completed at 600°C. After the sample stabilizes, the heat flow curve tends to be smooth. The calcination temperature of the PVP/Ce (NO 3 ) 3 / AgNO 3 composite film can be set at 600°C. When silver nitrate is introduced, a higher sintering temperature is needed to obtain the target product. card of cerium dioxide (JCPDS card no. 43 -1002). Because the fibers in membranes are covered and encapsulated by a large number of polymer PVPs, two amorphous bread peaks appear in the XRD pattern which are the diffraction peaks of the polymer. In addition, there were no other sharp diffraction peaks, and it was found that the composite fiber membrane synthesized by the electrospinning method was formed into a face-centered cubic cerium dioxide with standard card JCPDS card no. 21-1272 after calcination. Figure 2B shows the XRD pattern of PVP/Ce (NO 3 ) 3 /AgNO 3 composite fiber membranes with different silver content after calcination at 600°C for 1 h in muffle furnace, which contains the molar ratios of AgNO 3 /Ce (NO 3 ) 3 of 1:50, 3:50, and 5:50 respectively. From the XRD curve, it can be seen that at 2θ 28.4°, 33.0°, 47.4°, 56.1°, 59.0°, 69.2°, 76.4°, and 78.8°, the diffraction peaks correspond to (111) (311) planes of silver, which match the standard card of the nano-silver cubic crystal structure (JCPDS card no. 04-0783). All the diffraction peaks for the three samples are cerium dioxide and nano-silver. And with the increase of the relative content of silver, the peak of nano-silver shows more strongly, while the diffraction peak of cerium dioxide is weakened. All of these have proven that the prepared material is a thin cerium dioxide fiber film material loaded with nano-silver. Figure 3 shows the real photographs and SEM images of the PVP/ Ce(NO 3 ) 3 composite fiber membranes and the membranes after calcination at 550°C in air. Figure 3A shows the real photographs of cerium dioxide after calcination. Obviously, the membrane has a greater degree of contraction and is pale yellow, the weight is very light and fragile, but still retains the structure of lamellae. Figure 3B shows the SEM image at a magnification of 100 μm, which shows that the sample is relatively flat and compact and has a high porosity. 
Figure 3C shows the SEM image at a magnification of 20 μm. It can be seen that the sample consists of a large number of nanofibers, and the fibers are disordered, within layers, and with no accumulation of fibers. Figure 3D shows the high resolution FESEM picture, the scale is 1 μm, the fiber is straight, the thickness and the distribution are uniform, the fiber diameter is about 100 nm, and has a very high aspect ratio. Figure 4 shows the real photographs and SEM images of the Ag/CeO 2 fiber membrane prepared at different molar ratios of Ce(NO 3 ) 3 /AgNO 3 including 1:50, 3:50, and 5:50. Figure 4A shows the real photographs of the sample after calcination at 600°C. It can be seen that the sample still retains the morphology of the membranes, and the addition of silver ions significantly increases the color of the sample which presents as brownish yellow when compared to the CeO 2 fiber membrane. Figures 4B-D show the SEM images of the Ag/CeO 2 fiber membrane prepared at different molar ratios of Ce(NO 3 ) 3 /AgNO 3 including 1:50, 3:50, and 5:50. It can be seen that the sample still retains the fiber morphology. The fiber thickness of the Ag/CeO 2 membranes loaded with nano-silver is relatively uneven and the fiber distribution is more cluttered compared with the pure cerium oxide fiber. There is a phenomenon of heap and fracture in the microstructure of the membranes. Because of the introduction of metallic silver ions in the spinning solution, the introduction of this inorganic salt changes the electrostatic parameters of the spinning solution, making the electrospinning process of the spinning fluid become more complex and changeable and making fibers of uneven thickness. With the increase of silver ions, the overall morphology of the fiber becomes more uneven, and agglomeration is becoming more and more serious. Figure 5 shows the TEM and HRTEM images of the nano-cerium dioxide membrane material. From Figure 5A, it can be observed that the cerium dioxide is a fibrous structure and the diameter of the fiber is about 100 nm. The morphology is well preserved and has a large aspect ratio. Figure 5B shows the high-resolution transmission electron microscopy (HRTEM) image. It can be seen that the fiber is actually made of nano-sized cerium dioxide grains. The cerium dioxide grains forming the fibers have a grain size of 15 nm, and there is a large number of lattice lines of cerium dioxide crystals. By measuring the interplanar spacing, it can be seen that the exposed active surface is a (111) plane and d (111) 0.313 nm. Figure 6 shows the TEM and HRTEM images of the Ag/CeO 2 fiber membrane with an Ag/CeO 2 molar ratio of 3:50. Figure 6A shows the TEM image of the sample at a 200-nm scale. It can be seen from the figure that the fibrous Ag/CeO 2 has a diameter of about 100 nm. Figure 6B shows an enlarged TEM image at 20 nm, and it can be clearly seen that the fibers are deposited from a large number of crystal particles with a grain size of about 10 nm. Figure 6C shows the HRTEM image, at 10 nm, of the Ag/ CeO 2 fiber membrane with the Ag/CeO 2 molar ratio of 3:50. A large number of lattice lines are shown in which the spacing of most of the lattice lines is 0.31 nm corresponding to the (111) plane of the cerium dioxide cubic crystal structure, and there is also a lattice line with a crystal plane spacing of 0.22 nm corresponding to the (111) crystal face of Ag nanoparticles. 
All of the above analyses prove that the material is a cerium dioxide fiber material doped with nano-silver, which corresponds to the XRD result. Optical Properties Figure 7 shows the UV-Vis absorption spectra and the corresponding band gap calculations for the pure cerium oxide fiber membrane and the cerium oxide fiber membranes with different amounts of loaded silver. As can be seen from Figure 7A, the light absorption of the pure cerium dioxide fiber membrane in the wavelength range of 400-800 nm is weak, and it gradually increases below 400 nm. In comparison, the absorption of the nano-silver-loaded material in the visible light region is significantly enhanced. This is because the nano-silver loaded in the CeO 2 semiconductor introduces defects, resulting in changes in the band structure and a reduction of the band gap width; in addition, the nano-silver particles themselves have the capability to absorb light. The band gap energy (Eg) values of pure cerium dioxide nanofiber membranes and of the cerium dioxide nanofiber membranes doped with different amounts of nano-silver were obtained from the absorbance spectra using the Tauc relation (Tang et al., 2020; Wang and Tian, 2020; Wang et al., 2021e; Gao et al., 2021), where ν is the frequency, A is the absorption coefficient, and n is equal to 2. The band gaps of the samples were obtained by a simple intercept method. In Figure 7B, the band gap width of the pure cerium oxide fibers (3.15 eV) is consistent with the literature (Gao et al., 2018). When the molar ratios of Ag/CeO 2 are 1:50, 3:50, and 5:50, the corresponding band gap values are 3.08, 3.01, and 2.96 eV, respectively. It can be seen that the band gap of the Ag/CeO 2 fiber membrane decreases gradually with increasing relative content of metallic silver. After modification of the CeO 2 fiber membrane by Ag nanoparticles, the band gap of the CeO 2 fiber membrane is reduced. Photocatalytic Decomposition of Water to Produce Hydrogen Figure 8 shows the hydrogen production of bulk cerium dioxide, cerium dioxide nanofibers, and the three different silver-doped cerium dioxide nanofiber membranes. It can be seen from the figure that the hydrogen production of each sample increases as the reaction progresses, but the efficiency of hydrogen production decreases with the consumption of the sacrificial agent and the decrease of the catalyst activity. The hydrogen production efficiency of bulk cerium dioxide and of the cerium dioxide nanofibers is very low because cerium dioxide only absorbs ultraviolet light, which leads to low photocatalytic activity. The photocatalytic hydrogen production efficiency of the noble metal nano-silver-doped cerium dioxide fiber material is greatly improved because the nano-silver in contact with the cerium dioxide crystal forms a heterojunction, and the band of the semiconductor is bent at the interface. The electrons then shift from the high Fermi level to the low Fermi level, forming a Schottky barrier in the material. Comparing the three molar ratios of Ag/CeO 2 , the photocatalytic hydrogen production is highest when the molar ratio of Ag/CeO 2 is 3:50, and the hydrogen production is about 402 µmol/g after 6 h of illumination. The photocatalytic activity of the CeO 2 fiber increases with the addition of nano-silver, but the photocatalytic activity decreases when the doped noble metal nano-silver exceeds a certain limit. 
This is because the excess of nano-silver particles will become the recombination center of the carrier, which will capture the hole on the surface of the CeO 2 fiber, leading to a decrease of photogenerated carrier density and a reduction in photocatalytic performance. While too much precious metal particles will cover the active center of CeO 2 , which will also reduce the photocatalytic performance of the system. Moreover, it can be seen from the SEM image that CeO 2 has a tendency to accumulate into blocks as the amount of silver increases which can make the specific surface area reduce. Photocatalytic Mechanism Based on band theory and experimental results, Figure 9 shows the photocatalytic mechanism of CeO 2 and Ag/CeO 2 photocatalysts. When visible light irradiates on the surface of CeO 2 , because the band gap of CeO 2 is large, it can only respond to ultraviolet light, so the probability of valence band electron transition to the conduction band is small. At the same time, the photocatalytic hydrogen production efficiency is low because a small number of electrons in the conduction band are easily recombined with the holes in the valence band. When Ag is loaded on the surface of CeO 2 , the electron transition from the valence band of CeO 2 to its conduction band is accelerated, so that the Ag/CeO 2 photocatalyst can respond to visible light. The electrons that transition to the CeO 2 conduction band will accelerate the transfer to Ag particles, preventing the electron from recombination with the holes of the CeO 2 conduction band. The transfer and separation of electrons and holes play an important role in the whole process of photocatalytic hydrogen production. Ag particles act as the carrier of charge carrier transfer during the whole process. CONCLUSION CeO 2 nanofiber membranes and CeO 2 fiber membranes doped with different contents of nano-silver were prepared by the sol-gel method and electrospinning technique. The CeO 2 and Ag/CeO 2 nanofiber membranes were characterized by TG-DSC, XRD, SEM, TEM, and UV-Vis, and their photocatalytic activity was investigated by the photocatalytic decomposition of water. The electrospinning method successfully prepared CeO 2 with a diameter of about 100 nm and fiber membrane materials doped with nano-silver; the fiber is made of a large number of accumulated grains. The increase of silver concentration can inhibit formation of CeO 2 crystal. With the increase of the relative content of silver ions, the difficulty of spinning increases, the morphology of the fibers becomes more and more cluttered, and the accumulation phenomenon becomes more and more serious. The doped nano-silver CeO 2 fiber membrane enhances the absorption of visible light and reduces the band gap of the material, so its photocatalytic performance is significantly improved. In the photocatalytic hydrogen production, the cerium dioxide nanofibers doped with nano-silver can greatly improve the photocatalytic performance of materials. The Ag/CeO 2 fiber membrane with the Ag/CeO 2 molar ratio of 3:50 exhibits the highest photocatalytic hydrogen production efficiency, and the hydrogen production after irradiation for 6 h is about 402 µmol/g due to its high electron hole transfer and separation efficiency. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors. 
5,464.2
2021-11-02T00:00:00.000
[ "Materials Science", "Chemistry", "Environmental Science", "Engineering" ]
Functional Dissection of an Innate Immune Response by a Genome-Wide RNAi Screen The innate immune system is ancient and highly conserved. It is the first line of defense and the only recognizable immune system in the vast majority of metazoans. Signaling events that convert pathogen detection into a defense response are central to innate immunity. Drosophila has emerged as an invaluable model organism for studying this regulation. Activation of the NF-κB family member Relish by the caspase-8 homolog Dredd is a central, but still poorly understood, signaling module in the response to gram-negative bacteria. To identify the genes contributing to this regulation, we produced double-stranded RNAs corresponding to the conserved genes in the Drosophila genome and used this resource in genome-wide RNA interference screens. We identified numerous inhibitors and activators of immune reporters in a cell culture model. Epistatic interactions and phenotypes defined a hierarchy of gene action and demonstrated that the conserved gene sickie is required for activation of Relish. We also showed that a second gene, defense repressor 1, encodes a product with characteristics of an inhibitor of apoptosis protein that inhibits the Dredd caspase to maintain quiescence of the signaling pathway. Molecular analysis revealed that Defense repressor 1 is upregulated by Dredd in a feedback loop. We propose that interruption of this feedback loop contributes to signal transduction. Introduction As a typical metazoan suffers numerous microbial assaults during its lifespan, survival depends on robust defense strategies. Metazoan defenses are classified as either innate or adaptive. Adaptive immunity is characterized by elaborate genetic rearrangements and clonal selection events that produce an extraordinary diversity of antibodies and T-cell receptors that recognize invaders as nonself. While of profound importance, the adaptive responses are slow and limited to higher vertebrates. In contrast, the machinery of innate immunity is germ-line encoded and includes phylogenetically conserved signaling modules that rapidly detect and destroy invading pathogens (Medzhitov and Janeway 2000;Janeway and Medzhitov 2002). Model organisms, particularly insects, have played an important role in uncovering the wiring of innate immune pathways (Hoffmann 2003). Importantly, these organisms have provided powerful genetic approaches for identifying molecules that sense pathogens, elucidating steps that trigger innate defenses, and uncovering the weaponry used to kill or divert potential pathogens . We have further refined the experimental approaches for rapid functional dissection of immune responses and describe new steps in an important pathway of the innate immune response. Signaling in innate immunity consists of three steps: detection of pathogens, activation of signal transduction pathways, and mounting of appropriate defenses. The first step is triggered by the detection of pathogen-associated molecular patterns by host pattern recognition receptors (Akira et al. 2001). Typical pathogen-associated molecular patterns are b-1,3-glucan of fungi, peptidoglycan and lipopolysaccharides (LPS) of bacteria, and phosphoglycan of parasites. Signaling engages several pathways, including Toll, tumor necrosis factor, mitogen-activated protein kinase (MAPK), and Jun kinase pathways. NF-jB-type transcription factors form an important downstream nexus of the signaling pathways, and their activation promotes important defense responses. 
Although the defense responses are diverse and often tailored to the type of pathogen, some of the defense strategies, such as production of a panel of antimicrobial peptides, activation of phagocytic cells, and production of toxic metabolites, are evolutionarily conserved. Interest in Drosophila as a model for analyzing innate immune signal transduction had a serendipitous origin. The Toll signaling pathway was discovered and characterized in Drosophila because of its role in specification of the embryonic dorsal ventral axis (Anderson et al. 1985). Similarities of pathway components to genes involved in mammalian immunity stimulated a hallmark study showing that the Toll pathway is a central mediator of antifungal and gram-positive bacterial defenses in Drosophila (Ip et al. 1993;Lemaitre et al. 1996). It is now recognized that Toll signaling is a conserved mediator of innate immune responses. A combination of classical genetics and molecular approaches has since identified numerous components of Toll signaling in Drosophila immunity, and it has highlighted similarities to mammals at the level of signal transduction and differences at the stage of pathogen detection (Ip et al. 1993;Rosetto et al. 1995;Nicolas et al. 1998;Drier et al. 1999;Manfruelli et al. 1999;Meng et al. 1999;Rutschmann et al. 2000aRutschmann et al. , 2002Tauszig et al. 2000;Horng and Medzhitov 2001;Michel et al. 2001;De Gregorio et al. 2002;Ligoxygakis et al. 2002;Tauszig-Delamasure et al. 2002;Gobert et al. 2003;Weber et al. 2003). A second pathway, the Immune deficiency (Imd) pathway, mediates responses to gram-negative bacterial infection in Drosophila (Lemaitre et al. 1995). Although similar to the mammalian tumor necrosis factor pathway, there are several differences between the two signaling cassettes, particularly at the level of activation. As it is presently understood, the Imd pathway is headed by an apparent pattern recognition receptor, the transmembrane peptidoglycan recognition protein LC (PGRP-LC; Choe et al. 2002;Gottar et al. 2002;Ramet et al. 2002). Although the mechanisms are largely unknown, signaling proceeds through Imd (homolog of mammalian receptor interacting protein), dTAK1 (MAP3K homolog), and a complex of Ird5/Kenny (homologous to the IKKb/IKKc kinase). The active IKK complex phosphorylates the p105 homolog Relish, and Dredd (caspase-8 homolog) cleaves Relish, separating an N-terminal NF-jB domain of Relish from a C-terminal ankyrin domain (Lemaitre et al. 1995;Dushay et al. 1996;Wu and Anderson 1998;Hedengren et al. 1999;Hu and Yang 2000;Leulier et al. 2000Leulier et al. , 2002Rutschmann et al. 2000b;Silverman et al. 2000;Stoven et al. 2000;Georgel et al. 2001;Lu et al. 2001;Vidal et al. 2001;De Gregorio et al. 2002;Gottar et al. 2002;Khush et al. 2002;Naitza et al. 2002;Silverman et al. 2003;Stoven et al. 2003;Ryu et al. 2004). The N-terminal domain enters the nucleus and promotes transcription of genes encoding proteins with defense functions such as the antimicrobial peptide Diptericin (Dipt), whose expression provides a signature for activation of the pathway. Unlike the Toll pathway, which was thoroughly studied in its developmental capacities, analysis of the Imd pathway is relatively recent. Its more complete genetic dissection may well define another conserved and fundamental pathway of immune signaling. Of particular interest, a pivotal step in the Imd pathway-the regulation of Dredd-mediated cleavage of Relish-is not understood. 
To begin to address this, we developed a powerful RNA interference (RNAi)-based approach to functionally dissect the Imd pathway. In collaboration with others at the University of California, San Francisco, we produced a library of 7,216 doublestranded RNAs (dsRNAs) representing most of the phylogenetically conserved genes of Drosophila. We developed a cell culture assay that allowed application of this library to a highthroughput RNAi evaluation of Imd pathway activity. This screen identified numerous components of signal transduction (including negative and positive regulators of innate immune signaling), defined a hierarchy of gene action, and identified a novel gene, sickie (sick), required for activation of Relish. Focusing on regulation of the Dredd caspase, we identified a novel inhibitor of Dredd, Defense repressor 1 (Dnr1), which is upregulated by Dredd in a feedback loop that maintains quiescence. We propose that interruption of this feedback loop contributes to signal transduction. A Drosophila Reporter Cell Line of Imd Pathway Activity To facilitate rapid dissection of Imd pathway signaling, we established an S2 reporter cell line that expresses bgalactosidase under control of the promoter from a gene, Dipt, that encodes an antimicrobial peptide, Dipt-lacZ. Commercial preparations of LPS contain bacterial cell wall material capable of activating the receptor PGRP-LC and act as gratuitous inducers of antimicrobial peptide genes in Drosophila tissue culture cells (Samakovlis et al. 1992;Engstrom et al. 1993;Dimarcq et al. 1997). Consistent with previous studies, 20-hydroxyecdysone enhanced Dipt-lacZ induction by LPS ( Figure 1A; Silverman et al. 2000Silverman et al. , 2003. Inactivation of critical Imd pathway members (PGRP-LC, Imd, Ird5, and Dredd) by RNAi virtually eliminated Dipt-lacZ induction by LPS ( Figure 1B). In contrast, inactivation of the Toll pathway members Spaetzle, Tube, or Dif by RNAi had no effect on LPS-dependent induction of Dipt-lacZ. We conclude that LPS-dependent induction of Dipt-lacZ requires an intact Imd signaling pathway. To identify additional modulators of Dipt-lacZ expression, we prepared a library of 7,216 dsRNAs representing most of the phylogenetically conserved genes of Drosophila. Using the Dipt-lacZ cell line, we performed a high-throughput RNAi screen for genes whose inactivation impinges on Dipt-lacZ induction. In one screen, we identified dsRNAs that altered Dipt-lacZ induction by LPS, either enhancing or suppressing activation. In a second screen performed without addition of LPS, we identified genes whose inactivation spontaneously activated the reporter. The phenotypes defined three categories of genes, which we named-based on the phenotype of their inactivation-decreased defense by RNAi (DDRi) genes, enhanced defense by RNAi (EDRi) genes, and constitutive defense by RNAi (CDRi) genes ( Figure 1). Identification of DDRi, EDRi, and CDRi Genes In an initial visual screen, dsRNAs that altered the induced or constitutive expression of b-galactosidase were selected as candidate innate immunity genes. We subjected all the initial positives to a more stringent retest where we resynthesized the candidate dsRNAs, retested these under identical conditions, and counted the number of b-galactosidasepositive cells. 
We defined DDRi dsRNAs as reducing the frequency of Dipt-lacZ-expressing cells to below 40% of LPStreated controls, EDRi dsRNAs as increasing the frequency of Dipt-lacZ-expressing cells more than 2-fold, and CDRi dsRNAs as inducing Dipt-lacZ-expressing cells to a level equal to or higher than that induced by LPS. About 50% of the initial positives met these criteria, yielding 49 DDRi dsRNAs, 46 EDRi dsRNAs, and 26 CDRi dsRNAs ( Table 1). The entire process of screening and retesting was performed without knowing the identity of the dsRNAs. Nonetheless, we successfully identified all of the known Imd pathway components in the library (PGRP-LC, Dredd, and Relish) as DDRi genes, supporting the validity of this approach for identifying genes that affect Imd pathway signaling. The dsRNAs that enhance, and those that constitutively activate, the immune reporter are both expected to target inhibitors of the immune response. Nonetheless, there was only a small overlap between the EDRi genes and CDRi genes. Of the 46 confirmed EDRi dsRNAs, only five caused a CDRi phenotype, suggesting that the mechanisms that silence Imd pathway activity in the absence of infection are largely distinct from those moderating or downregulating the response to infection. We distinguish the five EDRi genes capable of constitutive activation and designate them EDRi C . EDRi C genes are listed as both EDRi and CDRi ( Figure 2B and 2C, indicated with an asterisk). Approximately half of the CDRi dsRNAs also caused morphological defects ( Figure 2C, indicated with a pound sign), i.e., enlarged cells with irregular cytoskeletal structures (see Figure 1J). While we do not know the basis for the altered morphology, gene expression profiling showed that LPS induces numerous cytoskeletal regulators, suggesting that cytoskeletal rearrangement is a component of the innate immune response (Boutros et al. 2002). We also observed EDRi and CDRi phenotypes upon inactivation of Act5C and Act42A. Due to extensive sequence homology, RNAi against either actin triggers destruction of both transcripts (A. Echard, G. R. X. Hickson, E. Foley, and P. H. O'Farrell, unpublished data). Inactivation of either actin with dsRNA directed to the actin UTRs demonstrated that both actin transcripts must be inactivated for an observable EDRi or CDRi phenotype ( Figure 2B and 2C). Epistatic Evaluation of CDRi Genes As RNAi of CDRi genes leads to ectopic Dipt-lacZ induction, we reasoned that CDRi genes are required to maintain quiescence in the absence of LPS and that induction by a CDRi dsRNA corresponds to release of inhibition of the Imd pathway. The large number of CDRi genes makes it likely that individual CDRi genes inhibit distinct steps in the Imd pathway. We sought to determine the position at which the individual CDRi genes impact the Imd pathway. In contrast to Caenorhabditis elegans, several genes can be inactivated by RNAi in Drosophila without an obvious drop in the efficiency of gene inactivation (Li et al. 2002;Schmid et al. 2002). The ability to inactivate two different gene products in sequence by RNAi provides a powerful tool to position CDRi genes relative to known Imd pathway components. In a first step, we inactivated one of three known Imd signaling components-either Imd, Dredd, or Relish. In a second step, we inactivated individual CDRi genes and monitored Dipt-lacZ induction. 
We reasoned that inactivation of Imd, Dredd, or Relish would not block pathway derepression by a CDRi dsRNA if the cognate CDRi impinged on the pathway at a step beyond the actions of Imd, Dredd, or Relish. Using this approach, we subdivided 20 CDRi genes into five epistatic groups (Figure 2C; Table 2). Group I contained four CDRi dsRNAs whose action was independent of Imd, Dredd, and Relish. Group II contained 12 dsRNAs whose CDRi phenotype was independent of Imd and Dredd, but depended on Relish. Group III contained two dsRNAs whose CDRi phenotype was Dredd-independent, but was reduced in the absence of Imd and Relish. Group IV contained a single dsRNA whose phenotype was independent of Imd, but dependent on Dredd and Relish. Finally, Group V contained three dsRNAs whose ability to activate the immune reporter depended on Imd, Dredd, and Relish. The epistatic relationships demonstrate that genes in Groups II-V have inputs into the known Imd pathway, while Group I might have inputs in independent pathways required for effective Dipt-lacZ expression. Sick Is a Conserved Gene Required for Relish Activation We are particularly interested in regulators contributing to activation of the Relish transcription factor by the caspase Dredd, because this is such a pivotal step in the Imd pathway and its regulation is not understood. To identify regulators that affect Relish processing, we developed an assay that more directly monitored Relish activation. We produced an S2 cell line that expresses a copper-inducible N-terminal green fluorescent protein (GFP)-tagged Relish (GFP-Relish; Figure 3). GFP-Relish is predominantly cytoplasmic in untreated cells (Figure 3A) and rapidly translocates to the nucleus upon treatment of cells with LPS or exposure to Escherichia coli (Figure 3B and 3D). Western blot analysis with a monoclonal anti-GFP antibody showed that GFP-Relish is rapidly processed from a full-length form to a shorter form after exposure to LPS (Figure 3C). These findings indicate that the GFP-Relish cell line is a reliable reporter for Relish activation. (Figure 2 legend: For exact Dipt-lacZ expression values for each dsRNA refer to the accompanying supplemental tables. In (B), the color scale (right) is compressed and extended compared to (A), and an asterisk indicates genes that also caused a CDRi phenotype. In (C), the pound sign indicates morphological defects and an asterisk indicates genes that also caused an EDRi phenotype, and the division of the genes into epistatic groups is shown. To the immediate left a false-color bar (coded as in [B]) indicates the effect of the dsRNAs on Dipt-lacZ expression without LPS addition. The block of colored columns shows the results of epistasis tests. Here, we set the undisturbed level of CDRi activation to 100% (as indicated in the left column in this group and the color code below), and to the right we represent reduction of this activation by prior RNAi of different Imd pathway genes. Five epistatic clusters (I-V) were identified (indicated by the lines to the left). DOI: 10.1371/journal.pbio.0020203.g002) Additionally, inactivation of PGRP-LC by RNAi prevented nuclear translocation of GFP-Relish in response to bacterial exposure (Figure 3E), indicating that the reporter can be used to assay function of Imd pathway genes. We tested all DDRi dsRNAs for their effects on the response of GFP-Relish to LPS (Figure 3F).
Most DDRi dsRNAs did not affect GFP-Relish levels or its LPS-stimulated nuclear concentration, suggesting that their effects on Dipt-lacZ are independent of this step of Relish activation. Four DDRi dsRNAs (Relish, ubiquitin, CG8129, and Asph) severely reduced GFP-Relish levels, indicating that these dsRNAs directly or indirectly interfered with Relish expression or stability. The ability of these dsRNAs to block Dipt-lacZ induction suggests that Dipt-lacZ induction requires substantial levels of Relish. We identified four DDRi dsRNAs that prevented LPS-stimulated nuclear translocation of GFP-Relish: PGRP-LC, Dredd, Dox-A2, and CG10662. We named CG10662 sick. While prolonged Dox-A2 RNAi caused cell lethality, cell viability appeared unaffected by sick RNAi for up to 8 d. As sick RNAi prevents nuclear translocation of GFP-Relish and decreases Dipt-lacZ induction after LPS treatment, we propose that the Imd pathway requires sick activity for Relish-dependent Dipt-lacZ induction. Epistasis provides a second approach for positioning a DDRi gene in the hierarchy of gene action. To this end we assessed the relationship of sick to the five CDRi epistatic groups that we defined (above). We inactivated sick by RNAi and subsequently tested dsRNAs representing the five CDRi epistatic subgroupings for their ability to activate Dipt-lacZ expression in the absence of Sick ( Figure 3G; Table 3). Group I and II CDRi do not require Sick, indicating that Sick acts upstream of, or in parallel to, their action, which is at the level of Relish or downstream of Relish. Induction of Dipt-lacZ by Group III and IV CDRi dsRNAs requires Sick, suggesting that Sick is required for the effective induction of Dipt-lacZ by Dredd and Imd. Combined with the observed Sick requirement for Dipt-lacZ induction and the nuclear translocation of Relish by LPS, these data imply that Sick either mediates or supports Relish activation by Dredd and Imd. Dnr1 Is a Novel Inhibitor of Dredd Negative regulators are likely to participate in the circuitry that controls Dredd activation of Relish. The key candidate for action at this level was the single Group IV CDRi gene, CG12489, which showed epistatic relationships consistent with a role in inhibiting Dredd. RNAi of CG12489 induced Dipt-lacZ expression without immune stimulus, indicating that CG12489 normally prevents Dipt expression. As CG12489 inactivation fails to induce Dipt-lacZ in the absence of Dredd or Relish (see Figure 2C), we reasoned that CG12489 normally suppresses Dredd-dependent induction of Dipt-lacZ. We named this gene dnr1 and discuss its actions more fully below. Dnr1 is a conserved protein with an N-terminal ezrin/ radixin/moesin domain and a C-terminal RING finger ( Figure 4A). To confirm that Dnr1 inactivation stimulated Dipt-lacZ production, we measured the b-galactosidase activity of lysates from Dipt-lacZ cells treated with dnr1 dsRNA. Exposure of Dipt-lacZ cells to LPS reproducibly increased Dipt-lacZ production 4-to 5-fold ( Figure 4B). Importantly, in three independent experiments, Dnr1 RNAi stimulated Dipt-lacZ production to a similar degree in the absence of LPS. Furthermore, Dipt-lacZ activation in response to LPS was essentially reduced to background levels upon inactivation of sick. These findings provide additional support for negative and positive regulation of Relish by Dnr1 and Sick, respectively. While we detected genes with similarity to dnr1 in many higher eukaryotes, we failed to find a homolog of Dnr1 in C. elegans. Interestingly, C. 
elegans does not rely on an Imd pathway for innate defenses (Kurz and Ewbank 2003). Other RING finger proteins are E3 ubiquitin ligases that target a variety of substrates for proteolytic destruction. The RING finger motif in Dnr1 has greatest sequence homology to the RING fingers found in inhibitor of apoptosis proteins (IAPs; Figure 4C). IAPs are critical inhibitors of caspase activity that ubiquitinate their targets and promote autoubiquitination (Bergmann et al. 2003). Previous reports demonstrated that caspase inhibitors activate their own destruction and that this activity is RING finger mediated. Consistent with these reports, we observed surprisingly low levels of accumulation of a hemagglutinin (HA)-tagged Dnr1 in transfected cells. A point mutation in a residue critical for RING finger function resulted in increased accumulation of transfected HA-Dnr1 (Figure 4D). We also detected a protein processing event that appears to depend on the RING finger. Upon expression of C-terminally HA-tagged Dnr1, we observed a slightly lower molecular weight isoform, suggesting N-terminal processing of Dnr1 (Figure 4E). The absence of this lower molecular weight isoform in cells transfected with the N-terminally HA-tagged Dnr1 (Figure 4D) is consistent with processing near the N-terminus. This processed isoform was absent in cells transfected with constructs containing the RING finger mutation (Figure 4E). The presence of the RING finger motif, and its apparent role in destabilizing Dnr1, argues that Dnr1 is a caspase inhibitor and that, given its functional role and epistatic position as an inhibitor of Dredd, it is likely to act directly to inhibit this caspase. Dnr1 Protein Levels Are Regulated by Dredd Activity While LPS had no dramatic effect on the subcellular localization of HA-Dnr1 (Figure 4F and 4G), exposure to LPS had a transient effect on the levels of Dnr1 protein. Addition of LPS caused an increase in HA-Dnr1 levels (Figure 5A), which rose 4- to 5-fold 2 h after treatment with LPS and then gradually declined. Since LPS-dependent processing of Relish by Dredd proceeded in a similar manner (see Figure 3C), we tested whether Dredd inactivation affected Dnr1 protein levels. Cotransfection of the caspase inhibitor p35 along with HA-Dnr1 blocked HA-Dnr1 accumulation (Figure 5B). Similarly, even transient treatment with the caspase inhibitor z-VAD-FMK at concentrations sufficient to prevent Relish processing (Figure 5C) reduced LPS-dependent HA-Dnr1 accumulation (Figure 5D). As these data implicated caspase function in Dnr1 accumulation, we tested the five Drosophila caspases represented in our library for their influence on Dnr1 stability. Only Dredd RNAi reproducibly reduced HA-Dnr1 levels (Figure 5E and 5F). Consistent with a role for Dredd as the critical caspase in LPS-dependent Relish activation, of all caspases tested only Dredd inactivation blocked LPS-dependent Dipt-lacZ induction (Figure 5G). In summary, addition of LPS to S2 cells activates Dredd and stabilizes Dnr1, while inactivation of Dredd by RNAi or caspase inhibitors reduces Dnr1 protein levels. We conclude that Dnr1 protein levels are regulated by Dredd activity. While it is not presently known how Dredd caspase function might influence Dnr1 accumulation, we note that the data are consistent with a negative feedback loop in which Dredd activity promotes accumulation of its own inhibitor, Dnr1.
Discussion It was previously recognized that the Drosophila macrophage-like S2 cell line responds to bacterial cell wall components with the induction of antimicrobial peptide expression. This model lacks the complexities of communication between tissues that drive the spread of the immune response in larvae (Foley and O'Farrell 2003), but it offers an exceedingly powerful system for identification of mediators of antimicrobial peptide induction. To develop a genetic approach to identify novel signal transduction components, we produced reporter cell lines to follow innate immune signaling and a library of 7,216 dsRNAs representing the conserved genes of Drosophila to inactivate genes by RNAi. We focused on a screen for immune response genes in the Imd pathway because it is the less thoroughly understood of the two immune response pathways in Drosophila. A central aspect of our strategy for dissection of the pathway was to identify negative regulators as well as positively acting genes. In addition to modulating signal transduction pathways, negative regulators participate directly in signaling when downregulated by the inducing signal. Beyond the inherent importance of this relatively unexplored group of regulators, we were interested in their potential utility as an experimental lever: Identification of inhibitors acting at numerous levels of the pathway provides tools for ordering the action of the positively acting genes in the pathway and vice versa. The experimental approach and strategy proved highly efficient, yielding numerous regulators and defining a cascade of gene action by epistasis. A secondary test for the influence of positively acting genes on the nuclear translocation of Relish and the epistasis order allowed us to narrow our focus to genes that are centrally involved in the immune response. Focusing on the unresolved issue of Dredd regulation, we characterized a negative regulator, Dnr1, that provides a critical check on unwarranted Dredd activity. Our results suggest that Dredd controls Dnr1 stability in a negative feedback loop that restricts Dredd function. Normal activation of the Imd pathway may include release or bypass of this negative feedback loop. Categories of Innate Immune Inhibitors A priori, we considered two roles for inhibitors of the Imd pathway: either suppression of spontaneous activation of immune responses in the absence of infection, or downmodulation of a response to limit or terminate it. We designed screens for both these types of activities. In conjunction with the RNAi screen for dsRNAs that blocked response to LPS, we identified dsRNAs that enhanced the response-EDRis. This phenotype represents a failure to downmodulate the response. In an independent screen without LPS, we identified dsRNAs that resulted in constitutive activation of the pathway-CDRis. Surprisingly, there was remarkably little overlap in the genes identified in these two screens: Of 26 CDRis and 46 EDRis only five were in common. At present, we do not understand the functional underpinnings of the distinctions between inhibitors that sustain quiescence (CDRis) and those that downregulate an ongoing response (EDRis). Interestingly, groups of inhibitors implicate distinct pathways in immune regulation. For example, of the 17 genes that had the strongest EDRi phenotype, four encode splicing factors and four encode products that appear to interact with RNA. 
This functional cluster suggests that disruptions to some aspect of RNA processing/metabolism can substantially increase the number of S2 cells that activate expression of the Dipt-lacZ reporter in response to LPS exposure. While we do not know how RNA metabolism contributes to this phenotype, the repeated independent isolation of genes lying in a functional cluster reinforces a conclusion that the process is involved. Several other functional clusters were picked up in our screens. Three genes involved in Ras signaling (MESR4, Ras, and Cnk) were identified as EDRi genes. In addition, we noted weak EDRi phenotypes with three additional Ras signaling components (rolled/MAPK, Dsor1, and Pointed). These findings argue that Ras signaling downregulates responses to LPS. This might represent a negative feedback circuit. However, the finding that MESR4 also has a CDRi phenotype suggests that the Ras/MAPK pathway may also impinge on the maintenance of quiescence. Several genes involved in cytoskeletal structure or regulation were identified among the inhibitors. Genes encoding tubulin (α-Tub84D), a kinesin motor (Klp10A), and microtubule-severing function (CG4448/katanin) were isolated as EDRi genes. Perhaps an event involving microtubule structures helps limit immune responses. The two cellular actin genes (Act5C and Act42A) were individually dispensable, but their joint inactivation produced both EDRi and CDRi phenotypes. A regulator of actin function, SCAR, was also identified as a CDRi, and both actin and SCAR CDRi phenotypes fell into epistasis Group II. This suggests that disruption of the actin cytoskeleton in quiescent cells can activate the immune response in a Relish-dependent fashion. Since S2 cells are induced to phagocytose bacteria, and changes in cell shape are induced in response to LPS, it would not be surprising if cytoskeletal functions contribute to immune responses. Indeed, microarray studies showed induction of numerous cytoskeletal components in S2 cells upon incubation with LPS (Boutros et al. 2002). Our findings, however, suggest a different involvement of the cytoskeleton in which it functions to constrain S2 cells, preventing or limiting their innate immune responses. A previous conventional genetic screen for mutations leading to constitutive action of the Imd pathway in Drosophila larvae demonstrated that Relish basal signaling is maintained at a low level by proteasomal destruction of processed Relish (Khush et al. 2002). A Skp1/Cullin/F-box (SCF) component was identified as involved in ubiquitination of the N-terminal Relish domain. We did not include any genes in the category of ubiquitination and proteasome function in our CDRi group. This might mean that this pathway does not influence the cellular responses in the S2 tissue culture system. However, our first round of screening suggested that RNAi to a Drosophila F-box resulted in increased basal signaling (unpublished data). This and other tentative indications of involvement of this pathway were either not reproduced or fell below the threshold in retesting. We are left uncertain about SCF contributions to immune induction in our system. (Figure 5 legend, panel G: The number of Dipt-lacZ-expressing cells after LPS treatment is greatly reduced after Dredd RNAi, while RNAi against Dcp-1, Ice, Nc, or Decay has no effect. DOI: 10.1371/journal.pbio.0020203.g005) (Figure 6 legend: A Schematic of the Proposed Relationships of the Novel Immune Regulators, Sick and Dnr1, to Dredd and Rel. Pointed and blunt arrows indicate activation and inhibition, respectively. Both Sick and Dredd are required for translocation of Rel to the nucleus and for activation of Dipt expression and they are consequently positioned upstream of Rel as activators. In the absence of Sick or Dredd, Dnr1 function is not needed to maintain pathway quiescence. Thus, Dnr1 is ordinarily required to either inhibit Sick and Dredd functions or to negate their actions, and we have indicated these regulators as being downstream of Dnr1 (A). Although we have no epistatic data that separates the action of Sick and Dredd, Dredd appears to directly cleave Rel and is hence likely to be immediately upstream of Rel. Sick might function in conjunction with Dredd or as an activator of Dredd. Since dnr1 RNAi does not enhance the response to LPS, we suggest that its inhibitor activity is either repressed or bypassed upon exposure to LPS. Consequently, we have shown that treatment with LPS counteracts Dnr1-dependent Dredd inhibition (B). We do not mean to preclude other actions of LPS that might contribute to induction, but it is notable that inactivation of Dnr1 is sufficient to activate signaling. Finally, we have shown that Dnr1 levels are affected by Dredd, and we have indicated this with a positive feedback arrow. DOI: 10.1371/journal.pbio.0020203.g006) A GFP-Relish Reporter Line Subdivides DDRi Genes As in the case of the CDRi and EDRi phenotypes, our screen for DDRi phenotypes identified numerous genes falling into functional categories. One potential limitation of our approach for identification of DDRi is that some genes required for ecdysone maturation may be selected as immune deficient. Additionally, one of the largest functional categories was genes involved in translation and included four ribosomal proteins, three initiation factors, two aminoacyl-tRNA synthetases, and an elongation factor. It seems likely that RNAi of genes in this category affects translation of the Dipt-lacZ reporter, as opposed to affecting modulation of signaling events. To cull our collection of DDRis of such indirect modulators of the response, we developed a secondary screen that does not rely on de novo gene expression. Based on the previously described phenotypes of Imd pathway members, we reasoned that inactivation of the core components transducing the signal would compromise activation of the Relish transcription factor. To identify DDRi dsRNAs that prevented Relish activation, we prepared a GFP-Relish reporter cell line and rescreened DDRi dsRNAs for loss of GFP nuclear translocation in response to LPS. In addition to confirming a requirement for Dredd and PGRP-LC in Relish activation, we implicated a proteasomal regulatory subunit Dox-A2 and identified a novel gene sick as involved in Relish nuclear translocation in response to LPS. Although cells treated with sick dsRNA failed to mount an immune response, the cells were otherwise healthy through the course of the experiment. Dox-A2 RNAi reduced the survival of cells and was effectively lethal within a few days of the scoring of the immune response. We conclude from this that Sick and Dox-A2 contribute to the central signal transduction process, but it is presently unclear whether Dox-A2 has a significant specific input or if its effects are secondary to a global effect on cell viability. It is notable that only two DDRi genes passed our secondary screen based on GFP-Relish localization. Does this mean that all the other DDRis are not really involved?
While we have not yet analyzed all these genes, we suspect that many of them will modify the Imd pathway, either impinging on the pathway at a point beyond Relish translocation, or quantitatively or kinetically modifying Relish translocation in a manner that we did not detect in our screens. Insight into this issue is likely to be derived from further epistasis tests that might place some of these DDRis in the signaling pathway. An Epistatic Network to Position CDRi and DDRi Genes We identified an unprecedented large number of immune response inhibitors (CDRi genes) in our screens. As there are diverse steps within and potentially outside the Imd signaling pathway at which the CDRi inhibitors might act, we sought to position their actions with respect to known Imd pathway functions by RNAi epistasis tests. By sequential inactivation of known Imd pathway components and CDRi gene products, we tested whether constitutive activation of immune reporters by CDRi dsRNAs depends on steps in the signal transduction pathway. In this way, we defined five distinct epistatic categories of CDRi gene products. The four CDRi genes that continue to activate immune responses despite inactivation of Imd, Dredd, or Relish are likely to act on signal-transduction-independent factors that maintain transcriptional quiescence of Dipt. The largest group of CDRis (12) depends on Relish function but not on upstream activators of Relish. These are likely to include two types of regulators: one type that sets the threshold of response so that basal activity of Relish does not trigger pathway activity, and a second type that contributes to suppression of Relish activity. The latter type of regulator might include inhibitors that impinge on the late steps in the signal transduction cascade. For example, genes whose normal function inhibits the activity of the full-length Relish transcription factor might be required to make the pathway activator dependent, and these would be found in this category. The remaining upstream epistasis groups that rely on additional signal transduction components are strongly implicated as significant contributors to the immune induction pathway. As all of the CDRis induced robust immune responses in the absence of ecdysone (unpublished data), we propose that the CDRis have their input into the Imd pathway at a level that is the same or lower than the level of the input from ecdysone. Given that this is true for all five epistatic groups of CDRis, the result suggests that ecdysone has its input at an early level of the Imd pathway. The identification of five epistasis groups of inhibitors also provides reference points for a second round of epistasis tests that position novel DDRi genes within the Imd pathway. We used this approach to show that the novel DDRi sick is required for constitutive activation of the responses by inactivation of CDRi genes in Groups III, IV, and V genes but not for the action of CDRi Group II or Group I genes. If we assume a simple linear pathway, this would indicate that Sick functions upstream of Relish and downstream of Imd and Dredd. It is noteworthy that the epistatic data are consistent with molecular data indicating that Sick is required for Dipt-lacZ induction and the nuclear translocation of Relish in response to LPS. This combination of phenotypic, epistatic, and molecular data argues for participation of Sick in the regulated activation of the Relish transcription factor. 
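The sequential-RNAi logic that underlies these epistasis assignments can be summarized schematically. The sketch below (Python) is an illustrative reading of the group definitions given earlier, not an analysis tool used in the study; "depends" simply records whether prior inactivation of a core component blocks derepression of Dipt-lacZ by a given CDRi dsRNA.

```python
# Illustrative sketch of the sequential-RNAi epistasis logic described in the text.
# A dependency of True means prior RNAi of that core gene blocks the CDRi dsRNA's
# ability to derepress Dipt-lacZ; the group definitions follow the text.

GROUPS = {
    # (depends_on_Imd, depends_on_Dredd, depends_on_Relish) -> epistasis group
    (False, False, False): "Group I",
    (False, False, True):  "Group II",
    (True,  False, True):  "Group III",
    (False, True,  True):  "Group IV",
    (True,  True,  True):  "Group V",
}

def assign_group(depends_on_imd, depends_on_dredd, depends_on_relish):
    """Map a CDRi dsRNA's dependency pattern onto the five epistasis groups."""
    return GROUPS.get((depends_on_imd, depends_on_dredd, depends_on_relish),
                      "unassigned pattern")

# dnr1 behaves as the single Group IV CDRi: Imd-independent, Dredd- and Relish-dependent.
print(assign_group(depends_on_imd=False, depends_on_dredd=True, depends_on_relish=True))
```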
Dnr1 Prevents Ectopic Dredd-Dependent Relish Activation One epistatic group struck us as particularly interesting. While Dnr1 inactivation caused ectopic Dipt-lacZ expression, simultaneous loss of Dredd or Relish restored cells to their resting state. These data indicate that the wild-type function of Dnr1 is to prevent Dredd-dependent activation of Relish. Consistent with this hypothesis, we identified a C-terminal RING finger in Dnr1 with greatest similarity to the RING finger motifs observed in the C-terminus of IAP proteins. In addition to regulating caspase activity, IAPs also regulate their own stability through ubiquitin-mediated proteolysis. Similarly, we observed that mutation of a critical RING finger residue greatly stabilized Dnr1. These features suggest that Dnr1 is a caspase inhibitor, suggesting that it might act directly to inhibit Dredd activity. We observed that exposure of cells to LPS transiently stabilized Dnr1 and that this stabilization directly paralleled the period of Dredd-dependent Relish processing. This suggested to us that Dnr1 stability and accumulation might be regulated by its target, Dredd, a regulatory connection that could establish a negative feedback loop. We confirmed that Dredd activity is required for accumulation of Dnr1. These results suggest that Dredd modulates a RING-finger-dependent Dnr1 destruction pathway ( Figure 6). Our results are consistent with a feedback inhibitory loop where Dredd activity promotes accumulation of its own inhibitor ( Figure 6); however, it is not clear under what circumstance this loop functions. Since Dnr1 inactivation did not enhance Dipt-lacZ production by LPS, we propose that Dnr1 inhibition of Dredd is suppressed or bypassed by LPS treatment and that Dnr1 is not essential for downregulation of an ongoing response. Further, as suppression of Dnr1 by RNAi is sufficient to activate immune responses, Dnr1 functions in the absence of induction and this function is required for quiescence. Thus, LPS inactivation of Dnr1 function ought to be sufficient to trigger Dredd-dependent cleavage of Relish in the Imd pathway, and it could make a significant contribution to pathway activation. In summary, a new and powerful screening approach has provided many candidate regulators of the Imd pathway of the innate immune response, and we suggest that the newly identified contributors Dnr1 and Sick will govern central steps in the regulatory cascade that activates the Relish transcription factor. While our analysis has led to a focus on these two regulators, we suspect that other genes among those isolated will also make important direct contributions to the Imd pathway. Furthermore, some of the groups of genes falling into functional clusters are likely to define physiologically relevant inputs into the induction pathway.
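The regulatory circuit proposed here (see the Figure 6 legend above) can also be caricatured as a toy feedback model. The sketch below (Python) is purely illustrative: the rate constants, functional forms, and the simulate helper are invented to show how Dredd-dependent stabilization of its own inhibitor could hold an unstimulated cell in a low-activity steady state; none of its numbers derive from the data.

```python
# Toy caricature of the proposed negative feedback loop: active Dredd promotes
# accumulation of its inhibitor Dnr1, and Dnr1 suppresses Dredd activity.
# All parameters and functional forms are invented for illustration only.

def simulate(lps_on, steps=2000, dt=0.01):
    dredd, dnr1 = 0.1, 1.0                 # arbitrary initial activities
    k_act   = 2.0 if lps_on else 0.2       # upstream (PGRP-LC/Imd) drive on Dredd
    k_inh   = 1.5                          # strength of Dnr1 inhibition of Dredd
    k_stab  = 1.0                          # Dredd-dependent stabilization of Dnr1
    k_decay = 0.8                          # RING-finger-dependent Dnr1 turnover
    for _ in range(steps):                 # simple forward-Euler integration
        d_dredd = k_act / (1.0 + k_inh * dnr1) - dredd
        d_dnr1  = k_stab * dredd - k_decay * dnr1
        dredd  += dt * d_dredd
        dnr1   += dt * d_dnr1
    return dredd, dnr1

print("quiescent (no LPS):", simulate(lps_on=False))
print("stimulated (LPS):  ", simulate(lps_on=True))
```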
An Artificial Gene for Human Porphobilinogen Synthase Allows Comparison of an Allelic Variation Implicated in Susceptibility to Lead Poisoning* Porphobilinogen synthase (PBGS) is an ancient enzyme essential to tetrapyrrole biosynthesis (e.g. heme, chlorophyll, and vitamin B12). Two common alleles encoding human PBGS, K59 and N59, have been correlated with differential susceptibility of humans to lead poisoning. However, a model for human PBGS based on homologous crystal structures shows the location of the allelic variation to be distant from the active site with its two Zn(II). Previous microbial expression systems for human PBGS have resulted in a poor yield. Here, an artificial gene encoding human PBGS was constructed by recursive polymerase chain reaction from synthetic oligonucleotides to rectify this problem. The artificial gene was made to resemble the highly expressed homologous Escherichia coli hemB gene and to remove rare codons that can confound heterologous protein expression in E. coli. We have expressed and purified recombinant human PBGS variants K59 and N59 in 100-mg quantities. Both human PBGS proteins purified with eight Zn(II)/octamer; Zn(II) binding was shown to be pH-dependent; and Pb(II) could displace some of the Zn(II). However, there was no differential displacement of Zn(II) by Pb(II) between K59 and N59, and simple Pb(II) inhibition studies revealed no allelic difference. ological results with protein structure/function studies is in order. PBGS catalyzes the first common step in the biosynthesis of all tetrapyrroles (heme, chlorophyll, vitamin B 12 , cofactor F 430 , etc.). Human PBGS is a Zn(II) metalloenzyme unique in its sensitivity to inhibition by lead. Although all PBGSs appear to be metalloenzymes (7), metal ion usage varies dramatically between species (8,9). One outcome of this variation is that microbial and plant PBGSs are poor models for studying the effect of lead on human PBGS function. Lead inhibition of human PBGS is one of the earliest physiological responses to lead intoxication and as such is believed to be related to the detrimental effects of low level lead poisoning. However, mapping the common human polymorphism onto the x-ray crystal structure of the related yeast PBGS protein does not indicate a structural variation that would obviously affect either metal binding or catalytic function. A model for human PBGS is presented in Fig. 1 and shows the location of amino acid 59, which is lysine in the ALAD1 gene product and asparagine in the ALAD2 gene product. Production of human PBGS for study has been problematic. Purification of PBGS from human blood gives relatively low yield, is plagued by considerations of blood-borne diseases, and contains a mixture of the isozymes (10,11). The human ALAD gene was cloned and sequenced more than a decade ago and found to express poorly in Escherichia coli (12). To generate a better expression system, the gene was cloned into yeast (13), but the levels of protein expression remain insufficient for thorough functional analysis. For a novel approach to heterologous expression of human PBGS, we chose to mimic a construct used for overexpression of the E. coli hemB gene in E. coli (14,15). This system can generate hundreds of milligrams of E. coli PBGS (up to 30% of the soluble protein). The reasons for high level expression are not fully understood. In this case, the hemB gene is apparently downstream from a naturally strong E. 
coli promoter, and other poorly understood aspects of gene structure may contribute to the phenomenal levels of constitutive expression observed. A second consideration in artificial gene design is codon usage. The human ALAD gene was analyzed and found to contain clusters of codons that are rarely used by E. coli (see Fig. 2A). In contrast, the E. coli hemB gene contains only one rare codon. Kane (16) has described how clusters of six specific rare codons can be detrimental to both the quality and quantity of heterologous proteins expressed in E. coli, and specific translational errors have been documented for such clusters (17). Hence, in the design of an artificial gene for human PBGS, the E. coli hemB gene structure was mimicked to the greatest extent possible, and rare codons were replaced. Here we describe the design and synthesis of a PCR product (EJhum) containing an artificial gene coding for human PBGS. EJhum mimics a pUC119 version of pCR261 (14), which gives high level constitutive expression of the E. coli hemB gene (encoding E. coli PBGS) in a variety of E. coli host strains. Successful large-scale protein production of human PBGS was obtained by expression under control of the T7 polymerase system using a plasmid named pMVhum, which encodes the K59 protein corresponding to the ALAD1 allele inserted into the pET3 vector. We present the expression and purification of human PBGS from this construct and a basic characterization of the two human PBGS proteins K59 and N59, the latter a product of site-directed mutagenesis. We also compare the Zn(II) binding properties of the two proteins and address the ability of Pb(II) to displace the essential Zn(II). EXPERIMENTAL PROCEDURES Materials-Oligonucleotides were synthesized in-house in the Fannie Rippel Biotechnology Center and used without further purification. Cloned Pfu DNA polymerase was obtained from Stratagene and used at 2.5 units/50 l of PCR mixture. Epicurian Coli ® XL1-Blue MRFЈ Kan supercompetent cells and pPCR-Script Amp SK(ϩ) cloning vector were also purchased from Stratagene. Calf intestine alkaline phosphatase and T4 DNA ligase were purchased from Life Technologies, Inc. Plasmids pET3a and pET11a as well as BLR(DE3) and BLR(DE3) pLysS competent cells were obtained from Novagen. Library efficient competent cells of DH5␣ TM and HB101 were obtained from Life Technologies, Inc. The restriction enzymes BamHI, EcoRI, HpaI, and NdeI were from New England Biolabs Inc. Plasmids were purified using QIAGEN plasmid purification kits, and PCR products were extracted from 1-2% agarose using QIAEX II kits from QIAGEN Inc. DNA sequencing was carried out in-house using ABI sequencing technology. Mutagenesis of plasmid pMVhum was carried out using the QuikChange technology of Stratagene. Design of the Nucleotide Sequence of the Artificial Gene and Plasmid pEJhum-The E. coli gene construct pCR261 is an outstanding constitutive expression system (14). The pUC119 version, denoted pLM1228, is illustrated in Fig. 2B. The artificial gene for human PBGS was placed in the same context (Fig. 2C); the plasmid is denoted pEJhum (Fig. 2D). Four steps were included in the design process. First, the sequence of pLM1228 was determined from original sequence data and put in the context of pUC119 (u07650.gb_sy) and the 6Ј-8Ј region of the E. coli chromosome (u73857.gb_new). The region of pLM1228 containing E. coli DNA is illustrated in Fig. 2B; nucleotides 1-271 are the first 271 nucleotides of pUC119. 
Nucleotides 272-1744 are identical to a region of the E. coli chromosome (the reverse of nucleotides 97393 to 95921 of u73857.gb_new); these include 75 nucleotides of 5Ј-untranslated sequence upstream from hemB and the entire hemB gene, followed by 364 bases 3Ј to hemB, which include a part of the coding sequence of the yaiG gene. The remaining sequence derives from pUC119. A schematic of the target PCR product is shown in Fig. 2C. Plasmid pEJhum was designed to retain all of the sequence of pLM1228 that is upstream and downstream of the E. coli hemB gene as illustrated in Fig. 2D. The second step was to alter the codons of the human ALAD gene to mimic those of the E. coli hemB gene in all cases where the amino acids of the two proteins are identical. This was done manually and was assisted by the Genetics Computer Group programs FRAMEALIGN ( Fig. 2A) and BESTFIT. The third step removed any remaining rare codons (CTA, ATA, AGG, CCC, AGA, and AGT) that have been identified as particularly problematic in heterologous expression (16). The sequence of the human ALAD gene (GenBank TM accession number M13928) contains 20 of these codons, which Fig. 2A shows are relatively frequent and that some fall in clusters. The Genetics Computer Group programs CODONFRE-QUENCY and FRAMEALIGN were used to assist the identification of these codons, which were arbitrarily altered to common E. coli codons. In addition, the sequence was searched for possibly detrimental "overrepresented" codons as described by Irwin et al. (18). Three codon pairs were found with a CHISQ3 value of Ͼ50; these were altered to common codons to generate the final artificial gene. The E. coli hemB gene was also found to contain three over-represented codon pairs (one CTCGAC and two CTGGTG). The significance of altering over-represented codons remains untested. The fourth step assembled and analyzed the sequence of the target plasmid pEJhum using the Genetics Computer Group program AS-SEMBLE. This file contained the first 347 nucleotides of pLM1228, followed by the coding region of the artificial gene (990 nucleotides) and nucleotides 1320 -4636 of pLM1228. The resulting plasmid contained the beginning of the lacZ gene in the first reading frame (starting at nucleotide 217), no extended translatable sequence in the second reading frame, and the artificial gene in the third reading frame (starting at nucleotide 348). To insert a stop site in the lacZ fusion without interfering with any of the possibly important transcriptional regions, we altered nucleotide 343 from A to T. Design of the Synthetic Route for EJhum-The synthetic target EJhum, illustrated schematically in Fig. 2C and in detail in Fig. 3, contains nucleotides 264 -1371 of pEJhum and extends from a unique BamHI site to a unique HpaI site. Additional nucleotides were added to the 5Ј-and 3Ј-ends to serve as handles. Using the program OLIGO (National Biosciences), eight templates and eight primers were designed for the total synthesis of EJhum using recursive PCR. The templates were each 155-175 nucleotides in length with at least a 25-base pair overlap so that adjacent oligonucleotides would prime each other. The end primers were each 30 -35 nucleotides in length. All were matched as well as possible in melting temperatures. Generation of PCR Fragments to Yield EJhum- Fig. 3 illustrates the sequence of EJhum with notations for the location of the eight templates and the eight primers. 
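Before turning to the assembly of the gene itself, the codon-replacement step described above can be sketched compactly. In the sketch below (Python), the six problematic codons are those named in the text; the specific synonymous replacements are typical high-usage E. coli codons chosen for illustration only, since the text states merely that rare codons were arbitrarily altered to common E. coli codons.

```python
# Illustrative sketch of the rare-codon replacement step. The six rare codons are
# those named in the text; the replacements below are common synonymous E. coli
# codons chosen for illustration, not necessarily the authors' exact choices.

RARE_TO_COMMON = {
    "CTA": "CTG",  # Leu
    "ATA": "ATC",  # Ile
    "AGG": "CGT",  # Arg
    "AGA": "CGT",  # Arg
    "CCC": "CCG",  # Pro
    "AGT": "AGC",  # Ser
}

def replace_rare_codons(coding_sequence):
    """Swap rare codons for common synonymous codons, preserving the encoded protein."""
    assert len(coding_sequence) % 3 == 0, "expect an in-frame coding sequence"
    codons = [coding_sequence[i:i + 3] for i in range(0, len(coding_sequence), 3)]
    return "".join(RARE_TO_COMMON.get(c, c) for c in codons)

# Hypothetical fragment encoding Met-Leu-Ile-Arg-Pro-Ser with rare codons throughout.
print(replace_rare_codons("ATGCTAATAAGGCCCAGT"))   # -> ATGCTGATCCGTCCGAGC
```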
Sequential primer pairs and their cognate templates were used to synthesize four fragments, each consisting of ~300 nucleotides. Fragments A-D were flanked by primer pairs 1/2, 3/4, 5/6, and 7/8, respectively. The fragments were agarose gel-purified and used as templates for the recursive PCR synthesis of two ~580-base pair fragments. These fragments, AB and CD, were flanked by primer pairs 1/4 and 5/8, respectively. Finally, the 1.1-kilobase PCR product EJhum arose from the recursive PCR of fragments AB and CD using primer pair 1/8. (Figure 1 legend fragment: The two active-site Zn(II) are shown as black balls in the red monomer only, consistent with a plethora of data on human PBGS (33, 35). Amino acid 59, shown as lysine, is shown as a surface residue illustrated in blue. The model does not include residues 209-222, which are disordered in 1AW5. The corresponding residues are seen as an ordered lid over the active site in one-half of the monomers of the P. aeruginosa PBGS structure (26).) The product EJhum theoretically contains the artificial gene encoding human PBGS. However, this synthetic strategy did not generate an error-free version of EJhum, and a second strategy was needed (see below). Creation of pEJhum and Its Transformation into a Host Strain-Plasmid pLM1228 and the PCR product EJhum were digested with BamHI and HpaI. The vector was dephosphorylated, and the vector and the 1.1-kilobase PCR product were gel-purified and ligated with T4 DNA ligase to form pEJhum (Fig. 2D). If correct, this plasmid, when used to transform E. coli strain RP523 (hemB mutant) (19), should complement the hemin auxotrophy. pEJhum-containing transformants were not obtained under conditions where pLM1228 could successfully be used to prepare complementing transformants of RP523. However, transformants of pEJhum in E. coli strain HB101 were obtained. Ninety-two transformants were screened by colony PCR using primers 3 and 6 (Fig. 3). Fourteen colonies were selected, but none overexpressed human PBGS. EJhum was amplified by PCR from each of these 14 colonies using primers 1 and 8, and the PCR products were sequenced and uniformly found to contain multiple randomly located deletions (1-30 bases) and some errors. To determine whether the errors derived from the unpurified templates or from the PCR process, each fragment (A-D) was blunt end-ligated into pPCR-Script Amp SK(+) at the SrfI site and transformed into Epicurian Coli XL1-Blue MRF′ Kan for plasmid purification and sequence verification. (Figure 2 legend fragment: Concerns for the quantity or quality of the expressed protein stem from clusters of these rare codons. All codons shaded and/or in large type were subject to change in the design of the artificial gene according to the rationale described under "Experimental Procedures." B-F, shown are maps of the DNA constructs used in this study. B, plasmid pLM1228 was originally prepared for expression of site-directed mutants of E. coli PBGS. The E. coli hemB gene and its flanking DNA were cloned into the EcoRI site of pUC119 (14). Section I is part of the natural hemB promoter region; section II is the coding region of hemB; section III is 3′-untranslated DNA; and section IV is part of the yaiG gene. For unknown reasons, this construct gives high constitutive expression of E. coli PBGS in a variety of E. coli hosts. C, the PCR target EJhum resembles its cognate portion of pLM1228 except that the artificial gene encodes human PBGS (see "Experimental Procedures"). D, the plasmid pEJhum, designed for constitutive expression of the artificial gene, did not yield stable constitutive expression. E, the PCR target MVhum contains the coding region of the artificial gene alone plus the NdeI and BamHI sites (and some flanking DNA) for insertion into pET3a or pET11a. F, the final plasmid pMVhum contains the artificial gene under control of T7 polymerase in a pET3 background.) In almost all cases, the fragments contained deletions of significant length, but correct copies of each fragment could be obtained by PCR from these or one of the 14 original colonies of HB101(pEJhum). Fragments A-D were used to prepare fragments AB and CD by PCR, which in turn were used to prepare EJhum. In this case, virtually error-free synthesis was obtained, and the correct sequence was verified. The correct copy of pEJhum was not able to produce complementing transformants of RP523 under conditions where it could produce transient transformants of HB101. HB101(pEJhum) was found to be unstable, with gradual loss of the plasmid and no significant overexpression of the artificial gene. We conclude that constitutive expression of human PBGS from EJhum is toxic to E. coli, and controlled expression was pursued. Creation of the PCR Product MVhum and Plasmid pMVhum, a Controlled Expression System-To control the expression of human PBGS in E. coli, EJhum was reengineered 1) to remove the 5′- and 3′-flanking regions derived from the E. coli genome and 2) to allow incorporation into one of the pET plasmids for T7 polymerase-directed expression in a derivative of BL21(DE3). Two oligonucleotides (AACATCAGGCTGCATATGCAGCCTCAGTCCGTTC and GGCTGAGAGGATCCAAAATTATTCCTCCTTCAGCCAC) were used as PCR primers to excise the artificial gene from pEJhum, to align the start site with the NdeI site of pET3a or pET11a, and to add a BamHI site past the termination codon (Figs. 2E and 3). The PCR product MVhum was gel-purified, blunt end-ligated into the pPCR-Script Amp SK(+) vector, and transformed into XL1-Blue MRF′ Kan. White transformants were screened by colony PCR using the above-mentioned primers. Following sequence confirmation, the artificial gene was excised and ligated into the NdeI and BamHI sites of both pET3a and pET11a to yield plasmids pMVhum3 and pMVhum11, respectively. Because of high sequence identity (~60%) between the artificial gene and the E. coli hemB gene, the recA1 hosts BLR(DE3) and BLR(DE3) pLysS were selected as potential hosts for both pMVhum3 and pMVhum11. In all cases, transformants were obtained, and overexpression was observed upon induction with IPTG using SDS Phastgels as the analytical tool. BLR(DE3)(pMVhum3) showed the lowest basal expression and the highest induced expression; thus all further work was done with this strain, renamed BLR(DE3)(pMVhum). An illustration of pMVhum is included in Fig. 2F. Expression of Human PBGS from BLR(DE3)(pMVhum)-BLR(DE3)(pMVhum) was grown in 1-liter batches of Luria broth, 100 µg/ml ampicillin, and 0.4% glucose at 37°C in an air shaker, starting the inoculation from a single colony from a fresh transformation. After 16 h, the cells typically reached an A600 of 4-5, at which point they were spun down and resuspended in 1 liter of Luria broth at 42°C. After 30 min to 1 h of shaking, the flasks were cooled to 37°C, and IPTG was added to a concentration of 10 µM (20).
Overnight growth at 37°C under these conditions in an air shaker led to a final A600 of 6-11, and 10-20% of the total PBGS (K59 variant) was in the soluble fraction of the cell lysate. The N59 variant partitions more favorably into the soluble fraction (~60%). An alternate expression procedure that improves protein solubility omitted the 42°C heat shock and used 100 µM IPTG to induce expression at 15°C for a period of 48 h in the presence of 20 µM Zn(II). Purification of Recombinant Human PBGS from BLR(DE3)(pMVhum)-Frozen BLR(DE3)(pMVhum) cells were suspended (2 ml/g of cells) in 50 mM potassium phosphate, pH 8.0, 170 mM KCl, 5 mM EDTA, 10 mM 2-mercaptoethanol, and 0.1 mM phenylmethylsulfonyl fluoride (PMSF). After the cells were thawed and dispersed, lysozyme was added to 0.4 mg/ml, and the suspension was stirred at room temperature for 1.5 h. At this point, an equal volume of 0.1 M potassium phosphate, pH 7.0, 12 mM MgCl2, 40 µM ZnCl2, 10 mM β-mercaptoethanol, and 0.1 mM PMSF was added with DNase I (~65 units/g of cells), and the suspension was stirred an additional 20 min. All of the subsequent procedures were carried out at 4°C. The cell mixture was passed through a French press at 20,000 p.s.i. The resulting cell lysate was centrifuged at 20,000 × g for 20 min. The supernatant was subjected to a 20-45% saturated ammonium sulfate fractionation. The 45% saturated ammonium sulfate pellet was dissolved in 30 mM potassium phosphate, pH 7.0, 10 µM ZnCl2, 10 mM β-mercaptoethanol, 0.1 mM PMSF, and 20% saturated ammonium sulfate and applied to a phenyl-Sepharose column that had been equilibrated with the same buffer. The column was washed with ~1 column volume of the same buffer and then subjected to a linear 2-column volume gradient that ended at 2 mM potassium phosphate, pH 7.0, 10 µM ZnCl2, 10 mM β-mercaptoethanol, and 0.1 mM PMSF. Human PBGS eluted after the gradient in a 1-column volume wash of the ending buffer and was pumped directly onto a DEAE-Bio-Gel column that had been equilibrated with 30 mM potassium phosphate, pH 7. (Figure 3 legend fragment: Eight synthetic oligonucleotide templates of 155-175 bases were prepared. These are depicted with an overbar when the template is identical to the coding strand. An underbar is used for regions where the template is the reverse complement of the coding strand. Overlapping adjacent templates were designed to act as primers to each other. These regions of complementary priming have both overbars and underbars. Eight shorter oligonucleotide primers were synthesized. These are depicted with strings of arrowheads and are numbered on their 3′-ends. Arrowheads placed above the sequence indicate primers that are identical to the sequence depicted, and arrowheads placed below the sequence symbolize a primer that is the reverse complement of the sequence depicted. As described under "Experimental Procedures," each set of primers (1/2, 3/4, 5/6, and 7/8) serves to amplify ~300-nucleotide fragments, each comprising about one-quarter of EJhum and named consecutively A, B, C, and D. Primer sets 1/4 and 5/8 serve to amplify the ~580-nucleotide halves of EJhum named AB and CD. Finally, primer pair 1/8 serves to amplify the entire 1.1-kilobase PCR product EJhum. The sequence regions in italics are from the naturally occurring 5′- and 3′-flanking regions of the E. coli hemB gene. The BamHI and HpaI sites (near the 5′- and 3′-regions, respectively) were used for insertion of the gene into pEJhum in place of the corresponding region of the hemB gene of pLM1228 (see also Fig. 2). The 3′-ClaI site was added as an additional handle. Finally, the shaded boxes depict the primers used to redesign EJhum to MVhum for insertion into the NdeI and BamHI sites of pET3a. These primers are marked at their 3′- and 5′-ends.) The pooled human PBGS was passed down a 1-m-long S-300 column (column volume ~70 × sample volume) at a flow rate of 0.1 cm/min. The S-300 buffer contained 0.1 M potassium phosphate, pH 7, 10 mM β-mercaptoethanol, and 10 µM Zn(II). Zn(II) Binding by Equilibrium Dialysis-Aliquots (1.0 ml) of purified recombinant human PBGS (K59, 0.56 mg/ml) were placed in Slidealyzer cassettes and equilibrated against 250 ml of buffer (0.1 M potassium phosphate, pH 7, and 10 mM β-mercaptoethanol) containing various initial concentrations of Zn(II) (0-30 µM). Following 24 h of dialysis at 4°C, the protein was removed from the cassette and analyzed for both protein concentration by Pierce Coomassie assay and for total Zn(II) concentration by direct reading at 213.9 nm using a flame atomic absorption spectrometer. Dialysis buffers were read directly using the same method to determine free Zn(II). Bound Zn(II)/octamer was calculated from the protein samples as follows: ((Zn_total − Zn_free)(280 mg/µmol of octamer))/protein concentration (mg/ml). The apparent binding constants were determined by nonlinear best fit using a model for two sites of equal stoichiometry as we found best fit the Zn(II) binding data for bovine PBGS obtained previously (21). Additional Zn(II) binding data were obtained at room temperature following overnight dialysis in 50 mM sodium acetate and 10 mM β-mercaptoethanol, pH 5.0, which served to strip the Zn(II) from the protein. The Slidealyzer cassettes were then moved to 0.1 M potassium phosphate and 10 mM β-mercaptoethanol at pH values of 6-8 containing variable concentrations of Zn(II) (0-30 µM). Some of these experiments also included 20 µM Pb(II) in the dialysis buffer. These data uniformly fit to a simple hyperbolic binding equilibrium with a single n value and a single Kd value. Building a Model for Human PBGS-A model for the human PBGS dimer was prepared from Protein Data Bank code 1AW5 using the programs PSI-BLAST and SCWRL (22-25). Because the Zn(II) ligands seen in yeast PBGS (22) are conserved in human PBGS, the two Zn(II) are included in the model, which is illustrated as a dimer in Fig. 1. However, since the human PBGS octamer binds a total of eight Zn(II) (four ZnA that are bound to Cys-223 and His-131 and four ZnB that are bound to Cys-122, Cys-124, and Cys-132), the two Zn(II) are included only in one subunit of the dimer. This mimics the metal binding asymmetry seen in the crystal structure of Pseudomonas aeruginosa PBGS (26). Enzyme Activity Assays-Enzymes were preincubated in 0.1 M potassium phosphate, pH 7, 10 mM β-mercaptoethanol, and 10 µM Zn(II) for 10 min at 37°C prior to the addition of 5-aminolevulinate-HCl to a final concentration of 10 mM. All assays were allowed to proceed for 5 min prior to termination with 0.5 volume of Stop reagent (50% trichloroacetic acid and 0.1 M HgCl2). Porphobilinogen formed was determined by absorbance at 555 nm ~8 min after the addition of 1.5 volume of modified Ehrlich's reagent. The extinction coefficient of the pink complex formed (ε555) is 62,000 M−1. RESULTS Design and Synthesis of an Artificial Gene Encoding Human PBGS-Recursive PCR (27) was used to prepare an artificial gene encoding the protein formed by the more frequent allele of the human gene for porphobilinogen synthase.
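The recursive PCR assembly just referred to follows the hierarchy laid out under "Experimental Procedures" and in the Figure 3 legend. The sketch below (Python) simply records that hierarchy as data; the product names and primer numbers are from the text, the fragment lengths are approximate, and the flanking_primers helper is illustrative.

```python
# Sketch of the recursive PCR assembly hierarchy described in the text and in the
# Figure 3 legend: eight overlapping templates plus eight end primers yield four
# ~300-nt fragments, then two ~580-nt halves, then the full ~1.1-kb product EJhum.

ASSEMBLY = {
    "A":     {"primers": ("1", "2"), "built_from": "templates",  "approx_nt": 300},
    "B":     {"primers": ("3", "4"), "built_from": "templates",  "approx_nt": 300},
    "C":     {"primers": ("5", "6"), "built_from": "templates",  "approx_nt": 300},
    "D":     {"primers": ("7", "8"), "built_from": "templates",  "approx_nt": 300},
    "AB":    {"primers": ("1", "4"), "built_from": ("A", "B"),   "approx_nt": 580},
    "CD":    {"primers": ("5", "8"), "built_from": ("C", "D"),   "approx_nt": 580},
    "EJhum": {"primers": ("1", "8"), "built_from": ("AB", "CD"), "approx_nt": 1100},
}

def flanking_primers(product):
    """Return the primer pair used to amplify a given assembly product."""
    return ASSEMBLY[product]["primers"]

print(flanking_primers("EJhum"))   # ('1', '8'), the outermost primer pair
```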
The gene was designed to resemble the homologous E. coli hemB gene to the greatest extent possible and to remove codons rarely used by E. coli. As described under "Experimental Procedures," the synthetic process and constitutive expression were problematic. We conclude that it was unwise to use unpurified synthetic oligonucleotide templates 150 -175 bases in length for recursive PCR. The gene toxicity problem may be related to alternative protein folding/degradation functions reported for PBGS proteins (28,29). The artificial gene encoding human PBGS in pMVhum is optimized for controlled T7-directed expression in an E. coli BL21-type host and has been shown to yield high levels of expression as determined by SDS-polyacrylamide gel electrophoresis (Fig. 4). Optimized Expression of Human PBGS in the Soluble Extract of BLR(DE3)(pMVhum3)-Standard conditions for expressing genes from pET vectors in BL21 or its derivatives often yield high expression of proteins that are found aggregated in inclusion bodies, as we have found for human PBGS. The inclusion body might be considered as a partially purified form of the protein that can sometimes be denatured, purified, and refolded into an active protein. Human PBGS is a homooctameric enzyme containing four active sites, reactive cysteines, and two different types of Zn(II)-binding sites and thus would have made the refolding exercise a challenge. Instead, we used active enzyme purified from the soluble extract of the cells in the characterization of human PBGS expressed from pMVhum. The rationale for the growth/expression protocol is based on the following points. Glucose was included in the first growth to repress expression of genes under the control of the lac promoter such as the T7 RNA polymerase in BL21(DE3). Prior to induction, the cells were transferred to fresh medium that did not contain glucose. To increase the basal level of chaperones in the cells, a 42°C heat shock preceded induction. Finally, low level IPTG induction was used to promote slow expression from pMVhum so as to optimize the opportunity of the protein to fold correctly (20). Fig. 4 illustrates the low level of expression seen after the first glucose-supplemented growth and good expression following the IPTG induction. This procedure can be performed on an 8-liter scale using air shakers and yielding up to 50 g of cells. The 15°C procedure gives equivalent expression with somewhat better partitioning of the protein into the soluble fraction. Purification of Recombinant Human PBGS from BLR(DE3) (pMVhum)-The method for purifying human PBGS from the soluble fraction of BLR(DE3)(pMVhum) drew on our experience and published purifications of bovine, human, E. coli, and Bradyrhi zobium japonicum PBGSs (8, 15, 30 -32). In all cases, human PBGS behaved as expected from prior experience with mammalian PBGS. Following a 20 -45% ammonium sulfate fractionation, K59 constituted ϳ11% of the total protein, and N59 constituted ϳ30% of the total protein. The phenyl-Sepharose column removed the majority of the UV-absorbing components and yielded a 2-fold or greater increase in specific activity. Human PBGS eluted near the end of the DEAE column gradient and yielded protein of high specific activity (60 -85% of the maximal value). The final pure protein pool from the Sephacryl column step (Fig. 
4, lane E) for the protein product of pMVhum (peak at ~58% of the column volume) typically had a specific activity of ~45 µmol/h/mg, which is ~50% larger than the highest value reported for any mammalian PBGS (31). The less common N59 protein had a specific activity of ~24 µmol/h/mg when expressed under identical conditions. The ~2-fold difference in the specific activity of K59 and N59 was highly reproducible between growths and preparations. The overall yield of purified protein was ~10 mg/liter for K59 and ~35 mg/liter for N59; the difference relates to the differential partitioning of these isozymes between the soluble fraction and the inclusion bodies. Zn(II) Interactions with Recombinant Human PBGS-The protein purified in the presence of 10 µM Zn(II) and 10 mM β-mercaptoethanol was found by atomic absorption spectroscopy to contain eight Zn(II)/octamer, as was found earlier for bovine PBGS (30). A generally consistent model for mammalian PBGS is a homo-octameric protein with four functional active sites, each of which contains two zinc ions denoted ZnA and ZnB (33, 34). One piece of data supporting this model is the binding of Zn(II) to bovine PBGS, which shows tight binding of four Zn(II)/octamer (Kd << 0.1 µM) and looser binding of a second four Zn(II)/octamer (Kd ~5 µM) when holoenzyme is dialyzed at 4°C against 0-30 µM Zn(II) (21). In the case of recombinant human PBGS, Fig. 5A illustrates that this model for Zn(II) binding gives a good fit to data obtained under these conditions. Here the fit shows a tight Zn(II) at n1 = 4.2 and Kd << 0.1 µM and a looser Zn(II) at n2 = 4.2 and Kd ~7 µM (Table I). Fig. 5A and Table I include data obtained previously on bovine PBGS for comparative purposes. Also consistent with the four-ZnA and four-ZnB model is that only four Zn(II)/octamer are required for full activity of bovine PBGS in the presence of β-mercaptoethanol (30, 35). Fig. 5B shows a Zn(II) activation curve for N59, where the initial slope corresponds to full activation upon addition of 0.5 Zn(II)/subunit. The apparent concentration of Zn(II) available from assay components is ~0.2 µM, as shown by the x intercept in Fig. 5B. The activity of the purified protein was found to be insensitive to the addition of Mg(II), as expected for human PBGS based on prior studies (15). (Figure 5 legend fragment: B, Zn(II) activation of recombinant human PBGS is shown using 1.0 µM PBGS subunit in the assay. The initial slope indicates full activation at 0.5 Zn(II)/subunit (four Zn(II)/octamer). C, shown is the pH dependence of Zn(II) binding to K59 at pH 5, pH 6, pH 7, and pH 8. Binding parameters are reported in Table I and also include data for N59. For these studies, the protein was first dialyzed overnight at pH 5 before placement in Zn(II)-containing buffers. Dialysis was done at room temperature.) PBGS is often purified with tightly bound product (15), and one active-site model for Zn(II)-containing PBGS includes the amino group of porphobilinogen as a Zn(II) ligand (33). Mildly acidic pH can be used to strip both divalent metals and tightly bound product from a variety of microbial PBGSs (8, 36). Hence, we elected to investigate Zn(II) binding to human PBGS following dialysis at pH 5, which was shown to strip all preexisting Zn(II) (Fig. 5C). By analogy to prior studies, low pH is also presumed to strip all enzyme-bound product. The results illustrated in Fig.
5C for K59 show pH-dependent Zn(II) binding that did not distinguish between the two different types of Zn(II) sites that are illustrated in Fig. 1 and that are apparent in Fig. 5A. The data for N59 are not illustrated, but the binding parameters are included in Table I. There was no significant difference in Zn(II) binding between human PBGS K59 and N59. At pH 5, virtually no Zn(II) bound to the apoenzyme; at pH 6, the apparent Kd is 4-5 µM, and the number of Zn(II) sites fits eight or more sites/octamer; and at pH 7 and pH 8, the apparent Kd is 0.7-0.9 µM, and the number of Zn(II) sites is approximately eight/octamer. We conclude that the low pH treatment causes a loss of asymmetry in human PBGS, but the structural basis for the difference between Fig. 5A and Fig. 5C is unknown. Equilibrium Dialysis with Pb(II) in Competition for the Zn(II) Sites-Lead has been shown to be a slow-binding inhibitor of mammalian PBGS (37). Precise analytical studies using controlled metal ion buffers have shown that the Kd for Pb(II) is ≈20-fold tighter than the Kd for Zn(II) (38). As a first step in determining the relative ability of Pb(II) to displace Zn(II) from the two isozymes of human PBGS, equilibrium dialysis studies were carried out at pH 7 at initial Zn(II) concentrations of 1 and 10 µM and Pb(II) concentrations of 0 and 20 µM. These experiments started with apoenzyme prepared by low pH dialysis. The results illustrated in Fig. 5D show that Pb(II) competed effectively for about one-half of the Zn(II) sites. There was no differential displacement of Zn(II) by Pb(II) for K59 relative to N59. Similar results were seen in simple fixed-time inhibition studies. When the holoenzymes were assayed for the standard 5 min, 100% activity was seen at 10 µM Zn(II); 75-80% activity was seen at 10 µM Zn(II) plus 20 µM Pb(II); ≈80% activity was seen with no added Zn(II); and 10-12% activity was seen with no added Zn(II) plus 20 µM Pb(II). Because Pb(II) is a slow-binding inhibitor (37), a more thorough investigation of Pb(II) inhibition may still reveal some differences between K59 and N59. DISCUSSION Design of Artificial Genes-Advances in recombinant DNA technology have yielded spectacular results in heterologous protein expression in E. coli. In some cases such as PBGS, the E. coli homolog can be expressed at very high levels, whereas expression of the heterologous human protein is poor. Part of the problem with the poorly expressed protein may be clusters of codons rarely used in E. coli. Correcting for rare codons alone can be a good solution and does not necessarily require total gene synthesis (39). Alternatively, it may be possible to use a host E. coli strain engineered to contain adequate amounts of the rare tRNAs (40). A less well understood phenomenon is the high level and often constitutive expression of the homologous E. coli gene. The approach described herein acknowledges ignorance of the complex factors that control gene expression in E. coli and uses simple mimicry to optimize expression of the heterologous protein. The artificial gene encoding human PBGS is designed to resemble the E. coli hemB gene to the greatest extent possible while still encoding the human protein and avoiding rarely used E. coli codons. In the case of human PBGS, the approach was successful and is recommended for others facing similar problems in protein expression. The design of artificial genes is not a novel concept and has previously been used, for instance, to engineer in specific restriction sites (41).
Our considerations in mimicking the homologous E. coli gene and minimizing rare codons were intended to increase both protein expression levels and protein quality by decreasing translational errors. Production of heterologous proteins of questionable quality is a significant concern for the biotechnology industry, and specific examples of errors in translation are appearing in the literature (17). Although the current studies do not prove the efficacy of the following additional factors, expression in fresh medium, induction with low levels of IPTG, and induction following a heat shock to increase natural chaperone levels can also be used to enhance translational fidelity or proper protein folding. Model for Human PBGS-PBGS is an ancient and highly conserved protein with specific phylogenetic sequence variations in regions that recent crystal structures have shown to be essential to metal ion binding. Three different types of divalent metal ion-binding sites have been delimited that correspond to metals that have been called ZnA, ZnB, and MgC (42). The two Zn(II) are apparent in the yeast PBGS structure 1AW5 (22), and the Mg(II) is apparent in the P. aeruginosa PBGS structure 1B4K (26). Yeast PBGS crystallizes as a symmetric octamer with eight somewhat disordered active-site regions, and the occupancy of the Zn(II) sites is not stoichiometric. In contrast, the P. aeruginosa PBGS octamer is composed of four asymmetric dimers wherein only one of the monomers of each dimer contains MgC and only this monomer has a well ordered lid over the active site. Human PBGS has a 53% identity (61% similarity) to the yeast sequence and contains the ligands seen to bind to ZnA and ZnB in 1AW5. Because human PBGS displays half-site reactivity, implying that a dimer is needed to make one functional active site, these two Zn(II) are shown in the model to be bound to only one of the monomers. It is interesting to note that the ligands to the four tight Zn(II), as determined by extended x-ray absorption fine structure (ZnA with mostly oxygen and/or nitrogen ligands) (34), are not the same as the ligands to the highly populated Zn(II) in the crystal structure of yeast PBGS (ZnB with mostly sulfur ligands) (22). We have shown that factors such as pH and substrate binding can control the disproportionation of metal ions between the Mg(II)-binding sites of B. japonicum PBGS (43), and similar factors may confound reconciliation of different studies on the various Zn(II)-binding PBGSs. These factors also confound drawing functional conclusions on Pb(II) inhibition of PBGS based strictly on where Pb(II) binds in the absence of substrate. Conclusion-We have prepared an expression system for human PBGS that is designed for optimal heterologous expression with a low translational error frequency. This system was used for the purification of human PBGS encoded by the two common alleles (K59 and N59) found in human populations. The purified proteins exhibit characteristics consistent with other mammalian PBGSs and can be used for physical, chemical, and structural analysis of human PBGS and mutants thereof. The only significant difference seen between the proteins encoded by the two alleles is an ≈2-fold variation in specific activity. There is no differential effect of the isozymes on Zn(II) binding, Pb(II) competition for the Zn(II) sites, or inhibition of activity by Pb(II).
Further studies will probe deeper into differential effects of lead on Zn(II) binding, activity, and protein folding to determine a more subtle basis for the reported genetic susceptibility of N59-expressing individuals toward lead poisoning. Finally, although statistically significant epidemiological data correlate the allelic variation in human PBGS with lead poisoning parameters (1,2,5), such data do not establish a causal relationship, and one may not exist.
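To make the two-class Zn(II) binding analysis behind Fig. 5A and Table I concrete, the sketch below fits a model with one tight and one loose class of sites to equilibrium-dialysis-style data. It is only an illustration: the data points and initial guesses are hypothetical, not the published measurements, and free Zn(II) is assumed equal to the dialysate concentration.

```python
# Two-class Zn(II) binding fit (illustrative sketch, hypothetical data).
import numpy as np
from scipy.optimize import curve_fit

def two_site_binding(zn_free, n1, kd1, n2, kd2):
    """Zn(II) bound per octamer for two independent classes of sites."""
    return n1 * zn_free / (kd1 + zn_free) + n2 * zn_free / (kd2 + zn_free)

# Hypothetical equilibrium-dialysis data: free Zn(II) in uM, bound Zn(II)/octamer.
zn_free = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 30.0])
bound = np.array([1.8, 2.6, 3.9, 4.5, 5.2, 6.2, 7.0, 7.6, 7.9])

p0 = (4.0, 0.05, 4.0, 5.0)  # initial guesses: n1, Kd1 (uM), n2, Kd2 (uM)
popt, pcov = curve_fit(two_site_binding, zn_free, bound, p0=p0,
                       bounds=(0, [8, 1, 8, 100]))
n1, kd1, n2, kd2 = popt
print(f"tight sites: n1 = {n1:.1f}, Kd1 = {kd1:.2f} uM")
print(f"loose sites: n2 = {n2:.1f}, Kd2 = {kd2:.1f} uM")
```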
8,738.8
2000-01-28T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
Photocatalytic Degradation of Reactive Yellow in Batch and Continuous Photoreactor Using Titanium Dioxide Titanium dioxide (TiO2) has been used as photocatalyst for the degradation of reactive yellow (RY) in batch and continuous mode under UV irradiation. Titanium dioxide (TiO2) was immobilized onto a ceramic plate using cement as binder. The effects of various parameters such as initial dye concentration, solution layer thickness, presence of catalyst, residence time, and catalyst loading on degradation have been investigated. The results showed that without catalyst no degradation was achieved. The maximum sorption capacity of TiO2 was found to be 0.01447 kg/kg. The degradation of RY followed pseudo-first-order kinetics with rate constant k = 0.001 min⁻¹. A decrease in degradation of RY was observed with an increase in initial concentration and solution layer thickness. A comparison of photocatalytic performance between batch and continuous mode was performed and the batch mode provided better degradation performance. About 60% degradation of dye was achieved at 360 min for 200 ppm RY solution in batch mode. Introduction Dyes are one of the largest pollutants released in wastewater from textiles and other industrial processes. Because of the potential toxicity of the dyes and their visibility in surface water, removal and degradation of reactive dyes have been a matter of considerable interest. Due to the synthetic nature of reactive dyes, biological treatment of wastewater alone is not usually effective for the removal of these chemical species. To meet increasingly stringent regulations, additional processes like coagulation, membrane separation or adsorption have been applied for the removal of these contaminants [1][2][3]. However, these processes simply transfer pollutants from the aqueous phase to another phase, thus causing a secondary pollution problem [4,5]. Photocatalytic oxidation is cost effective and capable of degrading complex organic chemicals when compared to other purification techniques [6]. Usually, semiconductor particles with suitable band gap and flat band potentials/energy levels are used as photocatalysts. Semiconductors such as TiO2, ZnO, Fe2O3, CdS, ZnS, and ZrO2 are employed. Heterogeneous photocatalysis is a process in which the irradiation of an oxide semiconductor produces photoexcited electrons (e⁻) and positively charged holes (h⁺). The photoexcitation of semiconductor particles by light with a higher energy than the electronic band gap of the semiconductor generates excess electrons in the conduction band and electron vacancies (holes) in the valence band. The excitation of TiO2 generates highly reactive electron-hole pairs that in turn produce highly potent radicals (such as •OH and O2•⁻) to oxidize organic and inorganic pollutants [7]. Among the semiconductors, TiO2 is the most widely used photocatalyst, mainly because of its photostability, non-toxicity, low cost and water insolubility under most environmental conditions [8]. In recent years, TiO2 photocatalysis has been successfully applied to remove organic and inorganic pollutants [9], to inactivate microorganisms [10], and to control disinfection by-product formation [11]. Although TiO2 photocatalysis was found to be effective for the destruction of a wide variety of environmental contaminants present in water and wastewater, this technology has not yet been successfully commercialized because of the costs and problems connected to the separation of TiO2 particles from the suspension
after treatment. In order to solve this problem, supported photocatalysts have been developed [12][13][14]. In this study, Reactive Yellow (RY) was chosen as a model dye to test a novel photocatalytic reactor. The reactor containing a TiO2-coated ceramic plate was operated in batch and continuous mode under UV irradiation at various conditions (with/without catalyst, different initial concentrations and solution layer thicknesses). The performance of the reactor in batch and continuous mode was also compared. Materials RY was supplied by a local textile industry and used as received. The catalyst TiO2 nanopowder, primarily in the anatase form (70:30 anatase to rutile), was purchased from Merck, Germany. Water used in our experiments was triple distilled and produced in our laboratory. Preparation of photocatalytic plate TiO2 was mixed with cement at different ratios and a paste was made from the mixture with water. A thin layer of the paste was coated onto a ceramic plate. Thereafter it was kept in wet condition for a day and then dried at room temperature. After hardening, the plate was subjected to heat treatment at 200°C for 2 h in a muffle furnace and the temperature was increased up to 450°C for another 2 h. After heat treatment the plate was ready for use in the photocatalytic reactor. Operation of the reactor The photocatalytic degradation of RY was evaluated in an aqueous solution under illumination of UV light (UV lamp, 100 watt) in a photoreactor system. The schematic diagram of the photoreactor is shown in Fig. 1. The reactor consists of a rectangular box made of ceramic plates. The photocatalytic plate was placed on the floor of the box. The dimensions of the reactor and photocatalytic plate were 40.5 cm x 8.5 cm x 10 cm and 34.5 cm x 8.5 cm, respectively. The UV lamp was placed on the top of the reactor, 10 cm from the catalyst plate. For continuous operation, residence time was adjusted by tuning the input and output valves. For batch experiments, the input and output valves were closed. The degradation process of RY was assessed by sampling 2 mL of solution at appropriate time intervals. The concentration of RY was determined using a UV spectrophotometer (UV-1650, SHIMADZU, Japan) by monitoring the absorbance. Adsorption isotherm for RY-TiO2 system In order to find out the maximum adsorption capacity of TiO2, the experimental data of equilibrium RY adsorption on TiO2 were fitted to the Langmuir isotherm model. The Langmuir isotherm model assumes uniform energies of adsorption onto the surface of the adsorbent and is represented by the following equation [15]: q_e = q_∞ K C_e / (1 + K C_e), which in reciprocal form gives 1/q_e = 1/q_∞ + 1/(K q_∞ C_e) (1) where q_e is the equilibrium adsorption capacity of the adsorbent (kg dye/kg TiO2), q_∞ is the maximum adsorption capacity (kg dye/kg TiO2), C_e is the equilibrium dye concentration in the solution (kg dye/m3 solution), and K is the adsorption equilibrium constant. The plot of 1/q_e versus 1/C_e was found to be linear as represented in Fig. 2. The Langmuir isotherm model parameters (q_∞ and K) were calculated from the slope and intercept of the plot and were estimated to be q_∞ = 0.01447 kg/kg and K = 1.299 m3/kg. The adsorption is well described by the Langmuir isotherm model as the equilibrium data fit the isotherm with a correlation coefficient (r2) of 0.950. Photocatalytic activity of TiO2-cement plate in batch mode The potential of RY degradation was studied in batch mode at various operating parameters such as initial solution concentration and solution layer depth under irradiation with UV light.
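A minimal sketch of the linearized Langmuir fit described above (1/q_e against 1/C_e) is given below. The equilibrium data points are hypothetical stand-ins, not the measurements behind Fig. 2; the slope and intercept give K and q_∞ exactly as in Eq. (1).

```python
# Linearized Langmuir fit (1/q_e vs 1/C_e); hypothetical data for illustration.
import numpy as np

c_e = np.array([0.05, 0.10, 0.20, 0.30, 0.50])        # kg dye / m^3 solution
q_e = np.array([0.0009, 0.0016, 0.0029, 0.0040, 0.0057])  # kg dye / kg TiO2

slope, intercept = np.polyfit(1.0 / c_e, 1.0 / q_e, 1)
q_max = 1.0 / intercept       # maximum adsorption capacity q_inf, kg/kg
K = intercept / slope         # adsorption equilibrium constant, m^3/kg
print(f"q_inf = {q_max:.5f} kg/kg, K = {K:.3f} m^3/kg")
```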
Effect of initial dye solution concentration The effect of different initial RY solution concentrations, ranging from 200-500 ppm, on degradation of the dye was studied in a batch-type photocatalytic reactor system in the presence and absence of catalyst. The catalyst dose was 5 g TiO2/5 g cement (wt/wt) over an area of 0.029 m2 of ceramic plate while the solution layer thickness was maintained at 3.7 mm. The remaining solution concentrations after degradation of RY were plotted as a function of time in Fig. 3. It is evident from Fig. 3 that the concentration of RY remaining in the solution decreases with time while no degradation is observed in the absence of catalyst. The initial concentration of RY influenced the UV light absorption on the TiO2 catalyst. As the RY concentration increased, some of the UV light photons were absorbed by the substantial amount of dye molecules, and the quantity of effective photons absorbed by the surface of the catalyst was reduced. The quantity of excited TiO2 electrons produced by the effective photons decreased, so fewer holes were generated. As a result, lower removal was achieved at higher concentration. Effect of solution layer thickness Fig. 4 shows the effect of solution depth on the degradation of RY. The solution layer thickness was maintained at 2.77, 3.7 and 5.55 mm using 75, 100 and 150 mL of solution, respectively. As shown in Fig. 4, higher degradation of dye on the TiO2 surface was achieved at lower solution depth. The solution concentration dropped from 300 ppm to 175.3, 195.7 and 213.7 ppm for layer thicknesses of 2.77, 3.7 and 5.55 mm, respectively. The greater UV-light penetration at lower solution thickness caused the higher reduction in dye concentration. Photocatalytic activity of TiO2 in continuous mode Numerous experiments were conducted at different initial concentrations and residence times to investigate the potential of RY degradation under irradiation with a 100 watt UV lamp in the continuous photocatalytic reactor. Effect of residence time Fig. 6 presents the results obtained from photocatalytic degradation of RY at various solution residence times (50, 75 and 100 min) and constant initial concentration (200 ppm) under UV irradiation. The degradation efficiency of RY was found to increase with an increase in the residence time (Fig. 6). The remaining solution concentration increased from 95.8 to 156.5 ppm as the residence time decreased from 100 to 50 min. At higher residence time the dye molecules get more time for degradation, which explains why higher removal was attained at higher residence time. Effect of initial concentration The effect of initial RY concentration on the degradation efficiency was studied by varying the concentration in the range of 200-300 ppm and keeping the flow rate as well as the solution layer thickness constant in a continuous-type reactor, and the obtained results are presented in Fig. 7. Without catalyst no degradation was found under UV illumination. For the TiO2-catalyzed system, the RY solution concentration was found to be depleted with irradiation time. It could be found from Fig.
7 that RY concentrations decreased to 92.8 and 210.12 ppm from 200 and 300 ppm, respectively. The active surface of the catalyst available for reaction is crucial for the degradation to take place, but when the dye concentration is increased and the catalyst amount is kept constant, fewer active sites are available for the reaction. With more dye molecules the solution became more intensely colored and the path length of photons entering the solution decreased, so fewer photons reached the catalyst surface. At still higher concentration of the dye, the path length was further reduced and the photodegradation was found to decrease [16,17]. Comparison of photocatalytic degradation between batch and continuous mode Fig. 8 shows the comparison of photocatalytic degradation performance between batch and continuous mode. From the figure it is apparent that the continuous mode shows lower RY degradation performance than the batch mode at constant solution concentration and solution layer depth. Effect of catalyst dose in photocatalytic plate The performance of the photocatalytic plate depends on the catalyst dose. Photocatalytic plates loaded with different doses of catalyst, e.g., TiO2 to cement ratios of 1:1 and 1:2 g/g, were used to evaluate the degradation performance of RY and the results are presented in Fig. 9. It is clear from the figure that the catalytic decomposition performance is enhanced as the TiO2 fraction increases in the catalyst film. The greater number of active catalyst sites at the higher dose provided better photocatalytic degradation under UV light. A further increase of the TiO2 dose in the plate was not possible: at a higher dose of TiO2 (TiO2:cement = 2:1) the catalyst particles were not held onto the surface. Conclusion Photocatalytic degradation using TiO2 was successfully applied to a textile dye (Reactive Yellow). A photocatalytic reactor with an immobilized TiO2 nanofilm was developed for continuous and batch processes. The degradation of RY was strongly influenced by initial dye concentration, solution layer thickness and catalyst dose. No decomposition of RY was observed without catalyst. The monolayer sorption capacity of RY onto TiO2 was found to be 0.01447 kg/kg. The rate constant of RY degradation was 0.001 min⁻¹. A comparison of photocatalytic performance between batch and continuous mode was performed and the batch mode provided better degradation performance. About 60% degradation of dye was achieved at 360 min for 200 ppm RY solution in batch mode.
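As a brief illustration of how a pseudo-first-order rate constant of the kind reported above can be extracted, the sketch below fits ln(C0/C) against time. The concentration series is hypothetical (generated for a rate constant of about 0.001 min⁻¹), not the measured data of Fig. 3.

```python
# Pseudo-first-order rate-constant extraction, ln(C0/C) = k*t; hypothetical data.
import numpy as np

t = np.array([0, 60, 120, 180, 240, 300, 360], dtype=float)       # min
c = np.array([200.0, 188.4, 177.4, 167.1, 157.3, 148.2, 139.5])   # ppm

k, _ = np.polyfit(t, np.log(c[0] / c), 1)
print(f"pseudo-first-order rate constant k = {k:.4f} 1/min")
```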
2,766.4
2012-08-28T00:00:00.000
[ "Chemistry", "Environmental Science", "Materials Science" ]
Identification of a Novel Domain of Ras and Rap1 That Directs Their Differential Subcellular Localizations* The small GTPase Ha-Ras and Rap1A exhibit high mutual sequence homology and share various target proteins. However, they exert distinct biological functions and exhibit differential subcellular localizations; Rap1A is predominantly localized in the perinuclear region including the Golgi apparatus and endosomes, whereas Ha-Ras is predominantly localized in the plasma membrane. Here, we have identified a small region in Rap1A that is crucial for its perinuclear localization. Analysis of a series of Ha-Ras-Rap1A chimeras shows that Ha-Ras carrying a replacement of amino acids 46-101 with that of Rap1 exhibits the perinuclear localization. Subsequent mutational studies indicate that Rap1A-type substitutions within five amino acids at positions 85-89 of Ha-Ras, such as NNTKS85-89TAQST, NN85-86TA, and TKS87-89QST, are sufficient to induce the perinuclear localization of Ha-Ras. In contrast, substitutions of residues surrounding this region, such as FAI82-84YSI and FEDI90-93FNDL, have no effect on the plasma membrane localization of Ha-Ras. A chimeric construct consisting of amino acids 1-134 of Rap1A and 134-189 of Ha-Ras, which harbors both the palmitoylation and farnesylation sites of Ha-Ras, exhibits the perinuclear localization like Rap1A. Introduction of a Ha-Ras-type substitution into amino acids 85-89 (TAQST85-89NNTKS) of this chimeric construct causes alteration of its predominant subcellular localization site from the perinuclear region to the plasma membrane. These results indicate that a previously uncharacterized domain spanning amino acids 85-89 of Rap1A plays a pivotal role in its perinuclear localization. Moreover, this domain acts dominantly over COOH-terminal lipid modification of Ha-Ras, which has been considered to be essential and sufficient for the plasma membrane localization. The mammalian Ras family of small GTPases consists of more than 20 members including Ras, Rap, R-Ras, Ral, Rin, and Rheb (1,2). As molecular switches, these GTPases cycle between two conformational states depending on whether GDP or GTP is bound. The GTP-bound form represents the active conformation, which interacts with and stimulates downstream target proteins. In the case of Ras as a representative of the small GTPases, two regions, designated switch 1 (amino acids 32-40) and switch 2 (amino acids 60 -76), undergo conformational change upon GDP/GTP exchange. These two regions have been implicated in interaction of Ras with an array of its regulatory and target molecules (3,4). In particular, the switch 1 region binds to the RBD 1 of the serine/threonine kinase Raf-1 or the Ras/Rap1-associating domains of various other effectors in a GTP-dependent manner and therefore is also called the effector region. In addition to the binding through the effector region, a second GTP-independent interaction between Ras and its effector is required for full activation of the effectors (5). Ras acts as a molecular switch of a wide variety of signaling pathways that direct cell cycle progression, survival and differentiation depending on cell types (6 -8). Being localized in the plasma membrane as well as in the endoplasmic reticulum and the Golgi apparatus, Ras is activated by GEFs, including Sos1, Sos2, Ras-GRF1, Ras-GRF2, and Ras-GRP-1 to 4, in response to signals triggered by various membrane-spanning receptors. Activated Ras, in turn, interacts with and stimulates a variety of target proteins. 
Signaling pathways downstream of Ras ultimately modulate specific gene expression in the nucleus. Once its GTP-hydrolyzing activity is impaired by mutation, Ras becomes oncogenic as a constitutive GTP-bound form. Such mutations, in fact, have been identified in various human cancers as well as in carcinogen-induced mouse tumors, implying that genetic alteration at the ras locus is a critical step in carcinogenesis. Rap1, also called Krev-1 and smg p21, is a close relative of Ras, sharing the identical effector region with Ras (2,9). In fact, Rap1 associates with a subset of Ras effectors including Raf-1, B-Raf, phosphoinositide 3-kinase, Ral GEFs, and PLC⑀. Still, the interaction with Rap1 does not cause, for instance, Raf-1 activation and even antagonizes Ras signaling, thereby suppressing Ki-Ras-induced transformation of NIH3T3 cells. These activities are proposed to be ascribable to the tight binding of Rap1 to the cysteine-rich domain, the second Ras/ Rap1-binding site, of Raf-1 (5). Contrary to the downstream targets, Rap1 does not share its GEFs, including C3G, Epac/ cAMP-GEF, CalDAGGEF1, RA (PDZ)-GEF-1, and RA (PDZ)-GEF-2, with Ras. Although Rap1 exhibits the Ras-antagonizing activity under certain conditions, Rap1 is localized predominantly in the perinuclear compartments including the Golgi apparatus and late endosomes in a marked contrast to Ras, a large population of which exists in the plasma membrane (10 -13). Thus, it is feasible that Rap1 exerts its own functions in addition to inhibition of the Ras pathways. Indeed, Rap1 has been reported to mediate activation of integrins and subsequent cell adhesion following extracellular stimulations such as T-cell receptor or CD31 ligation and lipopolysaccharide treatment (9). However, subcellular location of Rap1 pertinent to these functions remains obscure. Both Ras and Rap1 undergo a series of posttranslational modifications in the COOH-terminal hypervariable region, which are thought to be crucial for determination of their subcellular localization (2). Initially, a lipid tail (a farnesyl moiety for Ras and a geranylgeranyl moiety for Rap1) is attached to a cysteine residue at the fourth position from the COOH terminus. This is an obligatory event to elicit subsequent processes including proteolytic removal of the COOHterminal three amino acids and methylation of the COOH terminus. In Ha-Ras, Ki-Ras4A, and N-Ras, an additional membrane-targeting signal involving one or two palmitoylated cysteine residues within the COOH-terminal portion cooperates with the farnesyl moiety. In Ki-Ras4B and Rap1, a polybasic motif consisting of multiple lysine residues near the COOH terminus enhances the membrane attachment. Although the farnesyl moiety is required for membrane attachment of Ha-Ras, it alone does not suffice for targeting this protein to the plasma membrane. In fact, a Ha-Ras mutant that lacks the palmitoylation sites is localized and activated in the endoplasmic reticulum and the Golgi apparatus (14,15). Although the association of Ras and Rap1 with the membrane is primarily because of the COOH-terminal lipid modifications, the mechanisms whereby Ras and Rap1 are localized in specific subcellular membrane compartments remain to be clarified. Here, as a step toward understanding this issue, we have attempted to identify a specific region of Ras and Rap1 responsible for determination of their differential subcellular localizations by employing a series of chimeric constructs between them. 
EXPERIMENTAL PROCEDURE Plasmids-cDNAs for wild-type human Ha-Ras (GenBank TM accession number AF493916) and human Rap1A (GenBank TM accession number NM_002884) and their mutants were subcloned between HindIII and BamHI sites of pFLAG-CMV-2 (Sigma) for expression as EGFP fusions in COS-7 cells and into a BamHI site of pEF-BOS (16) for expression with an NH 2 -terminal triple HA tag in COS-7 cells. cDNAs for wild-type human Ha-Ras and its mutants were subcloned into a BamHI site of pGEX-6P-1 (Amersham Biosciences) for expression as GST fusions in Escherichia coli. The cDNA for EGFP-tagged Raf-1 RBD (amino acids 50 -131) was subcloned between HindIII and BamHI sites of pFLAG-CMV-2. The expression plasmid for an MBP fusion Raf-1 RBD was previously described (17). Site-directed mutagenesis was carried out by using the QuikChange site-directed mutagenesis kit (Stratagene) and appropriate oligonucleotides. For the construction of cDNAs for Ha-Ras/Rap1A chimeras, the following restriction enzyme cleavage sites were generated by site-directed mutagenesis; MluI sites were generated by the introduction of a "CGG" to "CGC" mutation at the 102nd codons of Rap1A and Ha-Ras, SalI sites were generated by the introduction of an "ATTGAT" to "GTCGAC" mutation in the region of 46th and 47th codons of Ha-Ras and the introduction of a "GAT" to "GAC" mutation at the 47th codon of Rap1A (1-101)-Ha-Ras, and a BssHII site was generated by the introduction of a "GCCCGA" to "GCGCGC" mutation in the region of 134th and 135th codons of Rap1A (1-101)-Ha-Ras. The cDNA for Rap1A- with BssHII sites at both ends was amplified by the polymerase chain reaction. All the recombinant cDNAs were confirmed by sequencing. Cell Culture and Confocal Laser Microscopy-COS-7 cells were cultured in Dulbecco's modified Eagle's medium supplemented with 10% (v/v) fetal bovine serum and transfected with expression plasmids by using the Superfect transfection reagent (Qiagen) according to the manufacturer's instructions. After culture for 48 h, cells were fixed in PBS supplemented with formaldehyde (3.7% (v/v)) for 30 min. For immunofluorescent staining, fixed cells were washed three times with PBS and permeabilized with methanol for 1 min. After washing three times with PBS, cells were incubated in PBS supplemented with horse serum (10% (v/v)) for 1 h and then in PBS supplemented with horse serum (10% (v/v)) and a primary antibody (an antibody against the HA tag (12CA5, Roche Applied Science), the mannose 6-phosphate receptor (Calbiochem), or lysosome-associated membrane protein-1 (Santa Cruz Biotechnology)) for 2 h. Subsequently, cells were washed three times with PBS, and incubated in PBS supplemented with horse serum (10% (v/v)) and an anti-mouse IgG antibody conjugated with AlexaFluor488 or AlexaFluor546 (Molecular Probes) for 1 h, followed by washing three times with PBS. Staining of the Golgi apparatus and the endoplasmic reticulum with BODIPY TR ceramide (Molecular Probes) and Alexa-Fluor594-conjugated concanavalin A (Molecular Probes), respectively, was performed according to the manufacturer's instructions as described previously (18). Fluorescent labels were visualized by a confocal laser scanning microscope (LSM510 META, Carl Zeiss). The protein farnesyltransferase inhibitor FTI-277 and the protein geranylgeranyltransferase inhibitor GGTI-286 were purchased from Calbiochem. Pull-down Assay for Ras⅐GTP-Raf-1 RBD was expressed as an MBP fusion in an E. 
coli strain BL21 in the presence of isopropyl-β-D-thiogalactopyranoside (0.5 mM) at 30°C for 3 h. Cells were harvested, resuspended in buffer A, and disrupted by sonication (15 s × 5 times). FIG. 1. Schematic representation of Ha-Ras/Rap1A chimeras and Rap1A deletion mutants used in this study. Blue and red bars represent Ha-Ras and Rap1A, respectively. Numbers above and below the bars depict amino acid residues of Ha-Ras and Rap1A, respectively. Preparation of Soluble and Particulate Subcellular Fractions-COS-7 cells transfected with expression plasmids were suspended in PBS, sonicated, and then centrifuged at 100,000 × g for 1 h. The supernatant and the precipitate were used as soluble and particulate fractions. Subcellular Localization of Ha-Ras/Rap1A Chimeric Constructs-To determine the region that is important for the perinuclear localization of Rap1A, a series of chimeric constructs between Ha-Ras and Rap1A as well as various Rap1A deletion mutants were expressed as EGFP fusions in COS-7 cells, and their subcellular localization was assessed by confocal laser microscopy (Fig. 1). Wild-type Ha-Ras and Rap1A were localized in the cell surface membrane and the perinuclear region, respectively (Figs. 2, A and B, and 3, E and F). The perinuclear location of Rap1A matched well with that of the Golgi apparatus, which was stained with BODIPY TR ceramide (18). A chimera composed of the NH2-terminal half of Rap1A and the COOH-terminal half of Ha-Ras (termed Rap1A (1-101)-Ha-Ras) (Fig. 1), which is anticipated to undergo Ha-Ras-type posttranslational modifications at its COOH terminus, was predominantly localized in the perinuclear region, suggesting that the NH2-terminal portion of Rap1A contains a signal that directs the perinuclear localization of the protein (Fig. 2C). To further delineate the region responsible for the perinuclear localization, the subcellular localization of a Ha-Ras-derived protein, in which a central region (amino acid residues 46-101) was replaced by the corresponding Rap1A sequence (termed Ha-Ras-Rap1A-Ha-Ras) (Fig. 1), was examined. This construct indeed localized in the perinuclear region, which was stained with BODIPY TR ceramide, supportive of a role of the sequence between amino acid residues 46 and 101 of Rap1A for the perinuclear localization (Figs. 2, D-F, and 3G). We next constructed a series of NH2-terminal deletion mutants of Rap1A as shown in Fig. 1, and their subcellular localization was examined. Rap1A-ΔN30 and Rap1A-ΔN60 were mainly localized in the perinuclear region, whereas Rap1A-ΔN90 was localized in the plasma membrane as well as in the cytoplasm (Fig. 2, G-I). Therefore, posttranslational modifications at the COOH terminus of Rap1A are not sufficient for its perinuclear localization, and a region spanning amino acid residues 60-90 may also be required. A Small Region Consisting of Amino Acid Residues 85-89 of Rap1A Is Sufficient for Golgi Localization-Toward identifying residues of Rap1A that determine the perinuclear localization, various Ha-Ras mutants that contain Ha-Ras to Rap1A-type substitutions of several consecutive amino acids within the region of residues 46-101 were generated, and their localization was examined (Fig. 4). Among them, a substitution of five amino acids (NNTKS85-89TAQST) altered the subcellular localization of Ha-Ras from the plasma membrane to the perinuclear region (Fig. 5, A-D).
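The substitution nomenclature used here (for example, "NNTKS85-89TAQST") lists the original residues, the 1-based position range, and the replacement residues. The helper below is a hypothetical illustration of that convention, not code from the study; the placeholder sequence is made up.

```python
# Parse and apply a substitution name like "NNTKS85-89TAQST" (illustrative only).
import re

def apply_substitution(sequence: str, name: str) -> str:
    old, start, end, new = re.fullmatch(r"([A-Z]+)(\d+)-(\d+)([A-Z]+)", name).groups()
    start, end = int(start), int(end)
    if sequence[start - 1:end] != old:
        raise ValueError(f"positions {start}-{end} are {sequence[start - 1:end]}, not {old}")
    return sequence[:start - 1] + new + sequence[end:]

# Example with a made-up 100-residue fragment: 'NNTKS' at 85-89 becomes the Rap1A-type 'TAQST'.
fragment = "X" * 84 + "NNTKS" + "X" * 11
mutant = apply_substitution(fragment, "NNTKS85-89TAQST")
assert mutant[84:89] == "TAQST"
```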
In marked contrast, amino acid substitutions in adjacent regions (FAI82-84YSI and FEDI90-93FNDL) left the plasma membrane localization of Ha-Ras virtually unaffected (Fig. 5, E-L). All other substitutions tested also did not show any effects on the subcellular localization (data not shown). We attempted to further specify residues responsible for the perinuclear localization employing two Ha-Ras mutants with double or triple amino acid changes (NN85-86TA and TKS87-89QST). Interestingly, both of these mutants were localized in the perinuclear region like Ha-Ras(NNTKS85-89TAQST) (Fig. 5, M-T). However, all single amino acid substitutions between residues 85-89 failed to convert the subcellular localization of Ha-Ras to the perinuclear region (data not shown). (Legend fragment for Fig. 5: panels A, E, I, M, and Q; the Golgi apparatus was stained with BODIPY TR ceramide in B, F, J, N, and R; quantitative data as in Fig. 3 are shown in D, H, L, P, and T; scale bar, 10 µm.) GTP binding of Ha-Ras mutants in cells was also examined by pull-down assays using GTPase-deficient versions of Ha-Ras (Fig. 8). Introduction of the G12V mutation in wild-type Ha-Ras and its perinuclear region-localized mutants did not change the subcellular localization (Fig. 8, A-E). All of the three perinuclear region-localized mutants bound GTP within the cell similarly to Ha-Ras(G12V) (Fig. 8F). To further substantiate that these mutants indeed exist in the perinuclear region as a GTP-bound form, an in situ detection assay for GTP-bound Ras was carried out (Fig. 9). EGFP-RBD, which specifically binds to the GTP-bound form of Ras, was detected in the plasma membrane when coexpressed with Ha-Ras(G12V) (Fig. 9, B-D). In marked contrast, EGFP-RBD was colocalized in the perinuclear region with GTPase-deficient Ha-Ras mutants harboring substitutions in residues 85-89 (Fig. 9, E-M). Amino Acid Residues 85-89 Are Required for the Perinuclear Localization-To clarify whether the region of residues 85-89 is required for the perinuclear localization, we first tested the subcellular localization of a Rap1A mutant with a substitution of residues 85-89 for the corresponding Ha-Ras sequence (termed Rap1A(TAQST85-89NNTKS)). Indeed, this Rap1A mutant was not localized in the perinuclear region, supportive of a role of the region of residues 85-89 in the perinuclear localization (data not shown). However, this mutant was distributed uniformly throughout the cytoplasm but not in the plasma membrane, which is presumably ascribed to the lack of COOH-terminal palmitoylation as reported (14). Accordingly, we next examined a chimeric construct consisting of the NH2-terminal portion (residues 1-134) of Rap1A and the COOH-terminal portion (residues 134-189) of Ha-Ras (termed Rap1A-(1-134)-Ha-Ras) and its derivative containing a substitution of residues 85-89 for the Ha-Ras sequence (termed Rap1A-(1-134)-Ha-Ras(TAQST85-89NNTKS)) (Fig. 10A). Like Rap1A-(1-101)-Ha-Ras, Rap1A-(1-134)-Ha-Ras predominantly existed in the perinuclear region (Fig. 10, B and C). Introduction of a Ha-Ras-type substitution into amino acid residues 85-89 (TAQST85-89NNTKS) caused the plasma membrane localization (Fig. 10, D and E). Therefore, the region of residues 85-89 within the structural context of Rap1A or Ha-Ras is not only sufficient but also required for the localization in the perinuclear region. Farnesylation of Ras is thought to occur within the cytoplasm because the responsible enzyme farnesyltransferase is soluble.
Prior to plasma membrane expression, farnesylated Ras is transiently localized in the endoplasmic reticulum, where it becomes a substrate for an endoprotease that cleaves the COOH-terminal three amino acids, leaving the prenylcysteine as the new COOH terminus. Subsequently, Ras is methylated and then transported to the plasma membrane. Trafficking of Ras from the endoplasmic reticulum to the plasma membrane requires either palmitoylation or a polybasic motif. Following palmitoylation, Ha-Ras and N-Ras are suggested to transit brefeldin A-sensitive membrane structures, such as the Golgi apparatus, which are known to be components of the conventional exocytic pathway. Ki-Ras4B, on the other hand, is likely to be transported along an alternative trafficking pathway to the plasma membrane by virtue of its polybasic motif. In contrast to Ras, the mechanism underlying the determination of the subcellular localization of Rap1 remains largely unknown. Rap1 is found predominantly in the perinuclear membrane structures and endocytic vesicles, although a small portion of Rap1 resides in the plasma membrane (10-13, 23). Geranylgeranyltransferase attaches the prenyl moiety to Rap1 in the cytoplasm, allowing Rap1 to translocate to membrane compartments, such as the Golgi apparatus. Rap1 does not further translocate to the plasma membrane but instead stays in the perinuclear region even though Rap1 has a cluster of basic amino acids like Ki-Ras4B. Thus, a Rap1-specific mechanism that determines the perinuclear localization of Rap1 may exist. In this study, we identified a small segment consisting of five amino acid residues (TAQST) that causes re-localization of Ha-Ras, which otherwise is mainly localized in the plasma membrane, to the perinuclear region in COS-7 cells. This sequence spanning threonine 85 to threonine 89 of Rap1A is not only sufficient but also required for the perinuclear localization of Rap1A. Considering that this region lies between the fourth β sheet and the third α helix and is exposed on the surface of the molecule, as revealed by the analysis of the tertiary structure of Rap1A (24), a still unidentified receptor protein that specifically recognizes this sequence may direct Rap1A to the perinuclear region during intracellular trafficking. Replacement of two (NN85-86TA) or three (TKS87-89QST) consecutive amino acid residues in this region also caused the perinuclear localization of Ha-Ras, suggesting that the interaction with the putative receptor protein may require only two or three residues given that these amino acids reside within the structural context of Rap1A or Ha-Ras. The KDEL carboxyl-terminal sequence is known as a retrieval signal that directs the localization of many soluble proteins in the endoplasmic reticulum (25). However, the TAQST sequence in Rap1A may not act as a signal for the perinuclear localization of diverse proteins because this sequence is found only in Rap1 GTPases (Rap1A and Rap1B). Ha-Ras and N-Ras have recently been reported to be localized in the endoplasmic reticulum and the Golgi apparatus as well as the plasma membrane, being activated following extracellular stimulation (15). Notably, Ras activation upon T-cell receptor engagement is restricted to the Golgi apparatus, highlighting a significant role of Ras in subcellular compartments other than the plasma membrane (26).
The diacylglycerol-responsive GEF RasGRP1 is implicated in the activation of Golgi-localized Ras, which in turn activates downstream molecules, such as ERK, JNK, and Akt, similarly to plasma membrane-localized Ras (26,27). Rap1A, when overexpressed, is known to antagonize Ras-induced transformation, which is believed to be ascribed to tight binding of Rap1A to Ras targets such as Raf-1 without activating them significantly (5). However, the inhibitory effect of Rap1A may not be accounted for solely by competitive binding to target proteins because the vast majority of Rap1A, unlike Ras, resides in the perinuclear region. Instead, Rap1A-specific signals from the perinuclear region may modulate Ras-dependent signaling pathways. Ha-Ras and Rap1A mutants whose subcellular localization was converted may become a useful tool to test this possibility. Furthermore, Ha-Ras and Rap1A stimulate common targets or downstream signaling pathways in different subcellular compartments with different time courses. For instance, PLCε interacts with both Ha-Ras and Rap1A, being activated in the plasma membrane and the perinuclear region, respectively (16). In hematopoietic BaF3 cells, Ha-Ras is responsible for rapid and transient activation of PLCε upon platelet-derived growth factor treatment, whereas Rap1A induces delayed and prolonged PLCε activation (28). Likewise, Ha-Ras mediates nerve growth factor signaling in pheochromocytoma PC12 cells through a transient activation of ERK, whereas Rap1 continuously activates ERK, leading to neuronal differentiation. Although a role for the GEF domain of PLCε in Rap1A-dependent sustained activation has been suggested (18,28), the mechanisms underlying the different time courses are not fully understood. The possibility of subcellular compartment-specific or GTPase-specific down-regulation is currently being assessed by the use of the Ha-Ras and Rap1A mutants described in this study.
4,678.8
2004-05-21T00:00:00.000
[ "Biology" ]
Parity-Violating Neutron Spin Rotation in a Liquid Parahydrogen Target. Our understanding of hadronic parity violation is far from clear despite nearly 50 years of theoretical and experimental progress. Measurements of low-energy parity-violating observables in nuclear systems are the only accessible means to study the flavor-conserving weak hadronic interaction. To reduce the uncertainties from nuclear effects, experiments in the few and two-body system are essential. The parity-violating rotation of the transverse neutron polarization vector about the momentum axis as the neutrons traverse a target material has been measured in heavy nuclei and few nucleon systems using reactor cold neutron sources. We describe here an experiment to measure the neutron spin-rotation in a parahydrogen target (n-p system) using pulsed cold-neutrons from the fundamental symmetries beam line at the Spallation Neutron Source under construction at the Oak Ridge National Laboratory. Introduction The weak hadronic interaction in nuclear systems is not well characterized. The leading theoretical approach pictures the parity-violating nucleon-nucleon (NN) interaction as occurring through meson exchange with one strong interaction vertex and one vertex described by phenomenological weak couplings. Desplanques, Donoghue, and Holstein (DDH) estimated the nucleon-nucleon-meson weak couplings using quark model and symmetry arguments and determined a reasonable range of values including their "best guess" values for each of the seven coupling amplitudes [1]. The experimental pursuit of identifying the strength of these couplings has not produced a consistent set of data. A comprehensive discussion of the parity-violating weak hadronic interaction, including the motivation for pursuing this field of study, its role in a broader context of strong and weak quark-quark interactions, and the current status of the theoretical and experimental approaches can be found in these proceedings. See the paper by W. M. Snow in this Special Issue. An experimental and theoretical program of study was developed by the community to characterize the weak hadronic interaction in low-energy nucleon-nucleon systems. The experimental plan involves a series of measurements in two and few nucleon systems (to minimize nuclear structure effects) to determine the weak-interaction amplitudes either in the vector meson exchange model or expressed as a function of transition amplitudes between S-wave and P-wave states. The neutron spin rotation measurements in helium (five-body system) and hydrogen (two-body system) are included in this set. In these proceedings is a description of the spin rotation in liquid helium experiment, including a discussion of the preliminary measurement, the apparatus modifications being carried out and the plans for taking data. See the paper by C. D. Bass in this Special Issue. The transverse spin polarization vector is a linear combination of plus and minus helicity states, σ±. The parity-violating (PV) neutron spin rotation about the momentum axis as long-wavelength (λ > 1 Å) neutrons traverse the target medium arises from the spin-dependent part
(σ± ⋅ p term) of the weak interaction forward scattering amplitude. As a result of the weak interaction, each helicity state propagates with a different phase so that the transverse polarization vector rotates. The magnitude of the rotation is directly proportional to the parity non-conserving (PNC) part of the forward scattering amplitude, f_PNC(0), and is given by (1) where ϕ_PNC is the parity non-conserving neutron spin rotation, and ρ and l are the density and length of the target material, respectively [2]. The forward scattering amplitude is proportional to the real part of the weak neutron-nucleus matrix element. Therefore, the size of the PV spin rotation is a direct measure of the strength of the weak nuclear interaction. An axial magnetic field will also rotate a transverse spin vector about the momentum axis. For example, a total spin rotation of 10 rad is expected over 1 m in the earth's magnetic field of 50 µT. The total spin rotation from the weak interaction and the background magnetic field is presented in Fig. 1. To measure small rotations on the order of 10⁻⁷ rad, the background magnetic fields must be as small and homogeneous as possible. It is important to note that the flight path or time a neutron spends in a residual axial field will determine the magnitude of the total background rotation. Using the vector meson exchange model of the weak nucleon-nucleon interaction developed by DDH and a model-dependent representation of the strong interaction, Avishai and Grange calculated the neutron spin rotation through a liquid parahydrogen target and obtained [3] (2) where f_π is the isovector pion exchange amplitude, h_ρ⁰ and h_ρ² are, respectively, the isoscalar and isotensor ρ-meson exchange amplitudes and h_ω is the isoscalar ω-meson exchange amplitude. Given that f_π, h_ρ⁰, and h_ρ² are of the same order of magnitude and h_ω⁰ is smaller than the others, Eq. (2) clearly demonstrates the strong dependence of ϕ_PNC(n-p) on the isovector pion exchange amplitude. A measurement of the n-p spin rotation observable would determine f_π in a complementary way compared to another parity-odd observable in the n-p system, the gamma asymmetry in polarized neutron capture on protons. (For a discussion of this measurement, please see the article by S. Page et al. in this Special Issue.) The coupling f_π is of particular interest because of its long range. The dominant contribution is from the neutral current part of the weak interaction. At present, the set of low-energy weak interaction data indicates different ranges for the isovector amplitude. Using the DDH theoretical "best values," the spin rotation observable in the (n-p) system is -9 × 10⁻⁷ rad/m. Using a reduced value for f_π, consistent with the parity doublet measurement in ¹⁸F, the prediction becomes -6 × 10⁻⁷ rad/m.
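The quoted figure of roughly 10 rad of magnetic rotation over 1 m in the earth's field can be checked with a back-of-envelope Larmor estimate, as sketched below. The neutron speed used is an assumed representative cold-neutron value, not a number from the text.

```python
# Back-of-envelope Larmor rotation of a cold neutron in the ambient earth field.
NEUTRON_GYROMAGNETIC_RATIO = 1.832e8   # rad s^-1 T^-1
B_FIELD = 50e-6                        # T, ambient earth field quoted above
PATH_LENGTH = 1.0                      # m
NEUTRON_SPEED = 900.0                  # m/s, assumed typical cold-neutron speed

time_in_field = PATH_LENGTH / NEUTRON_SPEED
larmor_rotation = NEUTRON_GYROMAGNETIC_RATIO * B_FIELD * time_in_field
print(f"Larmor rotation ~ {larmor_rotation:.1f} rad")   # on the order of 10 rad
```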
For a 20 cm target, a sensitivity of less than 6 × 10⁻⁸ radians is needed to separate these two predictions. The spin rotation apparatus is based on a crossed neutron polarizer and analyzer designed by Heckel et al. [4] for solid targets and is represented schematically in Fig. 2 for the liquid helium target measurement. For an ideal polarimeter, a measurement of the count rate asymmetry for analyzer alignment plus and minus is proportional to the sine of the integrated rotation angle, φ. The polarimeter is designed to minimize and separate parity-conserving (magnetic field induced) rotations from the parity-violating signal. Magnetic fields are reduced with the use of high-permeability material shielding and trim coils. The front and rear coils serve to preserve the neutron spin in the transition between high field regions and the very low field (< 10 nT) target region. A more sophisticated means of performing the traditional "target in" and "target out" comparisons to remove background signals is achieved with two target positions separated by a coil that rotates the spin ideally π radians about the vertical (initial polarization) axis. As a result, rotations in front of the π-coil are negative relative to the rotations behind the π-coil. By alternately filling and emptying the target chamber in front of and then behind the π-coil, the parity-violating signal follows the target material and is modulated in sign between negative for target in front and positive for target in back while the background is ideally constant. A comparison of these two configurations allows for a subtraction of the constant parity-conserving background rotations to the extent that the background does not change with target position. In a realistic apparatus, target-dependent magnetic fields exist; for example, the diamagnetic properties of the target material alter the field in the chamber region. Target-dependent neutron scattering effects introduce changes in beam divergence or path length that behave like target-dependent magnetic field rotations. These two types of parity-conserving rotations will mimic the target-dependent parity-violating signal, cannot be subtracted away, and must be reduced below the desired sensitivity of 10⁻⁷ rad/m. Compared to the liquid helium and solid targets, the scattering rate in the hydrogen target is quite high, requiring more effort to control the scattering systematics. There are two states of the hydrogen molecule, H2, depending on the relative alignment of the proton spins: the spin 1 ortho-hydrogen and the spin 0 para-hydrogen configurations. The neutron scattering cross section is about a factor of 20 higher for the ortho configuration compared to the para configuration. In addition, at relatively high energies (E > 15 meV) spin-flip scattering is allowed on the ortho-hydrogen molecule, which adversely affects the rotation angle. To minimize scattering systematic effects, the para-hydrogen molecule is used. Note that the neutron mean free path in para-hydrogen is about 20 cm compared with 1 m or greater in liquid helium. We plan to mount the hydrogen spin rotation apparatus at the spallation neutron source in Oak Ridge, TN. The pulsed nature of the beam provides important advantages for characterizing systematics. With the pulsed time structure, neutrons can be counted as a function of time that corresponds to an average neutron velocity so that the faster neutrons arrive first and the slower, longer wavelength neutrons arrive later.
The time structure can be used to filter out the higher energy neutrons and the low-energy, long wavelength neutrons that tend to have a greater beam divergence. As can be seen from Eq. (1), the PV spin rotation is independent of neutron velocity (energy) while the scattering and magnetic field rotations are energy dependent. The background false (parity conserving) signals can be characterized by measuring the count-rate asymmetry, or spin rotation angle, as a function of energy, while a measurement of a constant PV signal as a function of time puts a convincing upper limit on these false contributions. Fig. 2. Schematic of the spin-rotation apparatus used in the liquid helium target measurement. The neutrons enter the apparatus (on the z axis) with an initial vertical (x axis) polarization. The front coil is designed to preserve the spin as the neutrons pass from a high-field region to the field-free target region inside the cryostat. Two alternately filled liquid parahydrogen targets are separated by the central π-coil with its axis oriented along the initial vertical (x axis) neutron polarization. The neutrons leave the target region through the end coil that preserves the horizontal (y axis) component of the spin and, with the final supermirror polarizer, functions as an analyzer filter. The neutron count-rate is monitored by a segmented ³He ionization chamber as a function of position and coarse energy bins. Significant changes to the spin rotation polarimeter from the helium apparatus include: 1) a new cryostat appropriate for a liquid parahydrogen target at about 20 K, including important safety features, 2) an upgraded position sensitive detector, and 3) a change in π-coil running conditions from static to dynamic. With the danger and risks associated with known ratios of hydrogen to oxygen, extensive safety features and procedures must be instituted for a hydrogen target system and cryostat. We will put to use the experience gained with the design and certification of the 30 cm long, 20 l liquid para-hydrogen target for the np-dγ parity violation experiment. (Please see the paper by S. Page in this Special Issue.) The target will be operated at 17 K where the equilibrium ortho to para hydrogen ratio is 0.03%. If needed, a catalyst of paramagnetic material can be used to convert the ortho-hydrogen to the para configuration. The target length is constrained by the scattering systematics and set to one mean free path of 20 cm. The collection plates of the segmented ³He ionization chamber used in the helium measurement are divided into four quadrants providing useful neutron count-rate information in the upper, lower, right and left regions of the beam [5]. The PV signal must be independent of geometry as well as energy, and position sensitivity provides a handle on the scattering systematics. We propose to develop a neutron detector that can collect the entire beam with a 1 cm position resolution. The design combines the ³He conversion of neutrons to charged particles (protons and tritons), as in an ionization chamber, with the fast timing and position-sensitive charge collection techniques of the MicroMegas detector. The MicroMegas technique [6] uses a wire mesh grid to locally amplify the electron charge that is subsequently detected on wire strips or pixels. Several detector modules will be placed in series along the beam axis, providing 100% collection efficiency.
The π-coil smears the polarization of the neutrons in a polychromatic neutron beam, producing a count-rate asymmetry that corresponds to a rotation of about one-half the true rotation angle. A neutron precesses about the π-coil magnetic field by an amount proportional to the time spent in the field. Therefore, the precession angle is inversely proportional to the neutron velocity, or linearly dependent upon the wavelength. For a continuous polychromatic beam, the coil current is optimized to rotate the majority of the neutrons by an angle as close to 180° as possible, while some neutrons are necessarily over-rotated and others are under-rotated. The measured angle is an energy-averaged value of the spread in transverse components of the polarization vector. A measurement of the polarization product, P, gives the effective reduction in the rotation angle, that is, sin(φ_measured) = P sin(φ_ideal). Polarization products for a monochromatic beam approach 90 %, compared to about 50 % for a polychromatic beam [7]. At the SNS, where the neutron beam velocity is a linear function of time, we can ramp the coil current to provide a 180° rotation for each time slice of the beam. This effectively sets the coil for a nearly monochromatic beam of width much less than 0.5 Å in neutron wavelength. Unlike in the continuous-beam case, the current can be reversed during the off periods of the pulsed beam, incurring no additional dead time. We have discussed the spin rotation apparatus and the experimental approach to achieving a measurement sensitivity of 10⁻⁷ rad/m, and the systematic effects that impact this measurement. We have included a comparison to the spin rotation apparatus for a liquid helium target, currently being upgraded for a second measurement, and the importance of the time-structured beam at the Spallation Neutron Source for the success of the n-p measurement. With an expected flux of 8 × 10⁹ n/cm²/sec at the SNS, we can achieve a measurement with a statistical sensitivity of 2 × 10⁻⁸ rad in a 20 cm target with about 40 days of data, assuming scattering losses in the apparatus and spending 1/3 of the time acquiring systematic check data.
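A back-of-the-envelope counting-statistics estimate shows how an angular sensitivity at the 10⁻⁸ level follows from the quoted flux. The sketch below takes the statistical uncertainty on the rotation angle to be roughly 1/(P√N) for N detected neutrons and polarization product P; the beam area and overall transmission are assumptions chosen only to illustrate the scaling (and to land near the quoted 2 × 10⁻⁸ rad), not actual experimental parameters.

import math

# Quoted in the text
FLUX = 8e9                 # n / cm^2 / s at the SNS
RUN_DAYS = 40.0
PV_FRACTION = 2.0 / 3.0    # 1/3 of the time is spent on systematic checks

# Illustrative assumptions (not from the proposal)
BEAM_AREA_CM2 = 10.0       # assumed beam cross section
TRANSMISSION = 0.02        # assumed combined polarizer, target, and detector losses
POLARIZATION_PRODUCT = 0.9 # per time slice, with the ramped pi-coil

def rotation_sensitivity() -> float:
    """Statistical uncertainty on the rotation angle, delta_phi ~ 1 / (P sqrt(N))."""
    seconds = RUN_DAYS * 86400.0 * PV_FRACTION
    detected = FLUX * BEAM_AREA_CM2 * TRANSMISSION * seconds
    return 1.0 / (POLARIZATION_PRODUCT * math.sqrt(detected))

print(f"estimated statistical sensitivity: {rotation_sensitivity():.1e} rad")

With these assumed losses the estimate comes out near 2 × 10⁻⁸ rad for 40 days, illustrating that the statistical goal is set by the product of flux, beam area, and overall transmission rather than by any single factor.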
3,544.8
2005-05-01T00:00:00.000
[ "Physics" ]
Associations between heart rate asymmetry expression and asymmetric detrended fluctuation analysis results Abstract The relation between the recently established asymmetry in Asymmetric Detrended Fluctuation Analysis (ADFA) and Heart Rate Asymmetry is studied. It is found that the ADFA asymmetric exponents are related both to the overall variability and to its asymmetric components at all studied time scales. We find that the asymmetry in scaling exponents, i.e., α⁺ < α⁻, is associated with both variance-based and runs-based types of asymmetry. This observation suggests that the physiological mechanisms of both types are similar, even though their origins and mathematical methods are very different. Graphical abstract The graphical abstract demonstrates a strong, nonlinear association between the expression of Heart Rate Asymmetry measured using relative descriptors and the Asymmetric Detrended Fluctuation Analysis results. It is clear that there is a strong relation between the two theoretically disparate approaches to signal analysis. The technique used to demonstrate the association is a loess fit. Introduction Asymmetry in the RR intervals time series has recently attracted interest in heart rate variability research [1][2][3][4][5]. The approaches range from the most generic treatments of time irreversibility, which in fact span an enormous number of physiological phenomena and heart rate variability (HRV) measures [5][6][7], through specialized predictive methods like Phase Rectified Signal Averaging (PRSA) [8,9] or Heart Rate Turbulence (HRT) [10][11][12], down to phenomena like Heart Rate Asymmetry (HRA) along with its mathematical tooling. A recent addition to the literature is Asymmetric Detrended Fluctuation Analysis, which was developed mainly for studying asymmetric correlations in economic time series [13,14] but which, when applied to the RR intervals time series, revealed an asymmetric physiological effect [15,16]. HRA includes asymmetric effects defined and established in the variance of the RR intervals time series, in its structure and in its complexity. In all of these areas asymmetry is prevalent, consistent and unidirectional [1,2,[17][18][19][20]. Thus, we hypothesize that HRA and the above-mentioned ADFA results should be connected, and our aim in this paper is to relate the α⁺ < α⁻ asymmetric effect established in ADFA to HRA. Asymmetric detrended fluctuation analysis Detrended fluctuation analysis is one of the most often used methods for analyzing the time series of RR intervals. The main information it provides is the scaling properties of the noise left over after detrending the time series. The details of the method may be found in [21][22][23]; they are reviewed very briefly below so as to establish the notation. Let us define the RR intervals time series, with RR intervals understood as the distances between successive R-waves in an electrocardiogram [24,25], as RR = {RR_i, i = 1, 2, …, N}. Let us also define a derived, summed and mean-subtracted time series, y(k) = Σ_{i=1}^{k} (RR_i − ⟨RR⟩), k = 1, …, N, where ⟨RR⟩ stands for the mean of the whole time series. The series y(k) is then divided into so-called boxes of length n, within each of which a trend is found and subtracted.
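A minimal sketch of the profile construction just described, in Python (the language of the HRAexplorer and adfa packages cited later); the synthetic RR series is an assumption used only to make the example runnable.

import numpy as np

def integrated_profile(rr: np.ndarray) -> np.ndarray:
    """Summed, mean-subtracted profile y(k) = sum_{i<=k} (RR_i - <RR>)."""
    return np.cumsum(rr - rr.mean())

# Synthetic RR intervals (in ms) standing in for an annotated recording
rng = np.random.default_rng(0)
rr = 800 + 50 * rng.standard_normal(1000)

y = integrated_profile(rr)
print(y[:5])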
Since our aim is to identify rising and falling trends, we only use first-order (linear) polynomials, y_n(k) = a_n k + b_n, where y_n(k) is the line fitted to the specific box with slope a_n and intercept b_n, and the residual ε(k) = y(k) − y_n(k) is the error term we are interested in studying. A mean-square root function is defined as F(n) = { (1/N) Σ_{k=1}^{N} [y(k) − y_n(k)]² }^{1/2} and calculated over all the scales which are available in the studied time series - in practice n changes from 4 to N/4, where N is the length of the time series. The values of log₁₀(F(n)) are plotted against log₁₀(n), and if the resulting plot follows a straight line, the existence of a power law is concluded in the scaling of the mean-square root function, i.e., F(n) ∝ n^α, where α is the scaling exponent. The values of this exponent are interpreted as signifying the presence of negative long-range correlations (0 < α < 0.5), white noise (α = 0.5), positive long-range correlations (0.5 < α < 1), 1/f noise (α = 1), long-range correlations not following the power law (1 < α < 1.5), or the consistency of the detrended time series with a random walk (α = 1.5) [21][22][23]. The fact that local trends in the boxes can be linear makes it possible to define two different mean-square root functions, depending on the sign of the tangent of the fitted line. Defining M as the overall number of segments and M_n⁺ as the number of segments of length n in which the trend is increasing (i.e., a_n > 0), we define F⁺(n) = { (1/M_n⁺) Σ_{j: a_n(j) > 0} (1/n) Σ_{k ∈ box j} [y(k) − y_n(k)]² }^{1/2}, and correspondingly for the decreasing trends, F⁻(n) = { (1/M_n⁻) Σ_{j: a_n(j) < 0} (1/n) Σ_{k ∈ box j} [y(k) − y_n(k)]² }^{1/2}, where the outer summation (over j) goes over all boxes and (±) selects boxes with increasing or decreasing trends exclusively (compare [14,16]). If the above functions are presented on a doubly logarithmic scale against n, two scaling exponents, α⁺ and α⁻, may be defined through F^±(n) ∝ n^{α±}, provided that the dependence is linear. In [13] the local version of ADFA was developed, in which the scaling exponents α± are calculated in a window moving along the analyzed time series. In [15] we applied the local version of ADFA to the time series of RR intervals (moving window length 100), and the result was that there is a highly statistically significant asymmetric effect with α⁺ < α⁻. In [16] we systematically analyzed this effect and found that it was present in both global and local versions of ADFA with the use of windows of length 100 through 1000 in 30-min ECG recordings, but it was much weaker in the global version. Variance-based heart rate variability descriptors The variance of the RR intervals time series defined above is SDNN² = Σ_{i=1}^{N} (RR_i − ⟨RR⟩)² / (N − 1), where N is the length of the time series [1,24,26]. The variance SDNN² can be partitioned into short-term and long-term variability as SDNN² = (SD1² + SD2²)/2 [1,24,26,27]. The reasons for calling SD1² and SD2² the short-term and long-term variability and the details of their calculation are explained in detail in [1,24,26]. Variance-based heart rate asymmetry descriptors The source of variance-based HRA descriptors is the decomposition of variance-based HRV descriptors, such as SDNN², SD1² and SD2², into parts which only depend on decelerations or accelerations. In [1] it was shown that SD1² can be partitioned into two parts dependent separately on decelerations and accelerations, SD1² = SD1d² + SD1a². In [17] it was demonstrated that long-term variability may be partitioned analogously, SD2² = SD2d² + SD2a², and that the full variance may be partitioned as SDNN² = SDNNd² + SDNNa². For numerical and algorithmic details of the above see [1,17]. The respective parts of variance can be normalized in order to minimize interpersonal variability [1,17].
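Putting the pieces of the ADFA recipe above together (profile, box-wise linear fits, sign-split fluctuation functions, log-log slopes), a minimal Python sketch is given below. It assumes non-overlapping boxes and simple least-squares line fits, which matches the description but may differ in detail from the cited adfa implementation.

import numpy as np

def adfa_exponents(rr, scales=range(4, 65, 4)):
    """Return (alpha_plus, alpha_minus) estimated from the F+(n), F-(n) power laws."""
    y = np.cumsum(rr - np.mean(rr))                 # integrated, mean-subtracted profile
    f_plus, f_minus, used = [], [], []
    for n in scales:
        up, down = [], []
        for start in range(0, len(y) - n + 1, n):   # non-overlapping boxes of length n
            seg = y[start:start + n]
            k = np.arange(n)
            a, b = np.polyfit(k, seg, 1)            # local linear trend
            resid2 = np.mean((seg - (a * k + b)) ** 2)
            # zero-slope boxes are counted with the decreasing set in this sketch
            (up if a > 0 else down).append(resid2)
        if up and down:
            f_plus.append(np.sqrt(np.mean(up)))
            f_minus.append(np.sqrt(np.mean(down)))
            used.append(n)
    log_n = np.log10(used)
    alpha_plus = np.polyfit(log_n, np.log10(f_plus), 1)[0]
    alpha_minus = np.polyfit(log_n, np.log10(f_minus), 1)[0]
    return alpha_plus, alpha_minus

# Example with a synthetic, correlated toy recording
rng = np.random.default_rng(1)
rr = 800 + np.cumsum(rng.standard_normal(3000))
print(adfa_exponents(rr))

For the local (jumping-window) variant used later in the paper, the same function would simply be applied to consecutive disjoint segments of 100 beats and the resulting exponents summarized by their medians.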
For short-term variance, the normalized contributions are C1d = SD1d²/SD1² and C1a = SD1a²/SD1², and there is C1d + C1a = 1. For long-term variance, C2d = SD2d²/SD2² and C2a = SD2a²/SD2², where again C2d + C2a = 1. And finally, for total variance, Cd = SDNNd²/SDNN² and Ca = SDNNa²/SDNN², where Cd + Ca = 1. The above descriptors, when applied to the RR intervals time series, reveal a strong asymmetry of this object. First of all, the contribution of heart rate decelerations to short-term variability is greater than that of accelerations, i.e., SD1d² > SD1a². In long-term variability this is reversed, with SD2d² < SD2a², and this is also true in the case of total variability, with SDNNd² < SDNNa². If the normalized contributions defined above are taken into account, the asymmetry in short-term, long-term and total variability may be respectively expressed as C1d > 0.5, C2d < 0.5 and Cd < 0.5. The runs method A run is an uninterrupted sequence of RR intervals that constantly shorten (heart rate accelerates), constantly lengthen (heart rate decelerates) or constantly do not change, and which is preceded and followed by a run of a different type. Figure 1 shows the partitioning of a segment of an RR intervals time series into disjoint accelerating and decelerating runs. It can be easily noted that this partitioning is unambiguous [2]. A detailed definition of runs may be found in [2]. Number of runs The most natural descriptor is the number of runs of a specific type. So, by DRi we will denote the number of deceleration runs of length i, and ARi will mean the number of acceleration runs of length i. Entropic descriptors of runs of decelerations and accelerations In [2] the Shannon entropy [28] associated with the distribution of decelerating and accelerating runs was partitioned into parts depending only on decelerations or only on accelerations (dropping the device-dependent neutral runs), yielding the runs entropies HDR and HAR; the mathematical details of these formulas may be found in [2]. In [2,18] it was found that the runs of accelerations are longer in terms of the number of beats than the runs of decelerations. In [2] it was found that HDR < HAR. Both these effects are highly statistically significant. The runs method turned out to have a significant predictive value for long-term survival in patients after myocardial infarction [18] and in patients who underwent a clinically indicated exercise test [19]. Runs have also been found to be very useful in studying and predicting sleep apnea [20,29], as well as in selecting patients who will respond to the proper treatment of obstructive sleep apnea. The method has also been applied to the diagnosis of late-onset sepsis in neonates [30]. Methods We used 388 stationary 30-min ECG recordings from healthy young subjects, age range 20-40 years, 233 women. The study was performed at rest in the supine position, and the subjects were kept quiet in a neutral environment. The subjects were allowed to breathe spontaneously during the whole study. The 30-min recording was taken after a preceding 15-min period used for cardiovascular adaptation. The ECG curves were sampled at 1600 Hz with the use of an analog-to-digital converter (Porti 5, TMSI, Holland). The libRASCH/RASCHlab software (v. 0.6.1, www.librasch.org, Raphael Schneider, Germany) [31] was used for post-processing and automatic classification into beats of sinus, ventricular and supraventricular origin as well as artifacts. The automatic classification was reviewed by a trained technician who corrected any wrong classifications of the beats.
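Before turning to the software and statistics, the runs bookkeeping described above can be sketched as follows. The entropy normalization used in this sketch (run counts divided by the total number of non-neutral runs) is an assumption; the exact definitions of HDR and HAR should be taken from [2]. The synthetic RR series is a placeholder.

import numpy as np
from collections import Counter

def run_counts(rr):
    """Count deceleration (DR_i) and acceleration (AR_i) runs of each length i.

    A run is an uninterrupted sequence of intervals that constantly lengthen
    (decelerations) or constantly shorten (accelerations); neutral (no-change)
    beats break runs and are dropped here, as in the text.
    """
    diffs = np.sign(np.diff(rr))          # +1 deceleration, -1 acceleration, 0 neutral
    dr, ar = Counter(), Counter()
    current, length = 0, 0
    for s in list(diffs) + [0]:           # trailing 0 flushes the last run
        if s == current and s != 0:
            length += 1
        else:
            if current == 1:
                dr[length] += 1
            elif current == -1:
                ar[length] += 1
            current, length = s, (1 if s != 0 else 0)
    return dr, ar

def runs_entropy(counts, total_runs):
    """Shannon-entropy-style summary of a run-length distribution (assumed normalization)."""
    p = np.array([c / total_runs for c in counts.values()])
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(2)
rr = 800 + 30 * rng.standard_normal(2000)
dr, ar = run_counts(rr)
total = sum(dr.values()) + sum(ar.values())
print("DR_i:", dict(sorted(dr.items())))
print("AR_i:", dict(sorted(ar.items())))
print("H_DR =", runs_entropy(dr, total), " H_AR =", runs_entropy(ar, total))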
To obtain the asymmetric descriptors from the annotated RR intervals time series we used in-house, free GPL3 software written in Python, HRAexplorer, which can be reviewed and downloaded at https://github.com/jaropis/HRAExplorer. An interactive online version of this software in the R programming language may be found at https://hraexplorer.com/. The RR intervals time series were carefully filtered for each technique - the specifics of dealing with ectopic beats are described in [25] and [2]. The above-mentioned software uses all these filtering techniques. Since both theory and the Shapiro-Wilk test reject the normality of all HRA descriptors, the non-parametric paired Wilcoxon test was used to establish asymmetric relations. The binomial test was used to establish whether the proportion of recordings exhibiting HRA departs from 0.5, the value expected for symmetric (e.g., shuffled) data. In the present paper we used the local version of ADFA with a jumping window of length 100 beats, since in a previous study [16] it was the shortest window in which asymmetry was clear and consistent with longer windows. By jumping windows we mean disjoint windows of length 100 fully covering the analyzed recording. This can be contrasted with a sliding window, which means a window sliding along the recording, moving by either one beat or by a time unit - for details see [16,32]. To calculate α⁺ and α⁻, in-house, free GPL3 software written in Python with the use of Cython (https://github.com/kosmo76/adfa) was used. Any ectopic beats were linearly interpolated according to [33]. The time series of α⁺ and α⁻ obtained for each recording were summarized by medians for the purpose of comparisons. Since the mean values of α⁺ and α⁻ did not have a normal distribution (Shapiro-Wilk test), the non-parametric paired Wilcoxon test was used for their comparison. The association analyses between ADFA asymmetric exponents and the descriptors of HRA were carried out with the use of the non-parametric Spearman correlation test. To check whether HRA entails α⁺ < α⁻, we built univariate logistic regression models for 2 × 2 contingency tables tabulating the existence of HRA and of α⁺ < α⁻. All statistical calculations were carried out with the use of the R statistical language and its libraries. Presence of asymmetry in ADFA The median values of the scaling exponents were α⁺ = 0.951 (IQR 0.825-1.052) and α⁻ = 0.993 (IQR 0.885-1.099); the p-value is < 0.0001. These results should be analyzed in view of the possible values that α± may take (see the interpretation of the scaling exponent given above). If the median values above differed by a larger amount, this would mean a total flip in the properties of the noise after detrending. The relation α⁺ < α⁻ was present in 278 cases, which is 74% of the entire group; the binomial test for this value to be consistent with 50% gives a p-value < 0.0001. Fig. 1. An example of the partitioning of an RR intervals time series into monotonic runs. The runs are marked as DRn (deceleration run of length n) and ARn (acceleration run of length n); the Nn symbols stand for neutral runs, which may break the deceleration/acceleration runs. Full black circles denote the beginnings of deceleration runs and full gray circles mark the beginnings of acceleration runs; these can be thought of as reference points for the respective runs.
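A sketch of the group-level tests just described (paired Wilcoxon on the exponents, binomial test on the proportion of recordings with α⁺ < α⁻), using SciPy; the per-recording exponent medians are simulated placeholders, since the actual data are not available.

import numpy as np
from scipy import stats

# Simulated per-recording medians of the jumping-window exponents (placeholder data)
rng = np.random.default_rng(3)
alpha_plus = rng.normal(0.95, 0.15, 388)
alpha_minus = alpha_plus + rng.normal(0.04, 0.05, 388)   # built-in alpha+ < alpha- tendency

# Paired, non-parametric comparison of the two exponents
w_stat, w_p = stats.wilcoxon(alpha_plus, alpha_minus)
print(f"Wilcoxon p = {w_p:.2g}")

# Proportion of recordings with alpha+ < alpha- versus the 50% expected under symmetry
n_asym = int(np.sum(alpha_plus < alpha_minus))
binom_p = stats.binomtest(n_asym, n=len(alpha_plus), p=0.5).pvalue
print(f"{n_asym} of {len(alpha_plus)} recordings show alpha+ < alpha-, binomial p = {binom_p:.2g}")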
Presence of asymmetry in the variance-based descriptors The presence of asymmetry in the variance-based descriptors was established by comparing the deceleration- and acceleration-based parameters, as well as by checking whether globally the proportion of subjects with a specific type of asymmetry was different from the theoretically expected value of 0.5 if there is no asymmetry [1,17]. Short-term asymmetry The parameters of the distributions of SD1d, SD1a and C1d may be found in Table 1. The number of cases in which short-term asymmetry can be established is 291, which is 75% of the entire group; the binomial test for this value to be consistent with 50% gives a p-value < 0.0001. Long-term asymmetry The parameters of the distributions of SD2d, SD2a and C2d may be found in Table 2. Long-term asymmetry is present in 269 cases, which is 69.3% of the entire group; the binomial test for this value to be consistent with 50% gives a p-value < 0.0001. Total asymmetry The parameters of the distributions of SDNNd, SDNNa and Cd may be found in Table 3. Presence of asymmetry in the runs-based descriptors The summary of the acceleration and deceleration runs distributions may be found in Fig. 2 and in Table 4. It can be concluded that the asymmetric effect, i.e., runs of accelerations being longer than those of decelerations, can be observed for runs of lengths 4 through 12, with lengths 1-3 and over 12 being either not statistically significant or too few to carry out the statistical tests. Thus, the runs-based descriptors demonstrate the presence of HRA in the studied group. Table 5 presents the distributions of the runs entropies HDR and HAR and shows the comparison between the two types. The direct comparison between the entropic runs summaries yields p < 0.0001, so there is a highly statistically significant asymmetric effect. Associations between variance-based descriptors and local asymmetric scaling exponents The associations between variance-based HRA descriptors and ADFA exponents were studied with the use of the Spearman correlation as well as loess-type regression to visualize the type of association. The results are presented in Table 6. From the table it can be seen that the asymmetric scaling exponents are significantly associated with both HRV and HRA magnitudes. The associations between the asymmetric contributions of HRA to short-term, long-term and total variability and the asymmetric exponents are significant for decelerations and not significant for accelerations. Therefore it is necessary to study this in more detail; this is undertaken in the next section. Associations between the presence of asymmetry in variance-based descriptors and the presence of asymmetry in local asymmetric scaling exponents To answer the question of whether the asymmetry observable in variance-based descriptors is actually related to the asymmetry observable in the scaling exponents, rather than there being just an association between α± and the variance of the time series, we first build a 2 × 2 contingency table between the two categorical variables.
Then, for each type of asymmetry, we build a logistic regression model to study the strength and significance of the association, i.e., the predictor in each case is the presence of asymmetry expressed according to the standard definition, e.g., using the inequalities given in the preceding sections. The presence of asymmetry is coded as 1. The relations between the various types of HRA and ADFA are summarized in Table 7. The results of the logistic model with the above asymmetry as a predictor of α⁺ < α⁻ are presented in Table 8. It can be concluded that variance-based heart rate asymmetry and asymmetry in local ADFA are related. Associations between runs-based HRA descriptors and ADFA Tables 9 and 10 demonstrate the correlations between deceleration/acceleration runs and the ADFA asymmetric exponents. The associations are quite strong for runs of length greater than 3. This may mean that there is both an association between heart rate asymmetry and ADFA and an association between the presence of runs of various lengths and ADFA. Thus it is necessary to study the co-occurrence of both types of asymmetry. Associations between the presence of asymmetry in runs-based descriptors and the presence of asymmetry in local asymmetric scaling exponents We use the same method as above for runs of length greater than 3, since for these runs the heart rate asymmetry is clearly observable. The results of the logistic regression models for all the analyzed run lengths can be found in Table 12. From both Tables 11 and 12 it can be concluded that the presence of asymmetry in monotonic runs predicts the presence of asymmetry in ADFA. The association between asymmetry in the entropic HRA descriptors and ADFA The associations between α± and the quantities summarizing all the above runs, i.e., HDR and HAR, are presented in Table 13. As we did for the other descriptors, let us build the 2 × 2 contingency table (Table 14) and the logistic model for the co-occurrence of the two types of asymmetry. The coefficient of the model is 1.856 with p < 0.0001, so again the association is strong. Table 7: Associations between occurrence of HRA and the asymmetry in ADFA. Clearly, in all the types of descriptors, the presence of asymmetry in HRA entails the presence of asymmetry in ADFA. Discussion In the present paper we have shown that HRA, in both its versions presented here, i.e., variance-based and runs-based, is associated with the asymmetry observable in ADFA, i.e., α⁺ < α⁻. The study was carried out in a group of healthy young people from whom stationary 30-min-long recordings were obtained. In the studied group we observed clear and highly statistically significant heart rate asymmetry at all time scales, i.e., short-term (C1d > 0.5), long-term (C2d < 0.5) and total (Cd < 0.5). Runs-based asymmetry is also clearly visible and significant for all observable runs of length > 3, as well as in the summary runs-based descriptors, i.e., HDR < HAR. This group also exhibits a clear and highly statistically significant ADFA asymmetry, defined as α⁺ < α⁻. We have found that both scaling exponents are highly statistically significantly associated with the heart rate variability of the analyzed recordings, i.e., they were associated with total (SDNN²), long-term (SD2²) and short-term (SD1²) variability. For this reason they were also associated with the unnormalized HRA Poincaré plot descriptors like SD1d/a², SD2d/a² and SDNNd/a².
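As a concrete illustration of the co-occurrence analysis reported in the tables above, the sketch below tabulates two binary indicators in a 2 × 2 contingency table and fits a univariate logistic regression in which the presence of HRA (coded 1) predicts α⁺ < α⁻; statsmodels is assumed, and the indicators are simulated in place of the real recordings.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 388

# Simulated per-recording indicators: 1 = asymmetry present, 0 = absent
hra_present = rng.binomial(1, 0.75, n)                  # e.g., C1d > 0.5
p_adfa = np.where(hra_present == 1, 0.85, 0.45)         # built-in co-occurrence
adfa_present = rng.binomial(1, p_adfa)                  # alpha+ < alpha-

# 2x2 contingency table of the two categorical variables
print(pd.crosstab(hra_present, adfa_present,
                  rownames=["HRA"], colnames=["alpha+ < alpha-"]))

# Univariate logistic regression: HRA presence predicting ADFA asymmetry
X = sm.add_constant(hra_present.astype(float))
model = sm.Logit(adfa_present, X).fit(disp=False)
print(model.summary2().tables[1])     # coefficient, SE, z, p-value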
The associations between the scaling exponents and the magnitude of the relative contributions to asymmetry (C1d, C2d, Cd) were significant for α⁺, but not significant for α⁻. At this point it is probably impossible to interpret this result. The associations of α± with the magnitudes of the runs-based HRA descriptors are also highly statistically significant. However, the strongest associations between ADFA and HRA in the RR intervals time series are observable in the agreement between the two types. From the numbers presented in Section 3.4 on the predictive power of HRA for ADFA asymmetry, it is almost certain that both approaches describe the same physiological phenomenon, even though they use totally different methods, assumptions and even different language (HRA is based on statistical considerations, and ADFA on studying long-range correlations in dynamic systems). An important limitation of this study should be raised at this point. Since the recordings used in the present study are quite short, the α± cannot be considered measures of fractality, and they will be strongly related to the variance of the recording. (Table 11: Associations between occurrence of HRA in runs of length 4-10 and the asymmetry in ADFA.) Therefore, the results involving variance-based HRA descriptors are weakened by this observation. However, the relation with the normalized descriptors is much more robust to this effect, since the dependence on variance is largely eliminated by normalizing by variance-based parameters. Additionally, the results relating the incidence of asymmetry in ADFA with HRA should be fully resistant to this effect. The explanation of HRA has not yet been established. In the crudest approximation of the Autonomic Nervous System (ANS) it can be said that the parasympathetic branch of the ANS is responsible for decelerations and the sympathetic branch is responsible for accelerations. In this picture it might be tempting to ascribe the deceleration-based descriptors (SD1d, SD2d, SDNNd, C1d, C2d, Cd, DRx as well as α⁺) to the parasympathetic branch and the rest to the sympathetic branch. Yet, in reality, both branches can lead to both accelerations and decelerations through activation or deactivation. It would be more prudent to state that HRA reflects the interaction between both branches as well as the patterns of accelerations and decelerations. The exponents α± in the approach assumed in ADFA are influenced by both accelerations and decelerations and reflect the differences in the noise left over after removing linear trends. Thus, their asymmetric behavior most probably reflects the different interactions, both long and short term, present during decelerating and accelerating trends. Possibly the best way in which the two types of asymmetry can be related is through the monotonic runs. It is fair to hypothesize that, since falling and rising trends consist of individual runs, rising trends in the RR intervals time series should be dominated by decelerating runs and falling trends by accelerating runs. Since, as we have shown in this and other papers, the dynamics of these runs differ, the long-range correlations reflected by the ADFA scaling exponents should also differ. The above hypothesis is strictly mathematical. As far as the physiological relation is concerned, we can safely hypothesize that both measures are linked through a common, underlying physiological process.
One such process which immediately comes to mind is respiratory sinus arrhythmia (RSA), which is a strong driver of heart rate variability. However, as demonstrated in [34], the link between HRA and RSA is by no means clear. The authors find no link between period variability asymmetry and respiratory sinus arrhythmia in healthy and chronic heart failure individuals. This means that further, physiologically oriented studies are necessary. Since this is a strictly observational study, we refrain from a physiological explanation for HRA and its relation with ADFA, as this would call for a full paper; we would, however, like to note that a few interesting and convincing physiological mechanisms have been identified in [35]. Another point that should be raised is that we use a specific approach to HRA, namely the variance-based and runs-based descriptors. There are other approaches, like using predictability markers [36], which could also shed light on the analyzed problem. To sum up, ADFA asymmetry is associated with HRA and one is an almost perfect predictor of the other. Conclusion ADFA, which is a new method for studying asymmetry in time series, has been applied to the time series of RR intervals. It has been found to be associated with other approaches to asymmetry in this time series. Since ADFA has unique properties, like the ability to study long-range correlations or the scaling behavior of the time series, it is a promising direction in HRA analysis, even though it does have some interpretational difficulties. Funding This study was partially funded from the sources of the project 'Predicting adverse clinical outcomes in patients with implanted defibrillating devices' (TEAM/2009-4/4), which is supported by the Foundation for Polish Science - TEAM program co-financed by the European Union within the European Regional Development Fund. Declarations Ethics approval This study was accepted by the Bioethics Committee of the Poznan University of Medical Sciences. Competing interests The authors declare no competing interests. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. J. Piskorski is a theoretical physicist and medical biologist interested mainly in heart rate asymmetry, heart rate variability, signal processing and statistical methods in biomedical research. He is employed at the University of Zielona Gora and often consults for biomedical and pharmaceutical companies. D. Mieszkowski is a theoretical physicist, mainly interested in applications of theoretical physics to biological systems and in industrial processes. His main area of interest is Asymmetric Detrended Fluctuation Analysis. Dr. Mieszkowski works in industry for the automation company OEM Automatic Ltd.
He spends his free time fishing, travelling and having fun with his family. S. Żurek works at the Institute of Physics of the University of Zielona Góra, Poland. His main areas of interest include the application of methods of computational physics and high-performance computing to complex problems in physics and medicine. Lately, the most explored topics concern the analysis of biomedical time series, specifically heart rate variability and the complexity analysis of RR intervals. In his previous work, he contributed to the field of polymer translocation through pores and worked in EPR spectroscopy, developing artificial intelligence methods that helped in the labeling of experimental data. He also has some experience in molecular dynamics simulations. B. Biczuk is a PhD candidate at the Institute of Physics of the University of Zielona Gora, Poland. His main areas of interest include the application of methods of digital signal processing to biomedical signals and the study of quantum optical systems. S. Jurga is a medical doctor with a specialization in neurology and head of the neurology department of the University of Zielona Góra hospital. His main scientific interest lies within cerebral vascular diseases, neurosonology and sleep medicine. He is interested in applying mathematical concepts to studying these areas and is eager to apply them to his clinical data. T. Krauze works at the Department of Cardiology-Intensive Therapy and Internal Diseases, Poznan University of Medical Sciences, as a research/technical worker. His scientific research focuses mainly on autonomic modulation of the cardiovascular system, noninvasive measurement of hemodynamic parameters and arterial stiffness. A. Wykrętowicz is the head of the Department of Cardiology, Poznan University of Medical Sciences. His interests are diverse, including Heart Rate Variability and Heart Rate Asymmetry. The Department headed by Prof. Wykrętowicz is responsible for the largest number of HRA/HRV papers in Poland and possibly also in the world. P. Guzik is a cardiologist working at the Department of Cardiology-Intensive Therapy, Poznan University of Medical Sciences, Poznan, Poland. His scientific interests are related to cardiology, including cardiac arrhythmias (e.g., atrial fibrillation), electrocardiology, and cardiovascular time series analysis. Together with Jaroslaw Piskorski, he has described the Heart Rate Asymmetry phenomenon, related to the unequal input of heart rate accelerations and decelerations to Heart Rate Variability. In clinical practice, he consults patients with cardiovascular diseases, e.g., acute myocardial infarction, heart failure, arrhythmias, or cerebral stroke. Several interdisciplinary papers related to bioengineering, medical physics, and signal analysis are among his publications.
6,713
2022-08-24T00:00:00.000
[ "Physics" ]
The social component of the projection behavior of clausal complement contents Some accounts of presupposition projection predict that content's consistency with the Common Ground influences whether it projects (e.g., Heim 1983; Gazdar 1979a,b). I conducted an experiment to test whether Common Ground information about the speaker's social identity influences projection of clausal complement contents (CCs). Participants rated the projection of CCs conveying stereotypical liberal or conservative political positions when the speaker was either Democrat- or Republican-affiliated. As expected, CCs were more projective when they conveyed political positions consistent with the speaker's political affiliation: liberal CCs were more projective with Democrat compared to Republican speakers, and conservative CCs were more projective with Republican compared to Democrat speakers. In addition, CCs associated with factive predicates (e.g., know) were more projective than those associated with non-factive predicates (e.g., believe). These findings suggest that social meaning influences projective meaning and that social meaning is constrained by semantic meaning, in line with previous research on the relation between other levels of linguistic structure/perception and social information. On such lexically-based accounts of presupposition, presuppositions project unless they conflict with other information in the Common Ground (Heim 1983; Gazdar 1979a,b; van der Sandt 1992). Thus, in the absence of information that conflicts with the CC of know in (1), these accounts predict that the CC projects. Other approaches derive (at least some cases of) presupposition projection from general pragmatic principles (e.g., Stalnaker 1974; Boër & Lycan 1976; Karttunen & Peters 1979; Simons 2001, 2005; Abrusán 2011). A prominent and well-developed approach within this camp has proposed that information-structural properties of utterances predict projection behavior (e.g., Abbott 2000; Simons 2001, 2007; Simons et al. 2010, 2017; Beaver et al. 2017; Tonhauser et al. 2018). Proponents of this approach hypothesize that whether content projects depends on whether it is at-issue in the discourse. Content that addresses the Question Under Discussion (QUD; Roberts 1996/2012) is at-issue and predicted to be non-projective. Content that does not address the QUD is not-at-issue and predicted to project. For example, when (1-a) is uttered in a context in which the QUD is Did Obama improve the American economy?, the CC of know is predicted not to project since it addresses the QUD and is hence at-issue. When the same sentence is uttered in a context in which the QUD is What cognitive relation does Ben have to the proposition that Obama improved the American economy?, the CC of know is predicted to project since it does not address the QUD and is hence not-at-issue. These approaches highlight two different properties of content that have been implicated in projection: consistency with the Common Ground and at-issueness. As Beaver et al. (2017) point out, these properties are related: content that is entailed by, and hence consistent with, the Common Ground is not-at-issue and predicted to project. However, content that is not entailed by the Common Ground may be either at-issue or not-at-issue, depending on the QUD.
It is possible that the projection of content not entailed by the Common Ground is influenced by both Common Ground consistency and the QUD/atissueness, or that these properties influence one another. Here, I focus on the former of these two properties in isolation, leaving for future research the task of determining their relation to each other and to the projection of content that is not entailed by the Common Ground. In investigating the property of Common Ground consistency, the current study builds on research from Tonhauser & Degen (2019), who manipulated the CC's consistency with the Common Ground by providing information relevant to the truth of the CC. Tonhauser & Degen (2019) found that the CC is more projective when it is highly probable by virtue of the Common Ground information compared to when it is less probable, suggesting that inconsistency with the Common Ground interferes with projection. The primary goal of the current study is to investigate whether consistency with a particular type of Common Ground information -social information about the speaker -influences CC projection. A second goal is to investigate whether these (in)consistencies are dependent on the factivity of the clauseembedding predicate. These two aims are articulated in the research questions in (2): (2) Research Questions: a. For clausal complement contents (CCs), is projection sensitive to consistency between the contents and Common Ground information about the speaker's social identity? b. Is the effect of consistency dependent on the factivity of the clause-embedding predicate? I explore these research questions on the basis of an experiment in which participants were presented with target sentences like (1-a). Consistency between the CC and Common Ground information about the speaker's social identity was operationalized by manipulating two variables, as demonstrated in (3). The lexical content instantiating the clausal complement was manipulated such that it conveyed either a liberal political position (Obama improved the American economy) or a conservative political position (Obama damaged the American economy). The speaker's political affiliation was manipulated by presenting each sentence as the utterance of either a Republican or Democrat-affiliated speaker. If consistency with Common Ground social information influences projection, CCs that are consistent with liberal ideologies will project more with Democrat speakers and CCs that are consistent with conservative ideologies will project more with Republican speakers, compared to when the speaker is affiliated with the opposing party. In order to address the second research question, the factivity of the clause-embedding predicates was also manipulated. Each lexical content was presented as the complement of one factive predicate like know and one non-factive predicate like believe. If projection is a consequence of lexically-encoded presupposition, as assumed by lexically-based accounts of projection, then factivity should influence the projection of CCs. Specifically, the CCs of factive predicates should be more projective than those of non-factive predicates, and may also constrain the effect of Common Ground consistency on projection. 2. Accounting for projection behavior: existing accounts. Lexically-based analyses of projection start with the assumption that projection is a consequence of lexically-encoded presupposition (e.g., Karttunen 1974;Heim 1983Heim , 1992Gazdar 1979a,b;van der Sandt 1992;Schlenker 2008). 
Presuppositions are then subject to a requirement with respect to the Common Ground: the presupposition must be entailed by (Heim 1983) or satisfied in the Common Ground (van der Sandt 1992). In the simplest case, the presupposition is already part of the Common Ground, i.e, the speaker and addressee already have mutual knowledge of the presupposed information. If the presupposition is not already in the Common Ground but the speaker's utterance is nevertheless felicitous, the addressee simply adds the presupposed information to the Common Ground. This process is called global accommodation (e.g., Heim 1983). In both cases, the speaker is interpreted as having expressed commitment to the truth of the presupposition by taking it to be in the Common Ground, i.e., the presupposition is felt to project to the Common Ground of the interlocutors. Among the presupposition "triggers" that have been claimed to lexically-encode presuppositions are a subset of clause-embedding predicates called factives (Kiparsky & Kiparsky 1970). Kiparsky & Kiparsky (1970) distinguished factive predicates like regret, know and realize from non-factive predicates on the basis of both syntactic and semantic properties, but semantically they were differentiated on the basis of the presupposition of the CC. Whereas factive predicates are assumed to lexically encode their CCs as presuppositions, non-factive predicates are not. Projection then follows from presupposition: the CCs of factives are predicted to project, in contrast to those of non-factives. As many observations in the literature attest, the CCs of factive predicates do not always project. Just one year after Kiparsky & Kiparsky (1970) introduced the factive/non-factive di-vision, Karttunen (1971) pointed out that some factives (e.g., regret) seem to allow only projective interpretations, whereas others like discover admit both projective and non-projective interpretations. To capture this difference, he coined the term semi-factive for predicates like discover that can exhibit variable projection behavior, distinguishing them from predicates like regret that do not. Factive presuppositions can also fail to project when they conflict with other contextual information. Simons (2001), following comments from Geurts (1999) and Chierchia & McConnell-Ginet (1990), pointed out that factive presuppositions do not project in "explicit ignorance contexts". She offers the example in (4), in which the CC of discover is embedded under the entailment-cancelling epistemic modal adverb perhaps. (4) [Context: At a restaurant, the interlocutors observe a couple arguing at another table.] Speaker: Perhaps she just discovered that he's having an affair. Since neither interlocutor is acquainted with the couple, the addressee believes that the speaker has no knowledge of the man's behaviors. Hence, the addressee does not take the speaker to be committed to the CC of discover. Instead, the addressee interprets the speaker as suggesting that it is possible that he had an affair, and that if he has had one her discovery of it may be the reason for the argument. The mechanism that is standardly invoked to account for the observation that presuppositions do not always project is local accommodation (Heim 1983). Central to this process are the notions of global and local contexts (Karttunen 1974;Stalnaker 1974;Heim 1983). An entire sentence is evaluated with respect to the information in the global context, i.e., the Common Ground. 
But constituent sentences may instead be evaluated with respect to local contexts, prior to updating the global context. In (4), the CC of discover could be locally accommodated by being evaluated within the scope of the modal adverb. The speaker is then interpreted as believing that the affair is a possibility, without being certain that it happened. Others have proposed that presuppositions are simply cancelled when they conflict with Common Ground information (e.g., Gazdar 1979a,b;van der Sandt 1988). Gazdar (1979a,b) outlines an account in which presuppositions project by default, but are cancelled if they conflict with the speaker's prior set of commitments. Presuppositions only project if they do not conflict with prior speaker commitments contributed by implicatures, entailments, or other extra-linguistic information. When such a conflict emerges, the presupposition does not project to the Common Ground. The approaches described thus far in this section assume that projection is a consequence of presupposition. But other authors have pointed out that content need not be presupposed in order to project (e.g., Chierchia & McConnell-Ginet 1990;Potts 2005). For example, several authors have noted based on examples like (5) from Chierchia & McConnell-Ginet (1990) that appositive expressions contribute content that is not presupposed, and nevertheless projects (Chierchia & McConnell-Ginet 1990;Potts 2005): the speaker has not presupposed that Jill lost something on her flight, but is nevertheless taken to be committed to this content. (5) Jill, who lost something on the flight from Ithaca to New York, doesn't like to travel by train. Intonation also seems to influence whether content projects. Beaver (2010), for example, pointed out the CC of discover in (6) seems not to project when a constituent within the complement (e.g., plagiarized in (6)) is narrowly focused; when narrow focus is outside of the complement (e.g., when discovers is focused), the sentence receives a projective reading with respect to the CC. Later experimental work confirmed the intuition that prosody influences the projection of factive CCs (Cummins & Rohde 2015;Tonhauser 2016;Djärv & Bacovcin 2017). (6) a. If the TA discovers that your work is PLAGIARIZED, I will be forced to notify the dean. b. If the TA DISCOVERS that your work is plagiarized, I will be forced to notify the dean. In light of observations like these, recent work has attempted to provide a unified analysis of both presuppositional and non-presuppositional projective meanings in terms of information structure (e.g., Simons et al. 2010;Beaver et al. 2017;Simons et al. 2017). At the core of this analysis is Roberts' (1996Roberts' ( /2012) Question Under Discussion (QUD), i.e., the semantic question corresponding to the current topic of discourse. In (7), for example, Rachel's utterance could be intended to address either QUD1 or QUD2. (7) QUD1: What does the pizza have on it? QUD2: What is Ben's relation to the proposition the pizza has olives on it? Rachel: Ben doesn't know that the pizza has olives on it. Whereas the CC addresses QUD1, it does not address QUD2. In their Projection Principle, Beaver et al. (2017) hypothesized that this property of the CC predicts whether it projects: content that addresses the QUD, at-issue content, is hypothesized not to project; content that does not address the QUD, not-at-issue content, is hypothesized to project (see also Simons et al. 2010Simons et al. , 2017. 
Like the lexically-based accounts, the Projection Principle predicts that content that is entailed by the Common Ground projects. Beaver et al. (2017) explain that this prediction follows from the assumption that the Common Ground and the QUD ought to be compatible with one another. That is, the QUD should not be about the truth of the content that is entailed by the Common Ground; hence content entailed by the Common Ground is not-at-issue, and therefore projects. But the Projection Principle and lexically-based accounts make different predictions about the projection of content that is not entailed by the Common Ground. On the informationstructural accounts outlined above, the QUD determines whether the content is (not-)at-issue and this property predicts whether it will project. On lexically-based accounts, however, only content associated with a presupposition trigger has the potential to project, and it is predicted to do so unless it conflicts with other information in the Common Ground. The experiment reported in this paper is designed to investigate the projection of content that is not entailed by the Common Ground, and whether its ability to project is sensitive to a particular property: consistency with the Common Ground. Though the influence of this property on projection is associated with lexically-based accounts of projection, it is in principle possible that both Common Ground consistency and information-structural properties such as at-issueness influence projection of content (i.e., content not entailed by the Common Ground). This study is not intended to explore the interaction of these two properties, and leaves this task to future research. However, I do consider the possibility that content may project even when it is not associated with a canonical presupposition trigger, a possibility which is con-sistent with information-structural but not lexically-based projection accounts. In particular, I investigate the extent to which consistency with the Common Ground influences the projection of CCs associated with factive predicates -which are canonical presupposition triggers -as well as non-factive predicates, which are not assumed by lexically-based accounts to encode their complements as presuppositions. 3. Social and linguistic meaning. As discussed in section 2, consistency with the Common Ground has been implicated as property of content that influences projection. Such Common Ground information can come from a variety of sources -prior utterances, entailments or implicatures of the uttered sentence, or implicit reasoning about the extralinguistic context (e.g., Heim 1983;Gazdar 1979a,b). In this paper, I investigate an aspect of the extralinguistic context that hasn't been explored in the projection literature: the speaker's social identity. The hypothesis that projection is sensitive to social information is motivated by robust evidence from the sociolinguistic literature that linguistic structure and social information are intimately connected in the minds of speakers/hearers. For example, listeners have been shown to use information about speaker region (e.g., Niedzielski 1999;Hay & Drager 2010), age (e.g., Drager 2011) and gender (e.g., Strand 1999) in speech perception. D'Onofrio (2018) showed that the speaker's construction of a unique social personae (e.g., a valley girl or business professional) influences the perception of the speaker's vowels. Social information has also been found to influence lexical access. 
A common experimental paradigm in this literature involves manipulating a speaker characteristic like age, as well as the content of the sentence, such that the content is either congruent with the speaker characteristic (e.g., an adult talking about drinking wine) or incongruent (e.g., a child talking about drinking wine). In brain-imaging research, incongruent stimuli are associated with brain responses that are similar to the responses associated with other types of linguistic anomalies (e.g., semantically anomalous words), leading researchers to conclude that the same sorts of neural mechanisms underlie the processing of linguistic and social information (e.g., Van Berkum et al. 2008;Tesink et al. 2009). In lexical access research that uses reaction time or offline dependent measures, the processing of the word that creates the anomaly has been found to be impeded (Kim 2016;Walker & Hay 2011;Choe et al. 2019). Casasanto (2008) implemented a variation of the incongruity paradigm in which listeners were presented with pairs of spoken sentences that were temporarily ambiguous between two interpretations. On one interpretation the word of interest exhibited a feature of African American Vernacular English, a final /t/ or /d/ that had been deleted (e.g., mast pronounced as [maes]). In the other interpretation, there was no underlying final stop (e.g., mass pronounced as [maes]). Casasanto (2008) found that participants responded faster to the sentence continuation that was compatible with the deleted final stop interpretation when the speaker was black, and faster to the sentence continuation that was compatible with there being no underlying final stop when the speaker was white. Syntactic processing, too, has been shown to be sensitive to social information. In a brain imaging study of native listeners, Hanulíková et al. (2012) found that gender-agreement violations in Dutch elicited a neural response associated with encountering ungrammaticality when the speaker was native Dutch speaker, but not when the speaker was non-native. Similarly, Seifeldin et al. (2015) found that native Standard American English (SAE) listeners' neural responses to copula deletion was modulated by the dialect of the speaker: the characteristic response for ungrammaticality appeared for native SAE speakers, but not for native speakers of other English dialects. Though research exploring the interface between social and semantic-pragmatic meaning is sparse, a recent strand of scholarship has attempted to unify these two domains. Andrea Beltrama's work focuses on the social and semantic-pragmatic meaning of linguistic expressions conveying intensification and precision. With respect to intensifiers, he showed that totally has greater potential to index social meaning when it targets a pragmatically-provided scale anchored to the speaker's attitude (e.g., You should totally click on that link!) vs. a lexical scale provided by the subsequent predicate (e.g., The bus is totally full.) (Beltrama & Staum Casasanto 2017;Beltrama 2018a). In other work, Beltrama (2018b) found that speakers who use more precise expressions (e.g., Ben called at 9.03) are perceived as more intelligent, educated and articulate but also more annoying, obsessive, pedantic and uptight than speaker who use less precise expressions (e.g., Ben called at 9). He interprets these finding as evidence that semanticpragmatic properties of expressions conveying linguistic intensification and precision constrain their social meaning. 
In addition to linguistic precision and intensification, determiners have also been the subject of research at the interface of semantic-pragmatic and social meaning. Acton & Potts (2014) argued that the semantics of demonstrative determiners facilitates their ability to convey social meaning; in particular speakers can use demonstratives to convey shared perspective with their interlocutors. Acton (2019) further showed that the semantics of plural determiner phrases (e.g., The Americans) allow speakers to express social distance from the addressee, in contrast to bare plural phrases (e.g., Americans) To summarize, there is robust evidence that social information influences linguistic processing and perception at multiple levels of linguistic structure. However, within semantics and pragmatics, research has focused on whether this relationship goes in the other direction, i.e., whether linguistic (semantic-pragmatic meaning) influences social perception. Here, I bring these two lines of inquiry together by investigating whether semantic-pragmatic meaning -particularly projective meaning -is influenced by social information. Further, in light of the prior research showing that semantic meaning constrains social perception, I explore the extent to which the effect of social information on projection is mediated by lexical semantics, specifically the factivity of clause-embedding predicates. 4. Experiment. The projective contents explored in the experiment were the contents expressed by the complements of clause-embedding predicates. (In)consistency between the CC and the Common Ground was operationalized by manipulating the political orientation of the lexical content instantiating the clausal complement (conservative vs. liberal) and the political affiliation of the speaker (Republican vs. Democrat). Factivity of the clause-embedding predicate was manipulated by presenting each lexical content as the complement of a factive predicate and a non-factive predicate. Participants provided projection ratings for the same stimuli in two separate experimental blocks, one in which the speaker was Republican-affiliated and one in which the speaker was Democrat-affiliated. Projectivity was measured using the 'certainty' diagnostic (e.g., Tonhauser 2016;Stevens et al. 2017;Tonhauser et al. 2018). This diagnostic assesses speaker commitment by asking participants to indicate the extent to which the speaker is certain about the content of interest. 4.1. PARTICIPANTS. 200 participants were recruited on Amazon's Mechanical Turk platform and paid $1.25 for participating in the experiment. 1 Participants had US IP addresses and at least 97% of previous HITs approved. Data from non-native American English speakers (N=5) and participants who responded incorrectly to control items (N=46) was removed, leaving data from 149 participants. 4.2. MATERIALS. Each target sentence featured a third-person matrix subject, a clause-embedding predicate, and a clausal complement. The predicate and complement were embedded under negation, as in (8). (8) Cindy doesn't know that a. Obama damaged the American economy. conservative CC b. Obama improved the American economy. liberal CC c. club membership numbers have increased. neutral CC Seven clause-embedding predicates were used to construct the target sentences: the factive predicates know, realize and see and the non-factive predicates believe, think and feel. The clausal complements were instantiated by 42 lexical contents. 
28 of these were "political" lexical contents that conveyed a position regarding 1 of 14 political topics (e.g., Obama, marriage equality etc.). For each topic, one lexical content conveyed a stereotypical liberal positions and one conveyed a stereotypical conservative position. These lexical contents were normed in a separate experiment to ensure that they were associated with the intended political orientation. 2 The other 14 lexical contents were each associated with 1 "neutral" topic. These topics were unrelated to politics (e.g., club membership numbers). The neutral topics were included in order to confirm that the CCs in sentences uttered by Democrats were not perceived as more projective than Republicans, or vice versa. Participants might have preexisting beliefs that affiliates of one political party are more certain about their claims than the affiliates of the other party. If this were the case, observed differences in projectivity could be driven by this preexisting belief, rather than interactions between social information and the CC. If participants don't hold this preexisting belief, then the projection of neutral CCs should be equivalent regardless of whether the speaker is Democrat-or Republican-affiliated. Each lexical content was combined with 2 predicates (1 factive and 1 non-factive in most cases). For political topics, the conservative and liberal lexical contents were each combined with the same predicate. In total, there were 84 predicate/lexical content pairs, which were used to construct 84 target sentences. Target stimuli were constructed by presenting each target sentence once as the utterance of a speaker attending a meeting College Democrats, and once as the utterance of a speaker attending a meeting of College Republicans, as in (9) The 84 target sentences were distributed across 4 presentation lists. On each list, there were 14 target sentences with complements instantiated by politically-oriented lexical contents ("political target sentences") and 7 target sentences with complements instantiated by neutral lexical contents ("neutral target sentences"). Each target sentence was presented once as the utterance of a Republican-affiliated speaker and once as the utterance of a Democrat-affiliated speaker. Each list included exactly three target sentences per predicate and no more than one target sentence per topic. For half of the 14 political target sentences on each list, the complements were instantiated by conservative lexical contents; the complements of the other political target sentences were instantiated by liberal lexical contents. In order to assess whether participants were paying attention, two control lexical contents were also constructed. These lexical contents were apolitical, and expressed by main clauses that were not embedded under entailment-cancelling operators, as in (10). They were therefore expected to be interpreted as commitments of the speaker and receive high certainty ratings regardless of the type of meeting attended by the speaker. Each control lexical content appeared on each list twice, once as the utterance of a Democrat-affiliated speaker and once as the utterance of a Republican-affiliated speaker. In total, there were 46 stimuli on each list. A short demographic questionnaire was also included in which participants provided standard demographic information (age, native language. etc.) as well as information about their own political affiliation. (10) a. 
Carly, at the College Democrats Club meeting: Alan brought the cookies for dessert. b. Larry, at the College Democrats Club meeting: Olivia missed the meeting because she's sick. 4.3. PROCEDURE. Each participant was randomly assigned to 1 of the 4 presentation lists. Participants completed the experiment in two separate experimental blocks. At the beginning of each block, participants were told to imagine a meeting at a university campus. In one block, they were told to imagine a meeting of the College Republicans Club. In the other block, they were told to imagine a meeting of the College Democrats Club. In the Republican block, participants saw the 23 stimuli on their list with Republican speakers. In the Democrat block, they saw the 23 stimuli on their list with Democrat speakers. Block order and within-block trial order were randomized. On each target stimulus trial, participants read the speaker's utterance followed by a response question about the speaker's certainty with respect to the CC, as shown in Figure 1. Participants gave their responses by adjusting a slider labeled on the left with "No, not certain" (corresponding to a rating of 0) and on the right with "Yes, certain" (corresponding to a rating of 1). The higher the response, the more projective the CC was taken to be. The response question for the control stimuli was about the main clause content (Is Carly certain that Alan brought the cookies for dessert? and Is Larry certain that Olivia missed the meeting because she's sick?). After completing both blocks of the experiment, participants completed a demographic questionnaire. 4.4. RESULTS. In this section, I report the experimental results that bear on these questions. First, I confirm that the projectivity of the stimuli with neutral CCs was not sensitive to the political affiliation of the speaker. Second, I report the results for the stimuli with political CCs. These results suggest that both the factivity of the clause-embedding predicate and the speaker's political affiliation independently influenced CC projection. The responses to stimuli with neutral CCs were analyzed with a linear mixed-effects model. The full model included fixed effects for predicate factivity (factive vs. non-factive), the political affiliation of the speaker (Republican vs. Democrat), and their interaction. The maximal random effects structure for which the model converged included random intercepts for item and participant. p-values for fixed effects were obtained using log-likelihood comparisons of the full model against a model without the effect in question. The responses to stimuli with political CCs were analyzed with a linear mixed-effects model. The full model included fixed effects for predicate factivity (factive vs. non-factive), the political affiliation of the speaker (Republican vs. Democrat), and the political orientation of the CC (conservative vs. liberal), and all 2- and 3-way interactions. The maximal random effects structure for which the model converged included random intercepts for item and participant, as well as by-item and by-participant slopes for speaker political affiliation and the political orientation of the CC. p-values for fixed effects were obtained using log-likelihood comparisons of the full model against a model without the effect in question. Figure 2 visualizes mean ratings as a function of speaker political affiliation and political orientation of the CC, grouped according to the factivity of the predicate.
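As a concrete sketch of the mixed-effects analysis just described, the snippet below fits the model for the political-CC responses and computes a log-likelihood comparison for the interaction of interest. This is a minimal illustration rather than the study's analysis script: the column names and file name are hypothetical, and the random-effects structure is simplified to by-participant random intercepts (the maximal crossed by-item and by-participant structure described above is more naturally fit with lme4 in R).

```python
# Minimal sketch of the analysis described above (assumed long-format data
# with hypothetical columns: rating in [0,1], factivity, affiliation,
# orientation, participant, item).
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

df = pd.read_csv("political_trials.csv")  # hypothetical file

# Full model: factivity x affiliation x orientation, by-participant intercepts.
full = smf.mixedlm(
    "rating ~ factivity * affiliation * orientation",
    data=df, groups=df["participant"],
).fit(reml=False)  # ML fit so that log-likelihood comparisons are valid

# Reduced model without the affiliation:orientation interaction
# (and without the 3-way term that contains it).
reduced = smf.mixedlm(
    "rating ~ factivity + affiliation + orientation "
    "+ factivity:affiliation + factivity:orientation",
    data=df, groups=df["participant"],
).fit(reml=False)

lr = 2 * (full.llf - reduced.llf)
dof = len(full.fe_params) - len(reduced.fe_params)
print(f"chi2({dof}) = {lr:.2f}, p = {chi2.sf(lr, dof):.4g}")
```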
As suggested by the figure, there was a significant interaction between the political orientation of the CC and speaker's affiliation (β = 0.19, SE = 0.15, t = 13.12, χ 2 (1) = 168.14, p < 0.0001). For clausal complements instantiated by liberal lexical contents, ratings were higher when the speaker was Democrat-affiliated (mean = .66) than Republican-affiliated (mean = .56). For clausal complements instantiated by conservative lexical contents, ratings were higher when the speaker was Republican-affiliated (mean = .64) than Democrat-affiliated (mean =.55). The three-way interaction between political orientation, speaker affiliation, and predicate factivity did not reach significance (β = 0.01, SE = 0.03, t = 0.39, χ 2 (1) = 0.15, p = 0.70). Figure 3 visualizes mean ratings by predicate and factivity, suggesting that ratings were higher when the predicate was factive (mean = 0.69) compared to when it was non-factive (mean = 0.48). This qualitative observation was confirmed statistically: there was a significant main effect of predicate factivity, such that the CCs of factive predicates received higher ratings than those of non-factive predicates (β = 0.22, SE = 0.02, t = 9.99, χ 2 (1) = 40.66, p < 0.0001). 4.5. DISCUSSION. This experiment was designed to investigate the effects of two properties on projection: (1) (in)consistency between projective content and Common Ground information about the speaker's social identity, and (2) the factivity of the clause embedding predicate. As expected, CC projection was sensitive to consistency between the lexical contents instantiating the complements and Common Ground information about the speaker's social identity. Consistency between the lexical content instantiating the clausal complement and the speaker's social identity led to higher projection ratings than inconsistency: when the complement was instantiated by a liberal lexical content, projection ratings were higher when the speaker was Democrat; when the complement was instantiated by a conservative lexical content, projection ratings were higher when the speaker was Republican. This finding is compatible with accounts of projection that attribute non-projective interpretations of factive presuppositions to local accommodation (e.g., Heim 1983) or cancellation (e.g., Gazdar 1979a,b;van der Sandt 1992). This finding also demonstrates that the social identity of the speaker is an important source of information about whether projective content is consistent with the Common Ground. Listeners draw inferences about speaker beliefs based on information about the speaker's social identity and use these inferences to reason about whether content is consistent with the Common Ground; in short, social information influences projective meaning. The factivity of the clause-embedding predicate was also found to influence projection, independently of whether the lexical content instantiating the complement was consistent with the Common Ground information about the speaker's social identity. CCs were more projective when the predicate was factive compared to when it was non-factive, regardless of the speaker's political affiliation. This finding is compatible with the assumption that factive predicates lexically-encode their complements as presuppositions, whereas non-factive predicates do not. In sum, the findings of this experiment suggest that factivity and consistency with Common Ground information about the speaker's social identity independently influence projection. 
Common Ground (in)consistency can influence CC projection regardless of predicate factivity. However, such (in)consistency is not sufficient to override the effect of factivity, suggesting that the semantics of the clause-embedding predicates constrains the effects of social information. Conclusion. Consistency between projective content and the Common Ground has previously been implicated as a property that influences projection (e.g., Heim 1983; van der Sandt 1992; Gazdar 1979a,b). In this paper, I investigated whether projection is sensitive to Common Ground information about the speaker's social identity. The findings suggest that such information bears on how listeners reason about the consistency of the Common Ground with projective content: contents were more projective when they conveyed political positions consistent with the speaker's social identity than when they did not. Given that social information has been found to influence comprehension/perception in many other linguistic domains, this finding is expected and confirms that semantic-pragmatic meaning is also subject to influence from such information. In addition, the finding that social information about the speaker did not override the effects of the predicate's factivity suggests that semantic meaning limits the extent to which social information influences projective meaning, in line with prior research showing that semantic-pragmatic meaning constrains social perception. Together with recent work, the findings of this study highlight that social and semantic-pragmatic meaning are more closely connected than often assumed, thereby motivating future research at the interface of these two types of meaning. With respect to projective meaning, an interesting question is whether less explicit cues to speaker social identity also influence projection, for example, the use of sociolinguistic variants (e.g., -ing vs. -in). It remains for future research to determine whether such cues also play a role, and how their effects ought to be incorporated into a theory of projection.
7,582.4
2020-04-13T00:00:00.000
[ "Linguistics" ]
Decision Analysis via Granulation Based on General Binary Relation Decision theory considers how best to make decisions in the light of uncertainty about data. There are several methodologies that may be used to determine the best decision. In rough set theory, the classification of objects according to approximation operators can be fitted into the Bayesian decision-theoretic model, with respect to three regions (positive, negative, and boundary region). Granulation using equivalence classes is a restriction that limits the decision makers. In this paper, we introduce a generalization and modification of decision-theoretic rough set model by using granular computing on general binary relations. We obtain two new types of approximation that enable us to classify the objects into five regions instead of three regions. The classification of decision region into five areas will enlarge the range of choice for decision makers. Introduction Making decisions is a fundamental task in data analysis.Some methods have appeared to make a decision.Yao and Wong [1] proposed and studied a more general type of probabilistic rough set approximations via Bayesian decision theory.In Section 2, we give a brief overview of granulation structures on the universe.One is defined by an equivalence relation due to Pawlak [2] and the other by a general relation proposed by Rady et al. [3].Approximation structures are discussed for each type of granulation.Section 3 discusses a decision-theoretic model of rough sets under equivalence relations given by Yao and Wong [1].Our main contribution is to introduce a general decision-theoretic model of rough sets using a general relation.The resulted granulation induces approximation different from that due to Pawlak.This enables us to construct two new approximations, namely, semilower and semiupper approximations which are useful in the partition of boundary region in particular and the universe in general with respect to any subset of the universe. Granulation of universe and rough set approximations In rough set theory, indiscernibility is modeled by an equivalence relation.A granulated view of the universe can be obtained from equivalence classes.By generalizing equivalence relations to binary relations, one may obtain a different granulation of the universe.For any kind of relations, a pair of rough set approximation operators, known as lower and upper approximation operators, can be defined in many ways (Pawlak [2], Rady et al. [3]).consists of all elements equivalent to x, and is also the equivalence class containing x. In an approximation space apr = (U,E), Pawlak [2] defined a pair of lower and upper approximations of a subset A ⊆ U, written as apr (A) and apr(A) or simply A and A as follows: (2. 2) The lower and upper approximations have the following properties. For every A and B ⊂ U and every approximation space apr = (U,E), apr(−A) = −apr(A), (9) apr(−A) = −apr(A), (10) apr(apr(A)) = apr(apr(A)) = apr(A), (11) apr(apr(A)) = apr(apr(A)) = apr(A), (12) if A ⊆ B, then apr(A) ⊆ apr(B) and apr(A) ⊆ apr(B).Moreover, for a subset A ⊆ U, a rough membership function is defined by Pawlak and Skowron [4]: where | • | denotes the cardinality of a set.The rough membership value μ A (x) may be interpreted as the conditional probability that an arbitrary element belongs to A given that the element belongs to [x] E . Granulation by general relation. 
Let U be a finite universe set and E any binary relation defined on U, and S the set of all elements which are in relation with certain x in U for all x ∈ U.In symbols, S = {xE}, ∀x ∈ U where {xE} = {y : xEy; x, y ∈ U}. (2.4) Define β as the general knowledge base (GKB) using all possible intersections of the members of S. The member that will be equal to any union of some members of β must be omitted.That is, if S contains n sets, 2,...,n; S i ⊂ S and β i = ∪S i for some i . (2.5) The pair apr β = (U,E) will be called the general approximation space based on the general knowledge base β. Rady et al. [3] extend the classical definition of the lower and upper approximations of any subset A of U to take these general forms where β x = {b ∈ β : x ∈ B}.These general approximations satisfy all the properties introduced in Section 2.1 except for properties (8, 9, 10, and 11).This is the main deviation that will help to construct our new approach. For granulation by any binary relation, Lashin et al. [5] defined a rough membership function as follows: (2.7) Granulation by general relation in multivalued information system. For a generalized approximation space, Abd El-Monsef [6] defined a multivalued information system.This system is an ordinary information system whose elements are sets.Each object has number of attributes with attribute subsets related to it.The attributes are the same for all objects but the attribute set-valued may differ.A multivalued information system (IS) is an ordered pair (U,Ψ), where U is a nonempty finite set of objects (the universe), and Ψ is a nonempty finite set of elements called attributes.Every attribute q ∈ Ψ has a multivalued function Γ q , which maps into the power set of V q , where V q is the set of allowed values for the attributes.That is, The multivalued information system may also be written as (2.8) With a set P ⊆ Ψ we may associate an indiscernibility relation on U, denoted by β(P) and defined by (x, y) ∈ β(P) if and only if Γ q (x) ⊆ Γ q (y) ∀Q ∈ P. (2.9) Clearly, this indiscernibility relation does not perform a partition on U. Example 2.1.In Table 2.1, we have ten persons (objects) with attributes reflecting each situation of life.Consider that we have three condition attributes, namely, spoken languages, computer programs, and skills.Each one was asked about his adaptation by choosing between {English, German, French} in the first attribute; {Word, Excel, Access, Power Point} in the second attribute; {Typing, Translation} in the third attribute.Let a i be the ith value in the first attribute, let b j be the jth value in the second attribute, and let c k be the kth value in the third attribute.The indiscernibility relation for C = {T 1 ,T 2 ,T 3 } will be x 4 ,x 9 , x 5 ,x 5 , x 6 ,x 6 , x 7 ,x 7 , x 7 ,x 8 , x 8 ,x 8 , x 9 ,x 9 , x 10 ,x 3 , x 10 ,x 10 . (2.10) It is easy to see that β(C) does not perform a partition on U in general.This can be seen via x 6 , x 7 ,x 8 , x 8 , x 9 , x 10 ,x 3 . (2.11) Obviously, the U/β(C) is the set S defined in the general approach in Section 2.2. Bayesian decision-theoretic framework for rough sets In this section, the basic notion of the Bayesian decision procedure is briefly reviewed (Duda and Hart [7]).We present a review of results that are relevant to decision-theoretic modeling of rough sets induced by an equivalence relation.A generalization and modification of decision-theoretic modeling induced by general relation is applied on the universe. Bayesian decision procedure. 
Let Ω = {ω 1 ,...,ω s } be a finite set of s states of nature, and let Ꮽ = {a 1 ,...,a m } be a finite set of m possible actions.Let P(ω j /X) be the conditional probability of an object x being in state ω j given the object is described by X. Let λ(a i /ω j ) denote the loss for taking action a i when the state is ω j .For an object with description X, suppose that an action a i is taken.Since P(ω j /X) is the conditional probability that the true state is ω j given X, the expected loss associated with taking action a i is given by and also called the conditional risk.Given description X, a decision rule is a function τ(X) that specifies which action to take.That is, for every X, τ(X) assumes one of the actions, a 1 ,...,a m .The overall risk R is the expected loss associated with a given decision rule.Since R(τ(X)/X) is the conditional risk associated with the action τ(X), the overall risk is defined by where the summation is over the set of all possible description of objects, that is, entire knowledge representation space.If τ(X) is chosen so that R(τ(X)/X) is as small as possible for every X, the overall risk R is minimized.Thus, the Bayesian decision procedure can be formally stated as follows.For every X, compute the conditional risk R(a i /X) for i = 1,...,m and then selected the action for which the conditional risk is minimum. Decision-theoretic approach of rough sets (under equivalence relations). Let apr = (U,E) be an approximation space where E is equivalence relation on U.With respect to a subset A ⊆ U, one can divide the universe U into three disjoint regions, the positive region POS(A), the negative region NEG(A), and the boundary region BND(A) (see Figure 3.1); In an approximation space apr = (U,E), the equivalence class containing x, [x] E , is considered to be description of x.The classification of objects according to approximation operators can be easily fitted into Bayesian decision-theoretic framework (Yao and Wong [1]).The set of states is given by Ω = {A, −A} indicating that an element is in A and not in A, respectively.With respect to the three regions, the set of actions is given by Ꮽ = {a 1 ,a 2 ,a 3 }, where a 1 , a 2 , and a 3 represent the three actions in classifying an object, deciding POS(A), deciding NEG(A), and deciding BND(A), respectively. Let λ(a i /A) denote the loss incurred for taking action a i when an object in fact belongs to A, and λ(a i / − A) denote the loss incurred for taking the same action when the object does not belong to A, the rough membership values μ A (x) = P(A/[x] E ) and μ A C (x) = 1 − P(A/[x] E ) are in fact the probabilities that an object in equivalence class [x] E belongs to A and −A, respectively.The expected loss R(a i /[x] E ) associated with taking the individual actions can be expressed as where λ i1 = λ(a i /A), λ i2 = λ(a i / − A) and i = 1,2,3. The Bayesian decision procedure leads to the following minimum-risk decision rules: Based on P(A/[x] E ) + P(−A/[x] E ) = 1, the decision rules can be simplified by using only probabilities P(A/[x] E ). 
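Read operationally, the procedure above amounts to computing the conditional risk of each action from the loss matrix and the probability P(A/[x] E ), and then choosing the action with minimum risk. The following is a minimal sketch of that step, not code from the paper; the loss values are illustrative placeholders chosen to satisfy the ordering conditions introduced next.

```python
# Minimal sketch of the three-region Bayesian decision step described above.
# The loss values are illustrative placeholders (lambda_11 <= lambda_31 <
# lambda_21 and lambda_22 <= lambda_32 < lambda_12), not values from the paper.
from typing import Dict, Tuple

# losses[action] = (lambda_i1, lambda_i2) = (loss if x in A, loss if x in -A)
LOSSES: Dict[str, Tuple[float, float]] = {
    "POS": (0.0, 3.0),   # a1: decide POS(A)
    "NEG": (3.0, 0.0),   # a2: decide NEG(A)
    "BND": (0.5, 0.5),   # a3: decide BND(A)
}

def classify(p_a_given_x: float, losses=LOSSES) -> str:
    """Minimum-risk action for an object with P(A | [x]_E) = p_a_given_x."""
    p = p_a_given_x
    risks = {a: l1 * p + l2 * (1.0 - p) for a, (l1, l2) in losses.items()}
    return min(risks, key=risks.get)

# With these losses, objects with P(A | [x]_E) = 0.9, 0.5 and 0.1 fall into
# the positive, boundary and negative regions, respectively.
for p in (0.9, 0.5, 0.1):
    print(p, classify(p))
```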
Consider a special kind of loss with λ 11 ≤ λ 31 < λ 21 and λ 22 ≤ λ 32 < λ 12 .That is, the loss for classifying an object x belonging to A into the positive region is less than or equal to the loss of classifying x into the boundary region, and both of these losses are strictly less than the loss of classifying x into the negative region.For this type of loss functions, the minimum-risk decision rules (P)-(B) can be written as (P ) if , where From the assumptions λ 11 ≤ λ 31 < λ 21 and λ 22 ≤ λ 32 < λ 12 , it follows that α ∈ [0,1], γ ∈ (0,1) and β ∈ [0,1).Note that the parameters λ i j should satisfy the condition α ≥ β.This ensures that the results are consisted with rough set approximations.That is, the boundary region may be nonempty. Generalized decision-theoretic approach of rough sets (under general relation). In fact, the original granulation of rough set theory based on partition is a special type of topological spaces.Lower and upper approximations in this model are exactly the closure and interior in topology.In general spaces, semiclosure and semiinterior (Crossley and Hildbrand [8]) are two types of approximation based on semiopen and semiclosed sets which are well defined (Levine [9]).This fact with the concepts of semiclosure and semi-interior directed our intentions to introduce two new approximations.For any general binary relation, the general approximations do not satisfy the properties (10, 11) in Section 2.1.Therefore, we can define two new approximations, namely, semilower and semiupper approximation.Definition 3.1.Let apr = (U,E), where E is any binary relation defined on U. Then we can define two new approximations, namely, semilower and semiupper approximations as follows: (3.6) The lower and upper approximations have the following properties. For every A and B ⊂ U and every approximation space apr = (U,E), where This definition enables us to divide the universe U into five disjoint regions as follows (see Figure 3. (3.8) In this case, the set of states remains Ω = {A, −A} but the set of actions becomes Ꮽ = {a 1 ,a 2 ,a 3 ,a 4 ,a 5 }, where a 1 , a 2 , a 3 , a 4 , and a 5 represent the five actions in classifying an object deciding POS(A), deciding SemiL(A) − A β , deciding SemiBND(A), deciding A β − SemiU(A), and deciding NEG(A), respectively. In an approximation space apr = (U,E), where E is a binary relation, an element x is viewed as β x (a subset of GKB containing x).Since β does not perform a partition on U in general, then we consider that ∩β x be a description x.The rough membership values μ A (x) = P(A/ ∩ β x ) and μ A C (x) = 1 − P(A/ ∩ β x ) are in fact the probabilities that an object in ∩β x belongs to A and −A, respectively.The expected loss R(a i / ∩ β x ) associated with taking the individual actions can be expressed as The Bayesian decision procedure leads to the following minimum-risk decision rules. ( , decide NEG(A).Since P(A/ ∩ β x ) + P(−A/ ∩ β x ) = 1, the above decision rules can be simplified such that only the probabilities P(A/ ∩ β x ) are involved. NEG(A) Consider a special kind of loss function with For this type of loss functions, the minimum-risk decision rules (1)-( 5) can be written as follows. ( (3.11) A loss function should be chosen in such a way to satisfy the conditions: (3.12) These conditions imply that (Semi L(A) − A β ) ∪ SemiBND(A) ∪ (A β − SemiU(A)) is not empty, that is, the boundary region is not empty. 
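The generalized rule set can be sketched the same way: five actions, a five-by-two loss matrix, and the conditional probability computed from the granule ∩β x rather than from an equivalence class. The snippet below is an illustrative sketch (the region labels are mine); the loss values follow the worked example given in the next paragraph.

```python
# Minimal sketch of the five-region decision step described above, with the
# conditional probability taken as the rough membership value based on the
# granule containing x under a general binary relation. Loss values follow
# the worked example given below in the text.
FIVE_REGION_LOSSES = {
    "POS":              (0.0,  2.0),
    "SemiL(A)-Abeta":   (0.25, 1.0),   # closer to the positive region
    "SemiBND":          (0.5,  0.5),
    "Abeta-SemiU(A)":   (1.0,  0.25),  # closer to the negative region
    "NEG":              (2.0,  0.0),
}

def p_a_given_granule(granule: set, A: set) -> float:
    """Rough membership P(A | granule) = |A & granule| / |granule|."""
    return len(A & granule) / len(granule)

def decide(granule: set, A: set, losses=FIVE_REGION_LOSSES) -> str:
    p = p_a_given_granule(granule, A)
    risks = {a: l1 * p + l2 * (1.0 - p) for a, (l1, l2) in losses.items()}
    return min(risks, key=risks.get)

# Toy example: for a granule {x1, x2, x3, x4} and A = {x1, x2, x3},
# P(A | granule) = 0.75 and the minimum-risk action is SemiL(A)-Abeta.
print(decide({"x1", "x2", "x3", "x4"}, {"x1", "x2", "x3"}))
```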
(3.13) Now, we can decide the region for each object by using the generalized decision-theoretic approach proposed in Section 3.3. This approach can be applied to a multivalued information system and gives us the ability to divide the universe U into five regions, which helps in increasing the decision efficiency. The result given by the general rough set model can be viewed as a special case of our generalized approach. In our example, the set of states is given by Ω = {A, −A}, indicating that an element is in A and not in A, respectively. With respect to the five regions, the set of actions is given by Ꮽ = {a 1 ,a 2 ,a 3 ,a 4 ,a 5 }. To apply our proposed technique, consider the following loss function: there is no cost for a correct classification, 2 units of cost for an incorrect classification, 0.25 unit of cost when an object belonging to A is classified into SemiL(A) − A β or an object not belonging to A is classified into A β − SemiU(A), 0.5 unit of cost for classifying an object into the boundary region, and 1 unit of cost when an object belonging to A is classified into A β − SemiU(A) or an object not belonging to A is classified into SemiL(A) − A β (note that the loss function is supplied by the user or an expert). According to these losses, we have (3.15) By using the decision rules (1')-(5'), we get the results shown in Table 3.1. Thus, we have (3.16) Now we apply the decision-theoretic technique proposed by Yao and Wong [1] to classify the decision region into three areas. The set of actions is given by Ꮽ = {a 1 ,a 2 ,a 3 }, where a 1 , a 2 , and a 3 represent the three actions in classifying an object: deciding POS(A), deciding NEG(A), and deciding BND(A), respectively. To make this comparison, consider that there is 0.25 unit of cost for a correct classification, 3 units of cost for an incorrect classification, and 0.5 unit of cost for classifying an object into the boundary region, that is, λ 11 = λ 22 = 0, λ 31 = λ 32 = 0.5, λ 21 = λ 12 = 3. (3.18) By using the decision rules (P')-(C') and replacing P(A/[x] E ) by P(A/β x ), we get the results shown in Table 3.2. This means that POS(A) = {x 1 ,x 3 ,x 6 }, BND(A) = {x 2 ,x 4 ,x 10 }, NEG(A) = {x 5 ,x 7 ,x 8 ,x 9 }. (3.19) From the comparison between the two approaches, we note that our approach (classification of the decision region into five areas) gives us the ability to divide BND(A) = {x 2 ,x 4 ,x 10 } into SemiL(A) − A β = {x 2 }, which is closer to the positive region, A β − SemiU(A) = {x 4 }, which is closer to the negative region, and SemiBND(A) = {x 10 }. Conclusion The decision-theoretic rough set theory is a probabilistic generalization of standard rough set theory and extends the application domain of rough sets. The decision model can be interpreted in terms of a more familiar and interpretable concept known as loss or cost. One can easily interpret or measure loss or cost according to the real application. We have proposed in this paper a generalized decision-theoretic approach, which is applied under a granulated view of the universe induced by any general binary relation. This approach enables us to classify the decision region into five areas. This classification will enlarge the range of choice for decision makers and help in increasing the decision efficiency.
4,232.2
2007-01-15T00:00:00.000
[ "Computer Science" ]
Multimedia Motion : motivating learners Multimedia Motion is a CD-ROM, designed by Gill Graham and David Glover, for teaching post-16 students about dynamics. It allows students to select data from moving bodies (such as space rockets and tennis players), and to explore how that data can be displayed graphically and what the relationships are between distance moved, velocity, acceleration, impulse, momentum, etc. Use of parts of the disc is incorporated into the Supported Learning in Physics (SLIP) programme, and it is in this context that the evaluation of its use reported here was carried out. DOI: 10.1080/0968776970050111 Introduction Multimedia Motion is a CD-ROM, designed by Gill Graham and David Glover, for teaching post-16 students about dynamics.It allows students to select data from moving bodies (such as space rockets and tennis players), and to explore how that data can be displayed graphically and what the relationships are between distance moved, velocity, acceleration, impulse, momentum, etc. Use of parts of the disc is incorporated into the Supported Learning in Physics (SLIP) programme, and it is in this context that the evaluation of its use reported here was carried out. The SLIP Project The SLIP Project is a curriculum development programme, led by the Open University, to provide flexible, supported learning materials for students studying for A or AS level physics or GNVQ advanced science or engineering.The materials are currently at various stages of development, and should be ready for use by post-16 students in the autumn term of 1997 (a description of the SLIP project can be found in Whitelegg, 1996).The materials are all text-based, but some authors are incorporating multimedia into their study units, and it is in this context that Multimedia Motion is used.It is currently used for four 'explorations' in the unit Physics on the Move (which teaches dynamics and statics), and is also used in the unit Physics for Sport The CD-ROM complements the laboratorybased practical work incorporated in the units, and reinforces a core educational strategy used in SLIP: that of teaching the physics content through real-life contexts. Motivation Teaching through real-life contexts has been shown to increase motivation for learning (Hennessy, 1993), and this strategy is employed throughout the SLIP materials.The contexts for the Physics on the Move unit is the safe transportation of people and goods, and for the Physics for Sport unit the contexts of rock climbing, springboard diving and scuba diving are all used to teach forces, vectors, oscillations and pressure.The Multimedia Motion disc asks students to examine videos of car and train crashes, people jogging, playing various sports, etc., and thus reinforces the real-life approach introduced in the text materials.Because of this approach, Multimedia Motion is more consistent with the philosophy of the project than certain computer simulations which could have been used instead.The disc also uses examples of laboratory-based experiments, such as the linear air track to teach momentum and kinetic energy.These can be compared with real experiments done by students in the school or college laboratory.In addition to increasing motivation for learning physics through the use of real-life contexts, it was felt that the inclusion of the CD-ROM would increase motivation in students who might perceive it as a new and exciting way to learn, particularly for those who already used CD-ROMs for entertainment. 
The evaluation The evaluation of the use of the disc was carried out in parallel with, but separately from, an evaluation of the text material (Abraham, 1996).Separate teams undertook the text and CD-ROM evaluations, although the same schools and colleges were used for both.The aim of the evaluation was to determine whether the incorporation and use of the disc in the project's materials was realistic or not, and whether it could provide an effective learning experience which might replace some conventional laboratory-based practical experiments.Yilditz and Atkins (1993) suggest a wide range of aspects of multimedia to be addressed in any evaluation.Our eclectic approach was that advocated by Jones et al (1996). Evaluation method Observations were carried out at two schools.One was involved in the evaluation of the SLIP project texts, and the other acted as a control school.These observations involved 12 students in around 60 hours of computer-based activity, and the resulting data is in the form of videotape records, observation schedules and student self-reports. To confirm our findings from the observations, we devised two questionnaires about students' use of the disc, one for students themselves, the other for their teachers.The questionnaires asked them about their previous experience, and their perceptions of the advantages/disadvantages, of using computers in physics lessons, their use of CD-ROMs generally, and about the Multimedia Motion disc and the particular sequences on it which they used.Students were asked to compare the Multimedia Motion explorations to real practical work, to compare watching practical demonstrations in the laboratory with watching videos in computer simulations, and to report on how much teacher-support they needed in order to use the disc.These questionnaires were sent to the schools and colleges involved in the evaluation of the SLIP Project.Twenty-one questionnaires were returned. Evaluation outcomes Increased motivation for learners We observed some of the expected benefits of increased motivation for learners because of access to more realistic applications of the laws of physics illustrated on the disc, as predicted by one of the authors of this paper, Hatzipanagos, at a presentation given in 1995 at CAL'95 in Cambridge.Some students wrote that Multimedia Motion made physics more fun, and provided a variation in the approach to the subject, though not all agreed ('The Multimedia Motion explorations offer more accurate results and an easier way of obtaining them, but are not as much fun as performing the real experiments'). 
Support for exploratory learning styles Throughout the observations, most students did not use the disc to its full extent.The SLIP material which incorporates the use of the disc directs students to undertake particular activities and provides some questions to answer resulting from its use.However, students often ignored these instructions, and once they had got Multimedia Motion running, their use of it became rather unstructured.The disc provides data-files which students need for solving physics problems, and there was often a reluctance to embark on these problems, particularly if calculations were involved.Without teacher input, either in the form of teacher-directed problems to be solved through use of the disc, or monitoring and help provided by the teacher during its use, there was a tendency for less motivated, less able students to jump around the various contents without engaging with them very constructively.Also, as most students used the disc in pairs or groups of three, some just observed others using it, and did not interact with it themselves.This led to a passive experience, similar to watching a video. Practical work and computer-based activities In general, students felt that the computer-based activities on the disc had some advantages for learning dynamics over laboratory-based practical work.Typical comments were that the disc gave 'examples of real situations where realistic data can be manipulated and understanding made easier', and that the explorations 'can be used faster than doing all the experiments for yourself.There were also positive student comments concerning the ease of printing out information, the reliability of results, and the usefulness of a frame-by-frame feature of the software.One teacher commented that students concentrated on data rather than on setting up and adjusting physical equipment.There were, however, some opposing views from students, such as that 'much of the learning stems from mistakes made in real practicals', and that Multimedia Motion 'lacks the hands-on feel of a practical lesson [and] some of the problem-solving elements of setting up a practical'.One student wrote: 'Seeing something for yourself in real life leads you to believe it more', while another wrote: 'More is learned by doing practical experiments than by looking at the computer screen' (our italics). Difficulties with graphing Many students had difficulty in interpreting the graphs that their data produced.This was sometimes due to the data points they selected, but they were also sometimes left confused and frustrated because they had not attained the outcome they expected.They complained that they could not ask the computer questions in the same way as they would normally ask their teacher. School/college mode of use In nearly every case, the use of the disc was limited by the number of computers with CD-ROM drives available in the school or college.In both the institutions where observations took place, only one computer was available for students to use the disc.The results of the postal questionnaires also indicated limited use because of lack of hardware. 
Conclusions There are certain learning styles which, it is claimed, multimedia can support, among them exploratory learning (see, for example, Laurillard, 1993).But for students of this age and relative inexperience, multimedia may not be effective on its own.The students in this study needed more teacher intervention and to be provided with problem-solving activities which required their interaction with the disc.This would also encourage the more passive students to interact with it, and the more dominant ones to share the use of the computer. The lack of hardware was a problem in every institution which responded, and prevented the disc from being built into the core of the teaching programme.It tended to be used as an added extra outside the main curriculum, or for revision. The creators of this disc have now produced a Teachers' Pack which may help to solve some of the problems.However, while Multimedia Motion provides an alternative to some laboratory-based practical work, without frequent teacher intervention, students will probably not make effective use of all that it offers.It can provide an effective learning experience, particularly for high-ability, motivated students, but for others it does so only with substantial teacher support and monitoring.Multimedia in this form is not stand-alone but is a useful and enjoyable way for students to increase their learning in a conceptually difficult area of physics, and to relate it to real-life situations in an interactive way.As one student put it: 'Lab-based and multimedia approaches should both be used to complement each other.'
2,331
1997-01-01T00:00:00.000
[ "Education", "Physics" ]
Insights into the molecular mechanism of dehalogenation catalyzed by D-2-haloacid dehalogenase from crystal structures D-2-haloacid dehalogenases (D-DEXs) catalyse the hydrolytic dehalogenation of D-2-haloacids, releasing halide ions and producing the corresponding 2-hydroxyacids. A structure-guided elucidation of the catalytic mechanism of this dehalogenation reaction has not been reported yet. Here, we report the catalytic mechanism of a D-DEX, HadD AJ1 from Pseudomonas putida AJ1/23, which was elucidated by X-ray crystallographic analysis and the H218O incorporation experiment. HadD AJ1 is an α-helical hydrolase that forms a homotetramer with its monomer including two structurally axisymmetric repeats. The product-bound complex structure was trapped with L-lactic acid in the active site, which is framed by the structurally related helices between two repeats. Site-directed mutagenesis confirmed the importance of the residues lining the binding pocket in stabilizing the enzyme-substrate complex. Asp205 acts as a key catalytic residue and is responsible for activating a water molecule along with Asn131. Then, the hydroxyl group of the water molecule directly attacks the C2 atom of the substrate to release the halogen ion instead of forming an enzyme-substrate ester intermediate as observed in L-2-haloacid dehalogenases. The newly revealed structural and mechanistic information on D-DEX may inspire structure-based mutagenesis to engineer highly efficient haloacid dehalogenases. D-DEXs have been used to produce L-2-chloropropionic acid as a chiral intermediate for the production of herbicides [18][19][20] . However, because of the low reactivity and stability, no commercial enzyme has been produced. Although the immobilization that covalently attaches HadD AJ1 which is a D-DEX from Pseudomonas putida strain AJ1/23 to the controlled-pore glass has been tested to improve its stability and increase its tolerance to a high substrate concentration, only a minor improvement was made 19 . Structural and functional information is necessary to rationally direct enzyme engineering to meet scientific and industrial needs. To unravel the molecular basis of D-DEX catalysis, the structure of substrate-free HadD AJ1 as well as the structure of the enzyme-product complex (E-P) were determined. The structure-guided elucidation of the catalytic mechanism of D-DEX is presented here. Results and Discussion Overall structure of HadD AJ1. The crystal structure of the wild-type HadD AJ1 (WT) was determined by the molecular replacement method using the structure of a DL-DEX, DehI (PDB: 3BJX) from Pseudomonas putida PP3 as the search probe 1 . HadD AJ1 is refined at 2.64 Å resolution with an R free of 25.1 (Table 1). HadD AJ1 is an α-helical hydrolase similar to DL-DEX. However, it completely differs from L-DEX, which is an α/β type hydrolase 1,12,14 . Each asymmetric unit of HadD AJ1 includes four monomers. Each monomer presents a compact fold featuring twelve α-helices and one 3 10 -helix η 1 (Fig. 2B). Two structurally axisymmetric repeats are observed with 20% sequence identity and a superposition RMSD of 1.24 Å in each monomer. Repeat 1 and 2 consist of N-terminal α-helices 1-6 and C-terminal α-helices 7-12, respectively, linked by a lengthy loop with 33 residues including a 3 10 -helix η 1 (Fig. 2). Each repeat is composed of a three-helix-bundle formed by the first three helices and a three-helix-triangular arrangement formed by the second three helices (Fig. 2B). 
The two repeats are organized by van der Waal forces, salt bridges, hydrogen bonds and hydrophobic interactions of α 6 /α 12 and α 4 /α 10 , respectively. Helices α 4 and α 10 are spatially oriented parallel to each other. Helices α 6 and α 12 mutually interlace at their bulges, located in the midst of the helix. The similar symmetric architecture of the structurally repeated folds is also observed in the phosphorylation-coupled vitamin C transporter 21 . So far, the structurally internal repeats have been reported in many proteins, which are considered to be caused by genetic processes such as fusion and fission of domains, and gene duplication in protein evolution 22 . HadD AJ1 folds into a homotetramer in the crystal (Fig. 3), which is consistent with the functional state previously reported in solution 18 . In the tetramer, two types of interfaces including A/C (or B/D) and D/C (or A/B) are observed in the assembly of the monomers with the online interactive tool Protein Interfaces, Surfaces and Assemblies (PDBePISA, http://www.ebi.ac.uk/msd-srv/prot_int/cgi-bin/piserver). The interfaces are stabilized by hydrogen bonds and salt bridge interactions ( Fig. 3B and C). Both interfaces comprise approximately 1000 Å 2 (~12%) of total buried surface area, which is considered to be a weak association for oligomeric proteins 23,24 . Comparison of D-DEX and DL-DEX structures. D-DEXs share 30% similarity with DL-DEXs in primary structure ( Supplementary Fig. S1). As shown in Fig. 4A, the superposition of HadD AJ1 with DehI reveals a very high degree of structural similarity with an RMSD of 1.05 Å for 207 C α atoms 1 . The similar structural repeats are regarded as a "pseudo-dimer" in DehI. The "pseudo-dimers" of both enzymes overlap perfectly, also the linker. These similarities reveal the close evolutionary relationship between HadD AJ1 and DL-DEXs. However, two significant differences are observed. First, the η 1 helix and the subsequent loop of the linker between the two repeats are buried in the interface assisting the assembly of the tetrameric HadD AJ1, while the linker wraps around the outside of the homodimeric DehI molecule, without involving the dimer interface. Second, helices α 3 and α 4 of HadD AJ1 are connected by a loop fragment; whereas, a helical bend links the α 3 and α 4 of DehI (Fig. 4B). The two joints contain a strictly conserved Thr between D-DEXs and DL-DEXs. The replacement of Thr by Ala completely damages DL-DEX activity 1,25 , from which Thr is considered as essential for the dehalogenation. However, the same mutant of HadD AJ1 still retains 87.8% activity ( Supplementary Fig. S2A), which suggests that Thr76 is not crucial for the conformational stabilization and HadD AJ1 catalysis. The distinct effects on dehalogenation resulted from the same mutation of the conserved Thr indicates some differences in mechanisms of D-DEXs and DL-DEXs, which can be explained by the corresponding structural differences of the loop and the helical bend. D-DEX dehalogenation mechanism. To unveil the dehalogenation mechanism of D-DEX by exploring the molecular details of substrate recognition, HadD AJ1 was crystallized in the presence of the substrate D-2-chloropropionate (D-2-CPA) at pH5.5~6.4 and 277 K. 
In addition, co-crystallization and crystal soaking trials were undertaken with other reactive substrates including chloroacetate, bromoacetate, and 2-bromopropionate as well as with nonreactive substrates including 2-fluoropropionate, L-2-chloropropionate, 3-chloropropionate, 2,2-dichloropropionate and 2-bromo-2-methylpropionate. Unfortunately, no E-S complex crystals were obtained. However, it turned out that HadD AJ1 was able to bind the product L-LA in the active site. The E-P complex was captured at 2.18 Å resolution (Fig. 5). Further inspection into the complex structure shows that L-LA is caged in the enclosed binding pocket of 71 Å 3 . The binding pocket is oriented at the interface of the two repeats and framed by the structurally repeat-related α 2 , α 8 , α 6 and α 12 (Fig. 4A). There are 13 amino acids arranged in the binding pocket which are Trp48, Lys50, Val51, Ile52, Asn131, Tyr134, Asn203, Ser204, Asp205, Phe281, Met284, Leu285 and Leu288 (Fig. 5B). The product-bound pocket has an identical spatial location to the active site of DehI ( Supplementary Fig. S2B) 1 . By comparison of the active sites of HadD AJ1 and DehI, the residues corresponding to Trp48, Asn131, Tyr134, Ser204 and Asp205 are highly conserved between HadD AJ1 and DL-DEXs, whereas the remaining residues of the active sites are not ( Supplementary Fig. S1), and their functions are discussed below. Although another product of the catalyzed reaction -the departing Cl − is missing in the complex, a water molecule adjacent to L-LA is trapped in the active site (Fig. 5B). This water molecule interacts with the carbonyl atom of the side chain of Asn131 and the carboxylic hydroxyl of Asp205 by forming hydrogen bonds. The distance between water and the C2 atom of L-LA is 3.0 Å. Considering the conservation of Asp205 with the catalytic residue Asp189 of DehI 1,25,26 , we propose that Asp205 and Asn131 are key residues that activate water required for the hydrolytic dehalogenation of HadD AJ1. Therefore, the mutation of Asp205 and Asn131 was performed (Fig. 5A). Notably, the replacement of Asp205 by Asn completely abolishes the enzymatic activity, which reveals that it's a critical catalytic residue in HadD AJ1. Furthermore, residue Asp is strictly conserved among D-DEXs and DL-DEXs ( Supplementary Fig. S1). The mutation of Asn131 to Asp in HadD AJ1 loses approximately 96% of catalytic activity, which shows Asn131 is an important residue for the dehalogenation. The location of the water molecule and its interactions with the two crucial residues Asp205 and Asn131 indicate that it is likely to be involved in the hydrolytic dehalogenation. Therefore, we hypothesize that the C2 atom of the substrate is directly attacked by a nucleophilic water molecule (Fig. 1B). To confirm our hypothesis, a single turnover reaction was conducted in H 2 18 O with the WT enzyme that was ten times higher concentration than D-2-CPA. As seen in Fig. 6B, 5.2 times more labelled 18 O-L-LA was produced compared to 16 O-L-LA, which indicates that the dehalogenation is directly mediated by a water molecule. Furthermore, this finding confirms that the nucleophilic water molecule is activated by the Asp205 residue with the assistance of Asn131. The carbonyl oxygen in the side chain of Asp205 forms hydrogen bonds with the -NH 2 group of Asn131, which drives Asp205 to attract a proton from a proximal water molecule. 
The activated water molecule is poised to attack the C2 atom of the substrate without involving an E-S ester intermediate, which is identical to DL-DEX but different from L-DEX 15 . As stated above, the E-P complex structure is captured after releasing Cl − and before L-LA departure from the active site. Hydrogen bonds and hydrophobic interactions predominantly stabilize the product in the active site. Hydrogen bonds are formed between the carboxyl of L-LA and the backbone amide nitrogen of Val51, the amino nitrogen of the side chain in Asn203, the backbone amide nitrogen of Ser204, and the carboxyl of Asp205 (Fig. 5B). Carboxyl is invariant in the substrate and the product before and after the dehalogenation. Therefore, it is hypothesized that Val51, Asn203, Ser204 and Asp205 contribute to stabilizing the substrate as well. Correspondingly, mutagenesis of these residues would be expected to affect enzymatic activity. As shown Fig. 6A, 63.4%, 98.8%, 99.6%, 95.1% and 58.2% of enzymatic activity is lost for mutants V51F, N203A, N203S, S204A and S204T, respectively, towards D-2-CPA. The replacement of non-conserved Val51 and Asn203 by the equivalent residues found in DL-DEX led to a significantly reduced HadD AJ1 activity, which sheds light on different roles of these positions in the two types of enzymes. The increased K m of the mutants V51F and N203A demonstrate their contribution to the binding interactions with the ground state substrates ( Table 2). Consistent with the altered activity, N203A exhibits an obviously decreased k cat /K m which is 355-fold lower than that of the (Table 2), which suggests the side chain of Asn203 is crucial for stabilizing the E-S transition state. By contrast, V51F exhibits a 4-fold lower k cat /K m value than the WT enzyme (Table 2), which indicates that the increased steric hindrance in the mutant V51F does not severely perturb the transition state. The different loss in the enzymatic activity resulted from S204A and S204T reveals that the hydroxyl group of Ser204 and its orientation affects the interaction with the substrates. The D205E mutant retains 10.1% of activity (Fig. 6A), suggesting the carboxylic hydroxyl of Asp205 is essential for HadD AJ1 dehalogenation. Simultaneously, the increased K m proves that Asp205 participates not only in water activation but also in the stabilization of the substrate (Table 2). Additionally, the thermodynamic contributions of Val51, Asn203 and Asp205 are demonstrated by the elevated transition state free energies of 3.5 kJ/mol, 14.8 kJ/mol and 6.49 kJ/mol resulting from V51F, N203A and D205E mutants, respectively, in comparison with that of WT (Table 2). Residues Ile52, Phe281, Met284 and Leu285 form a hydrophobic pocket, which engages in hydrophobic interactions with the -CH 3 group of L-LA. The mutations of Ile52 to Gly and Met284 to Cys resulted in 98.5% and 93% activity loss, respectively, suggesting the hydrophobic side chains of these two residues make a great contribution to the interaction with the substrate. The mutantations F281A and L285I destroy 45.7% and 75.6% of enzymatic activity, respectively, which indicates that hydrophobic residues of appropriate sizes are required in these positions for interacting with the substrate. The Cl − is released towards Phe281 after the C2 atom is attacked from the opposite side of the halogen atom by the nucleophilic water molecule. 
Therefore, Phe281 was speculated to be involved in stabilizing the departing Cl − , which is likely to be trapped by an anion-π interaction between it and the phenyl group of Phe or a non-classical hydrogen bond, resulting from its "side-on" position at the plane of the benzene ring of Phe 27,28 . Consequently, the dehalogenation is affected by the mutation of Phe to Ala by destroying this interaction. The residues Trp48, Gly50, Tyr134 and Leu288 indirectly interact with L-LA, but they participate in the hydrogen-bond network among the residues in the active site. Residues Trp48 and Tyr134 are strictly conserved between HadD AJ1 and DL-DEXs. In agreement with the effects resulting from the mutation of the two residues in DL-DEXs 15 , HadD AJ1 mutants W48A and Y134F are almost inactive (Fig. 6A). Although Gly50 and Leu288 are replaced by the corresponding residues of DL-DEXs, the mutations damage 94.9% and 62.8% of HadD AJ1 activity, respectively (Fig. 6A). This result shows that the polarity and the steric hindrance of the residues at these positions have an impact on the dehalogenation catalyzed by HadD AJ1. These different contributions are likely to be caused by natural selection of the residues in the active site among D-DEXs and DL-DEXs. Conclusions By combining structural, biochemical and site-directed mutagenesis analyses, we have gained insights into the dehalogenation catalyzed by D-DEX. HadD AJ1 is wholly α-helical and was crystallized as a homotetramer. Each monomer contains two N-terminal and C-terminal repeats, linked by a loop with a 3 10 -helix η1. The substrate binding pocket of HadD AJ1 is locatedat the interface between two repeats. Residues lining the active pocket participate in the E-S interaction, the mutation of which severely impairs the enzymatic activity. Dehalogenation catalyzed by D-DEX is directly mediated by a nucleophilic water molecule activated by Asp205 and Asn131 without forming the E-S ester intermediate, which is the same as DL-DEX but different from L-DEX. These findings enrich the knowledge on haloacid dehalogenases. Methods and Materials Cloning X-ray diffraction analysis. X-ray diffraction data were collected on BL 18U1 and BL19U at Shanghai Synchrotron Radiation Facility (China). The diffraction data were processed and scaled using the HKL 2000 software package 29 . Then, the scaled output was converted into MTZ files using scalepack2mtz from the CCP4 suite 30 . The molecular replacement method was used to determine the structure of HadD AJ1 using Phaser-MR from the Phenix 31 suite with DehI (PDB: 3BJX) as a search model 1 . Model rebuilding was performed in Coot 32 . Subsequently, further rounds of restrained refinement were performed, and water molecules were added and adjusted using the Phenix refine program. Structure figures were prepared with PyMol (http://www.pymol. org). Sequence comparison was generated by ESPript 3.0 33 . The structural information of all crystals is shown in Table 1. Site-directed mutagenesis of HadD AJ1. The residues involved in the substrate binding pocket were mutated. The residues Trp48 and Thr 76 which are strictly conserved between D-DEX and DL-DEX were mutated to Ala. The residues Gly50, Val51, Ile52, Asn203, Phe281, Leu285 and Leu288 which are not conserved between D-DEX and DL-DEX were mutated to the corresponding amino acids found in DehI PP3. The residues Asn131, Tyr134, Ser204, Asp205 and Met284 which are identical between HadD AJ1 and DL-DEX were mutated on the basis of their charge and polarity. 
The primers containing mutations were designed using DNAstar and are summarized in Supplementary Table S2 Enzymatic activity. The enzymes in Buffer D were used to assay the activity towards D-2-CPA. The standard assay system was used unless stated otherwise 6 . The reaction mixture (1 mL) contained 10 mM D-2-CPA, 100 mM glycine-NaOH (pH 10.0), and enzyme. To ensure that at least 5% of the substrate is degraded, the enzyme concentrations were 0.02 mg/mL for WT, V51F, S204T, F281Y, L285I and L288I, and 0.3 mg/mL for other mutants. The reactions were terminated by the addition of 10 µL of phosphoric acid (85% w/w) following the incubation at 30 °C. After the precipitates were removed by centrifugation (14,000 × g, 10 min), the supernatants were analyzed by HPLC to determine the D-2-CPA contents. One unit of dehalogenase activity was defined as the amount of enzyme that catalyzed the hydrolysis of 1 µmol D-2-CPA per min. Steady-state kinetic measurements. Kinetic study of WT and several mutants was carried out by assaying the activity of the enzyme under standard condition at varying initial substrate concentrations. The initial rates of the reaction at different substrate concentrations were measured. Kinetic equation fitting was conducted using Origin 7 software. K m and k cat values were derived from the non-linear regression analysis ( Supplementary Fig. S4). Data Availability. All data generated or analysed during this study are included in this published article (and its Supplementary Information files).
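The steady-state kinetic fitting described above (non-linear regression for Km and kcat, performed in Origin in the study) can be sketched numerically as follows. This is an illustrative Python sketch with made-up rate data, not the study's data; the last lines assume the standard relation ΔΔG‡ = RT·ln[(kcat/Km)WT/(kcat/Km)mut], which at 30 °C reproduces the 14.8 kJ/mol quoted above for the 355-fold drop of N203A.

```python
# Illustrative sketch of Michaelis-Menten fitting and the transition-state
# free-energy comparison discussed above. Substrate concentrations and rates
# are hypothetical, not data from the paper.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Initial rate v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

s = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 40.0])   # mM D-2-CPA (hypothetical)
v = np.array([0.8, 1.4, 2.2, 3.4, 4.1, 4.6, 4.8])      # rates (hypothetical units)

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=[5.0, 2.0])
print(f"Vmax = {vmax:.2f}, Km = {km:.2f} mM")

# Difference in transition-state stabilization between WT and a mutant,
# assuming ddG = R * T * ln[(kcat/Km)_WT / (kcat/Km)_mut] at 30 degC.
R, T = 8.314e-3, 303.15                                 # kJ/(mol*K), K
fold_drop_in_kcat_over_km = 355                         # e.g. N203A, quoted above
ddG = R * T * np.log(fold_drop_in_kcat_over_km)
print(f"ddG (transition state) = {ddG:.1f} kJ/mol")     # ~14.8 kJ/mol
```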
4,100.6
2018-01-23T00:00:00.000
[ "Chemistry", "Biology" ]
Analysis of the modulation depth of some femtosecond laser pulses in holographic interferometry On the basis of the modulation depth in the interference and holographic processes, we discuss in this paper the quality of holograms generated by some femtosecond laser pulses of two different colors. The modulation depth in terms of the fringe contrast (MDFC ratio) of Higher-order sh- and ch-Gaussian temporal profiles (sh n GTP and ch n GTP ) are investigated in detail. It’s shown that, when we use two lasers having a very large frequency detuning, the sh n GTP exhibit more precise results than the Gaussian beams for holography. sh n GTP give new areas of frequency detuning to realize the very significant value of the MDFC ratio for obtaining better holograms, which is impossible with Gaussian beams. It also permits flexibility in the variable frequency difference and pulse duration for good quality holograms, and in the case of the ch n GTP, null regions develop and frequency bands are observed favoring the formation of the holograms. The numerical simulations are presented to illustrate and discuss the influence of the frequency detuning, the beam order and the controller parameter of the waves on the modulation depths. The propose theory will be a good basis for the development of some new experiments on the holographic interferometry and it will certainly be very useful for the specialists of the studied femtosecond laser pulses. Introduction In the last few decades, the dark hollow beams (DHBs) have attracted the attention of many authors because of their extensive applications in modern optics such as trapping of particles, optical communications, and guiding cold atoms [1][2][3][4]. Several models have been elaborated in the literature to describe the DHBs [5][6][7][8][9][10] by using various techniques such as the transverse mode selection [11], the optical holographic [12], the geometrical optical [13], the computer-generated hologram [14], and the hollow optical fibers [15] methods, to mention but a few. In 2017 Zou et al. [16] treated propagation characteristics of hollow sinh-Gaussian beams through the quadratic-index medium, and in 2021, Saad and Belafhal [17] studied the propagation of the hollow higher-order cosh-Gaussian (HhCG) beam through a quadratic index medium (QIM) and a Fractional Fourier transform (FRFT), they found that the intensity distribution of HhCG changes periodically during propagation in the QIM. Lasers are the basic building block of the technologies for the generation of short light pulses. Shortly after the invention of this source of light, the duration of the shortest pulse produced had decreased by six orders of magnitude [18], ranging from microsecond (μs) and nanosecond (ns) regimes in the 1960s and through the picosecond (ps) regime in the seventies and finally to femtosecond (fs) techniques in the eighties [19]. On the other hand, holography is a process for recording the phase and amplitude of the wave diffracted by an object. This recording process makes it possible to later restore a threedimensional image of the object. The latter is achieved using the properties of coherent light from the laser. Holographic techniques have been a great success in various areas, such as 3D display [20] optical metrology [21], medicine [22] and many more. Lithium Niobate ( 3), as photorefractive material, has proved with success its potential in being utilized in different field as holographic medium [23]. 
Information stored in the photorefractive material is generally erased during readout due to the dynamic nature of this material. However, two-color holography offers a way out for non-volatile holography, in which a beam of high photon energy, referred to as the gating beam, is also present during recording to sensitize the material. This beam is useful in making readout non-volatile, but only at larger wavelengths. A number of lithium niobate systems have been constructed to demonstrate non-volatile two-color holography for single-hologram storage [24,25]. Two-color hologram multiplexing was successfully studied using stoichiometric lithium niobate [26]. Recently, the recording of permanent holographic gratings using laser beams of small frequency difference has been reported by Odoulov et al. [27]. However, Malik and Escarquel [28] have reported that the best holograms can be produced with large frequency detuning. In 2020, Malik and Escarquel [29] showed that the sh_nGTP with order equal to unity is better than even the Gaussian profile for holography, especially for higher frequency differences. However, to the best of our knowledge, no study has treated the modulation depths of sh_nGTP and ch_nGTP in terms of the fringe contrast. The remainder of this paper is organized as follows: the theory for the ratio of modulation depth and fringe contrast is developed in Section 2, while Section 3 is devoted to some important numerical simulations and a discussion of our main results applied to two general cases, sh_nGTP and ch_nGTP. Finally, the calculated results are summarized in the conclusion. Theory In this section, we consider the interference of two femtosecond laser pulses with different amplitudes and different frequencies, and we present the theory concerning the modulation-depth evaluation of the considered beams. In the plane-wave approximation, the two waves have the electric fields E_1 and E_2, where A_1 and A_2 are the amplitudes of the electric fields of the two lasers, ω_1 and ω_2 are the angular frequencies, k_1 and k_2 are the wave vectors of the electric fields, and f(t) is their temporal profile. The intensity profiles of the individual beams follow directly from these fields. Consequently, in time and space, the intensity of the total electric field is evaluated as I(r,t) = [E_1(r,t) + E_2(r,t)]². By introducing the frequency detuning of the waves, Ω = ω_2 − ω_1, and the difference of the wave vectors, Δk = k_2 − k_1, and after some algebraic calculations, the intensity distribution of these two waves can be rewritten as Eq. (10). By using Euler's formula, Eq. (10) can then be brought into the form of Eq. (14), which is the main result of the present work. Note that, for a Fourier-transform spectrometer, the modulation depth is also defined by Eq. (14), where cos(Δk·r) = cos δ, with δ the phase difference of the two coherent beams. If the two waves have the same frequency but different amplitudes, then Ω = 0 and Eq. (14) reduces to M_d = m. In this case, the modulation depth is equal to the fringe contrast and the MDFC ratio M_d/m is equal to unity. In general, this MDFC ratio plays an important role in holography and holographic interferometry, because a large value of this ratio yields the best quality of the holograms. If the two beams have the same amplitudes (A_1 = A_2 = A) and the fringe contrast is equal to unity (m = 1), M_d takes a correspondingly simplified form. In the following subsection, we apply our main result to two important cases: sh_nGTP and ch_nGTP. Case of sh_nGTP We apply Eq.
(14) to evaluate the modulation depth of sh_nGTP, whose temporal profile is given by Eq. (17), where τ is the initial pulse duration, n is the beam order and δ is the controller parameter of the central dark spot size of the considered pulse. I_0 is the amplitude of the profile and, for simplicity, we take I_0 = 1. To study the modulation depth M_d of the sh_nGTP, we evaluate the terms K, K_+ and K_- defined in Eq. (15) for this profile. To carry out these integrations, we recall the identities of Ref. [30], from which Eq. (18a) follows. To calculate the integrals K_+ and K_-, we consider an auxiliary expression that, with the help of Eqs. (19) and (20), can be restated in a more convenient form; by using this identity together with Eq. (21), R_± is obtained in closed form and the two corresponding integrals follow. Finally, for this temporal profile, the modulation depth can be rearranged as Eq. (32). It shows that this MDFC ratio depends on the frequency detuning Ω of the waves and on two parameters, the beam order n and the controller parameter δ, and it is valid for n ≥ 1. In the particular case n = 1, Eq. (32) reduces to a form that is consistent with the well-known expression given by Malik et al. [29]. Case of ch_nGTP In this case, the temporal profile is defined analogously, where δ is the controller parameter of the central dark spot size of the pulse, n is the beam order and τ is the initial pulse duration. As above, the amplitude I_0 of the profile is taken equal to unity. In this section, we evaluate the integral transforms K, K_+ and K_- given by Eq. (15) for this temporal-profile family. By using the identity of Ref. [30] together with the previous results, Eq. (14) can be written, in the case of ch_nGTP, as Eq. (43). It proves that, for this temporal profile, the MDFC ratio depends on the frequency detuning Ω of the waves and on the two parameters, the beam order n and the controller parameter δ. One notes that Eq. (43) is the closed form of the modulation depth of ch_nGTP, and for the first order n = 1 of this beam family one finds the expression given by Eq. (45), which is consistent with the known expression for a Gaussian profile [25]. Numerical analysis and discussions In this section, we present some numerical simulations to study the dependence of the MDFC ratio on the frequency detuning for sh_nGTP, ch_nGTP and, especially, the Gaussian profile. The present analysis is based on the analytical expressions given by Eqs. (32), (43) and (45). Case of Gaussian temporal profile In this subsection, we illustrate in Fig. 1 the MDFC ratio of the Gaussian profile as a function of the pulse duration and the frequency detuning. From this figure, one observes that the MDFC ratio increases from 0 to 1 as the color changes from blue to yellow (see the rectangular bar). It can also be seen that the value of the MDFC ratio decreases when the pulse duration increases and the frequency detuning becomes larger for the Gaussian profile. Consequently, an appropriate value of the MDFC ratio is obtained for a higher pulse duration and a small frequency difference. Case of sh_nGTP We investigate here the MDFC ratio of this temporal profile, given by Eq. (32). We illustrate in Fig. 2 the temporal profile of sh_nGTP expressed by Eq. (17) for two values of the beam order n and for several values of the controller parameter δ. To show the effects of the beam order n and the skew parameter δ, we illustrate in Fig. 5 the MDFC ratio as a function of the frequency detuning for the sh_nGTP (dashed curve) and the Gaussian profile (solid curve) for four values of n and two values of δ.
It is shown from these plots that, for a particular frequency detuning, the MDFC ratio of sh_nGTP is higher than the MDFC ratio of the Gaussian profile (red hatched area). For example, the MDFC ratio has a value of 0.45 in the case of sh_nGTP when n = 1 (Fig. 5(a)), which is much greater than the one obtained with the Gaussian profile (i.e., MDFC ratio = 0.29); this is more pronounced when the controller parameter is smaller (i.e., δ = 0.1). We can also observe side lobes that appear when the beam order increases. It is clear that the total number of secondary lobes is proportional to the order of the considered profile. For example, for n = 1 we find only one secondary lobe, but when n = 3 we note three lobes, and when n = 4 another lobe starts to rise (four secondary lobes). We note that when the controller parameter and the beam order are both equal to unity, the MDFC ratio is identical to the one investigated by Malik and Escarquel [28]. In Fig. 6, the MDFC ratio is depicted as a function of the pulse duration and the frequency detuning. It is seen from these plots that the MDFC ratio increases from 0 to 1 as the color changes from blue to yellow (see the rectangular bar). We note that, for a particular pulse duration, the MDFC ratio progressively decreases as the frequency detuning increases; a thin strip is then formed in which the MDFC ratio stays at zero. As Ω increases further, the MDFC ratio starts to rise again and then progressively decreases to 0, which gives rise to two null regions. We observe that, by increasing the beam order n, new thin strips appear. Hence, the MDFC ratio of sh_nGTP presents discontinuities compared to that of the Gaussian profile, due to the formation of null regions (MDFC ratio = 0). The discontinuities increase with the beam order, and a better quality of the hologram can be obtained for this beam profile. We deduce that the sh_nGTP is more suitable for obtaining holograms of good quality. Case of ch_nGTP In this section, we treat the case of ch_nGTP. We first illustrate in Fig. 7 the graphical representations of the temporal profile for this beam family for three values of the controller parameter δ and for two values of the beam order n. We recover the Gaussian profile when δ = 0, and as the controller parameter δ increases and reaches a value of 1.2, we observe the appearance of two small lobes, which grow with further increase of δ. We present in Fig. 8 the variation of the modulation depth with the frequency detuning Ω for three values of the skew parameter δ of ch_nGTP with a duration of 50 femtoseconds. We observe the appearance of new side lobes as the beam order n increases, and the width of the lobes widens as the skew parameter δ decreases. Based on Eq. (43), we present in Fig. 10 the MDFC ratio as a function of the frequency detuning Ω for different skew parameters δ, for a fixed pulse duration of τ = 50 fs and for different values of n (1, 2, 3, and 4). One can easily see from these plots that the ch_nGTP yields the Gaussian profile when δ = 0, and by varying the parameter δ we see the appearance of side lobes. For example, when n = 2 (Fig. 10(b)), for δ = 1 we obtain two lobes, and when δ = 1.3 another lobe is added. The plots in Fig. 11 depict the MDFC ratio as a function of the pulse duration and the frequency detuning for ch_nGTP for two values of δ.
It is observed from this figure that the MDFC ratio decreases from 1 to 0 as the color becomes darker (see the rectangular bar). Contrary to the Gaussian profile, the ch_nGTP is characterized by an MDFC ratio that progressively decreases from 1 (at Ω = 0) to 0 as the frequency detuning is increased (see Fig. 11); a thin strip is then formed in which the MDFC ratio remains null. Finally, as Ω increases further, the MDFC ratio begins to rise and then gradually decreases to zero, which results in the appearance of two null regions favoring the formation of the holograms. Conclusion In this paper, we have investigated the process of interference and holography with two laser beams of different amplitudes and different frequencies. We have evaluated in detail the analytical expression of the modulation depth in terms of the fringe contrast, the frequency detuning and the controller parameter of the higher-order sh- and ch-Gaussian temporal profiles. Based on our main results for the evaluated modulation depth of these profiles, we conclude that the hollow laser beam with a dark spot, contrary to the Gaussian profile, opens a new region where good-quality holograms can be produced at higher frequency detuning, and we showed that the total number of side lobes is proportional to the order of the considered profile. For the second profile family, the ch_nGTP is favorable for forming a good-quality hologram; in this latter case, we showed that null regions facilitating the formation of holograms appear, which makes this profile well suited to obtaining holograms of good quality. Moreover, our work helps to identify the exact combination of pulse duration and frequency detuning for which the modulation-depth-to-fringe-contrast ratio is largest, yielding the best-quality holograms possible. The present work thus seeks to show plainly that the sh_nGTP are better than Gaussian lasers for holography, particularly for higher frequency differences.
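To make the dependence of the MDFC ratio on the frequency detuning and the pulse duration concrete, the following minimal Python sketch numerically evaluates the normalized interference cross-term |∫f²(t)cos(Ωt)dt| / ∫f²(t)dt, which plays the role of the MDFC ratio at unit fringe contrast. The assumed functional forms of the sinh- and cosh-Gaussian temporal profiles, the parameter values, and the detuning range are illustrative assumptions and may differ from the exact normalizations of Eqs. (17), (32) and (43); the Gaussian closed form exp(−Ω²τ²/8) is included only as a consistency check.

```python
import numpy as np

def mdfc_ratio(f, omega, t):
    """Proxy for the MDFC ratio at unit fringe contrast:
    |integral of f(t)^2 cos(omega t) dt| / integral of f(t)^2 dt."""
    w = f(t) ** 2
    return abs(np.trapz(w * np.cos(omega * t), t)) / np.trapz(w, t)

def gaussian(tau):
    return lambda t: np.exp(-(t / tau) ** 2)

def sh_n(tau, n, delta):
    # Assumed sinh-Gaussian temporal profile; the exact form of Eq. (17) may differ.
    return lambda t: np.sinh(delta * t / tau) ** n * np.exp(-(t / tau) ** 2)

def ch_n(tau, n, delta):
    # Assumed cosh-Gaussian temporal profile; the paper's exact ch_n form may differ.
    return lambda t: np.cosh(delta * t / tau) ** n * np.exp(-(t / tau) ** 2)

tau = 50e-15                                   # 50 fs pulse duration
t = np.linspace(-10 * tau, 10 * tau, 20001)    # integration grid
for omega in np.linspace(0.0, 1.2e14, 7):      # frequency detuning (rad/s), illustrative range
    g = mdfc_ratio(gaussian(tau), omega, t)
    s = mdfc_ratio(sh_n(tau, 1, 0.5), omega, t)
    c = mdfc_ratio(ch_n(tau, 1, 1.2), omega, t)
    g_ref = np.exp(-(omega * tau) ** 2 / 8.0)  # Gaussian closed form as a consistency check
    print(f"Omega*tau={omega * tau:4.1f}  Gauss={g:.3f} (ref {g_ref:.3f})  sh1={s:.3f}  ch1={c:.3f}")
```

Setting delta to 0 in ch_n reproduces the Gaussian curve, consistent with the observation above that the ch_nGTP yields the Gaussian profile when δ = 0.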
3,924.2
2021-03-29T00:00:00.000
[ "Physics" ]
Disordered RNA chaperones can enhance nucleic acid folding via local charge screening RNA chaperones are proteins that aid in the folding of nucleic acids, but remarkably, many of these proteins are intrinsically disordered. How can these proteins function without a well-defined three-dimensional structure? Here, we address this question by studying the hepatitis C virus core protein, a chaperone that promotes viral genome dimerization. Using single-molecule fluorescence spectroscopy, we find that this positively charged disordered protein facilitates the formation of compact nucleic acid conformations by acting as a flexible macromolecular counterion that locally screens repulsive electrostatic interactions with an efficiency equivalent to molar salt concentrations. The resulting compaction can bias unfolded nucleic acids towards folding, resulting in faster folding kinetics. This potentially widespread mechanism is supported by molecular simulations that rationalize the experimental findings by describing the chaperone as an unstructured polyelectrolyte. Remarks to the Author: In this paper, Holmstrom et al. probe the nucleocapsid domain (NCD) of the hepatitis C virus core protein for its ability to fold the nucleic acids. They use FRET experiments to first establish that the NCD indeed assists a DNA and an RNA hairpin to fold. Based on the decrease in the binding affinity between the hairpin and NCD in "high" salt, they establish that the interactions are driven by charge-charge contacts. Furthermore, they use site-specific labelling to probe the structure of the NCD-nucleic acid complexes, and show that, overall, both the NCD and hairpin become "more" compact when they interact. And finally, they demonstrate that the effect of nanomolar amounts of NCD on nucleic acid folding is comparable to that of an extremely high (1 M) NaCl concentration. As more interest is growing in IDPs and their interactions with RNA, this very thorough and timely paper is welcomed. Following are some concerns that should be addressed prior to acceptance. General Comments: One suggestion is to provide more background on what this paper is going to be about wrt kinetics and single-molecule studies. This is totally unclear from the Abstract and only partially clear in the current Introduction, where the authors simply mention this in passing. A better setup is needed. It isn't clear that the effects of NCD are chaperoning at all. For instance, Figure 2F doesn't show chaperone behavior per se. Specifically, the equilibrium between F and U for the DNA (or RNA) isn't changed in the absence of the chaperone, like for chaperoning of a protein where the chaperone releases a protein and it stays folded. I don't feel this point, which is important, diminishes the work in any way since what is described is a real thing, but clarity is needed. To me, this is like the use of the word enzyme for ribozyme. Most ribozymes are not enzymes in that they are changed in the reaction and are single turnover; that doesn't make ribozymes less important. But, semantics are important, especially since most of the community uses chaperone as I say above. Clarity is needed if the impact of this paper is to be the highest possible. Major Comments: 1. My major concern is that many of the IDRs tend to interact with RNA and not with DNA. Yet the majority of the experiments in the study have been done with a DNA hairpin.
Even in the experiments shown in figure S3, there is a four-fold difference in the Kd between the DNA and the RNA hairpin, suggesting some "structural" specificity. As such, could the authors provide some guidance on how their findings should be interpreted in the context of what happens in vivo, where these IDR-containing proteins are interacting primarily with RNAs? 2. Which brings me to my next concern. As demonstrated in figure 1c, about a 20-fold decrease in binding affinity is observed with the addition of 80 mM NaCl. Suggesting that these interactions may be quite sensitive to in vivo salt concentrations, which can contain upwards of 140 mM K+ and both free and weakly chelated Mg2+, which can further destabilize these interactions. Which again raises concerns as to whether these interactions really happen in vivo. 3. It appears that 20 out of 25 "positive" charges in the NCD come from arginine residues. Which are known to form bidentate interactions with guanosine. Interestingly, the folding-incompetent mutant that the authors chose has guanosines at both the 5' and 3' ends. As such, the binding observed in the "folding incompetent" mutant may be attributed to specific Arg-Guanosine interactions (Luscombe et al., Nucleic Acids Res. 2001 Jul 1;29(13):2860-2874). The generality of the results would have been more solid if they had shown similar effects with different RNA/DNA sequences. Minor comments 1. Figure 1. a. It is confusing that "DNA Hairpin" appears under the cyan protein. b. The charge on the DNA is not -60. Manning theory says that there is only about -0.1 charges per phosphate in the presence of salt. c. Panel C. The values on the x-axis appear to be left-shifted for 1.0, 10., and 100. 2. Page 2 line 12, the use of "catalyze" could be interpreted as "chemical reaction". 3. Information in Supplementary figure S2 could be better presented in the main text by breaking down the panels. 4. p3. Second to last paragraph. "…yielded consistent rate constants (Figures 2c and 2d)" is confusing. I took it as consistent between the panels, which of course is not true. The authors should rewrite to remove this confusing possible interpretation of the current wording. 5. Could the authors explain the lack of a "more compact" unfolded state (around E=0.5) of the hairpin, which is seen in figure 1b (cyan) but is absent in figure 4a with NaCl. Response to reviewers' comments on Manuscript NCOMMS-19-03195 "Disordered RNA chaperones can enhance nucleic acid folding via local charge screening" We thank the reviewers for their thoughtful comments and constructive criticism, which have helped us further improve the manuscript. A detailed response is given below. Reviewer #1: Holmstrom et al. present an interesting analysis of how intrinsically disordered proteins can act as nucleic acid folding chaperones. This question is particularly interesting given that IDPs or IDRs are widely present in nucleic acid binding and processing proteins. Furthermore, the HCV system is relevant for biology and disease. The authors carefully carry out a range of single-molecule fluorescence experiments and find that the highly charged NCD domain of the HCV protein increases the efficiency of folding of nucleic acid stem-loop structures, as well as generic compaction. Furthermore, the NCD itself is compacted yet disordered in the complex, reminiscent of IDP-protein complexes studied by the same lab and others in the field.
The authors also carry out coarse-grained MD simulations that recapitulate key results and support the electrostatic model. The work is very well done, using a range of experiments and labeled constructs, and new and interesting information about an important biological problem is revealed from the data and simulations. Thus, the work will be of high interest for the readers of this journal. We would like to thank the reviewer for the positive and encouraging feedback, which has helped us to improve the overall quality and clarity of our manuscript. I only have a couple of minor comments that the authors can address to improve the manuscript. As the authors mention early on, RNA folding has been shown to often result in trapped/misfolded intermediates. I would think that the stabilization of compact states shown here would often increase the barrier from these trapped states to native structures. Is this correct? If so, can the authors comment on how the systems might get around this problem? The reviewer is right in that the mechanism identified in our manuscript only enhances formation of compact conformations and has no way of preferentially promoting native over non-native contacts. We have tried to more clearly capture this point in: the abstract (ln 22); the introduction (ln 37), where the term "functional" may have been misleading; and in the "RNA chaperones as macromolecular counter ions" section (ln 219) by referring to the formation of compact structures rather than the formation of native or functional conformations. Regarding barrier heights: our results show that chaperone binding stabilizes compact structures via an increase in the folding rate coefficient (i.e., a decrease in the height of the folding barrier). Thus, as shown in the simplified free energy diagram below on the left, we expect no change in the height of the barrier between folded and misfolded states, in contrast to a scenario where folded or misfolded states are stabilized preferentially (diagram on the right). As far as I can see, the results here are on DNA folding. Authors, please comment more on the expected differences and potentially more complex behavior for RNA folding. We indeed focus on DNA in the manuscript, but we also present a data set of experiments performed on an RNA-based hairpin in the supplemental data with results that are very similar to those from the DNA hairpin (Supplementary Figure S2) and suggest that the mechanism we observe is general (at least for RNA and DNA hairpin formation). Given the purely electrostatic character of the proposed mechanism, no fundamental differences between DNA and RNA are expected. The only difference that may arise in RNA folding could be that RNA (because of the 2' OH) is more likely than DNA to have more competing compact conformations. We have tried to more clearly capture this point in the "NCD promotes nucleic-acid folding via electrostatic interactions" section (ln 93-96), the "RNA chaperones as macromolecular counter ions" section (ln 235), and the "Conclusions" section (ln 253 and 274). Reviewer #2: Holmstrom et al. demonstrate how an intrinsically disordered protein can function as an RNA chaperone. By forming only unspecific electrostatic interactions, the chaperone efficiently screens the RNA charges, smoothing the free-energy landscape and increasing the folding speed and the folded-state population. The mechanism is shown to be equivalent to the addition of 1 M NaCl.
A simple physical model is used to simulate the protein-RNA interaction well capturing the observed experimental observation and further demonstrating that the molecular recognition is unspecific and that the resulting effect on the RNA is that of decreasing its average size.The work is extremely well done and definitely worth publishing. We would like to thank the reviewer for the positive and encouraging feedback. The only minor observation I feel to add is that -GROMACS and PLUMED should be referenced Thanks for the suggestion, both are now referenced in the manuscript (ln 643 and ln 675). -It would be nice to have a reference GitHub or other sharing service with some availability of FRET data and simulation models. In accordance with the journal's policies, we have now included a reference to a "Data Availability" section of the manuscript (ln 700-704) where "Source Data" can be found. Details on the simulation model are now included with the re-submission material with web links in the manuscript (ln 644 and ln 676) for documentation on the software. In this paper, Holmstrom et. al probe the nucleocapsid domain (NCD) of the hepatitis C virus core protein for its ability to fold the nucleic acids. They use FRET experiments to first establish that the NCD indeed assists a DNA and an RNA hairpin to fold. Based on decrease in the binding affinity between the hairpin and NCD in "high" salt, they establish that the interactions are driven by chargecharge contacts. Furthermore, they use site-specific labelling to probe the structure of the NCDnucleic acid complexes, and show that that overall, both the NCD and hairpin become "more" compact when they interact. And finally, they demonstrate that the effect of nanomolar amount of NCD on nucleic acid folding is comparable to that of extremely high (1M) salt concentration NaCl. As more interest is growing in IDPs and their interactions with RNA, this very thorough and timely paper is welcomed. We are delighted and encouraged to see that the reviewers consider our report to be thorough and particularly relevant for the field. Following are some concerns that should be addressed prior to acceptance. General Comments: One suggestion is to provide more background on what this paper is going to be about wrt kinetics and single molecule studies. This is totally unclear from the Abstract and only partially clear in the current Introduction, where the authors simply mention this in passing. A better setup is needed. Thanks for bringing this issue to our attention. We have now adjusted the abstract (ln 20-21 and 25) and introduction (ln 48 and 54) to clarify the methods and approach used in the study. It isn't clear that the effects of NCD are chaperoning at all. For instance, Figure 2F doesn't show chaperone behavior per se. Specifically, the equilibrium between F and U for the DNA (or RNA) isn't changed in the absence of the chaperone, like for chaperoning of a protein where the chaperone releases a protein and it stays folded. I don't feel this point, which is important, diminishes the work in any way since what is described is a real thing, but clarity is needed. To me, this is like the use of the word enzyme for ribozyme. Most ribozymes are not enzymes in that they are changed in the reaction and are single turnover; that doesn't make ribozymes less important. But, semantics are important, especially since most of the community uses chaperone as I say above. Clarity is needed if the impact of this paper is to be the highest possible. 
We agree that correct terminology is important to prevent possible confusion, and we debated this aspect at length during the preparation of the manuscript. In what is generally accepted to be the first presentation of the so-called "RNA chaperone hypothesis" (Herschlag, JBC, 1995), Herschlag states: "'RNA chaperone' refers to proteins that aid in RNA folding and is not meant to refer to chaperones made of RNA." In our manuscript, we use this definition of "RNA chaperone" in a broad sense. We are aware of more restrictive definitions, which essentially follow Herschlag's suggestion in the same paper: "RNA chaperones are defined as proteins that aid in the process of RNA folding by preventing misfolding or by resolving misfolded species. This is in contrast to proteins that help … RNA folding by catalyzing steps along the folding pathway…" As the reviewer astutely noted, the mechanism we describe for NCD does not specifically prevent or resolve misfolding, and therefore NCD does not fall into this more restrictive category of RNA chaperones. Our decision regarding the use of "chaperone" was reinforced by the common reference to NCD as an RNA chaperone in the existing literature (e.g., Cristofari, NAR, 2004; Ivanyi-Nagy, NAR, 2006; Ivanyi-Nagy, NAR, 2008; Zúñiga, Virus Research, 2009; Sharma, NAR, 2010; Sharma, NAR, 2011; Romero-Lopez, Scientific Reports, 2017) and by discussions with Dan Herschlag. Notably, our results include the common "chaperoning assay" for our NCD variant (Supplementary Figure S1). However, following the reviewer's helpful suggestion, we now clarify this issue in the first paragraph (ln 62-66) and at the end of the "Chaperone binding accelerates folding" section (ln 141-144) to alert the reader to possible discrepancies in terminology. Major Comments: My major concern is that many of the IDRs tend to interact with RNA and not with DNA. Yet the majority of the experiments in the study have been done with a DNA hairpin. Even in the experiments shown in figure S3, there is a four-fold difference in the Kd between the DNA and the RNA hairpin, suggesting some "structural" specificity. As such, could the authors provide some guidance on how their findings should be interpreted in the context of what happens in vivo, where these IDR-containing proteins are interacting primarily with RNAs? The detailed differences between DNA and RNA are of course an interesting topic that will be the subject of future investigations. The focus of the present work was to establish a generic picture of charge screening by positively charged IDPs. Correspondingly, we consider the four-fold difference in apparent Kd between DNA and RNA to be rather small compared to the 30-fold difference in Kd between the folded and unfolded nucleic acids. Our simulations indicate that these changes in affinity can simply be attributed to electrostatics and changes in local charge density rather than evoking additional structure-specific interactions. Nevertheless, the charge densities of nucleic acids do clearly depend on their structures, so from that point of view there may well be some "structural" specificity. In the end, we decided to avoid that terminology for fear that it might misguide the reader into thinking about more classical aspects of structural specificity in protein-nucleic acid interactions (like major groove width), which are not required to explain our findings.
We note that the apparent equilibrium dissociation constant is lower for RNA than for DNA, indicating that these types of interactions are actually more likely to occur with RNA than with DNA, which may reflect the greater linear charge density in dsRNA than in dsDNA (Pabit, NAR, 2009). Regardless of any small differences in affinity, NCD is a potent chaperone for both RNA and DNA, as indicated by its ability to promote hairpin formation in both cases. Therefore, in vivo, NCD would still facilitate folding via the mechanism we describe (i.e., the chaperone will bind to the RNA, compact the unfolded RNA, and increase the folding rate constant). We have tried to more clearly capture this point in the "NCD promotes folding via electrostatic interactions" section (ln 93-96) and the "RNA chaperones as macromolecular counter ions" section (ln 235). Finally, we would like to point out that there are also important interactions between charged IDPs and DNA, for instance in the context of the nucleosome and various transcription factors, many of which contain large positively charged IDRs. Charge screening effects similar to the ones we describe here are likely to contribute to the mechanisms of those proteins, as mentioned in the concluding discussion (ln 261-263). 2. Which brings me to my next concern. As demonstrated in figure 1c, about a 20-fold decrease in binding affinity is observed with the addition of 80 mM NaCl. Suggesting that these interactions may be quite sensitive to in vivo salt concentrations, which can contain upwards of 140 mM K+ and both free and weakly chelated Mg2+, which can further destabilize these interactions. Which again raises concerns as to whether these interactions really happen in vivo. The relevance of the salt concentrations we use in our experiments for the cellular situation is an important point that we may not have stressed sufficiently in the manuscript. For the total ion concentrations, one must also consider ions associated with the buffer. The sodium phosphate buffer we use has a sodium ion concentration of ~75 mM. Together with the NaCl (80 mM), the total monovalent ion concentration (~75 mM + 80 mM ≈ 150 mM) is very near physiological monovalent concentrations. In the revised manuscript, we now state explicitly that we use near-physiological monovalent ion concentrations (ln 74). Additionally, we changed the horizontal axis of Figure 1d to include contributions from the buffer. Regarding Mg2+, we had not systematically explored the effects of divalent cations and focused on the mechanistic aspects of chaperone activity. However, we now include additional data (Supplementary Figure S4a) demonstrating that inclusion of 1 mM MgCl2 does not alter the transfer efficiency histogram, indicating that the structural (e.g., transfer efficiencies) and functional (e.g., fraction of bound molecules) properties of NCD are largely insensitive to concentrations of MgCl2 in the physiological range. We now also mention this point in the manuscript (ln 88-90). 3. It appears that 20 out of 25 "positive" charges in the NCD come from arginine residues. Which are known to form bidentate interactions with guanosine. Interestingly, the folding-incompetent mutant that the authors chose has guanosines at both the 5' and 3' ends.
As such, the binding observed in the "folding incompetent" mutant may be attributed to specific Arg-Guanosine interactions. We agree that this is an interesting direction of research and intend to explore further in future work how the protein and nucleic acid sequences modulate the effect we observe, by including a number of biologically relevant structured nucleic acids (and mutants thereof). However, to keep the manuscript focused, we felt that including a DNA hairpin, a folding-incompetent DNA hairpin, and an RNA hairpin would suffice to demonstrate some degree of generality without diluting the mechanistic findings of the article. In the experimental design, we concluded that the most conservative folding-incompetent sequence was the mutant where each of the seven nucleotides on the 5'-end of the hairpin was mutated to its complement, eliminating all the Watson-Crick base pairs associated with the original folding-competent hairpin. In both nucleic acids (folding-competent and -incompetent), the vast majority of nucleotides are A, and a change in G content from 5% (3/60) to 10% (6/60) is not expected to substantially alter the binding affinity. In fact, we present experimental evidence to support this claim in the manuscript: we observe a 70 nM Kd between NCD and the incompetent hairpin. We cannot compare this value to the apparent Kd of the folding-competent hairpin (18 nM), because that value arises from a system of coupled equilibria (binding equilibrium and folding equilibrium), but we can compare the Kd associated with the binding of NCD and the UNFOLDED hairpin. This value can be determined from the HMM analysis of surface trajectories of the folding-competent hairpin (i.e., 6 s⁻¹ / 8×10⁷ M⁻¹ s⁻¹ = 75 nM, see Fig. 2f). The similarity of these two values indicates that small changes in the G content do not substantially alter NCD's affinity for DNA. In the revised manuscript, we mention the similarity of NCD affinities to the two sequences more explicitly (ln 137) and include the investigation of alternative sequences as an interesting topic of future research (ln 238). The effective charge of polyelectrolytes in solution is indeed reduced due to counterion condensation and charge regulation and can be estimated using Manning theory and related approaches. In fact, we have recently determined the effective charge of ssDNA using single-molecule electrometry (Ruggeri et al., Nat. Nanotech. 2017) and found the effective charge to be approximately half of the structural charge (i.e., the nominal charge based on the sequence), in agreement with Poisson-Boltzmann-type calculations. However, owing to the pronounced surface interactions of positively charged proteins with the nanostructured devices used in single-molecule electrometry, comparable measurements for NCD have not been possible so far. For consistency, we therefore decided to quote only the structural charge. To make this point more clearly, we now use the nomenclature of Ruggeri et al. and refer to "-60" as the "structural charge" (ln 83 and Figure 1a), and we have included a reference to Ruggeri et al. c. Panel C. The values on the x-axis appear to be left-shifted for 1.0, 10., and 100. Thank you for noticing this mismatch. The figure has now been adjusted accordingly. 2. Page 2 line 12, the use of "catalyze" could be interpreted as "chemical reaction".
We considered the term "catalyze" appropriate since NCD helps the hairpin fold by lowering the effective barrier between U and F, but to avoid any unintentional misunderstanding, we have replaced "catalyze" with "facilitate" (ln 39). 3. Information in Supplementary figure S2 could be better presented in the main text by breaking down the panels. We now realize that the first reference to Figure S2 in the main text was potentially confusing. We now refer to Figure S2 in the main text when we specifically mention the salt-dependent measurements (ln 222-223), which should make the figure more intuitive and should not require a panel-by-panel breakdown. 4. p3. Second to last paragraph. "…yielded consistent rate constants (Figures 2c and 2d)" is confusing. I took it as consistent between the panels, which of course is not true. The authors should rewrite to remove this confusing possible interpretation of the current wording. We now explicitly provide call-outs to the two figure panels separately and have rephrased the sentence to prevent a possible misunderstanding (ln 119-121). 5. Could the authors explain the lack of a "more compact" unfolded state (around E=0.5) of the hairpin, which is seen in figure 1b (cyan) but is absent in figure 4a with NaCl. The absence of the "more compact" unfolded state at E = 0.5 in Figure 4a is because the figure panel only shows data up to a final concentration of 1 M NaCl, where the transfer efficiency of the "more compact" unfolded state has only increased to a value of E ≈ 0.2. At higher salt concentrations, it becomes difficult to visualize the mean transfer efficiency of the "more compact" unfolded state because this population is so sparsely populated (since NaCl stabilizes the folded population). As shown in Figure 4c, in order to observe a transfer efficiency of the "more compact" unfolded state that is comparable with the chaperone-bound unfolded hairpin, we would need to use a solution with > 4 M NaCl.
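As a small numerical illustration of the dissociation constants discussed in the response above, the following sketch recomputes the Kd of NCD for the unfolded hairpin from the dissociation and association rate constants quoted from the HMM analysis (6 s⁻¹ and 8×10⁷ M⁻¹ s⁻¹) and estimates the fraction of hairpin bound at an illustrative free NCD concentration; the 100 nM concentration used here is an assumption for illustration only, not a value taken from the manuscript.

```python
# Minimal sketch: Kd from kinetic rate constants and the resulting bound fraction.
k_off = 6.0      # dissociation rate constant, s^-1 (from the HMM analysis quoted above)
k_on = 8.0e7     # association rate constant, M^-1 s^-1 (from the HMM analysis quoted above)

K_d = k_off / k_on                    # equilibrium dissociation constant, M
print(f"Kd = {K_d * 1e9:.0f} nM")     # -> 75 nM, as quoted in the response

# Fraction of unfolded hairpin bound by NCD at an assumed free NCD concentration.
ncd_free = 100e-9                     # 100 nM, illustrative value only
fraction_bound = ncd_free / (ncd_free + K_d)
print(f"fraction bound at 100 nM NCD = {fraction_bound:.2f}")   # about 0.57
```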
5,836.6
2019-06-05T00:00:00.000
[ "Biology" ]
Hydrogen Isotope Separation Using a Metal–Organic Cage Built from Macrocycles Abstract Porous materials that contain ultrafine pore apertures can separate hydrogen isotopes via kinetic quantum sieving (KQS). However, it is challenging to design materials with suitably narrow pores for KQS that also show good adsorption capacities and operate at practical temperatures. Here, we investigate a metal–organic cage (MOC) assembled from organic macrocycles and ZnII ions that exhibits narrow windows (<3.0 Å). Two polymorphs, referred to as 2α and 2β, were observed. Both polymorphs exhibit D2/H2 selectivity in the temperature range 30–100 K. At higher temperature (77 K), the D2 adsorption capacity of 2β increases to about 2.7 times that of 2α, along with a reasonable D2/H2 selectivity. Gas sorption analysis and thermal desorption spectroscopy suggest a gate‐opening effect of the MOCs pore aperture. This promotes KQS at temperatures above liquid nitrogen temperature, indicating that MOCs hold promise for hydrogen isotope separation in real industrial environments. Gas sorption: Nitrogen adsorption and desorption isotherms for all samples were collected at 77 K using an ASAP2020 volumetric adsorption analyzer (Micrometrics Instrument Corporation). Carbon dioxide and hydrogen isotherms were collected up to a pressure of 1200 mbar on a Micromeritics ASAP2020 at 77 K for hydrogen, or at 273 and 298 K for carbon dioxide. All samples were degassed at 80 °C for 15 hours under a dynamic vacuum (10−5 bar) before analysis. A fully automated Sieverts apparatus (Autosorb-iQ2, Quantachrome Instruments) was used to perform the Ar isotherm at 87.3 K and the hydrogen cryogenic adsorption experiments. The calibration cell was an empty analysis carried out at the same temperature and pressure range of each experiment; corrections relating the sample volume and the nonlinearity of the adsorbate were made. Around 50 mg of each sample was activated at 80 °C for 2α and at 180 °C for 2β under vacuum for 12 h in order to remove any solvent molecules. A coupled cryocooler based on the Gifford−McMahon cycle was used to control the sample temperature. The cooling system permitted us to measure temperatures from 20 to 300 K with a temperature stability of <0.05 K. Thermal desorption spectroscopy (TDS) TDS experiments were carried out on an in-house designed device with about 2 mg of each sample. The sample holder is screwed tightly to a Cu block, which is surrounded by a heating spiral in the high vacuum chamber. The Cu block is connected to a flowing helium cryostat, allowing cooling below 20 K. All the samples were first loaded in the sample holder and activated under vacuum for 2 h at the temperature of 353 K for 2α and at 453 K for 2β. Then, the sample was exposed to a 10/200 mbar equimolar D2/H2 isotope mixture at different exposure temperatures (30, 50, 77, and 100 K) for 10 min. The remaining gas molecules were removed at the corresponding exposure temperatures until high vacuum was reached again. Afterwards, the sample was rapidly cooled down below 20 K. Then a linear heating ramp (0.1 K/s) was applied, the desorbing gas was continuously detected using a mass spectrometer (QMS), recognizing a pressure increase in the sample chamber when gas desorbs. The area under the desorption peak was proportional to the desorbing amount of gas, which can be quantified after careful calibration of the TDS apparatus. A solid piece of a diluted Pd alloy Pd95Ce5 (~0.5 g) was used to calibrate the mass spectrometer signal. 
Before the calibration, the oxide layer of the alloy was removed by etching with aqua regia. Then the alloy was heated up to 600 K under a high vacuum to remove any hydrogen that might have been absorbed during the etching procedure. Afterwards, it was exposed to 40 mbar of pure H2 or pure D2 for 1.5-2.5 h at 350 K. As H and D were bound preferentially to the cerium atoms at low exposure pressures, the alloy could be handled under ambient conditions for a short time. The alloy was weighed after being cooled down to room temperature. The mass difference between the unloaded and loaded states was equal to the mass uptake of hydrogen or deuterium, respectively. After weighing, the alloy was loaded in the chamber again, and then a 0.1 K/s heating ramp (room temperature (RT) to 600 K) was applied for a subsequent desorption spectrum. The obtained mass of gas directly corresponds to the area under the desorption peak. Computational details An isolated molecule, extracted from the experimental crystal structure of 2α, was optimized with the GFN2-xTB method and the D4 dispersion model [2] in the gas phase using default convergence criteria. The optimized geometry was confirmed as a true minimum by a numerical harmonic frequency calculation showing no imaginary frequencies. [3] Based on this geometry, an MD simulation was performed for 200 ps (100 ps for equilibration and 100 ps for production) with a timestep of 2 fs, along with SHAKE restraints on all bonds, [4] in the NVT ensemble using the Berendsen thermostat [5] to maintain a temperature of 298 K. Structures were dumped every 1 ps. The results would be very similar using an isolated molecule from 2β. Pywindow [6] was used to calculate the pore diameter and window diameter for the molecular dynamics trajectory comprising 100 geometries obtained from the xTB calculations (a simplified, generic sketch of this kind of geometric pore-size estimate is given after the water-stability subsection below). The pore size distribution (PSD) histogram was calculated with Zeo++ [7]; it is advised to use a probe radius similar to atomic radii, for which the stated 0.1 Å accuracy in peak positions can be expected. Therefore, we chose probe radii below half of the largest free sphere (Df) of the crystals' pore structure. For MeOH@2, 2α and 2β, Df is 3.4, 2.0 and 2.6 Å, respectively. The probe radius chosen for MeOH@2 and 2β was 1.2 Å, and 0.97 Å for 2α. Water stability measurements To test the hydrolytic stability of 2α and 2β, 5 mg of 2α or 2β was added to a 5 mL vial containing 4 mL of deionized water. The 2α and 2β samples were suspended in water without stirring at RT for 1, 2, and 5 days. Then, each 2α and 2β sample was removed by filtration and dried in air. PXRD patterns were recorded using the air-dried samples (Figure S11), and NMR spectra were recorded after fully dissolving the air-dried samples in CDCl3 (Figure S4 for 2α and Figure S5 for 2β). As shown in Figures S4 and S5, the 1H NMR spectra recorded after fully dissolving the air-dried 2α and 2β crystals in CDCl3 after immersion in water for 1, 2, or 5 days are comparable, demonstrating that 2 is chemically stable in 2α and 2β in water at room temperature for at least 5 days. The PXRD patterns that were recorded for 2α (Figure S11a) and 2β (Figure S11b) after these samples were immersed in water for up to 5 days showed that some of the peaks shifted, and there were other differences in peak position. These differences indicate that 2α and 2β swell and change structure slightly after being immersed in water. However, 2α and 2β remain crystalline, and their structures do not appear to collapse.
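To illustrate the kind of geometric pore-size estimate performed in the computational details above (the published analysis used pywindow and Zeo++; the snippet below is a simplified, assumption-laden stand-in, not the actual workflow), one can approximate the intrinsic pore diameter of a single cage geometry as twice the distance from the cage centroid to the surface of the nearest atom, using tabulated van der Waals radii. The file name, the element radii, and the centroid-based definition are illustrative assumptions.

```python
import numpy as np

# Illustrative van der Waals radii (Å); values and element set are assumptions.
VDW = {"H": 1.20, "C": 1.70, "N": 1.55, "O": 1.52, "Zn": 1.39}

def read_xyz(path):
    """Read a simple .xyz file -> (element symbols, Nx3 coordinate array in Å)."""
    with open(path) as fh:
        lines = fh.read().splitlines()
    natoms = int(lines[0].split()[0])
    elems, coords = [], []
    for line in lines[2:2 + natoms]:
        sym, x, y, z = line.split()[:4]
        elems.append(sym)
        coords.append([float(x), float(y), float(z)])
    return elems, np.array(coords)

def pore_diameter(elems, coords):
    """Approximate intrinsic pore diameter: twice the distance from the geometric
    centroid to the nearest atom surface (a crude, centroid-based definition)."""
    centroid = coords.mean(axis=0)
    dist_to_surface = np.linalg.norm(coords - centroid, axis=1) - np.array(
        [VDW.get(e, 1.70) for e in elems]
    )
    return 2.0 * dist_to_surface.min()

# Usage with a hypothetical snapshot file; in practice this would be averaged
# over the MD trajectory frames mentioned above.
# elems, xyz = read_xyz("cage_snapshot_001.xyz")
# print(f"approximate pore diameter: {pore_diameter(elems, xyz):.2f} Å")
```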
MeOH vapour sorption experiments The large yellow block-shaped crystals of MeOH@2 formed during synthesis immediately lose solvent after being removed from MeOH and break up into smaller yellow crystalline powders in the air. The PXRD pattern of the air-dried MeOH@2 sample is not identical to the simulated PXRD pattern of MeOH@2; however, it closely matches the simulated PXRD pattern of the 2α structure (Figure S12). To further investigate the dynamic structural behaviour of 2α and 2β, we performed vapour sorption experiments using pure MeOH. For each MeOH vapour exposure test, an open 5 mL vial containing 15 mg of 2α or 2β was placed in a sealed 20 mL vial containing 2 mL of MeOH at RT. PXRD patterns of the MeOH-loaded samples were then collected for up to 5 days immediately after air-drying the samples on the PXRD plate. The PXRD pattern of 2α after being exposed to MeOH vapour for 2 days is similar to the PXRD of air-dried MeOH@2 (Figure S12a). By contrast, the PXRD patterns of 2β after being exposed to MeOH vapour appear to gradually transform into the PXRD of air-dried MeOH@2 over the 5 days before the crystals begin to redissolve in MeOH in the vial (Figure S12b). The small weight loss at 37.2 °C may be due to unbound water, which is consistent with the NMR result that the methanol has been removed entirely in 2α. Crystallography Report (Figure S2). A temperature of ≥80 °C can remove all of the methanol in 2. Figure S7. DSC curves of 2α and 2β heated at a ramp rate of 10 °C/min under an N2 atmosphere. Figure S10. PXRD patterns of 2 before and after testing gas isotherms: (a) PXRD patterns of 2α collected before and after testing H2 and D2 isotherms at 30 to 100 K. (b) PXRD patterns of 2β collected before and after testing H2 and D2 isotherms at 30 to 100 K. Figure S11. PXRD patterns that were collected after suspending (a) 2α in water and (b) 2β in water for 1, 2, and 5 days. The samples were collected by filtration and air-dried before the PXRD patterns were recorded. Figure S12. PXRD patterns of (a) 2α and (b) 2β after being exposed to MeOH vapour at RT for up to 5 d. The PXRDs of MeOH@2 after air-drying the sample at RT on the PXRD plate (red) and the simulated PXRD of MeOH@2 (black) are included in both plots. Figure S15. N2 adsorption-desorption cycles of 2α recorded at 77 K. Solid symbols: adsorption; hollow symbols: desorption. The isotherms were cycled to investigate the reproducibility of the pressure-induced gating effect observed for 2α during N2 adsorption at 77 K. The N2 adsorption-desorption isotherm was measured three times in a row using the same 2α sample. The three N2 adsorption isotherms all exhibited pressure-induced gating effects over the P/P0 range from 0.01 to 0.12, and have similar shapes and uptakes outside that range. In addition, the calculated BET surface areas for cycles 1, 2, and 3 were comparable (401, 418, and 398 m²/g, respectively) over the P/P0 range (0.06 to 0.32). Figure S17. Isosteric heat of adsorption of H2 and D2 for 2α as a function of the adsorption amount. The isosteric heat of adsorption in 2α was calculated to be 3-7 kJ/mol for H2 and 4-8 kJ/mol for D2 using the Clausius-Clapeyron equation. The higher heat of adsorption is the reason for the higher D2 uptake. Interestingly, the enthalpy gradually rises with a higher adsorption amount, meaning that the host can adsorb gas molecules more easily at higher loadings. Meanwhile, the heat of adsorption in 2β was calculated to be 0.5-3.5 kJ/mol for both isotopes.
The abnormal slope for 2α is attributed to the strong diffusion barrier arising from the small pore aperture, which opens up at higher temperatures due to thermally activated flexibility. Figure S18. Isosteric heat of adsorption of H2 and D2 for 2β as a function of the adsorption amount. Concerning the self-diffusion coefficient, for materials like MOCs the diffusion limitation typically governs the adsorption/desorption process. At low temperatures, gas molecules can only be adsorbed weakly on the outer surface; the gas can penetrate into the cavity at higher exposure temperatures. It is, therefore, difficult to calculate the self-diffusion coefficient for hydrogen isotopes. However, the maximum temperature of the TDS peaks correlates with diffusion through the apertures; that is, lower and higher maximum temperatures correspond to faster and slower diffusion, respectively. Thus, the higher maximum temperature of H2 (106 K in 2α and 92 K in 2β) compared to D2 (102 K in 2α and 86 K in 2β) indicates slower H2 and faster D2 diffusion in the MOC. Figure S19. H2 (black) and D2 (red) single-gas thermal desorption spectra at 10 mbar and 200 mbar for (a) 2α and (b) 2β. A laboratory-designed cryogenic thermal desorption spectroscopy (TDS) setup was utilized for determining the preferred H2 and D2 adsorption sites in 2α and 2β. TDS measurements were carried out by applying pure H2 and D2 atmospheres (10/200 mbar), respectively, under identical experimental conditions. The gas exposure was carried out at room temperature, followed by cooling down to 20 K. The resulting TDS spectra were obtained between 20 and 170 K. Figure S20. Molecule overlay of 2α (blue and cyan) and 2β (red), as generated using the Molecules Overlay tool in Mercury. H atoms are omitted for clarity. [27] *D2/H2 mixture selectivity calculated by IAST
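As a small illustration of the Clausius-Clapeyron estimate mentioned for Figures S17 and S18, the isosteric heat at a fixed loading can be obtained from the pressures required to reach that loading at two temperatures. The pressure and temperature values below are placeholders chosen only for illustration, not the measured isotherm data.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def isosteric_heat(p1, t1, p2, t2):
    """Isosteric heat of adsorption (J/mol) at constant loading from two
    (pressure, temperature) points via the Clausius-Clapeyron relation:
    Qst = R * ln(p2/p1) / (1/t1 - 1/t2)."""
    return R * math.log(p2 / p1) / (1.0 / t1 - 1.0 / t2)

# Placeholder example: pressures (mbar) needed to reach the same H2 loading
# at 77 K and 87 K (illustrative values only).
q = isosteric_heat(p1=50.0, t1=77.0, p2=120.0, t2=87.0)
print(f"Qst = {q / 1000:.1f} kJ/mol")   # about 4.9 kJ/mol for these placeholder inputs
```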
2,934.4
2022-06-10T00:00:00.000
[ "Materials Science" ]
Fact or fiction: updates on how protein-coding genes might emerge de novo from previously non-coding DNA Over the last few years, there has been an increasing amount of evidence for the de novo emergence of protein-coding genes, i.e. out of non-coding DNA. Here, we review the current literature and summarize the state of the field. We focus specifically on open questions and challenges in the study of de novo protein-coding genes such as the identification and verification of de novo-emerged genes. The greatest obstacle to date is the lack of high-quality genomic data with very short divergence times which could help precisely pin down the location of origin of a de novo gene. We conclude that, while there is plenty of evidence from a genetics perspective, there is a lack of functional studies of bona fide de novo genes and almost no knowledge about protein structures and how they come about during the emergence of de novo protein-coding genes. We suggest that future studies should concentrate on the functional and structural characterization of de novo protein-coding genes as well as the detailed study of the emergence of functional de novo protein-coding genes. Introduction The question of how new genes come about has been a major research theme in evolutionary biology since the discovery that different species' genomes contain varying numbers of genes. This question is difficult to answer, since emerging genes cannot easily be "caught in the act". Ohno 1 gave the first comprehensive answer: new genes can emerge via the duplication of old genes. Consequently, gene duplication was thought to be the only mechanism of gene birth for many years 2 . However, the discovery of so-called orphan genes in newly sequenced genomes raised doubt about the general validity of Ohno's model of gene duplication. Orphan genes are genes that lack detectable homologs outside of a species or lineage. To explain the presence of orphans under the assumption that new genes emerge only via duplication, one has to assume gene loss in all other lineages or a phase of highly accelerated evolution that leads to the loss of detectable sequence similarity 3 . Yet convergent gene loss in many independent lineages is unlikely -especially given the high number of orphan genes -and it is difficult to explain why so many genes would experience prolonged phases of accelerated evolution 4 . On the contrary, it would be expected that genes that do not experience any selective pressure -which is required here for accelerated evolution -would be pseudogenized eventually, i.e. not be transcribed anymore. These inconsistencies and further observations suggested that there could be other mechanisms of gene emergence 5,6 , for example de novo gene emergence, a process in which a new gene evolves from a previously non-genic sequence. The product of this process can be an RNA gene or a protein-coding gene. The possibility of de novo gene emergence has long been disputed, with many claiming that it is impossible for an intergenic, random open reading frame (ORF) to encode a functional protein (reviewed in 4,7). But, despite these open questions regarding the exact mechanism of de novo gene birth, many recent studies report de novo emergence of protein-coding genes 5,6,[8][9][10][11][12][13][14][15][16][17][18][19] . In general, genes without detectable homologs can be summarized under the term novel genes. These genes can also be called orphan genes, or -more preciselyspecies-/lineage-specific genes. 
The term de novo describes a specific subclass of novel genes, namely genes emerging from non-genic sequences 20 . Additionally, one has to discriminate between functional genes and other classes of sequences. A de novo transcript can be any species-specific transcript that is homologous to an intergenic sequence in outgroups. De novo transcripts can be seen as putative de novo genes (see also Figure 1). The term protogene also describes intergenic transcripts or ORFs that are situated on a continuum between non-genic sequences and functional genes 21 (see also Figure 1). At the genic end of the spectrum, the term de novo gene describes a functional gene that has emerged de novo. De novo genes can either code for a protein or be functional as RNAs 22 . Here, we will use the term de novo gene to describe de novo genes of unknown coding status and de novo protein-coding gene to describe de novo-emerged genes that likely produce a functional protein product. Shown is the hypothesis of a step-wise genic and structural maturation of an intergenic sequence towards a protein-coding gene. The steps are each shown as pictograms of protein and gene structure. An exemplary phylogenetic tree is shown to the right. The status of the protein/gene is projected onto the tree using grey, dotted lines. Gene emergence is depicted using a green star, gene loss using a grey X symbol. ORF, open reading frame. Identification of de novo genes The first step necessary to determine de novo status of a gene is to verify that no homologous sequences are present in outgroups. This homology search is often performed using BLAST or similar alignment search tools, for example against non-redundant protein databases containing all known protein sequences. Usually, an e-value cutoff between 10 −3 to 10 −5 is used for this step to ensure that no spurious, suboptimal alignments are taken into account 4 . If this homology search does not find any homologs outside of the analyzed species, the query gene has successfully been confirmed to be a novel gene. This definition states only that there are no homologous sequences outside of a certain phylogenetic group. Calling a gene novel does not imply any knowledge about the emergence mechanism of the gene. To additionally determine de novo gene origin, the homologous non-coding outgroup DNA sequence has to be retrieved 14,23 . The outgroup homologous sequence can be recovered using synteny information about the position of orthologous neighbor genes. Another possibility is searching the target gene sequence in outgroup genomes using alignment search tools such as BLAST 4,23 . A number of different types of de novo genes can be discriminated depending on the type of sequence that the genes likely emerged from 23 . Problems in de novo gene identification and annotation. In the past 24 and also more recently 25,26 , studies have raised questions regarding the reliability of homology-based searches of novel genes. Specifically, short and fast-evolving genes were proposed to lose detectable sequence similarity faster than other genes. As a result, shorter genes would be expected to be over-represented among young genes, thereby biasing the results of studies of genes of different ages 24-26 . Doubts have been raised as to which fraction of genes would actually be affected by this effect 27 . Also, this should not be a problem for de novo genes defined by the methods summarized here. 
The possibility that the examined gene is actually a fast-evolving old gene is excluded, since for a confirmed de novo gene the homologous non-genic outgroup sequence has to be determined. Additionally, doubts have been raised regarding the accuracy of the initial claims of the unreliability of homology detection 28 . Another challenge is the previously mentioned identification of a non-coding sequence in an outgroup which is clearly homologous to the suspected de novo gene. In non-coding DNA, homology signals disappear very quickly, since non-coding sequences accumulate mutations faster than coding sequences. Because of this, it is often impossible to determine the homologous non-coding sequence in an outgroup. This problem increases with gene age. As a result, it is often not possible to determine the mechanism of origin, especially for older genes. Additionally, there are methodological difficulties in the annotation of de novo and also all other types of novel genes 4 . These problems could lead to a systematic underestimation of the number of de novo/novel genes. The problems are caused by genome annotation also being based on sequence homology 29 . As de novo/novel proteins per definition do not possess any homologs, they cannot be annotated based on that criterion and their number is likely to be underestimated. Other common criteria such as minimum expression strength and the presence of multiple exons could also contribute to the problem, as these criteria do not represent intrinsic requirements for gene existence and are biased against de novo/ novel genes 18 . Nevertheless, the criteria might be necessary to prevent an over-annotation of spurious transcripts as genes, but they also make it impossible to identify all de novo genes. Recent studies on de novo protein-coding genes also employed such thresholds on exon number and expression strength to produce a more robust data set 15,17,18 . De novo gene emergence Conceptually, de novo genes can evolve via two different mechanisms. The first mechanism is transcription-first, where an intergenic sequence gains transcription before evolving an ORF 20,30 . Recently, this has been shown to happen frequently when long non-coding RNAs (lncRNAs) become protein coding 17,31,32 . Consequently, lncRNAs could represent an intermediate step in the evolution of a protein-coding gene 33 . The second model is ORF-first, in which an intergenic ORF gains transcription 20,30 . Such a transcribed de novo ORF has been proposed to represent an intermediate step in gene emergence, a protogene (Figure 1). High turnover of intergenic transcription 34 likely plays a role in de novo gene emergence by exposing novel transcripts to selection. Transposable elements can also play a role in de novo gene emergence 35 . Additionally to whole proteins, terminal domains can also emerge de novo 33,36 . One model regarding the emergence of novel domains is the "grow slow and molt", in which reading frames get extended gradually and eventually gain a structure and function 37,38 . An additional process that could play a role during de novo protein-coding gene emergence is a (partial) revival of pseudogenized gene fragments. This possibility has already been proposed by Ohno 1 . Regarding de novo protein-coding gene emergence, it seems possible that fragments of a pseudogenized gene that has been somewhat eroded by drift could become part of a de novo ORF later on. 
These fragments could provide a starting point for de novo protein emergence by providing remnants of structural elements. For all of these models, there are several consistent findings, but none of the models is, as yet, supported by a comprehensive set of data from diverse sources and corresponding experimental data. De novo gene death. Orphan genes seem to generally have a high loss probability 14,39 that seems to be negatively correlated with gene age 40,41 . The cause of this correlation is not yet well understood. It seems possible that young orphan genes have not yet gained a function or do not perform transient functions. It is also not clear yet how much of these findings can be transferred to de novo genes, as the studies on this topic examined all novel genes of different emergence mechanisms jointly. De novo gene functions A number of studies have examined the functions of orphan genes, some of which may represent de novo-emerged genes. Findings on orphan gene functions include involvement of orphan genes in the stress response 21,42 , rapid adaptation to changing environments as well as species-specific adaptations 43,44 , and limb regeneration 45 . Additionally, novel genes were found to quickly gain interaction partners and become essential 39,46 . Fewer studies, however, have examined the functions of systematically verified de novo-emerged genes. Generally, a high number of de novo genes was found to be expressed specifically in the testes, at least in Drosophila species 5,6 and primates 18 , as well as in plant pollen 16,47 . In the mouse, a de novo-emerged RNA gene was found to raise reproductive fitness 22 . Another study found de novo genes to play a role in the Arabidopsis stress response 12 . More specifically, one de novo ORF was found to play a role in male reproduction in Drosophila 48 . Reinhardt et al. 48 also presented findings suggesting a role of de novo genes in developmental stages of Drosophila. However, these findings have to be interpreted carefully, as the RNAi method used has been shown to produce unreliable results 49,50 . A few other examples of functional de novo genes have been found 30 , while others were not able to determine specific functions of identified de novo-emerged genes 15 . The available data suggest that de novo-evolved genes can play a role in many different processes from reproduction to the stress response. Recently, one study analyzed the function of two putative de novo protein-coding genes in Drosophila melanogaster 51 . The two analyzed genes were found to be essential for male reproduction and to have testis-biased expression. Both genes are located inside introns of other, older genes with homologs in outgroups. However, the de novo origin of the analyzed genes could not be confirmed with certainty owing to the outgroup homologous sequences not being identifiable (see above for a general description of this problem). Protein structure of de novo proteins Little is known about the protein structures of de novo proteins. Some studies have found a high amount of intrinsic protein disorder 52 in very young genes 15,51,53 , while others have not 21 . A priori, it seems unlikely that de novo-emerging proteins have a well-defined protein structure. Intuitively, it seems more likely for random sequences to be intrinsically disordered instead (see Figure 1). Nevertheless, disordered regions can also be highly functional 52,54 and could as such also represent an evolved state. 
Also, contrary to intuition, at least semi-random (restricted alphabet) proteins appear to sometimes have a defined secondary structure 55,56 . Additionally, the existing protein structure families appear to have multiple origins 57 . This finding suggests that the emergence of new protein structures is at least possible. Avoidance of misfolding and aggregation, on the other hand, have been proposed to be driving forces of protein evolution 58,59 . This observation and the existence of de novo protein-coding genes suggest that de novo proteins have the potential to exhibit a defined structure. Open questions regarding de novo genes Despite many advances in recent years, many open questions remain regarding de novo protein-coding genes. One understudied field is the functional characterization of protein-coding de novoemerged genes. One non-coding RNA gene has been found to have a role in reproduction in the mouse 22 , and additionally one likely protein-coding gene has been found to be essential for reproduction in Drosophila 48 . However, beyond that, there is a substantial lack of data. Consequently, it remains unclear how de novo protein-coding genes gain their function and if there are some roles that they are more or less likely to carry out. As described above, the structural characterization of de novo protein-coding genes is still an open question. Previously, ambiguous signals have been found regarding the role of intrinsic disorder in de novo-emerging protein-coding genes 15,21 . It would be important to experimentally verify the structure -or lack thereof -of de novo protein-coding genes. Here it is of major interest to determine the proportion of intergenic ORFs with folding potential and also what the implications are for the retention of such ORFs. This would allow further conclusions about de novo gene emergence: if most intergenic, random ORFs are foldable, function would seem to be the bottleneck of de novo protein-coding gene retention. On the other hand, if most confirmed de novo genes are folding, but most intergenic ORFs do not possess folding potential, folding potential would be a bottleneck of de novo protein-coding gene emergence and retention. Another unsolved problem is how to find specific annotation thresholds for orphans/de novo genes 4 . As described above, a number of their properties make de novo genes difficult to annotate and to be distinguished from transcriptional noise. One solution would be to generate high-quality proteome data using e.g. mass spectrometry. However, this process is still highly expensive and might also not be able to generate a complete picture, since low-frequency peptides are hard to detect 60 . Another method is ribosome profiling, which uses ribosome occupancy of sequences as a measure of translation. This method has been successfully used to show that some transcripts that were previously classified as noncoding could in fact be translated 61 . Additionally, patterns of selection, e.g. measured in the ratio of non-synonymous to synonymous mutations, can be used to infer the coding status of sequences. Genes with a higher fraction of synonymous mutations compared to non-synonymous mutations can be expected to be protein coding and under purifying selection 17,20 . However, these measures require a number of orthologs to be present, which makes them of limited use for novel genes. 
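To illustrate the selection-based signal just mentioned, the following toy sketch (Python, assuming Biopython is available) classifies codon differences between two aligned, gap-free coding sequences as synonymous or non-synonymous. It is a deliberately crude illustration of the idea behind dN/dS-style tests, not a proper estimator (it ignores per-site counting and multiple substitution paths):

```python
from Bio.Seq import Seq

def crude_syn_nonsyn_counts(cds1, cds2):
    """Count non-synonymous and synonymous codon differences between two
    aligned coding sequences of equal length (a multiple of 3)."""
    assert len(cds1) == len(cds2) and len(cds1) % 3 == 0
    syn = nonsyn = 0
    for i in range(0, len(cds1), 3):
        c1, c2 = cds1[i:i + 3], cds2[i:i + 3]
        if c1 == c2:
            continue
        if str(Seq(c1).translate()) == str(Seq(c2).translate()):
            syn += 1        # same amino acid: synonymous difference
        else:
            nonsyn += 1     # amino acid changes: non-synonymous difference
    return nonsyn, syn

# Example: one synonymous (CTT->CTC, both Leu) and one non-synonymous
# (ATG->AAG, Met->Lys) codon difference.
print(crude_syn_nonsyn_counts("ATGCTTGGA", "AAGCTCGGA"))  # -> (1, 1)
```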
Another possibility is the use of population data for the same purpose, which circumvents the problem of the unavailability of orthologs for novel genes. As it stands, studies mostly have to rely on arbitrary cutoffs 15,17 and thus might miss a number of genes. It would be of major interest to be able to differentiate de novo genes and protogenes from transcriptional noise. Recent research has already shown that small ORFs (smORFs) can play a functional role 62,63 , and consequently it seems quite likely that also very short novel ORFs could be functional. This question also touches upon the problem of differentiating lncRNAs from protein-coding genes, which is often performed via an ORF length cutoff 17,32 . Going forward, it is of major interest to fully characterize a large number of de novo genes in terms of evolutionary, functional, and structural history to be able to draw some general conclusion about their evolution. Specifically, it is of major interest to determine whether a functional role is an exception for protogenes or if most expressed ORFs have a functional impact which mostly does not affect the fitness of the organism at a significant level. If most expressed ORFs have only a negligible fitness effect, they would mostly evolve via drift. Two closely related questions are how and when de novo proteins gain their function: are de novo genes usually functional from the time point of their emergence, or do they gain a cellular task only after a period of drift? Conclusions In recent years, an increasing number of studies confirmed a major role of de novo gene emergence in the evolution of new proteincoding genes. The functional description of de novo-emerged genes is still lacking, but more general findings for orphan genes suggest that novel genes have a broad functional potential. However, the more detailed functional as well as structural characterization of de novo-emerged protein-coding genes remains one of the big open questions. An interesting recent finding was the confirmation of lncRNAs as an intermediate step in de novo protein-coding gene evolution. This finding offers a solution to two of the big questions in de novo gene evolution -how and why do intergenic sequences gain transcription? However, these findings also touch upon a difficult problem in studying de novo genes: how can protein-coding genes be distinguished from non-coding ones? This problem is exacerbated by recent findings that show that very short ORFs can also be functional 63 . Tackling all of these problems and integrating them into detailed studies of the emergence, structure, and function of de novo protein-coding genes will provide new, interesting insights and allow for a deeper understanding of the inner workings of the evolution of de novo protein-coding genes. Author contributions All authors prepared, revised, and edited the manuscript. Competing interests The authors declare that they have no competing interests. Grant information The author(s) declared that no grants were involved in supporting this work.
4,565.4
2017-01-19T00:00:00.000
[ "Biology" ]
SIFT-Flow-Based Virtual Sample Generation for Single-Sample Finger Vein Recognition : Finger vein recognition is considered to be a very promising biometric identification technology due to its excellent recognition performance. However, in the real world, the finger vein recognition system inevitably suffers from the single-sample problem: that is, only one sample is registered per class. In this case, the performance of many classical finger vein recognition algorithms will decline or fail because they cannot learn enough intra-class variations. To solve this problem, in this paper, we propose a SIFT-flow-based virtual sample generation (SVSG) method. Specifically, first, on the generic set with multiple registered samples per class, the displacement matrix of each class is obtained using the scale-invariant feature transform flow (SIFT-flow) algorithm. Then, the key displacements of each displacement matrix are extracted to form a variation matrix. After removing noise displacements and redundant displacements, the final global variation matrix is obtained. On the single sample set, multiple virtual samples are generated for the single sample according to the global variation matrix. Experimental results on the public database show that this method can effectively improve the performance of single-sample finger vein recognition. Introduction Finger vein recognition is an effective biometric technology which uses subcutaneous finger vein patterns for recognition. Studies have shown that finger vein patterns are unique and stable [1,2]. Compared with other biometric features such as face, fingerprint, and gait, finger veins show the following excellent advantages in applications [3,4]. (1) Internal features: Finger vein patterns are inside the finger, so it is hard to be affected by the external environment and changes in the finger epidermis. In addition, it is very difficult for others to obtain or copy finger vein images. (2) Living body recognition: Due to the special imaging principle, the finger vein image acquisition can only be carried out in the case of living bodies. Therefore, the problem of fake image attack becomes more difficult in the finger vein recognition scenario. (3) Non-contact imaging: When capturing images, fingers do not need to touch the device, making it cleaner and more acceptable. Because of these advantages, finger vein recognition becomes a promising branch of biometrics. Generally speaking, a finger vein recognition system mainly includes four parts: image acquisition, image preprocessing, feature extraction and matching. From the perspective of feature extraction, finger vein recognition can be divided into the following types: 1. Network-based methods: These methods need to segment vein patterns first and then extract features according to the vein patterns. Related methods mainly include: repeated line tracking (RLT) [5], maximum curvature points (MaxiC) [6,7], mean curvature (MeanC) [8], region growth [9], the anatomy structure analysis-based method [10] (ASAVE), and so on. The above methods have shown excellent performance for multi-samples finger vein recognition. However, in practical applications, such as identity management systems and attendance systems, often only one image per class can be collected, which leads to the problem of single-sample finger vein recognition. 
In these cases, due to insufficient intra-class information, the performance of some algorithms will drop significantly, such as network-based methods and local descriptor-based methods. Since a sample cannot obtain the intra-class variations, some algorithms that require supervised learning are not available, such as dimensionality reduction-based methods and deep learning-based methods. Therefore, it is very necessary to solve the single-sample finger vein recognition problem. Furthermore, the single-sample finger vein recognition system requires less storage space and has faster acquisition speed, which will have broader application prospects than the multi-samples recognition system. In the field of face recognition, some researchers use sample expansion technology to synthesize multiple virtual samples from the original sample, making the single-sample problem into general face recognition. Thus, the face recognition algorithms based on multiple samples can continue to be used in single-sample recognition, which has considerable practical significance. Inspired by this, we propose a finger vein sample expansion method to solve the single-sample finger vein recognition problem. Similarly, the state-of-the-art algorithms widely used in the multiple samples finger vein recognition can continue to be applied in single sample recognition. Compared with symmetrical face images, finger vein images do not have regular and obvious characteristics. Therefore, we can not directly follow the virtual sample generation method of the face to generate virtual finger vein images. We found that the variations between genuine images are mainly due to the finger's translation, rotation, etc. Many persons have similar habits with their fingers, which lead to similar intra-class variations. Hence, we can capture intra-class variations on a generic set and then use these variations to generate virtual samples for a single sample. Scale-invariant feature transform flow (SIFT-flow) [25][26][27] can effectively estimate the variations between two images; thus, we adopt it in our paper. Specifically, the SIFT-flow algorithm is used to estimate the variations between genuine images, which is used as the displacement matrix within the class. Then, the key displacements of each class are obtained to form a global variation matrix. After removing the interference displacements and redundant displacements, a final variation matrix is obtained. Finally, the variations matrix is used to generate virtual samples for a single sample on a single sample set. Based on virtual samples, single-sample finger vein recognition has been transformed into multi-sample recognition. The main contributions of this paper can be summarized as follows. (1) We propose a virtual sample generation method to solve the single-sample finger vein recognition problem. By adding the generated virtual samples, the performance of classical algorithms is improved significantly. (2) In order to obtain effective virtual samples, we learn the intra-class variations on the general data set and then use these variations to generate virtual samples. (3) When learning intra-class variations, we use the SIFT-flow algorithm, which can effectively estimate the displacement between images. The experimental results show that our method can greatly improve the performance of single-sample finger vein recognition. The rest of the paper is organized as follows. We discuss related work of singlesample recognition in Section 2. 
In Section 3, we introduce the proposed method of solving single-sample finger vein recognition. We report the experimental protocols and results in Section 4. Finally, the conclusions of our work is given in Section 5. Related Work Single-sample recognition is an important research branch of biometrics. In particular, single-sample face recognition has attracted many researchers' interests. To solve the problem of single-sample face recognition, many methods have been designed, and the method based on virtual sample generation is one of them [28]. For the virtual sample generation approach, researchers used various technologies to construct multiple virtual images from a single face image and then applied them for recognition. For example, Shan et al. [29] generated 10 face images for each person using a combination of appropriate geometric transformations (e.g., rotation, scaling) and gray-scale transformations (e.g., simulating lighting, artificially setting noise points). Zhang et al. [30] proposed performing singular value decomposition on each image matrix and then generated multiple virtual images for each face image by perturbing the singular values. Wang et al. [31] used face symmetry and sparse theory to synthesize virtual face images for sample expansion. Hu et al. [32] proposed using a single sample to reconstruct a 3D face model and then used the reconstructed model to obtain virtual face images. Xu et al. [33] used the axial symmetry of the face to generate virtual samples. The research on single-sample finger vein recognition is scant; to the best of the author's knowledge, only Liu et al. [34] proposed a deep ensemble learning method for single-sample finger vein recognition, achieving good results. However, there are many classical algorithms for multiple-sample finger vein recognition; their performance only degrades or fails in single-sample recognition. It will be very meaningful if they can continue to be used in single-sample finger vein recognition. Existing methods cannot achieve this goal. Therefore, in this paper, we propose the method of virtual sample generation to solve the single-sample finger vein recognition problem. The Proposed Method In this section, we first introduce the SIFT-flow algorithm that will be used in our method and then introduce the proposed SIFT-flow-based virtual sample generation method in detail. SIFT-Flow Algorithm As the SIFT-flow [26] algorithm can effectively estimate the variation of two images, it is widely used in computer vision and computer graphics. For finger vein recognition, we also choose the SIFT-flow algorithm to obtain the displacement matrix between the images. SIFT-flow uses scale invariant feature transform (SIFT) [35] descriptors to build dense connections between the source and target images. The SIFT descriptor is an excellent local descriptor with illumination and rotation invariance as well as partial affine invariance. The original SIFT descriptor includes two parts: feature extraction and salient feature point detection. SIFT-flow only uses the feature extraction component. The SIFT feature extraction steps are as follows: (1) For each pixel in an image, divide its 16 × 16 neighborhoods into 4 × 4 cell arrays. (2) Count the gradient directions of each cell array into 8 main directions, so that a 128 (8 main directions × 16 cell arrays) dimension feature vector for a pixel can be obtained. The SIFT image is obtained by extracting the SIFT descriptor of each pixel in an image. 
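A toy illustration of the per-pixel descriptor just described (Python/NumPy; real SIFT additionally uses Gaussian weighting, interpolation between bins, and normalization, which are omitted here for brevity): the 16 × 16 neighborhood of a pixel is split into 4 × 4 cells and an 8-bin gradient-orientation histogram is built per cell, giving a 128-dimensional vector.

```python
import numpy as np

def sift_descriptor_at(img, x, y, n_cells=4, cell=4, n_bins=8):
    """Simplified per-pixel SIFT-like descriptor (length n_cells^2 * n_bins)."""
    half = n_cells * cell // 2                      # 8 pixels on each side
    patch = img[y - half:y + half, x - half:x + half].astype(float)
    gy, gx = np.gradient(patch)                     # image gradients
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)     # orientation in [0, 2*pi)
    desc = []
    for cy in range(n_cells):
        for cx in range(n_cells):
            sl = (slice(cy * cell, (cy + 1) * cell),
                  slice(cx * cell, (cx + 1) * cell))
            hist, _ = np.histogram(ang[sl], bins=n_bins,
                                   range=(0, 2 * np.pi), weights=mag[sl])
            desc.append(hist)
    return np.concatenate(desc)                     # length 4 * 4 * 8 = 128

img = np.random.rand(80, 240)                       # ROI-sized dummy image
print(sift_descriptor_at(img, x=120, y=40).shape)   # (128,)
```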
In order to obtain the displacement matrix of the two SIFT images, it is necessary to find the best matching pixel for each pixel. The displacement of each pixel can be obtained by the position difference of the pixel with its best matching pixel. The displacement of each pixel consists of a horizontal displacement and a vertical displacement. Liu et al. regarded the matching problem as an optimization problem and design an objective function similar to optical flow. Suppose s 1 and s 2 are the two SIFT images. The objective function for SIFT-flow is defined as: where p = (x, y) is the coordinate of the current pixel. w(p) = (u(p), v(p)) is the displacement vector of the current pixel relative to the matching pixel, which is only allowed to be an integer. u(p) and v(p) represent the displacement in the horizontal and vertical directions, respectively. In addition, ε is the neighborhood of the pixel, and the default value is 4 neighborhoods. There are three parts of the function: a data term, a small displacement term, and a smooth term. The data item in (1) calculates the difference of two SIFT images. The small displacement term in (2) constrains the displacement vector to be as small as possible, since the best matching pixel should be chosen within the nearest neighborhood. The smoothness term in (3) is used to constrain the translation of adjacent pixels, which should have similar displacements. SIFT-flow uses dual-layer loopy belief propagation as the base algorithm to optimize the objective function. Unlike usual optical flow functions, the SIFT-flow smooth terms allow us to separate the horizontal flow u(p) and vertical flow v(p), which can greatly reduce the complexity of the algorithm. SIFT-Flow-Based Virtual Sample Generation (SVSG) The proposed SVSG method is divided into a training stage and testing stage. A schematic diagram of the virtual sample generation process is demonstrated in Figure 1. (1) Training stage. There are multiple samples for each class on the generic set. First, regions of interest (ROI) are extracted for each finger vein image through efficient preprocessing steps. Then, the displacement matrix for each class is learned using the SIFT-flow algorithm. We extract the key displacements of all displacement matrices, forming a variation matrix. The final global variation matrix is formed after removing the interference displacement and redundant displacement. (2) Testing stage. On the single sample set, there is only one registered sample per class. Using the variation matrix obtained from the generic set, multiple virtual samples are generated for each class. During recognition, the preprocessed input image is compared with the registered samples and virtual samples to obtain the recognition result. An overview of the recognition process is demonstrated in Figure 2. Preprocessing In our work, preprocessing mainly consists of ROI extraction, size normalization, and gray normalization [36]. ROI extraction: The collected finger vein images have complex backgrounds, and the noise in these backgrounds will reduce the recognition performance, so it is necessary to extract the ROI. To obtain the ROI image, we first use the edge detection operator to detect the edge of the finger. Then, the width of the finger area is determined according to the inscribed line of the edge of the finger, and the height of the finger area is detected according to the two knuckles in the finger. 
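For reference, the SIFT-flow matching energy described earlier in this subsection (data term, small-displacement term, and smoothness term) is commonly written, following Liu et al. [26], as

E(\mathbf{w}) = \sum_{\mathbf{p}} \min\!\big(\lVert s_1(\mathbf{p}) - s_2(\mathbf{p}+\mathbf{w}(\mathbf{p}))\rVert_1,\, t\big) + \sum_{\mathbf{p}} \eta\big(|u(\mathbf{p})| + |v(\mathbf{p})|\big) + \sum_{(\mathbf{p},\mathbf{q})\in\varepsilon} \Big[\min\!\big(\alpha\,|u(\mathbf{p})-u(\mathbf{q})|,\, d\big) + \min\!\big(\alpha\,|v(\mathbf{p})-v(\mathbf{q})|,\, d\big)\Big],

where t and d are truncation thresholds that make the data and smoothness terms robust to outliers, and η and α weight the small-displacement and smoothness penalties. Writing the smoothness term separately in u(p) and v(p) is exactly what permits the decoupled, efficient dual-layer belief-propagation optimization mentioned above.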
Finally, the ROI image can be obtained according to the above width and height Size and gray normalization: The size of the ROI images obtained using the above steps is different, which will cause trouble for subsequent operations, so we normalize the size of the ROI images. The normalized image size is 80 × 240 pixels. Then, we use gray normalization to obtain a uniform gray distribution. Variation Matrix Learning In this section, we will discuss how to learn the variation matrix from the generic set. The learning process is divided into two steps, which are displacement matrix calculation and global variation matrix calculation. The displacement matrix calculation is for one class, while the global variation matrix calculation is for all classes. As mentioned above, the calculation of the displacement matrix is based on the SIFT image pair, so we need to construct the SIFT image pair. In order to ensure that the displacement matrix can cover all displacements within the class, all images are required to participate in forming image pairs. Specifically, for a particular class w, the displacement matrix is calculated as follows: (1) Construct SIFT image pairs. For each image within the class w, we obtain the SIFT image using the SIFT descriptor, where the jth SIFT image is represented as SFimg w j . Then, taking the first SIFT image as the benchmark, the remaining other SIFT images form SIFT image pairs with it: for example, the pair (SFimg w 1 , SFimg w j ) is formed by the first SIFT image and the jth SIFT image. In this step, the SIFT-flow algorithm is used to obtain the displacement matrix disp w j of each SIFT image pair, which is given as follows: where each matrix consists of displacements in both the X-direction and the Y-direction. The process of obtaining the displacement matrix between two images is given in Figure 3. Figure 3a presents two genuine images: that is, two images from the same finger. Figure 3b shows the SIFT image pair of the two genuine images, in which the SIFT value of a pixel is represented by a white circle. In Figure 3c, the displacement matrix is given, which consists of horizontal (X-direction) displacement and vertical (Y-direction) displacement. For presentation, the values of the displacement matrix have been normalized to be between 0 and 255. By observation, we can see that the horizontal displacement of different pixels in the same image is consistent, and this feature also applies in the vertical direction. 2 Global variation matrix computation. Herein, we introduce the steps to obtain the global variation matrix. First, we obtain the key displacement of each displacement matrix and then remove the interference displacement. Finally, the variation matrix is sampled to reduce redundancy. (1) Obtain key displacements. Meng et al. [37] pointed out that in finger vein recognition, the displacements of different pixels in two images from the same finger are similar, and Figure 3c also proves this statement. So, we can use the displacement with the most occurrences as the key displacement between two images. First, the frequency of each displacement for the displacement matrix is counted. The displacement with the largest frequency is used as the key displacement of the matrix, and all key displacements are combined into a variation matrix keydisp w of class w which can be calculated as: where p(disp w j ) denotes the frequency of all displacements in the displacement matrix disp w j , and max(p(disp w j )) denotes the maximum frequency. 
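A minimal sketch of the key-displacement step just described (Python/NumPy; a synthetic displacement field stands in for the SIFT-flow output): the key displacement of a genuine image pair is simply the most frequent (∆x, ∆y) vector in its displacement matrix.

```python
import numpy as np

def key_displacement(flow):
    """Most frequent (dx, dy) vector in an (H, W, 2) integer displacement field."""
    vectors = flow.reshape(-1, 2)
    uniq, counts = np.unique(vectors, axis=0, return_counts=True)
    return tuple(int(v) for v in uniq[np.argmax(counts)])

# Example: a field dominated by a (5, -1) shift with a little local noise.
flow = np.tile([5, -1], (80, 240, 1))
flow[:3, :3] = [7, 0]
print(key_displacement(flow))   # (5, -1)
```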
Here, f denotes the mapping from a frequency back to the displacement that occurs with that frequency, so that keydisp w j = f(max(p(disp w j ))) is the most frequent displacement in disp w j . We calculate the key displacements of all classes in the generic set to form a temporary variation matrix Vtemp. (2) Remove interference displacements. Two kinds of interference displacements are considered in this paper. The first kind comprises displacements with very large values and low frequency, which also differ considerably from the adjacent values. These displacements are caused by occasional large movements of a finger during acquisition and are not universal. If they were used to generate virtual samples, they would likely adversely affect recognition. The second kind comprises displacements whose value is 0 or very small; such displacements indicate that there is almost no difference between the two images. If they were used to generate virtual samples, they would not help identification but would create data redundancy. Therefore, we remove both kinds of displacements. (3) Sampling. The remaining temporary variation matrix is still displacement-dense. For example, displacements with values x and x+1 may both be present, and the virtual samples generated by these two adjacent displacements contribute almost equally to recognition. In order to avoid data redundancy, for adjacent displacements we keep only one. Therefore, we sample the remaining matrix according to the step size and use the result as the final global variation matrix V = [v 1 , v 2 , ..., v k ] T , where v i = (∆x, ∆y) has two components, representing the displacement in the X-direction and Y-direction. The process of learning the global variation matrix can be summarized as Algorithm 1: (1) for each class i in the generic set and each sample j ≤ m (m is the number of samples per class), compute the displacement matrix of the pair formed with the first image using SIFT-flow; (2) extract the key displacements of class i; (3) repeat for all classes, so that the key displacements of all classes form a temporary matrix Vtemp; (4) remove the interference displacements from Vtemp; (5) sample Vtemp; (6) the remainder forms the global variation matrix V, which is returned. Virtual Sample Generation On the single sample set, there is only one registered sample per class. We use the variation matrix V = [v 1 , v 2 , ..., v k ] T to generate different virtual samples. Assuming that (x, y) is the coordinate of point p in the registered image I, the coordinate (x′, y′) of the corresponding point p′ in the virtual image is calculated as x′ = x + ∆x and y′ = y + ∆y. The translation vector (∆x, ∆y) is a row vector of matrix V; ∆x and ∆y represent the displacement in the X-direction and Y-direction, respectively. The number of row vectors of V is k, so k virtual images are generated eventually. For the newly generated image, we use bilinear interpolation [38,39] to keep its size consistent with the original image. For an unknown point P = (x, y), illustrated in Figure 4, the four surrounding points are known: Q 11 = (x 1 , y 1 ), Q 12 = (x 1 , y 2 ), Q 21 = (x 2 , y 1 ), and Q 22 = (x 2 , y 2 ). The pixel value f(x, y) of the point can then be calculated as f(x, y) ≈ [f(Q 11 )(x 2 − x)(y 2 − y) + f(Q 21 )(x − x 1 )(y 2 − y) + f(Q 12 )(x 2 − x)(y − y 1 ) + f(Q 22 )(x − x 1 )(y − y 1 )] / [(x 2 − x 1 )(y 2 − y 1 )]. The generated samples participate in the recognition together with the original single sample. Figure 5 shows the process of virtual image generation. In Figure 5a, a registered image is given, and Figure 5b shows the variation matrix. The registered image is transformed with each row of the variation matrix, generating multiple virtual images.
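A minimal sketch of the generation step described above (Python/SciPy; the ROI size matches the paper's 80 × 240 normalization, while the displacement values are illustrative only): each row of V translates the registered ROI, and the result is resampled with bilinear interpolation.

```python
import numpy as np
from scipy import ndimage

def generate_virtual_samples(registered, V):
    """Generate one virtual image per row (dx, dy) of the variation matrix V.

    registered: (H, W) ROI image; V: (k, 2) array of displacements.
    """
    virtual = []
    for dx, dy in V:
        # ndimage.shift expects the shift in (row, col) = (dy, dx) order;
        # order=1 is bilinear resampling, mode='nearest' fills border pixels.
        img = ndimage.shift(registered.astype(float), shift=(dy, dx),
                            order=1, mode='nearest')
        virtual.append(img)
    return virtual

roi = np.random.rand(80, 240)                 # stands in for a preprocessed ROI
V = np.array([[4, 1], [8, -2], [12, 3]])      # hypothetical sampled displacements
samples = generate_virtual_samples(roi, V)
print(len(samples), samples[0].shape)         # 3 (80, 240)
```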
Several generated virtual sample images are given in Figure 5c Experiments To verify the effectiveness of the proposed method, we conduct experiments on a public finger vein database from Hong Kong Polytechnic University, called HKPU-FV [40]. A total of 156 volunteers participated in the collection. Each volunteer provided six or 12 images from the index and middle fingers. The finger vein image acquisition process is completed in two sessions. Only 105 volunteers participated in collection in the second session, leading to the number of images of each finger being different. We employ finger vein images acquired in the first session. Since the vein patterns of different fingers of the same person are different, there are a total of 312 (156 persons × 2 fingers) classes and 1872 (156 persons × 2 fingers × 6 images) images in our experiments. Several typical finger vein images of the HKPU-FV database are shown in Figure 6. Experiment 1: Effectiveness of SVSG In order to verify that the proposed method can effectively solve the single-sample finger vein recognition problem, we compare the recognition rates of the classical algorithms and the recognition rates of these methods combined with the proposed SVSG. In this experiment, the first image of each class on the single sample set is used as the registered sample, and the last two images of each class are used as test samples. Two types of classical methods are considered for verification, i.e., the local-based method and network-based method, which are available in single-sample scenarios. The recognition rates are reported in Table 1, and the corresponding CMCs (cumulative match curves) are illustrated in Figure 7. The experimental results from Table 1 and Figure 7 suggest that the methods combined with SVSG achieve significant improvement in recognition performance compared to the methods used alone. We believe that such a significant improvement is mainly attributed to the distinction and complementary nature of the virtual samples generated by SVSG. The combination of virtual samples and registered samples enriches the information within class, which increases the effectiveness of the successful matching of genuine images. In single-sample recognition, network-based methods (i.e., MaxiC and MeanC) have poor recognition performance, which may be largely limited by single-sample incomplete vein pattern segmentation and noise. On the other hand, local descriptor-based methods (i.e., LBP, LDC, and LLBP) have better recognition performance than network-based methods, probably because these methods do not need to segment veins and are relatively less affected by single sample. Experiment 2: Complementarity of Virtual Samples The purpose of this experiment is to demonstrate the complementarity of the generated virtual samples. Six virtual samples are generated for each registered sample, and the recognition rates of them are shown in Table 2. In Table 2, for display purposes, we use Vsamplei to distinguish different virtual sample images; for instance, Vsample1 represents the first virtual sample. In addition, this experiment and subsequent experiments 3 and 4 will adopt the LBP algorithm as the verification algorithm, and the rest of the experimental settings are the same as those of experiment 1. The data in Table 2 suggests that the highest recognition rate of the virtual sample is 74.62%, and the lowest recognition rate is 56.68%, which means that each virtual sample has a certain distinction. 
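For completeness, a small sketch of how rank-based recognition rates such as the CMC curves reported in Experiment 1 can be computed from a probe-gallery distance matrix (Python/NumPy; the matcher producing the distances, e.g. an LBP-based one, is outside this sketch, the example values are made up, and every probe class is assumed to be present in the gallery): registered and virtual samples simply share the class label of the registered image.

```python
import numpy as np

def cmc_curve(dist, probe_labels, gallery_labels, max_rank=10):
    """Rank-k accuracy for k = 1..max_rank from a probe-by-gallery distance matrix."""
    order = np.argsort(dist, axis=1)                  # nearest gallery entries first
    ranked = np.asarray(gallery_labels)[order]        # gallery labels sorted by distance
    hits = ranked == np.asarray(probe_labels)[:, None]
    first_hit = hits.argmax(axis=1)                   # rank index of first correct match
    return [float(np.mean(first_hit < k)) for k in range(1, max_rank + 1)]

# Toy example: 2 probes, 4 gallery samples (registered + virtual) of classes 0 and 1.
dist = np.array([[0.2, 0.9, 0.8, 0.7],
                 [0.6, 0.1, 0.9, 0.8]])
print(cmc_curve(dist, [0, 1], [0, 1, 0, 1], max_rank=3))  # [1.0, 1.0, 1.0]
```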
With the combination of virtual sample images, the recognition rate keeps improving. The recognition rate of all samples combined is 87.02%, which is much higher than each virtual sample. The experimental results show that the generated virtual samples are complementary. Specifically, the virtual samples are obtained through transformation; hence, there is a certain complementarity with the registered samples. On the other hand, after sampling, the displacements in the variation matrix are obviously different, so the generated virtual samples are also necessarily different and complementary. Experiment 3: Interference Displacement Analysis The purpose of this experiment is to determine the interference displacement. Figure 7 shows the projection of the displacement matrix in the X and Y directions. The horizontal axis is a random number between 0 and 1, and the vertical axis represents the value of the displacement. It can be seen from the figure that the displacement in the X direction is mainly concentrated in the interval [20, −10], and a few points are outside the interval. Correspondingly, the displacement in the Y direction is mainly concentrated in the interval [−5, 8]. These displacements outside the above two intervals are the first type of disturbance displacements discussed in Section 3.2.2, that is, displacements of large value and small probability. They are caused by the accidental movement of a finger and are not universal; hence, we removed them. Specifically, displacements that are more than 5 pixels from the boundary and occur only once are removed. In addition, as shown in Figure 8, a large number of points are concentrated at or near the displacement of 0. These displacements indicate that there is almost no displacement between the two images, and they are meaningless for generating virtual samples. Therefore, we also remove these displacements as disturbances. In the specific implementation, we remove all displacements of 0 and displacements less than 3. (a) X-direction displacement projection (b) Y-direction displacement projection From Figure 8, we can also see an interesting phenomenon: the displacement in the X direction is in the range of [20, −10], and the displacement in the Y direction is in the range of [−5, 8]. It means that when collecting images, the amplitude of the finger moving left and right is greater than the amplitude of the up and down movement. In addition, the upper boundary of the X direction is 20, and the lower boundary is −10, indicating that the magnitude of the finger moving to the right is greater than the magnitude of the finger moving to the left. This may be related to human behavior, which needs to be further explored. These values can guide people when they collect images, reducing intra-class variation of images and increasing the recognition rate. Experiment 4: Sampling Step Size In this experiment, we discuss the effect of the displacement sampling step t on the recognition performance. If the sampling step size t is small, more virtual samples are generated. On the contrary, if the sampling t is large, fewer virtual samples generated. Observing the distribution of key displacements in Figure 8, we found that the smaller the displacement, the more concentrated the points. This indicates that in the actual acquisition, the images with small finger movement are the majority. Therefore, we consider sampling with sequence step sizes. That is, for dense displacement points, use small steps to obtain more virtual samples. 
Conversely, for sparse displacement points, a larger step size is used. Since the displacement varies greatly in the X direction, we use the X direction as the benchmark to sample the variation matrix. The recognition rates for different t are shown in Figure 9. It can be seen that the recognition rate is highest when t is 5, and the rates for t equal to 4 and 6 are the same. Therefore, we use the three step sizes with the top three recognition rates to form the sampling sequence t = {4, 5, 6}. After sampling with this sequence, a total of six virtual samples are produced. The experimental results in Table 2 show that the recognition rate with these six virtual samples reaches 86.45%, which is higher than that obtained with a fixed sampling step. Conclusions To address the problem of single-sample finger vein recognition, this paper proposes the SVSG method. Because intra-class variations are similar across fingers, we learn the variation matrix on the generic set and then use this matrix to generate virtual samples for the single samples on the single-sample set. In order to ensure the effectiveness and simplicity of the variation matrix, SVSG also removes interference displacements and redundant displacements. The results on the public database verify the effectiveness of the method in solving the single-sample finger vein recognition problem, and the complementarity between the virtual samples is also verified experimentally. Although the proposed SVSG alleviates the single-sample finger vein recognition problem, there is still a gap between our experimental results and the ideal results, which is mainly caused by limited information. In the proposed method, the intra-class variation matrix is obtained through learning, but finger movements in real acquisitions can be unpredictable, which inevitably leads to some displacements that cannot be learned. Since the virtual samples are generated only from the learned displacements, a gap remains between the generated virtual samples and real collected samples. In future work, we will dig deeper into the information contained in the single sample and look forward to obtaining a better solution to the problem.
6,619.2
2022-10-19T00:00:00.000
[ "Computer Science" ]
CO 2 Hydrogenation to Methane over Ni-Catalysts: The Effect of Support and Vanadia Promoting : Within the Waste2Fuel project, innovative, high-performance, and cost-effective fuel production methods are developed to target the “closed carbon cycle”. The catalysts supported on different metal oxides were characterized by XRD, XPS, Raman, UV-Vis, and temperature-programmed techniques; then, they were tested in CO 2 hydrogenation at 1 bar. Moreover, V 2 O 5 promotion was studied for the Ni/Al 2 O 3 catalyst. The precisely designed hydrotalcite-derived catalyst and the vanadia-promoted Ni-catalysts deliver exceptional conversions for the studied processes, presenting high durability and selectivity and outperforming the best-known catalysts. The equilibrium conversion was reached at temperatures around 623 K, with CH 4 as the primary reaction product (>97% CH 4 yield). Although the Ni loading in the hydrotalcite-derived NiWP is lower by more than 40% compared to the reference NiR catalyst and available commercial samples, the activity increases for this sample, reaching almost equilibrium values (GHSV = 1.2 × 10 4 h −1 , 1 atm, and 293 K). In the case of the SiTi support, which possesses weak basicity, although small crystallites were obtained, the activity toward methanation remained limited. Introduction CO 2 methanation is a process of great importance for power-to-gas (P2G) transformation [1-3]. Carbon dioxide captured from industrial plants, refineries, biomass combustion, etc., can be converted with H 2 derived from water electrolysis to yield methane, i.e., synthetic natural gas (SNG). The methane production process from H 2 and CO/CO 2 was discovered by Paul Sabatier and Jean-Baptiste Senderens in 1902 and is currently considered effective for greenhouse gas removal [2]. It is also of great importance that it allows energy from renewable resources (H 2 production by water electrolysis) to be stored in the form of a gaseous fuel (CH 4 ) [4,5], filling the gap between uneven power production and power demand. CO 2 hydrogenation is a highly exothermic process illustrated by Equation (1) (Table 1), where pressure and temperature significantly influence the reaction equilibrium. The process mostly focuses on C 1 or short-chain products: CO, HCOOH, CH 3 OH, CH 4 , and C 2 -C 4 olefins [6]. According to thermodynamics, to achieve high CO 2 conversion, the reaction needs to be operated at low temperature. However, low temperature implies slow reaction kinetics. Two reaction pathways are taken into consideration [1-3]. The first one consists of CO 2 reduction to CO and subsequent hydrogenation to CH 4 . The second one is associated with CO 2 hydrogenation to CH 4 via the formation of carbonates and formates as intermediate species. Methane production technology has been implemented in demo-plants and industrial-scale plants [7]. ETOGAS GmbH, founded in Stuttgart by the Center for Solar Energy and Hydrogen Research and later acquired by the Swiss company Hitachi Zosen Inova AG, built an industrial-scale plant that converts CO 2 generated in a waste-biogas plant to methane [8]. Moreover, the Karlsruhe Institute of Technology coordinates the HELMETH project, which combines high-temperature electrolysis with methanation [9]. Undoubtedly, the advantage of this process is that the heat of the exothermic methanation step can be utilized in the electrolysis process, which contributes to the efficiency increase.
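For reference, the overall methanation reaction referred to above as Equation (1) is the Sabatier reaction, which can be written with its approximate standard reaction enthalpy as

\mathrm{CO_2} + 4\,\mathrm{H_2} \rightleftharpoons \mathrm{CH_4} + 2\,\mathrm{H_2O}, \qquad \Delta H^{\circ}_{298\,\mathrm{K}} \approx -165\ \mathrm{kJ\ mol^{-1}}.

The strongly exothermic character is what makes low temperatures thermodynamically favorable while, as noted above, the kinetics then become limiting.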
Electrochaea is implementing a commercial, demonstration-scale project in which bioorganisms (methanogenic archaea) are used for H 2 and CO 2 conversion to methane [10]. The microbes, patented as BioCat, are relatively resistant to contamination, have high mass conversion efficiency, and exhibit high selectivity to methane so that the generated gas can be directly used without post-treatment [3,11]. In recent years, numerous studies have been conducted on CO 2 methanation (Table S1), with the aim to develop an active, highly durable, and stable catalyst suitable for applications in the hydrogenation of CO 2 to methane [3,12]. Noble metal catalysts, such as Ru, Rh, or Pd proved to be efficient in CO 2 methanation; however, due to the high cost, their alternatives are sought for industrial applications. Ni-containing catalysts offer both high activity and affordable cost and are therefore of great interest to scientists in this field of catalysis [13]. However, for hydrogenation reaction, the Ni-based catalysts exhibit poor lowtemperature activities and stabilities. Generally, for those catalysts, a temperature above 600 K is required to achieve reasonable CO 2 and CO conversions at atmospheric pressure. However, for such condition, catalyst deactivation and coke formation is reported. Thus, various promoters, supports, and preparation methods have been proposed and tested to improve catalyst activity, stability, and durability and/or move the reaction toward mild conditions (i.e., lower temperature, where Ni sintering is avoided). As previously reported, small cubic Ni nanocrystals are very selective in the methanation, while larger nickel particles are more active for the RWGS reaction. For such small nanocrystals, the hydrogenation of surface CH x species (rds: rate-determining step of methanation reaction) by surface-dissociated hydrogen is faster than for large particles. Moreover, the carbon formation rate was extremely slow for Ni particle sizes below 2 nm [14]. Therefore, an approach to improve the low-temperature activity while increasing coke resistance can be attained by supporting small Ni nanocrystals over suitable support, providing high Ni dispersion. Apart from forming stable anchoring sites, other crucial aspects are pore volume, acid-base properties, and structural defects, which may affect both high low-temperature activity and/or good high-temperature stability. Thus, until now, various supports for Ni catalysts have already been investigated, including Al 2 O 3 , SiO 2 , TiO 2 , and Ce x Zr 1−x O 2 [15]. In case of promoters, it was demonstrated already that a 2 wt % addition of CeO 2 increases the reducibility of Ni species and improves the stability of the catalyst in a 120 hours test [1]. In addition, highly active and coking-resistant Ni-V 2 O 3 /Al 2 O 3 catalysts prepared by co-impregnation were reported with improved CO and CO 2 methanation activity [14]. Vanadium catalysts receive particular attention in the modern chemical industry. The active components of vanadium catalysts commonly used in industry include V-containing oxides, chlorides, complexes, and heteropoly salts [16]. They are used as special catalysts for sulfuric acid production, rubber synthesis, petroleum cracking, and the synthesis of some high-molecular compounds. It is well known that materials containing vanadium, with V 2 O 5 as the main component, are good catalysts for the oxidation reactions [16,17]. 
However, only a limited amount of work has been devoted to study the role of vanadium as a promoter for low temperature CO 2 hydrogenation to methane [17]. The hypothesis is that similar to ceria, vanadia can promote the reaction by V 3+ /V 5+ pair. Ni-and vanadia-based catalysts are already known in many catalytic applications, i.e., steam reforming or the selective catalytic reduction of NO x [14,18]. The Ni-V catalysts worked excellently for the biogas dry reforming (DR) as well as for methanol and dimethyl ether (DME) and steam reforming (SR) [18,19]. Moreover, the addition of vanadium oxide (V 2 O 5 ) on the Ni/CaO-Al 2 O 3 system was reported to improve its selectivity toward methane during CO 2 hydrogenation [20]. Thus, the catalysts are already well established, which can influence their potential application in industrial practice. In this paper, we report the support effect on the catalytic performance, selectivity, and stability for CO 2 hydrogenation to methane under atmospheric pressure. Particular attention is placed on the reaction study in the low-temperature range where the process is kinetically limited. The vanadia influence on the catalysts' selectivity and durability in the CO 2 hydrogenation to methane is shown. The structural changes of the samples and their relevance to the sample activity are studied. Finally, we compare the difference between the preparation protocol used, impregnation, hydrothermal, and co-precipitation, and reference Ni-catalyst supported on porous alumina. Synthesis, Materials, and Reagents Ni-based catalysts, listed in Table 2, supported on different oxides, were synthesized and evaluated in this work. The metal loading in these systems is expressed in (wt %). The Ni/Al 2 O 3 and Ni-0.5V/Al 2 O 3 (denoted as NiR and NiVR Table 2) catalysts were used as reference samples [21,22]. The NiR catalyst was prepared by a two-stage wet impregnation method with subsequent thermal treatment. The calcium-modified alumina was used as a support for the active phase (S BET = 2 m 2 g −1 ). Typically, the support grains were introduced into a nickel nitrate aqueous solution (220-230 gNi dm −3 ) heated to the temperature of 306 ± 2 K and kept for 1 h. The material was separated and dried at 378 K/12 h and then was calcined at 723 K for 4 h. The whole procedure was repeated to reach an NiO content of ca. 17 wt % in the precatalyst. To obtain an NiVR catalyst, NiR was impregnated with aqueous solutions of ammonium metavanadate (99% purity, Standard, Lublin, Poland) and oxalic acid (99% purity, POCH, Gliwice, Poland) and heated to a temperature in the range of 328 ± 5 K. The constant molar ratio of NH 4 VO 3 /C 2 H 2 O 4 ·2H 2 O = 0.5 was kept. The impregnation was sufficient to prepare the catalysts of nominal V 0.5 wt % (V 2 O 5 content of 0.88 wt %). The samples were dried at 393 for 12 h and subsequently calcined at 773 for 3 h. The NiWP sample was synthesized by the co-precipitation method, which is routinely employed in the synthesis of hydrotalcite-like materials [22][23][24][25][26][27][28][29]. First, Ni(NO 3 ) 2 ·6H 2 O and Al(NO 3 ) 3 ·9H 2 O were dissolved in deionized water (135 mL) at a molar ratio of M 2+ /M 3+ of 0.35, i.e., much lower than in the natural hydrotalcite (M 2+ /M 3+ = 3), so that the excess Al could form, after calcination, as an alumina support. Then, the metallic salt solution was added dropwise into 370 mL of 0.2 M Na 2 CO 3 solution under vigorous stirring. 
During the synthesis process, 1 M NaOH was simultaneously added to keep pH = 10, and the temperature at 328 K was kept constant. The obtained precipitate was collected by centrifugation, washed with distilled water, and dried at 323 K in air. Finally, the dry powder was ground using an agate mortar. The mixed oxide material was obtained by calcination of the hydrotalcite-like precursor in a muffle furnace at 823 K for 4 h. Ni 4 , TB, 97% pure, Sigma Aldrich) and ammonium metavanadate (NH 4 VO 3 , 99% purity, Standard, Lublin, Poland) were used. The method was adopted from the synthesis of Ni-doped ZnO nanorods. Then, 1 mmol Zn(Ac) 2 ·2H 2 O and the required amount of (Ni(Ac) 2 ·4H 2 O were dissolved in absolute ethanol to form a 25.0 mL solution). Then, 8 mmol NaOH + 10.0 mL EtOH was introduced into the above solution under magnetic stirring, and 8.0 ml PEG-400 was added in one dose. The obtained slurry was transferred to a Teflon-lined stainless autoclave of 50 mL capacity and crystalized statically at 413 K for 24 h. After the reaction, the obtained precipitate was washed by absolute ethanol and Millipore water several times. The sample was dried in air at 333 K for 4 h. The final material was calcined at 773 K for 3 h. The procedure was applied to synthesize Ni/ZnO, Ni/SiO 2 , Ni/Ti x Si y O 2 , Ni/TiO 2 , and Ni/V x Ti y O 2 catalysts. The catalysts unloaded from the experimental reactor after activity tests (spent catalysts) were additionally labeled with "AR." Physicochemical Characterization The specific surface area (S BET ) was measured with the Micromeritics ASAP apparatus (Micromeritics ASAP 2020, USA), using the BET method and N 2 as the adsorbate. Before the measurement, the samples were degassed at 473 K for 3 h. S BET was calculated based on N 2 adsorption isotherms at 77 K. X-ray diffraction patterns (XRD) were collected using a D5000 powder diffractometer (Bruker AXS, USA) equipped with a LynxEye strip detector. Cu-Kα radiation was used with an X-ray tube operating at 40 kV and 40 mA. All measurements were performed in the Bragg-Brentano geometry. The diffraction patterns were analyzed using a multipeaks fitting of the PseudoVoigt or Parson 7 function. Consequently, the examined diffraction reflections were deconvoluted on the identical shape Kα 1 and Kα 2 reflections with the ratio of their intensity 1:0.513. After subtracting the interpolated instrumental broadening, the FWHM of the Kα 1 reflections were used to estimate the mean diameter of crystallites using the Scherrer equation B(2θ) = Kλ/D·cosθ, where the peak width, B(2θ), at a particular value of 2θ (θ being the diffraction angle, λ being the X-ray wavelength) is inversely proportional to the crystallite size D; the constant K is a function of the crystallite shape generally taken as being about 1.0 for spherical particles. For calculation purposes, the shape factor of 1.10665 corresponding to ball-shaped crystallites was applied. XPS measurements were performed using a Microlab 350 instrument (Thermo Electron, USA) to determine the chemical state of the elements in the studied samples. Survey and high-resolution spectra were acquired using AlKα (hν = 1486.6 eV) radiation and pass energies of 100 and 40 eV. The XPS signal intensity was determined using linear or Shirley background subtraction. Peaks were fitted using an asymmetric Gaussian/Lorentzian mixed-function, and the measured binding energies were corrected based on the C1s energy at 285.0 eV. 
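As a small worked example of the Scherrer crystallite-size estimate described earlier in this subsection, the sketch below (Python; the reflection position and peak width are hypothetical, while the Cu Kα1 wavelength and the ball-shaped-crystallite shape factor quoted above are used as defaults) rearranges B(2θ) = Kλ/(D·cosθ) to solve for D:

```python
import numpy as np

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=1.10665):
    """Mean crystallite size D (nm) from D = K * lambda / (B * cos(theta)).

    fwhm_deg: instrument-corrected FWHM of the K-alpha1 reflection (degrees);
    two_theta_deg: position of the reflection, so theta = two_theta / 2.
    """
    B = np.deg2rad(fwhm_deg)
    theta = np.deg2rad(two_theta_deg / 2.0)
    return K * wavelength_nm / (B * np.cos(theta))

# Hypothetical example: a NiO reflection at 2theta = 43.3 deg with 0.9 deg FWHM.
print(round(scherrer_size(0.9, 43.3), 1), "nm")   # ~11.7 nm
```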
In situ DRIFT spectra were collected in the range of 700-4000 cm−1 on a Nicolet Nexus instrument (iS50, Thermo Fisher Scientific, USA). Typically, 100 scans were collected at a resolution of 1 cm−1. UV-Vis diffuse reflectance spectra were recorded using a Jasco V-570 instrument (Jasco, USA) equipped with a conventional integrating sphere. The Raman data were collected using a confocal Thermo DXR Raman microscope (Thermo Scientific, Waltham, MA, USA) with a 50× air objective. The laser wavelength was 532 nm, and the parameters were optimized to obtain the best signal-to-noise ratio. The aperture was set to a 50 µm pinhole, and the laser power was 8 mW. The exposure time was 4 s, and 15 exposures were accumulated for one spectrum. The chemical maps were acquired with a step size of 1.5 µm along the x- and y-axes; the whole map region was 15 × 15 µm. The depth line maps were acquired with a step size of 2 µm along the x- and z-axes; the entire map area was 10 × 20 µm. The analysis of the spectra (baseline correction, deconvolution) was performed using Origin Pro software (v. 9.1, OriginLab Corporation, Northampton, MA, USA, 2013). The deconvolution points were chosen with respect to the second-derivative spectra. The chemical map analysis was performed in the Omnic software (v. 8.2, Thermo Fisher Scientific Inc., Waltham, MA, USA, 2010), and all maps were normalized prior to the analysis. Temperature-programmed reduction runs (TPR, in 5% H2 in He) were performed in a quartz tube reactor connected to a quadrupole mass spectrometer (HPR 60, Hiden, UK). Typically, 45 mg of the catalyst sample was used for each test. The sample was heated from RT to 773 K with a ramp rate of 10 K min−1. CO2 Hydrogenation Tests The CO2 hydrogenation tests were performed under atmospheric pressure in a Catlab system (Hiden, UK), equipped with a tubular fixed-bed reactor (5 mm external diameter) and an online mass spectrometer (MS HPR 60, Hiden, UK). Usually, 40 mg of the sieved catalyst (20-40 mesh) was loaded into the reactor, and the reactor was purged with He for ca. 30 min at room temperature (RT). The temperature was measured using two thermocouples to increase accuracy. Before testing, the catalysts were pre-reduced in situ upon heating from RT to 773 K at a constant rate of 10 K min−1 under a flow of 5 vol % H2/He. After reaching 773 K, the temperature was kept constant for 3 h, and then the sample was cooled to RT in a He flow. A total gas flow rate of 70 mL min−1 was kept in all experiments (GHSV = 1.2 × 10^4 h−1, at 1 atm and 293 K). Thereafter, the feed was switched to a CO2/H2 mixture (molar ratio of 1/5) balanced in He. Pre-mixed flows of high-purity gases (CO2, H2, He), independently calibrated via Bronkhorst mass-flow controllers, were used. Reaction tests were run in the temperature-programmed mode by heating the reactor from RT to 773 K at a rate of 10 K min−1. A real-time gas analyzer (MS HPR 60, Hiden, UK) was used to analyze the exhaust gas. The MS was set to the MID mode, and the m/z signals of CO, CO2, H2O, O2, CH4, and H2 were monitored continuously by the quadrupole detector, with software corrections for cross-sensitivity; the eventual formation of C2+ hydrocarbons (traces, if any) is not discussed here.
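As a quick consistency check on the reported space velocity, and assuming GHSV is defined here as the total volumetric flow (referenced to 293 K and 1 atm) divided by the catalyst bed volume, the stated numbers imply a bed volume of roughly 0.35 mL; a trivial sketch:

```python
# Consistency check on the space velocity, assuming GHSV = total volumetric flow / catalyst bed volume,
# with the flow referenced to 293 K and 1 atm as stated above.
total_flow_ml_min = 70.0
ghsv_per_h = 1.2e4

bed_volume_ml = total_flow_ml_min * 60.0 / ghsv_per_h
print(f"implied bed volume ≈ {bed_volume_ml:.2f} mL")  # ≈ 0.35 mL for the 40 mg catalyst charge
```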
The CO2 conversion (X_CO2) and the selectivities to CH4 (S_CH4) and CO (S_CO) were calculated as follows:

X_CO2 (%) = [(C_CO2,in × F_in − C_CO2,out × F_out) / (C_CO2,in × F_in)] × 100
S_CH4 (%) = [C_CH4,out / (C_CH4,out + C_CO,out)] × 100
S_CO (%) = [C_CO,out / (C_CH4,out + C_CO,out)] × 100

where C_CO2,in and C_CO2,out are the concentrations of CO2 at the inlet and outlet of the reactor, respectively, F_in and F_out are the total flow rates (cm3 s−1) at the inlet and outlet of the reactor, and C_CH4,out and C_CO,out are the outlet concentrations of CH4 and CO, respectively. Equilibrium calculations of the methanation process at 1 bar were performed using the CEA software (NASA).
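These definitions translate directly into a short post-processing step on the monitored MS data. The sketch below is a minimal illustration using hypothetical inlet/outlet values; the variable names are illustrative and not taken from the authors' software.

```python
def co2_conversion(c_co2_in, c_co2_out, f_in, f_out):
    """X_CO2 in %, from inlet/outlet CO2 concentrations and total flow rates (cm3 s-1)."""
    return 100.0 * (c_co2_in * f_in - c_co2_out * f_out) / (c_co2_in * f_in)

def selectivities(c_ch4_out, c_co_out):
    """(S_CH4, S_CO) in %, based on the carbon-containing products only."""
    total = c_ch4_out + c_co_out
    return 100.0 * c_ch4_out / total, 100.0 * c_co_out / total

# Hypothetical data point: 15% CO2 at the inlet, 3% at the outlet, slight flow contraction
x_co2 = co2_conversion(0.15, 0.03, f_in=1.17, f_out=1.10)
s_ch4, s_co = selectivities(c_ch4_out=0.11, c_co_out=0.002)
print(f"X_CO2 = {x_co2:.1f}%, S_CH4 = {s_ch4:.1f}%, S_CO = {s_co:.1f}%")
```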
Physicochemical Characterization of Fresh and Spent Catalysts The physicochemical data are summarized in Table 2. All homemade samples have specific surface areas (SBET) ranging from 18.3 m2 g−1 (Ni/Zn catalyst) to 223 m2 g−1 (NiWP). The NiR and NiVR samples are characterized by a very low SBET of 2 m2 g−1, as they contain α-Al2O3. The XRD data indicated the presence of different phases for all the studied samples (Figure 1). For the studied catalysts, features at 37.3° and 63° (2θ) are attributable to the NiO phase (JCPDS 98-024-8910) [31]. Reflections of the hexagonal planar arrangement of Ni(II) cations with octahedral oxygen coordination (β-Ni(OH)2) and peaks related to vanadium species were not detected, implying that only small quantities of these species, below the XRD detection limit, are present. In addition, their signals overlap with those of other catalyst components in the complex XRD patterns, as also noted previously [32]. The XRD pattern of the Ni/Zn nanostructured catalyst is consistent with previous reports [33,37]. These characteristic peaks are still observed for Ni/SiTi, but they broaden. Apart from anatase, a broad peak around 22° (2θ) was attributed to amorphous silica (SiO2, JCPDS 29-0085). This peak is also well defined in the Ni/Si sample [31]. In Figure 2a, the DRIFT results are presented for the various Ni-supported catalysts. Three broad complex absorptions with maxima located at 700-1200, 1250-1750, and ca. 3500-3700 cm−1 (Figure 2a) are detected, in agreement with previous studies [38][39][40][41][42][43][44][45]. For the NiR and NiVR catalysts, the most intense band, at ca. 1000 cm−1, is associated with stretching vibrations of Al-O [42]. The bands recorded below 1000 cm−1 correspond to carbonate (ca. 850 cm−1). The presence of carbonate is expected in the prepared catalyst samples as a post-synthesis residue. The shoulder at ca. 1019 cm−1 (Figure 2a) corresponds to the -CO3OH modes. Moreover, these features are also typical of the spinel structure and can be related mainly to the formation of NiAl2O4 and/or CaAl12O19, as the bands at low wavenumbers (usually at 700 and 570 cm−1) are assigned to the so-called ν1 and ν2 vibrational modes of isolated tetrahedra, i.e., [AlO4], and octahedra, i.e., [AlO6], respectively [38,46]. For the oxide-type materials, the region of 700-1020 cm−1 is attributed to X-O stretching vibrations (X = Si, Ti, Zn). The features at 755 and ca. 1040 cm−1 can suggest Ni occupation within TiO2, SiO2, or mixed TixSi1−xO2, similarly to ZnO [43]. Moreover, for Ni/Zn, the band related to CO3 2− was described in the region of 912 cm−1 and the stretching vibration of the Ni-O bond at ≈763 cm−1 [47]. According to the literature, this feature, assignable to an overlap of the ν4 mode of the carbonate ion (CO3 2−) together with the δOH deformation component of the hydroxide phase, is usually observed at ca. 650 cm−1 [38]. The 1250-1750 cm−1 range, most complex for NiWP, the Si-containing catalysts (Ni/Si, Ni/SiTi), and the V-modified samples (NiVR, Ni/VTi), is characteristic of various types of carbonates and of C-O and C=O stretching modes [38,48]. The recorded spectra exhibit the typical stretching C-O bands of carbonate at 1040-1070, 1300-1370, and 1480-1530 cm−1. According to the literature, these modes are assigned as the high- and low-frequency asymmetric C-O stretching modes (ν3h and ν3l) and the symmetric C-O stretching mode (ν1), respectively, which arise due to the adsorption-induced symmetry changes and the double degeneration of the ν3 C-O stretching mode of the free carbonate ion [38]. Both unidentate and bidentate carbonates were detected, as the ν C-O bands are split into two subsets, which is indicative of the presence of at least two adsorbed carbonate species, i.e., monodentate (unidentate), weakened by co-adsorbed hydronium ions, at 1067, 1350-1365, and 1470-1486 cm−1, as well as bidentate at 1040-1050 (ν1), 1300-1330 (ν3l), and 1520-1545 cm−1 (ν3h) [49][50][51][52]. For the NiWP catalyst, the appearance of carbonate bands is consistent with a partial reconstruction of the hydrotalcite-like phase, as confirmed by XRD.
The 1350-1600 cm−1 range is also a fingerprint of the acetate anion, which may be a residue after the synthesis of the catalysts, as the acetate salts of Zn and/or Ni were used in the case of Ni/Si, Ni/SiTi, Ni/Ti, Ni/VTi, and Ni/Zn [52,53]. For the Si-containing samples, the DRIFT spectra of all the supports show features that are assignable to the Si-O-Si bending vibrations (830-870 cm−1), the Si-OH stretching vibrations of non-bridging oxygen (950-980 cm−1), the asymmetric stretching (1090-1120 cm−1), the deformation vibrations of adsorbed water molecules (1600 cm−1), the stretching of adsorbed water molecules (3400 cm−1), and the OH vibrations of free silanol groups (3750 cm−1). For the Ni/SiTi sample, the feature at 3750 cm−1 almost vanished along with the peak at 1340 cm−1. This could be due to the addition of Ti, which led to the fixing of the free silanol groups in the Ti-O-Ti or Ti-O-Si disordered network, impeding their vibrations [45,46,54]. In fact, for the Ni/Ti and Ni/VTi samples, the spectra closely resemble those of the alumina-supported Ni. The broad bands at 3000-3500 cm−1 and 1600-1640 cm−1 (Figure S1a) were attributed to the OH stretching mode ν(OH) of physisorbed water on the catalyst surface [38,40]. For the Ni/VTi, similarly to the NiVR catalysts, V2O5 does not change the band positions in the spectra. However, the intensity of the bands ascribed to X-O stretching vibrations (X = Si, Ti, Zn) decreases due to the carbonates' presence.
The UV-Vis absorption spectra of the Ni catalysts are presented in Figure 2b, with the calculated values for allowed and not allowed transitions given in Figure S1. In the recorded spectral region (200-1000 nm), the materials undergo electronic transitions. All the spectra are dominated by strong absorption below 400 nm, which is attributed to the intrinsic band-gap absorption of the corresponding oxides (TiO2, SiO2, mixed TixSi1−xO2, and Al2O3) resulting from electron transitions from the valence to the conduction band. The contributions visible in the range of 200-245 nm are attributable to the O2− → X2+, O2− → X3+, or O2− → X4+ charge-transfer transitions (i.e., ligand-to-metal transitions, where X2+ = Zn2+, Ni2+; X3+ = Al3+; X4+ = Si4+, Ti4+) [54]. For NiR and NiVR, the spectrum has a very sharp absorption peak at 240-300 nm due to the O2− → Al3+ charge-transfer transitions, which confirms the presence of the α-Al2O3 phase. Moreover, for these catalysts, the weak absorption of the alumina support in the region below 400 nm is likely due to trace impurities [55]. The peaks above 400 nm were assigned to metal-to-ligand charge transfer (MLCT) [56]. In all cases, the Ni catalysts exhibited weak features at 590-723 nm, which can be attributed to d-d transitions of Ni2+ [54]. The doublet at ca. 590 and 660 nm indicates the presence of Ni2+ ions in tetrahedral coordination, similarly to NiAl2O4, while the broad absorption at about 723 nm is characteristic of octahedral coordination of Ni2+, typical of NiO species. It can be concluded that nickel ions are present in different coordination environments, which is characteristic of nickel aluminate spinel [54,57]. Thus, the performed UV-Vis study confirmed both tetrahedral ("built-in structure") and octahedral (NiO) coordination. For Ni/SiTi, in comparison to the Ni/Si catalyst, a broadening of the peak centered at ca. 270 nm is observed, which can most likely be indicative of the presence of Ti4+ ions in the SiO2 structure and/or related to the higher refractive index of TiO2 particles (n ~ 2.2-2.6) compared to SiO2 crystallites (n ~ 1.45) [54,56,58-60]. For Ni/Zn, a very broad absorption with a sharp edge cut-off at 385 nm is observed. According to the literature, it can be attributed to the electronic transition of ZnO, but due to the strong interfacial electronic coupling between neighboring ZnO and Ni nanoparticles, the peak is broadened more than those of the other investigated Ni catalysts [61,62]. Catalyst Reducibility The reducibility of the catalysts was examined by temperature-programmed reduction (TPR) over the temperature range of 298-773 K. In Figure 3, the H2 responses as a function of temperature are presented (please note that the signals are multiplied for some catalysts). The TPR profiles revealed significant differences between the Ni-supported samples. Both the TPR profiles and the amount of hydrogen consumed, as well as the reduction temperatures, are strongly influenced by the catalyst support. Moreover, the influence of vanadia is noticeable for both the Al2O3- and TiO2-supported samples. The TPR profile of the NiR reference catalyst shows peaks with maxima at 600 K, 673 K, and above 773 K, which can be associated with the reduction of bulk NiO and small NiO crystallites (600 K), non-crystalline NiO species (673 K), and NiAl2O4 (above 773 K), respectively [63].
This corresponds well with the literature, where the reduction of bulk NiO and non-crystalline NiO is usually ascribed to features located at ~650 K and ~750 K, respectively, and with the XPS data presented in Figure 4 [64][65][66]. It is well known that the sample reducibility strongly depends on the metal-support interactions [3]. For Ni-based catalysts, three types of species are reported, characterized by (i) a low reduction temperature, 573-623 K; (ii) an intermediate temperature, 623-773 K; and (iii) a high temperature, >973 K. They are attributed to the following: free NiO, which interacts weakly with the support and is easily reduced at lower temperatures (α species); nickel species not fully incorporated into the spinel, reducible at intermediate temperatures (β-type species); and nickel aluminate spinel (γ-type Ni species), characterized by a high reduction temperature, above 973 K [67,68]. However, the observed reduction temperatures differ from those reported in the literature, which can mostly be related to the composition of the reference samples, namely the metal-support interaction [69] and the presence of Ca and other modifiers.
On the other hand, NiVR shows a reduction peak shifted to a slightly higher range. It can be explained by the stronger Ni-support interactions caused by vanadia doping [66], as shown by an XPS study (Figure 4). The amphoteric properties of V2O5 influenced the number of exposed Ni species, and therefore, the number of reducible NiO species increased [70]. In both catalytic systems, Al2O3- and TiO2-supported, hydrogen consumption starts at 500 K for NiV/Ti and 550 K for NiVR, while the hydrogen consumption maxima are shifted toward a higher region by 70 and 10 K, respectively. On the other hand, it is also expected that, together with the Ni species, the V2O5 undergoes simultaneous reduction, thus increasing the H2 uptake. Usually, one single reduction peak during the TPR is observed for vanadium oxide catalysts if up to four layers of vanadium oxide are deposited on TiO2 [71]. Similarly, for the V2O5/Al2O3 catalyst, a reduction onset at 600 K and a peak maximum located at 800 K were reported and referred to the V5+ to V3+ reduction [72]. Therefore, the reduction shift for the V-containing samples can be related to the presence of the vanadia phase and is most likely attributable to the reduction of the vanadia phase, as stated above. For NiWP, prepared by precursor co-precipitation and subsequent calcination at 823 K, a very broad hydrogen consumption signal is visible. The hydrogen consumption started from ca. 675 K, with the maximum outside the recorded temperature range (>773 K). Among the other catalysts, Ni/Zn exhibits H2 uptake maxima at 615 and 650 K, suggesting NiO-support interactions of weak and medium strength, respectively. For Ni/Si, no change in the hydrogen signal is observed in the investigated temperature range.
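For orientation, the hydrogen uptake compared above is obtained by integrating the calibrated H2-consumption signal over time and normalizing by the catalyst mass. The sketch below is only illustrative: the Gaussian-like trace and the feed flow rate are assumed values, since the TPR flow rate is not stated here.

```python
import math

def h2_uptake_mmol_per_g(times_s, consumed_frac, flow_mol_s, mass_g):
    """Trapezoidal integration of the H2-consumption trace (fraction of the feed H2
    removed at each time point), normalised by the catalyst mass."""
    uptake = 0.0
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        uptake += 0.5 * (consumed_frac[i] + consumed_frac[i - 1]) * flow_mol_s * dt
    return 1e3 * uptake / mass_g

# Hypothetical trace: 45 mg sample heated from 298 to 773 K at 10 K/min (≈ 2850 s),
# with a single Gaussian-like reduction peak; the 30 mL/min feed flow is an assumption.
times = [i * 10.0 for i in range(286)]
trace = [0.009 * math.exp(-((t - 1800.0) / 300.0) ** 2) for t in times]
flow_mol_s = (30.0 / 60.0) / 24_055.0   # 30 mL/min of ideal gas at 293 K, 1 atm, in mol/s
uptake = h2_uptake_mmol_per_g(times, trace, flow_mol_s, 0.045)
print(f"H2 uptake ≈ {uptake:.2f} mmol g-1")  # ~2.2 mmol/g, the order expected for full reduction of ~17 wt % NiO
```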
The Ni 2p core-level XPS spectra (Figure 4) show the Ni 2p3/2 and Ni 2p1/2 doublet and their shake-up satellites. Two different types of Ni species can be distinguished for the investigated catalysts, as indicated by the splitting of the peak at 854-856 eV. For fresh catalysts, the signals at 854.6-855.4 eV are related to Ni2+ (NiO), and those at 856.1-856.4 eV to Ni2+ strongly interacting with alumina or in the form of the NiAl2O4 spinel. For the fresh catalyst samples (Figure 4a,b), the most intense peak is associated with Ni2+ as NiO, indicating that these species are present on the catalyst surfaces in a higher proportion than the spinel-type species [20]. For the reduced NiR sample, the intensity ratio of the NiO-related to the spinel-related signal is almost unity, while for NiVR, the most intense peak is associated with Ni2+ as NiO, revealing that NiO species are present on the catalyst surface in a higher proportion than the spinel-type species (Figure S2). This could indicate that the V incorporation prevents the formation of a hardly reducible spinel. However, it is hard to draw conclusions on the alloying of those two metals, as the vanadia signal is largely shielded by O 1s. Catalyst Activity The activity results of the catalysts in the methanation reaction are shown in Figure 5. The mass spectrometry (MS) profiles of the reactor outlet gases are presented along with the calculated equilibrium data. Before the reaction, the catalyst samples were activated by an in situ NiO-to-Ni reduction in an H2 stream. The data for the C-species, namely CO2, CH4, and CO, as well as for H2O, were continuously monitored. The catalysts can be divided into two groups regarding their activity in carbon dioxide methanation: alumina-supported catalysts (NiWP, NiR, and NiVR) and catalysts deposited on other supports (Ni/Zn, Ni/Si, Ni/TiSi, Ni/Ti, and Ni/VTi). The former group shows significantly better performance (considering CO2 conversion and CH4 yield) than the latter. Moreover, the obtained Ni-based vanadia-promoted catalysts outperform the reference NiR, while NiWP shows high activity, much higher than reported in the literature, as shown in Table S1. It is worth emphasizing that, although the Ni loading in NiWP is lower by more than 40% compared to the reference NiR, the activity of this sample increases, reaching almost equilibrium values. Moreover, it has a far higher SBET than NiR and NiVR. For the studied process, the reactions described by Equations (1)-(3) in Table 1 are considered; however, reactions 1 and 3 are of higher importance due to their enthalpy values. For NiWP, NiR, and NiVR, the CO2 conversion begins between 400 and 450 K. In contrast, for the other catalysts, it is shifted to higher temperatures, above 500 K. However, in this region (below 573 K), the conversion of CO2 is kinetically limited, as the conversion profiles lie remarkably below the thermodynamic equilibrium curve (Figure 5a).
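The equilibrium curve referred to here was computed with the CEA software. As a rough cross-check only, a single-reaction estimate can be sketched as below: it treats the Sabatier reaction alone, assumes ideal-gas behavior and a temperature-independent reaction enthalpy, and neglects the RWGS. The thermochemical constants and the H2/CO2 = 5 feed with He dilution are textbook/assumed values, not taken from this work.

```python
import math

R = 8.314            # J mol-1 K-1
DH0 = -165.0e3       # ΔH° of CO2 + 4 H2 -> CH4 + 2 H2O(g), J mol-1 (textbook value, approx.)
DG0_298 = -113.5e3   # ΔG° at 298 K, J mol-1 (textbook value, approx.)

def ln_k(temp_k):
    """van 't Hoff extrapolation of ln K assuming a temperature-independent ΔH°."""
    return -DG0_298 / (R * 298.15) - DH0 / R * (1.0 / temp_k - 1.0 / 298.15)

def equilibrium_conversion(temp_k, h2_to_co2=5.0, inert=4.0, pressure_bar=1.0):
    """Equilibrium CO2 conversion for a 1 CO2 : r H2 : inert feed (assumed composition)."""
    def residual(x):
        n_tot = 1.0 + h2_to_co2 + inert - 2.0 * x
        y_co2 = (1.0 - x) / n_tot
        y_h2 = (h2_to_co2 - 4.0 * x) / n_tot
        y_ch4 = x / n_tot
        y_h2o = 2.0 * x / n_tot
        ln_q = (math.log(y_ch4) + 2.0 * math.log(y_h2o)
                - math.log(y_co2) - 4.0 * math.log(y_h2) - 2.0 * math.log(pressure_bar))
        return ln_q - ln_k(temp_k)

    lo, hi = 1e-9, 1.0 - 1e-9          # residual -> -inf as x -> 0 and +inf as x -> 1
    for _ in range(200):               # simple bisection, no external solver needed
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for t in (473, 573, 673, 773):
    print(f"{t} K: equilibrium X_CO2 ≈ {100.0 * equilibrium_conversion(t):.1f}%")
```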
For the NiWP and NiR catalysts, the CO2 conversion and CH4 yield maxima are observed between 630 and 670 K, and they almost match the values allowed by thermodynamics. The corresponding CO profiles showed that the RWGS is negligible for those catalysts. For the other oxide-supported catalysts, the CO2 equilibrium conversion is approached in the higher temperature range, i.e., above 750 K. In the case of the CH4 signal, the maxima are present between 670 and 740 K for NiVR, Ni/Zn, Ni/Si, Ni/TiSi, Ni/Ti, and Ni/VTi [73,74]. As previously reported, small cubic nickel nanocrystals are very selective in the methanation, while larger nickel particles are more active in the RWGS reaction [75]. This holds for specific supports, possibly those possessing weak basicity. In the case of the SiTi support, although small NiO crystallites were obtained, the activity toward methanation is poor. The catalyst activity has been compared at 600 K and is displayed in Figure 6. The relatively high activity of the alumina-based catalysts (NiWP, NiR, and NiVR) may be associated with the basicity of the support and its capacity to adsorb CO2. Research based on temperature-programmed desorption of CO2 (CO2-TPD) conducted by Pandey and Deo revealed that the amount of CO2 adsorbed on the support follows the trend Al2O3 > TiO2 > ZrO2 > SiO2 and is the highest for alumina [76]. Moreover, due to the strong metal-support interactions in Al2O3, the NiO crystallites are better dispersed and well stabilized, avoiding sintering and Ostwald ripening [77]. On the other hand, it is visible that the catalysts based on titania (Ni/Ti, Ni/TiSi) have the lowest activity in CO2 methanation among the investigated samples. While comparing vanadia-doped Ni/VTi with pristine Ni/Ti, it is visible that the addition of vanadia significantly enhances the catalyst's activity, which is reflected by a higher CO2 conversion and CH4 yield and by the initiation of the reaction at a lower temperature (500 K vs. 550 K, respectively). Such an effect of vanadia as a promoter has already been reported [70,78]. Hamid et al. showed that promotion with vanadia increases the basicity of the catalyst, providing additional CO2 adsorption sites leading to the formation of unidentate carbonates [70]. Those kinds of species are considered the most reactive form, thus facilitating the hydrogenation reaction. This was proved by calculating the activation energy, which was lower for the vanadia-doped than for the bare nickel catalyst. The authors also showed that the V2O5-doped fibrous silica-based Ni/KCC-1 catalyst exhibited better NiO dispersion. During the H2-TPR measurement, an enhanced β-stage reduction of NiO was observed, and the signal was shifted to higher temperatures, pointing to a higher exposure of Ni species for the methanation reaction and stronger Ni-support interactions providing their better stabilization [70]. Lu et al.
also described the positive effect of vanadia doping on a Ni catalyst deposited on modified bentonite, where a higher NiO dispersion, better separation of the catalyst particles, and a smaller crystallite size, along with stronger NiO-support interactions, were confirmed by XRD and TPR measurements [78]. Moreover, the XPS investigation revealed an electronic effect of the VOx species, which increased the electron cloud density of the Ni atoms [78]. Examination of the reduced and used catalysts points to the occurrence of a redox cycle between V3+, V4+, and V5+ in the methanation reaction. Nevertheless, the addition of vanadia has a positive effect on CO2 conversion and methane yield only up to a point; an excess of the dopant can cover the catalytically active sites and cause activity loss [70,78]. It is well known that CO can be produced during the CO2 methanation process, mainly due to the reverse water-gas shift reaction (RWGS) (Table 1, Equation (4)) [18,41]. A temperature rise leads to a decrease in CH4 production along with an increase in CO formation [79][80][81]. For Ni/Si, Ni/VTi, and Ni/Zn, the CO production is initiated between 500 and 550 K and keeps increasing with rising temperature. The shape of the CO curve is unique for NiR, as it ascends at 530 K, reaches a maximum at 570 K, and descends back to the initial level at 630 K, followed by a further increase from 650 K until the end of the temperature range. For the rest of the catalysts, CO production is observed above 600 K. Therefore, it can be stated that the alumina-containing catalysts are the most selective to methane in the examined temperature range. Interestingly, vanadia in Ni/VTi enhances not only the CH4 yield but also the CO yield compared to Ni/Ti. Moreover, the RWGS reaction is initiated at a significantly lower temperature in the presence of vanadia. Previous research has shown that a ceria-promoted catalyst leads to the efficient activation of CO2 because of oxygen vacancies in ceria caused by the existence of Ce3+/Ce4+ ion pairs [82]. A similar effect can be observed for the NiVR and NiV/Ti catalysts, i.e., similarly to ceria, surface methanation is enhanced by the V3+/V5+ pair. In addition, the presence of V2O5 influences the catalyst structure changes, as shown by the detailed study of the catalysts' physicochemical properties. Thus, the enhanced activity can be attributed to (i) the V-Ni interaction (alloying of V and Ni and the formation of a cubic solid solution, which was shown to be much more active than Ni) [83]; (ii) the amphoteric properties of V2O5, which can provide additional adsorption sites for CO2 [70]; and (iii) the increase in the number of accessible NiO species [78], overall leading to lower light-off temperatures. Catalyst Stability The stability of the best-performing catalysts with different Ni loadings (13% for NiR and NiVR vs. 7% for NiWP) was studied for 72 h on-stream at 623 K. Almost 90% conversion with 100% selectivity to methane was observed for the NiWP catalyst over the whole time studied, as shown in Figure 7. In contrast, the NiR catalyst exhibits a CO2 conversion hardly reaching 50%, with the selectivity toward CH4 slightly decreasing from 98.0% to ca. 97.0% (Figure 7).
The addition of vanadia improved both the CO2 conversion and the selectivity to methane, reaching values near 100%. The catalysts were characterized after the 72-h on-stream reaction. Temperature-programmed oxidation runs under a 5% O2/He flow showed no relevant carbon-related signals while increasing the temperature up to 773 K, indicating limited coking, if any, under the H2-rich streams (Figure 8). Moreover, the high-resolution transmission electron microscopy (HRTEM) study shows only small changes in the average NiO crystallite size (less than 15%) [20]. Thus, the slight crystallite sintering in the case of the NiVR sample is most likely compensated by the presence of the amphoteric V2O5, which influences the number of exposed Ni species. Moreover, vanadia will act as an oxidation catalyst, preventing catalyst coking [20]. Slight changes in the support were detected upon the stability test, as presented in Figure 8 for NiR and the vanadia-modified NiVR sample (Figures S3-S7 for the other studied oxide-supported catalysts). The average Raman spectra collected before and after reaction (AR) for NiR and NiVR are presented in Figure 8a-d, respectively, with the corresponding maps given in Figure 8e. The NiR spectrum before reaction (Figure 8a) shows two bands attributable to isolated octahedral AlO6, at 421 cm−1 and 517 cm−1, and one of significantly lower intensity at 763 cm−1 assigned to isolated tetrahedral AlO4 [84,85]. The corresponding chemical maps of NiR before reaction (Figure 8e) show that all of the bands related to alumina octa- and tetrahedra (AlOx) are distributed evenly throughout the whole sample. Similarly, for NiR after reaction (Figure 8a vs. Figure 8b), bands at 407 cm−1 and 575 cm−1 and one band at 689 cm−1, attributable to isolated AlO6 and AlO4, are present [84,85]; however, they have significantly lower intensity. Moreover, they are red-shifted, thus indicating a slight change of the support (Ca-modified porous Al2O3) during the reaction. In addition, in NiR AR, the bands at 1347 cm−1 and 1580 cm−1, characteristic of disordered graphite (D band) and highly crystalline graphite (G band), respectively, were detected [86]. Usually, they are used to assess the disorder of the coke in spent catalysts and the size of the graphite microcrystals in the sample, as the ID/IG ratio relates to the average graphite domain dimension [87]. Those bands were observed only for the spent alumina-supported catalysts, as shown in Figure 8 and Figures S2-S6.
The relative intensity of the ID/IG bands is higher than unity, indicating that the coke crystallite size is bigger than 30 Å. The corresponding chemical maps for NiR AR (Figure 8e) revealed that the distribution of the bands corresponding to AlOx does not correlate with the distribution of the D and G bands. Moreover, the regions of low D-band intensity are similar to the regions of low G-band intensity. In pure graphite, the G peak is narrow, and the broadness of the peaks can be related to the degree of disorder/inhomogeneity in the coke deposited on the catalyst. Both of these features have very high intensity, with no correlation with the AlOx-related bands, thus indicating that the methanation reaction was robustly catalyzed by Ni species, leaving a carbon deposit via methane cracking or the Boudouard reaction (Equations (5) and (6); see Table 1). Figure 8. Raman deconvoluted spectra and chemical maps of NiR and NiVR samples before and after reaction (AR). (a) NiR before reaction spectrum, (b) NiR AR spectrum, (c) NiVR before reaction spectrum, (d) NiVR AR spectrum, (e) Raman maps with the distribution of selected bands for NiR before reaction, NiR AR, NiVR before reaction, and NiVR AR. Red spectra correspond to the cumulative spectrum consisting of the deconvoluted bands; other colors correspond to the single deconvoluted bands. The white scale bar corresponds to 4.5 µm. The analysis of the confocal depth map showed a similar distribution of the band intensities in the samples before and after the reaction: the bands at the surface were the most intense, while the deeper bands were less intense in both samples. When the intensity of the AlOx-related bands is analyzed by confocal depth profiling of the NiR AR sample, it can be noticed that the band intensity changes with the sample depth. This fact could be related to the sample transparency and the technical resolution of the spectrometer; on the other hand, it may suggest surface changes evoked by the diffusion and segregation of abundant surface elements. The average Raman spectra of NiVR before and after the reaction are presented in Figure 8c,d, respectively. The spectrum of NiVR before reaction shows bands at 412 cm−1, 528 cm−1, and 763 cm−1, ascribed to isolated octahedral AlO6 and isolated tetrahedral AlO4, respectively.
However, in contrast to the V-free sample (NiR), the bands corresponding to isolated octahedral AlO6 have significantly lower intensity than the bands related to tetrahedral AlO4, thus implying that vanadia changes the catalyst structure. Presumably, this can indicate the V-Ni interaction, as stated previously for Ni-V systems, where the formation of a cubic solid solution (Ni and V alloys) was shown and explained [83]. In the NiVR sample after the reaction (Figure 8d), the bands at 578 cm−1 and 682 cm−1 were attributed to AlO6 and AlO4 [84,85]. The significant shifts (ca. 60-100 cm−1 for the condensed octahedral and isolated tetrahedral units), along with the lack of some of the bands, can be attributed to changes in the catalyst structure during the reaction. For NiVR, the bands corresponding to AlO6 correlate inversely with the band assigned to the isolated tetrahedra, whereas the condensed tetrahedral structures are independent of the other aluminum structures. They occur in the highest amounts in the NiVR sample before the reaction (Figure 8e). In the AR sample, there is an inverse correlation between the aluminum bands and the bands corresponding to graphene and imperfect graphite (NiVR vs. NiVR AR). Therefore, it can be concluded that the structural changes of the catalyst have been limited after the vanadia modification. This was also evidenced by the analysis of the confocal depth map, which showed a similar distribution of the band intensities in the samples before and after the reaction: the bands at the surface were the most intense, while the deeper bands were less intense in both samples. The bands corresponding to AlOx were less intense deeper in the sample after the reaction. Conclusions The CO2 hydrogenation to methane technology has been known for many years as a method of converting carbon dioxide to chemical raw materials and fuels, including drop-in fuels. Similar to the CO2-to-methanol process, it is considered a relatively mature concept. Nevertheless, the crucial issue for increasing the efficiency of the process is finding a durable catalyst showing high activity in the low-temperature range. The results obtained over Ni-based catalysts supported on various oxides revealed the excellent performance of the NiWP catalyst derived from a hydrotalcite-like precursor. Although the Ni content (7 wt %) of the NiWP sample is low, the catalyst outperforms the reference NiR and NiVR catalysts supported on porous alumina. This catalyst was prepared by the co-precipitation method, which led to (i) an increase in the specific surface area of the sample and (ii) the formation of small Ni nanocrystallites with an average diameter of ca. 6 nm, resulting in improved catalytic activity in low-temperature CO2 methanation. All catalysts showed activity toward the CO2 hydrogenation reaction. The CO2 conversion at temperatures around 623 K almost matches the values allowed by thermodynamics for the NiWP catalyst, while for the reference NiR and the NiVR catalysts it was 43% and 67%, respectively. Among the other materials at 623 K, only Ni/Si shows a conversion similar to that obtained for NiR. For the other oxide-supported catalysts, the CO2 conversion increased with temperature and approached its maxima above 700 K, revealing the influence of the support on the sample activity and selectivity. The primary reaction product was CH4, while CO was detected for catalysts with increased surface acidity. Moreover, an evident vanadia promotion is observed for both the NiVR and NiV/Ti systems.
Vanadia incorporation changes the catalyst's structure, as a result of which the activity of the V-modified catalyst increases. The investigation results lead to the conclusion that vanadia promotion of Ni/Al2O3 creates low-temperature activity toward CO2 methanation and provides the opportunity to design modern catalysts, which may be attractive for the hydrogen economy and the CO2 utilization industry. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/catal11040433/s1. Figure S1: DRIFT spectra of Ni-supported catalysts, (a) whole-range spectra and (b) enhanced OH region; UV-Vis spectra (a,b) and Tauc plots for allowed (c) and not allowed (d) transitions for Ni-supported catalysts. Figure S2: Raman spectra and chemical maps of the NiSiO2 before and after reaction (AR). Figure S3: Raman spectra and chemical maps of NiTiO2 before and after reaction (AR). Figure S4: Raman spectra and chemical maps of the NiVTiO2 before and after reaction (AR). Figure S5: Raman spectra and chemical maps of the Ni/ZnO before and after reaction (AR). Figure S6: Raman spectra and chemical maps of the Ni/TiSi before and after reaction (AR). Table S1: Catalyst screening in CO2 methanation.
13,911.8
2021-03-28T00:00:00.000
[ "Chemistry", "Engineering", "Environmental Science" ]
Practical remediation of the PCB-contaminated soils A practical method for the elimination of PCBs from PCB-contaminated soil has been developed by combining Soxhlet extraction, using a newly developed modified Soxhlet extractor possessing an outlet valve on the extraction chamber, with chemical degradation. Various types of PCBs contaminating soils could be completely extracted in refluxing hexane, and the subsequent hydrodechlorination could also be completed within 1 h in a hexane-MeOH (1 : 5) solution in the presence of Pd/C and Et3N under ordinary hydrogen pressure and temperature, without transfer of the extracted PCBs to another reaction container (a complete one-pot procedure). The present system is quite useful as a simple, safe, mild, and reliable remediation method for PCB-contaminated soil. Introduction Polychlorinated biphenyls (PCBs) were commercially produced as mixtures of 209 possible congeners from 1929 through the mid-1970s; they are non-polar chlorinated hydrocarbons with a biphenyl nucleus on which one to ten hydrogen atoms have been replaced with chlorine atoms. PCBs were mainly used as electric insulating oils, lubricants, and coolants in electrical devices, such as transformers and capacitors, all over the world owing to their chemical and thermal stability. However, their production, import, and new uses were banned during the mid-1970s, as these properties led to serious environmental pollution, including bioaccumulation and biomagnification through the food chain [1][2][3][4]. While PCBs in general can be decomposed by combustion using high-temperature incinerators (above 1100°C), any drop in temperature below 1100°C creates the possibility of generating even more toxic dioxins [5]; hence, it is quite difficult to obtain the agreement of local communities for the construction of such incinerators. Therefore, tons of PCBs have been stored at individual facilities under strict conditions in accordance with each country's laws. However, the huge expense and the fear of improper disposal and accidental leaks of the PCBs have increased with longer storage periods [6,7]. On the other hand, most of the environmental PCB mass has been found in soil, including marine, river, and lacustrine muck, due to the high specific gravity and highly hydrophobic properties of PCBs [8]. Although many methods for the remediation of PCB-polluted soils have been reported in the literature, such as ultrasonication [9,10], photochemical degradation [11][12][13], reductive dechlorination using metals [14][15][16], base-catalyzed decomposition [17][18][19], hydrogen-transfer hydrodechlorination [20,21], and fungal and bacterial treatments [22][23][24][25], most of these remediation methods require severe reaction conditions, such as high temperature, high pressure, and/or strongly basic conditions, because of the superior chemical stability of PCBs, and complete extraction from the contaminated soil is difficult owing to the strong hydrophobic affinity between the PCBs and the soil. While direct methods for PCB degradation without a PCB-extraction step from soil have been reported [13,18,19,[22][23][24][25], a PCB extraction step has been involved in most remediation schemes for reliable and reproducible PCB cleanup using chemical degradation methods [9][10][11][14][15][16].
PCBs are basically extracted from polluted soils by the separatory funnel method using dichloromethane [16] or hexane-acetone [15], by ultrasonic extraction in a hexane-acetone mixed solvent [14], or by Soxhlet extraction [11,[26][27][28]. The United States Environmental Protection Agency (U.S. EPA) recommends Soxhlet extraction using hexane and acetone (1 : 1, v/v) as a standard method to isolate PCBs from soil samples [26,27]. Soxhlet extraction using toluene as a solvent was adopted as an official method to survey the dioxin concentration, including co-planar PCBs, in soil samples by the Ministry of the Environment of the Japanese government [28]. We recently established an efficient method for the palladium on carbon (Pd/C)-catalyzed hydrodechlorination of aromatic chlorides using triethylamine (Et3N) as a single electron donor at ambient temperature and pressure [29,30], which has been successfully used for the degradation of dichlorodiphenyltrichloroethanes (DDTs) [31] and PCBs [32,33] based on the removal of the chlorine atoms from the aromatic nuclei, and a pilot study using a 50 L vessel for practical PCB degradation was achieved [34]. Furthermore, magnesium metal was also used as a single electron donor instead of Et3N in the Pd/C-catalyzed PCB degradation methods [35,36]. In this paper, we describe an easy and reliable remediation method for PCB-polluted soils by PCB extraction and continuous Pd/C-Et3N-mediated complete hydrodechlorination of the extracted PCBs. Materials Quartz sand, diatomaceous earth, and bentonite were purchased from Nacalai Tesque, Inc. (Kyoto, Japan), and the silica sand (30-50 mesh) was purchased from Koso Chemical Co., Ltd. (Tokyo, Japan). The soil samples Akadama-tsuchi and Kanuma-tsuchi are clean, non-polluted, mildly acidic soils produced in Tochigi prefecture, Japan. They were purchased as 2 L packs from Tachikawa Heiwa Nouen Co., Ltd. (Tochigi, Japan). The soil was ground with a mortar and pestle before use. The absence of PCB contamination in all purchased soil samples (quartz sand, diatomaceous earth, bentonite, silica sand, Akadama-tsuchi, and Kanuma-tsuchi) was also confirmed by gas chromatography/mass spectrometry (GC/MS) analysis after Soxhlet extraction using hexane as the extraction solvent. A Soxhlet extractor that has an outlet valve on the extraction chamber was designed for the easy removal of the collected (distilled) solvents without taking apart the Soxhlet extraction setup (Figure 1). Preparation of PCB-contaminated soils A diethyl ether solution of Aroclor 1242 (1 mg/mL) was spread evenly over the purchased soil (quartz sand, silica sand, diatomaceous earth, bentonite, Akadama-tsuchi, or Kanuma-tsuchi) to prepare the PCB-contaminated soils (250 μg PCBs/g). Each PCB-contaminated soil was stored at 35°C (oil bath) for 0.5-1 h to allow evaporation of the diethyl ether. PCB extraction from PCB-contaminated soils using Soxhlet extractor Figure 2, Entry 1: The PCB-contaminated quartz sand (30 g) was extracted with MeOH (150 mL) as the extraction solvent for 2 h using a modified Soxhlet extractor that has an outlet valve connected to the extraction chamber for the easy removal of the collected (distilled) solvents after completion of the extraction (Figure 1). The PCB extract in the receiving flask was then heated at 90°C, and the distilled MeOH (ca. 35 mL) was collected in the extraction chamber and removed through the outlet valve. The removed MeOH solution (1.5 mL) was concentrated to dryness, and the residue was dissolved in hexane (1.5 mL) and analyzed by GC/MS.
The residual PCBs in the extraction chamber were estimated by comparing the total PCB peak areas of the original PCB-contaminated soils with those of the recovered soils after extraction. A part (1.5 mL) of the Soxhlet extract in the receiving flask [MeOH solution (100 mL)] was also concentrated to dryness and dissolved in hexane (1.5 mL). The hexane solution (800 μL) was diluted with additional hexane (400 μL), and GC/MS analysis was conducted to confirm the recovery of the Aroclor 1242 in the receiving flask. The recovery ratio of Aroclor 1242 was estimated to be 93% by comparing the peak area of the hexane sample prepared from the recovered PCBs in the MeOH in the receiving flask with that of the PCBs in the original PCB-contaminated quartz sand (7.5 mg). Figure 2, Entry 2: The same procedure used in Entry 1 was followed except for using silica sand (30 g) in place of the quartz sand (30 g). Figure 2, Entry 3: The same procedure used in Entry 2 was followed, and the extraction time was extended to 4 h. The silica sand in the extraction thimble was recovered and extracted again with fresh hexane (150 mL) using the Soxhlet extractor for 2 h. No residual PCBs were detected in the extraction chamber or the receiving flask by GC/MS. Figure 2, Entry 4: The procedure was the same as the method in Entry 1 except for using diatomaceous earth (10 g) in place of the quartz sand (30 g). The amount of soil was determined according to its bulkiness due to the limited volume of the extraction thimble. Figure 2, Entry 5: The same procedure used in Entry 4 was followed, and the extraction time was extended to 4 h. The diatomaceous earth in the extraction thimble was recovered and extracted again with fresh hexane (150 mL) using the Soxhlet extractor for 2 h. No residual PCBs were detected in the extraction chamber or the receiving flask by GC/MS. Figure 2, Entry 6: The procedure was the same as the method in Entry 1 except for using bentonite (20 g) in place of the quartz sand (30 g). Figure 3, Entry 1: The PCB-contaminated quartz sand (30 g) was extracted with hexane (150 mL) for 1 h, and the collected hexane (1 μL) from the extraction chamber was analyzed by GC/MS without dilution. Figure 3, Entry 2: The extraction procedure was the same as the method for Figure 3, Entry 1 except for the extraction time (2 h). Figure 3, Entry 3: The extraction procedure was the same as the method for Figure 3, Entry 2 except for the use of silica sand (30 g) in place of the quartz sand (30 g). Figure 3, Entry 4: The extraction procedure was the same as the method for Figure 3, Entry 2 except for the use of diatomaceous earth (10 g) in place of the quartz sand (30 g). Figure 3, Entry 5: The extraction procedure was the same as the method for Figure 3, Entry 2 except for the use of bentonite (20 g) in place of the quartz sand (30 g). In Figure 3, Entries 2-4 and 6-8, the Soxhlet extracts in the receiving flask were used for the PCB degradation under the 10% Pd/C-Et3N-H2 conditions using the procedure described in the method section "Hydrodechlorination of PCB extracts from PCB-polluted soils under 10% Pd/C-Et3N-H2 conditions". The sampled reaction mixture (1.5 mL) was successively washed with H2O (1.5 mL), and the conversion yield (%) was determined by GC/MS analysis using the doubly diluted hexane layer. Figure 4, Entry 3: The procedure was the same as the method for Figure 4, Entry 1 except for the solvent [hexane-MeOH (1 : 5, 60 mL)] and the amount of the sampled reaction mixture (1.5 mL).
Hexane (1.5 mL) and H2O (1.5 mL) were added to the sampled reaction mixture, the mixture was shaken, and the hexane layer was then used for the GC/MS analysis. The GC charts after 0, 1, and 2 h are shown in Figure 5. Figure 4, Entry 4: The procedure was the same as the method for Figure 4, Entry 1 except for the solvent [MeOH (60 mL)]. The reaction mixture (1.5 mL) was sampled, filtered, and concentrated to dryness, and the residue was then dissolved in hexane (1 mL). The solution was washed with H2O (1 mL), and the hexane layer was used for the GC/MS analysis after a twofold dilution. Evaluation of residual PCBs in distilled hexane in the extraction chamber after the Soxhlet extraction The Soxhlet extraction of PCBs from the PCB-contaminated quartz sand (30 g) using hexane (150 mL) was carried out for 2 h. The extract in the receiving flask was concentrated at 90°C. The distilled and collected hexane solution (ca. 30 mL, first fraction) was removed from the extraction chamber through the outlet valve (Figure 1), and the first fraction (1 μL) was analyzed by GC/MS (Figure 6a). The remaining extract in the receiving flask was gently concentrated further at 90°C, the distilled and collected hexane in the extraction chamber (ca. 30 mL) was removed through the outlet valve as the second fraction, and its GC/MS analysis (1 μL) was also performed (Figure 6b). Two more hexane fractions (third and fourth fractions, each ca. 30 mL) were collected in the extraction chamber from the extracts in the receiving flask by heating at 90°C, removed through the outlet valve, and analyzed by GC/MS in the same manner as described for the first and second fractions (Figure 6c,d). Hydrodechlorination of PCB extracts from PCB-polluted soils under 10% Pd/C-Et3N-H2 conditions The MeOH extract in the receiving flask (see the method section for Figure 2, "PCB extraction from PCB-contaminated soils using Soxhlet extractor") was combined with 10% Pd/C and Et3N. The mixture was vigorously stirred using a stirring bar under a hydrogen atmosphere (balloon) at ambient temperature (ca. 25°C) for 1 h. The reaction mixture (ca. 5 mL) was filtered through cotton, and the filtrate (3 mL) was concentrated to dryness. The residue was diluted with hexane (3 mL) and washed with H2O (3 mL). The hexane layer (800 μL) was diluted with additional hexane (400 μL), and the resulting hexane solution was analyzed by GC/MS to evaluate the degradation. Figure 7: Each extracted PCB solution in hexane in the receiving flask (ca. 100 mL, see the method section for the Soxhlet extraction) was treated in the same way, and the mixture was vigorously stirred using a stirring bar under a hydrogen atmosphere (balloon) at ambient temperature (ca. 25°C) for 1 h. The reaction mixture (ca. 6 mL) was filtered through a 0.45 μm membrane filter (Millipore, USA) and concentrated to dryness. The residue was diluted with hexane (5 mL) and washed with H2O (5 mL). The hexane layer (400 μL) was diluted with additional hexane (600 μL), and the resulting hexane solution was analyzed by GC/MS to evaluate the degradation. Instrumental analysis A JMS Q1000 GC system [a 7890A gas chromatograph (Agilent Technologies, USA) equipped with a JEOL MK II mass spectrometer (JEOL Co., Ltd., Japan)] and an Inert Cap 5MS/sil + GD capillary column (30 m × 0.25 mm i.d., 0.25 μm film thickness; GL Science, Japan) were used for the PCB analysis. Helium was employed as the carrier gas at a flow rate of 1.3 mL/min. The injector and detector temperatures were 280°C. The column temperature was programmed to ramp from 70°C (5 min hold) to 280°C (4 min hold) at a rate of 10°C/min. One μL of the sample solution was injected.
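As a small sanity check, the oven program above implies a total run time of about 30 min, which comfortably brackets the biphenyl and PCB retention times quoted in the following paragraph; a trivial sketch:

```python
# Total GC run time implied by the oven program above
initial_hold_min = 5.0                      # 70 °C hold
ramp_min = (280.0 - 70.0) / 10.0            # ramp at 10 °C/min
final_hold_min = 4.0                        # 280 °C hold
total_min = initial_hold_min + ramp_min + final_hold_min
print(f"total run time ≈ {total_min:.0f} min")  # 30 min; the PCB window (16.3-21.3 min) falls on the ramp
```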
The retention time of biphenyl was 13.01 min and those of the PCBs (Aroclor 1242) were 16.26-21.29 min. The products were identified by their GC/MS retention times in comparison to those of authentic commercial samples. Soxhlet extraction of PCBs from PCB-contaminated soil PCBs were extracted from a variety of artificially PCB-contaminated soils (quartz sand, silica sand, diatomaceous earth and bentonite) for 2 h using a modified Soxhlet extractor that has an outlet valve attached to the extraction chamber for the easy removal of the extraction solvent (Figure 1). MeOH was first examined as an extraction solvent for the present Soxhlet extraction (Figure 2), since MeOH was found to be an effective and preferable solvent for the Pd/C-catalyzed PCB degradation based on the dechlorination reaction in the presence of Et3N under an H2 atmosphere [29,30,32,33]. Thus, the use of MeOH for the extraction could avoid the solvent exchange process for the PCB degradation after the extraction. When the 2-h Soxhlet extraction of the PCBs from the PCB-contaminated quartz sand was carried out, no PCBs were detected in the MeOH collected in the extraction chamber (Entry 1), and the nearly quantitative extraction of the PCBs was confirmed by the GC/MS analysis of the PCBs in the receiving flask (93%). We determined that the PCBs were nearly quantitatively extracted into the receiving flask when no PCBs were detected in the solvents obtained from the extraction chamber after the Soxhlet extraction. Although the MeOH extract in the receiving flask was vigorously stirred with 10% Pd/C and Et3N under an H2 atmosphere (balloon) at room temperature (ca. 25°C) for 1 h, the hydrodechlorination of the PCBs in the extract was not complete (98% conversion), contrary to our expectation. The incomplete reaction could be attributed to sulfur contaminants in the original quartz sand, since significant amounts of sulfur contaminants [1800 μg/g by oxygen bomb combustion ion chromatography (Sumika Chemical Analytical Service, Ltd., Japan)] were extracted by MeOH from the quartz sand. Such sulfur components could function as catalyst poisons toward the Pd/C-catalyzed hydrodechlorination, owing to the strong coordination of sulfur atoms to palladium atoms. Therefore, MeOH was not an appropriate solvent for the Soxhlet extraction from the PCB-polluted quartz sand. Although small amounts of PCBs were still present in the MeOH in the extraction chamber after the 2-h extraction of the PCB-contaminated silica sand (2%, Figure 2, Entry 2) and diatomaceous earth (7%, Entry 4), lengthening the extraction time to 4 h clearly enhanced the extraction efficiency, and no PCBs were detected in the MeOH in the extraction chamber (Entries 3 and 5). The soils in the extraction thimble after the 4-h extraction (Entries 3 and 5) were recovered, and a further extraction was conducted with fresh hexane using the Soxhlet extractor for 2 h, since hexane is generally employed as an effective solvent for PCB extraction [11,14]. Consequently, no PCBs were detected in either the extraction chamber or the receiving flask, indicating that the PCBs were completely extracted from the soils by the first 4-h Soxhlet extraction. Therefore, we judged that complete PCB extraction was achieved when no PCBs were detected in the solvent in the extraction chamber.
Furthermore, PCBs in the MeOH extracts of the PCB-contaminated silica sand and diatomaceous earth underwent complete degradation under the Pd/C-Et3N-mediated hydrodechlorination conditions (Entries 3 and 5). On the other hand, the Soxhlet extraction of the PCB-contaminated bentonite using MeOH was unsuccessful, probably due to the strong adsorptive properties, affinity for MeOH, and/or viscosity of the bentonite; a significant amount of PCBs (12%) still remained in the MeOH in the extraction chamber after the 2-h extraction. Furthermore, the subsequent 10% Pd/C-catalyzed hydrodechlorination proceeded to only 30% conversion (Entry 6). Bentonite is originally derived from volcanic ash and contains a substantial amount of sulfur concomitants, such as sulfates and sulfides. These sulfur materials would be extracted from the bentonite together with the PCBs by the Soxhlet extraction with MeOH and would strongly suppress the PCB degradation as catalyst poisons, in analogy with (but more severely than) the hydrodechlorination of the MeOH extract of the quartz sand (Entry 1). Therefore, we concluded that MeOH was not effective as a solvent for the Soxhlet extraction of PCBs. Hexane has mainly been used in the literature as an efficient solvent for the extraction of PCBs from PCB-contaminated solid materials and water [11,14,37,38]. Furthermore, the solubility of sulfur contaminants in hexane is expected to be lower than in MeOH. Therefore, the Soxhlet extraction of PCBs with hexane from the PCB-contaminated quartz sand was carried out to establish an effective PCB extraction method (Figure 3, Entries 1 and 2). Although the 1-h Soxhlet extraction did not completely remove the PCBs from the PCB-contaminated quartz sand, and 0.2% of the PCBs still remained in the hexane in the extraction chamber (Entry 1), the PCB residue completely disappeared from the extraction chamber when the extraction time was extended to 2 h (Entry 2). The Soxhlet extraction procedure for PCBs with hexane was reliable, since it could be reproduced three times (Entry 2), and the amount of sulfur contaminants extracted from the PCB-contaminated quartz sand was notably decreased compared to the extraction with MeOH [880 μg/g by oxygen bomb combustion ion chromatography (Sumika Chemical Analytical Service, Ltd., Japan)]. These results indicated that the PCBs were completely and selectively extracted from the polluted quartz sand by hexane. The Soxhlet extraction method using hexane could also be applied to extraction from the PCB-contaminated silica sand and diatomaceous earth (Entries 3 and 4), although the extraction of the PCB-contaminated bentonite was incomplete within 2 h (Entry 5), probably because bentonite strongly adsorbs PCBs owing to its highly viscous nature. A quantitative extraction of PCBs from the PCB-contaminated bentonite was successfully achieved by extending the extraction time to 4 h (Entry 6). The method for the remediation of PCB-polluted soils was also used for the cleanup of horticultural soils, Akadama-tsuchi and Kanuma-tsuchi (Entries 7 and 8). The PCBs in both the PCB-contaminated Akadama-tsuchi (Entry 7) and Kanuma-tsuchi (Entry 8) were completely extracted with hexane within 2 h using the modified Soxhlet extractor.
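The recovery and residual percentages quoted in this section (93%, 2%, 7%, 12%, and so on) follow from simple ratios of total integrated PCB peak areas, corrected for any dilution applied before injection. The sketch below only illustrates that bookkeeping; the peak areas and dilution factors in it are hypothetical, not the authors' data.

```python
# Minimal sketch of the peak-area bookkeeping used to express residual PCBs
# and recoveries as percentages. All numerical values are hypothetical.

def pcb_fraction(area_sample, area_reference, dilution_sample=1.0, dilution_reference=1.0):
    """Ratio of dilution-corrected total PCB peak areas, returned as a percentage."""
    corrected_sample = area_sample * dilution_sample
    corrected_reference = area_reference * dilution_reference
    return 100.0 * corrected_sample / corrected_reference

# Recovery in the receiving flask relative to the original contaminated soil
recovery = pcb_fraction(area_sample=9.3e6, area_reference=1.0e7)   # -> 93.0 %

# Residual PCBs left in the solvent in the extraction chamber
residual = pcb_fraction(area_sample=2.0e5, area_reference=1.0e7)   # -> 2.0 %

print(f"recovery: {recovery:.1f} %, residual: {residual:.1f} %")
```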
Solvent effect on the PCB dechlorination Since hexane was found to be an appropriate solvent for the Soxhlet extraction of PCBs, the use of hexane as the solvent for the degradation of PCBs, based on the hydrodechlorination of aromatic chlorides under Pd/C-Et3N-H2 conditions, would also be desirable, because it would avoid the need to exchange solvents after the extraction, making the handling simpler and safer. However, the 10% Pd/C-catalyzed hydrodechlorination of Aroclor 1242 in the presence of Et3N hardly proceeded in hexane (Figure 4, Entry 1). A detailed study of the Pd/C-catalyzed hydrodechlorination in the presence of Et3N under an H2 atmosphere has already revealed that MeOH is an effective and preferable solvent [29,30,32,33]. In MeOH instead of hexane, Aroclor 1242 was completely hydrodechlorinated at room temperature within 1 h (Entry 4). Mixed solvents of hexane and MeOH were then investigated, since the complete removal of hexane from the receiving flask to exchange solvents after the Soxhlet extraction carries the risk of evaporation or leakage of PCBs together with the hexane (see the discussion in the section "Concentration of the PCB extract in the receiving flask"). A higher MeOH ratio in the mixed solvent more effectively promoted the hydrodechlorination (Figure 4, Entries 2 and 3), and the reaction could be completed in hexane-MeOH (1 : 5) within 2 h under ambient pressure and temperature (Entry 3), as shown in Figure 5. Concentration of the PCB extract in the receiving flask We next investigated a procedure for removing hexane from the Soxhlet extract in the receiving flask for the subsequent Pd/C-catalyzed hydrodechlorination of PCBs in hexane-MeOH (1 : 5). After the 2-h Soxhlet extraction of PCBs from the PCB-contaminated quartz sand using hexane, the hexane was transferred from the receiving flask to the extraction chamber by a four-step distillation and collected as separate fractions (Figure 6). Although no PCBs were detected in the first fraction (Figure 6a), a small quantity of PCBs was found in the second fraction (Figure 6b), and the detected PCB quantity increased in a fraction-number-dependent manner (Figure 6b-d). These results suggest that a small amount of PCBs could be vaporized together with the distilled hexane as the amount of hexane in the receiving flask decreases, even under gentle refluxing conditions. On the other hand, when the hexane extract in the receiving flask was further and gently concentrated using a rotary evaporator under reduced pressure (ca. 20 mmHg) at 40°C, no PCBs were detected by GC/MS in the hexane collected in either the rotary evaporator trap [b) in Figure 8] or the solvent receiving flask [a) in Figure 8], indicating that no PCBs had been vaporized during the gentle evaporation under a sufficiently reduced pressure. Continuous operation of the PCB extraction using the Soxhlet extractor and dechlorination using the 10% Pd/C-Et3N-H2 conditions The PCB extraction from a variety of PCB-contaminated soils (quartz sand, silica sand, diatomaceous earth, bentonite, Akadama-tsuchi and Kanuma-tsuchi) with hexane (Figure 3) and the subsequent dechlorination using Pd/C-Et3N-H2 conditions in hexane-MeOH were carried out under ambient H2 pressure (balloon) and temperature (Figure 7).
While the PCBs in the MeOH extract from the PCB-contaminated quartz sand were not completely hydrodechlorinated under the 10% Pd/C-H2-Et3N conditions (Figure 2, Entry 1), the degradation of the hexane extract (Figure 3, Entry 2) went to completion (Figure 7, Entry 1) because the amount of sulfur contaminants in the hexane extract was sufficiently low (880 μg/g) compared to that in the MeOH extract (1800 μg/g). Furthermore, all other PCB extracts, derived from the silica sand, diatomaceous earth, bentonite, Akadama-tsuchi and Kanuma-tsuchi, were also smoothly and completely degraded within 1 h (Figure 7, Entries 2-7). The present soil remediation method, using a modified (newly developed) Soxhlet extractor that bears an outlet valve on the extraction chamber, completely extracts PCBs from PCB-contaminated soils, and the subsequent chemical degradation is successfully accomplished in a one-pot procedure. In the remediation methods reported in the literature that involve an extraction process, the PCB extraction from soil and its chemical degradation are, to our knowledge, carried out in a totally independent manner that requires the exchange of reaction containers [9-11,14-16]. The present method is quite safe because the operators avoid direct contact with the extracted PCBs. Furthermore, the PCBs extracted from PCB-contaminated soils are quantitatively degraded by the Pd/C-catalyzed dechlorination in the presence of Et3N under ambient hydrogen pressure and temperature in a short time. Moreover, the Pd/C might be recovered and reused after simple sequential washing with water and hexane [32]. The present remediation method achieved extensive improvements in simplicity, safety, and time efficiency over previously reported remediation processes. Conclusions We developed a practical method for the remediation of PCB-contaminated soils. One of the notable features of the method is the use of the receiving flask of the Soxhlet extractor as the reaction flask for the hydrodechlorination of PCBs. Therefore, the present remediation method never requires the transfer of the extracted PCBs to another reaction container, and the degradation is successfully accomplished in a one-pot procedure. Thus, secondary pollution by the extracted (concentrated) PCBs is avoided. PCBs are efficiently extracted with hexane from various types of polluted soils, and their dechlorination can be easily achieved in a hexane-MeOH (1 : 5) solution using the 10% Pd/C-Et3N-H2 conditions under ambient pressure (balloon) and temperature (ca. 25°C). The present method is quite simple and would be useful for the mild and practical remediation of PCB-polluted soils.
5,871.6
2015-02-10T00:00:00.000
[ "Engineering" ]
LabelRS: An Automated Toolbox to Make Deep Learning Samples from Remote Sensing Images : Deep learning technology has achieved great success in the field of remote sensing processing. However, the lack of tools for making deep learning samples with remote sensing images is a problem, so researchers have to rely on a small amount of existing public data sets that may influence the learning effect. Therefore, we developed an add-in (LabelRS) based on ArcGIS to help researchers make their own deep learning samples in a simple way. In this work, we proposed a feature merging strategy that enables LabelRS to automatically adapt to both sparsely distributed and densely distributed scenarios. LabelRS solves the problem of size diversity of the targets in remote sensing images through sliding windows. We have designed and built in multiple band stretching, image resampling, and gray level transformation algorithms for LabelRS to deal with the high spectral remote sensing images. In addition, the attached geographic information helps to achieve seamless conversion between natural samples, and geographic samples. To evaluate the reliability of LabelRS, we used its three sub-tools to make semantic segmentation, object detection and image classification samples, respectively. The experimental results show that LabelRS can produce deep learning samples with remote sensing images automatically and efficiently. Introduction With the development of artificial intelligence, deep learning has achieved great success in image classification [1,2], object detection [3,4], and semantic segmentation [5,6] tasks in the field of computer vision. At the same time, more and more researchers use technologies similar to convolutional neural networks (CNNs) to process and analyze remote sensing images. Compared with traditional image processing methods, deep learning has achieved state-of-the-art success. For example, Rezaee et al. [7] used AlexNet [8] for complex wetland classification, and the results show that the CNN is better than the random forest. Chen et al. [9] built an end-to-end aircraft detection framework using VGG16 and transfer learning. Wei et al. [10] regarded road extraction from remote sensing images as a semantic segmentation task and used boosting segmentation based on D-LinkNet [11] to enhance the robustness of the model. In addition, deep learning is also used for remote sensing image fusion [12,13] and image registration [14,15]. Samples are the foundation of deep learning. The quality and quantity of samples directly affect the accuracy and generalization ability of the model. Due to the dependence of deep learning technology on massive samples, making samples is always an important task that consumes a lot of manpower and time and relies on expert knowledge. At present, more and more researchers and institutions begin to pay attention to how to design and implement high-efficiency annotation methods and tools for images [16,17], video [18,19], text [20,21], and speech [22,23]. In the field of computer vision, representative tools and platforms include Labelme [24], LabelImg [25], Computer Vision Annotation Tool (CVAT) [26], RectLabel [27] and Labelbox [28]. A brief comparison between these annotation tools is shown in Table 1. These tools are fully functional and support the use of boxes, lines, dots, polygons, and bitmap brushes to label images and videos. Advanced commercial annotation tools also integrate project management, task collaboration, and deep learning functions. 
However, none of them support labeling multispectral remote sensing images. The main reason is that the processing of remote sensing images is very complicated. As the data volume of a remote sensing image is generally huge compared to ordinary natural images, ordinary annotation tools cannot even complete basic image loading and display functions, not to mention complex tasks such as pyramid building, spatial reference conversion, and vector annotation [29]. Without an effective and universal annotation tool for remote sensing images, researchers can only rely on existing public data sets, such as the UC Merced Land Use Dataset [30], WHU-RS19 [31], RSSCN7 [32], AID [33], the Vaihingen dataset [34] and the DeepGlobe Land Cover Classification Challenge dataset [35]. But these data sets have limited categories. For example, WHU-RS19 and RSSCN7 contain 19 categories and 7 categories, respectively. In addition, they have specific image sources and spatial-temporal resolutions, and the quality of annotations is also uneven. These reasons make it difficult for the existing remote sensing data sets to meet the actual needs in complex scenarios. Therefore, it is definitely necessary to develop a universal remote sensing image annotation tool. ArcGIS is one of the most representative software packages in geography, earth sciences, environment, and other related disciplines. It has diverse functions such as large-image display, data processing, spatial analysis, and thematic mapping. Although a recent version (ArcGIS 10.6) has added a function for making deep learning samples, there are still obvious limitations, such as: (1) The tool cannot be used in lower versions of ArcGIS. We used "ArcGIS + version number" and "ArcMap + version number" as keywords and retrieved a total of 765 related papers from the past three years from the Web of Science (WoS). We counted the ArcGIS versions used in these papers, as shown in Figure 1. More than 90% of the ArcGIS installations currently in use do not have the function of making deep learning samples, and ArcGIS 10.2 and 10.3 are still the mainstream versions. (2) Output format restriction. The tool does not consider the high color depth and multiple bands of remote sensing images, which means that the format of the output sample must be consistent with that of the input. (3) The target size and distribution patterns are ignored. (4) Poor flexibility. The sample creation tool in ArcGIS requires that the input vector layers follow the training sample format generated by the ArcGIS image classification tool. According to the above analysis, and considering both the development cost and the use cost, we developed the annotation tool LabelRS for remote sensing images based on ArcGIS 10.2. LabelRS enables researchers to easily and efficiently make remote sensing samples for computer vision tasks such as semantic segmentation, object detection, and image classification. Specifically, our tool supports inputting multispectral images with arbitrary bands and adapts to both sparsely and densely distributed scenarios through a feature merging strategy. LabelRS solves the scaling problem of objects of different sizes through a sliding window. A variety of band stretching, image resampling, and gray level transformation algorithms are used to enable the output samples to meet the actual needs of users and reduce the workload of postprocessing. In addition, we designed XML files to store metadata information to ensure the traceability of the samples.
Each sample contains a spatial coordinate file to seamlessly realize the conversion between ordinary images and geographic images. More importantly, the sample production process we designed can also potentially be used to address many other multispectral image-classification problems, such as mineral recognition from X-ray maps [36,37] and breast cancer diagnosis from medical images [38]. All of these images have high spectral and spatial resolution; adapting the image reading and writing routines to the image type would support the migration and reuse of LabelRS. The main contributions of our work are summarized as follows. 1. An efficient framework is proposed to make deep learning samples from multispectral remote sensing images, which contains sub-tools for semantic segmentation, object detection, and image classification. To our knowledge, it is the first complete framework for image annotation with remote sensing images. 3. Three cases are implemented to evaluate the reliability of LabelRS, and the experimental results show that LabelRS can automatically and efficiently produce deep learning samples for remote sensing images. The remainder of this paper is structured as follows. Section 2 explains the design principle and implementation process of LabelRS. Section 3 introduces three cases and the corresponding experimental results. Finally, in Section 4, conclusions are drawn, and recommendations for use are given. Functionality and Implementation The LabelRS toolbox we designed contains three sub-tools, namely the Semantic Segmentation Tool, the Object Detection Tool, and the Image Classification Tool. Section 2.1 first introduces the design principles and functions of these three tools, and then Section 2.2 introduces the specific implementation, including the interface design and input parameters of these tools. Semantic Segmentation The left part of Figure 2 shows the processing flow of the semantic segmentation module. In addition to remote sensing images, the tool also requires as input a vector file of the regions of interest and a field indicating the categories of the different features. Such vector files can be drawn by users themselves in ArcGIS or other GIS tools, can be derived from NDVI [39] or NDWI [40], or can be derived from land survey data or other open-source geographic data, such as OpenStreetMap [41]. (1) Feature Merging Strategy We divide the distribution patterns of objects into sparse distribution and dense distribution, as shown in Figure 3. Figure 3a shows dense buildings, and Figure 3b shows sparse ponds. For densely distributed objects, if each object is processed separately, a lot of computing resources will be consumed, and the generated samples will have much redundancy and overlap. To solve this problem, we proposed a strategy of merging features, as shown in Figure 4. First, create a buffer on the outside of each object. The buffer of the building in Figure 3a is shown in Figure 4a. The buffer radius d is calculated from the input sample size and the spatial resolution of the image, where r is the spatial resolution of the image in the vertical or horizontal direction; it represents the spatial extent covered by each pixel and can be obtained by reading the metadata of the image. t represents the sample size input by the user, in pixels. Then, the overlapping buffers are merged, and the obtained range is shown in Figure 4b.
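A minimal sketch of this buffer-and-merge step is given below. It uses the open-source Shapely library purely for illustration (LabelRS itself is built on ArcPy), and it assumes a buffer radius of t × r / 2, i.e. half the ground extent of one sample, which is our assumption rather than a formula taken from the paper.

```python
# Illustrative buffer-and-merge step for densely distributed objects.
# Uses Shapely for clarity; LabelRS is implemented with ArcPy.
# The buffer radius (t * r / 2) is an assumed value for this sketch.
from shapely.geometry import Polygon
from shapely.ops import unary_union

r = 0.5          # assumed spatial resolution (map units per pixel)
t = 256          # assumed sample size (pixels)
d = t * r / 2.0  # assumed buffer radius in map units

# Hypothetical building footprints (map coordinates)
buildings = [
    Polygon([(0, 0), (20, 0), (20, 10), (0, 10)]),
    Polygon([(30, 0), (50, 0), (50, 10), (30, 10)]),
    Polygon([(500, 500), (520, 500), (520, 510), (500, 510)]),
]

# Buffer each object, then dissolve overlapping buffers into independent regions
merged = unary_union([b.buffer(d) for b in buildings])
regions = list(merged.geoms) if merged.geom_type == "MultiPolygon" else [merged]
print(f"{len(buildings)} objects -> {len(regions)} independent sample regions")
```

Dissolving the overlapping buffers is what turns many densely packed features into a small number of independent sample regions.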
Finally, the densely distributed targets become independent objects. (2) Split Large Targets The independent objects are divided into three groups according to their size, as shown in Figure 5. We use the sliding window algorithm to split the target. First, determine the relationship between the height h and width w of the target envelope and the sample size t. If h < 2t and w < 2t, take the center of the envelope as the sample center and r × t as the side length to construct a square as the range of the output sample. If h < 2t and w ≥ 2t, take the center of the envelope as the sample center O, and then slide left and right, respectively. Set the center point after sliding as O'. If x and y denote the longitude and latitude of the sample center, O' is obtained by shifting x by (t − m) × r for each slide (the y coordinate is treated analogously for vertical sliding), where m is the overlap size defined by the user. If h ≥ 2t and w < 2t, slide up and down instead; the principle is similar. In these two cases, we choose to start from the center of the envelope instead of starting from the upper left corner. This is necessary because we found that, when sliding from the upper left corner, the originally complete object would be very fragmented. Starting from the center preserves its original distribution pattern to the greatest extent. Finally, if h > 2t and w > 2t, start from the upper left corner of the envelope and slide to the right and down, respectively, to generate the sample areas. The detailed sliding window algorithm for semantic segmentation is given in Algorithm 1. In addition, a potential problem with the above process is that, when the boundaries of vector features are very complicated, creating and fusing buffers will be very time-consuming. Therefore, another innovation of this paper is to use the Douglas-Peucker algorithm [42] to simplify the polygons at an early stage. Our experiments show that, for irregular objects with complex boundaries, the sample generation efficiency can be increased by more than 100 times after adding this polygon simplification step. (3) Band Combination and Stretching The above steps create regular grids for making samples (we call them sample grids), and these sample grids are then used to crop the images and labels. Multispectral remote sensing images generally have more than three bands. Meanwhile, the pixel depth of remote sensing images differs from that of ordinary natural images and can usually reach 16 or 32 bits. But in the field of computer vision, natural images in formats such as JPEG and PNG are more common. Therefore, when users need to generate samples in JPEG or PNG format, we allow them to choose the three bands mapped to RGB and to set the image stretching method for multispectral images. The image stretching algorithms we defined include (1) Percentage Truncation Stretching (PTS). The algorithm needs to set the minimum percentage threshold minP and the maximum percentage threshold maxP. The two ends of the gray histogram of remote sensing images are usually noise. Assuming that the gray values corresponding to minP and maxP in the histogram are c and d, respectively, the stretched pixel value is obtained by linearly rescaling the original pixel value x between c and d. (2) Standard Deviation Stretching (SDS). It is similar to PTS; the difference lies in the calculation of c and d.
For SDS, c = m − k·s and d = m + k·s, where m and s represent the mean and standard deviation of the band, respectively, and k is a user-defined parameter with a default value of 2.5. A further set of gray-level transformation methods determines the pixel values written to the label images. One of these methods is suitable for non-numeric attribute values. For example, the input shapefile may use the string 'water' or the character 'w' to represent water bodies. In this case, the first two methods are invalid, and different types of features can be assigned gray values starting from 1 in the order of positive integers. (4) Custom. Users customize the gray value of different types of features in the label. The four gray-level transformation methods are recorded and saved in the output XML file. (5) Quality Control The last step is quality control. Since a buffer was created during the sliding and cropping process, some of the generated samples may not include any target, or may include only a few pixels of a target, which will cause class imbalance and affect the training of the deep learning model. Therefore, we set a filter parameter f. If the ratio of foreground pixels to background pixels in the label is less than f, the sample is considered unqualified and discarded. Another problem is that the 'no data' area of the remote sensing images may be cropped; such samples are also automatically identified and eliminated. In addition, different from the semantic segmentation samples in computer vision, spatial reference information is an important component of remote sensing image samples. Therefore, we create jgw and pgw files for JPEG and PNG images, respectively, which store the geographic coordinates of the upper left corner of the sample and the spatial resolution in the east-west and north-south directions. Finally, we use an XML file to record the metadata information of the sample to facilitate traceability and inspection. Object Detection We first explain the related definitions. The entire image is regarded as a sample, and the coordinate range of the image is the sample range, as shown in the yellow box in Figure 6. The objects marked by the user in the sample are called targets or objects, as shown in the red box in Figure 6. The object detection tool separately records the sample range and the target range of each labeled object. The processing flow of the object detection tool is shown in the middle of Figure 2. For object detection samples, we are more concerned with the relationship between the sample size and the target size, and with the position of the target in the sample. truncated is an important attribute of the sample in the object detection task. It represents the completeness of the target in the sample. If the target is completely within the image range, truncated = 0, indicating that no truncation has occurred. Suppose we want to generate samples with a size of 512 × 512, but the length and width of the target are greater than 512; then the target needs to be truncated, and each sample contains only a part of the target. Therefore, we first need to use the sliding window algorithm to segment large targets. Different from semantic segmentation, no buffer is created in the target detection tool, so the length and width of the object are compared with the size of the sample, rather than twice the size of the sample.
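The sliding-window placement used to split over-sized objects (Algorithm 1) can be pictured with the minimal one-dimensional sketch below. It is an illustration written for this text rather than the tool's actual code; it assumes the window is advanced by (t − m) × r per slide starting from the envelope centre, and all parameter values are hypothetical.

```python
# Illustrative 1-D sliding-window placement for an object wider than one sample,
# starting from the envelope centre and sliding outward by (t - m) * r per step.
# All parameter values are hypothetical.

def window_centres(env_min, env_max, t, m, r):
    """Return x-coordinates of sample-window centres covering [env_min, env_max]."""
    step = (t - m) * r              # ground distance covered by one slide
    half_window = t * r / 2.0
    centre = (env_min + env_max) / 2.0
    centres = {round(centre, 6)}
    c = centre
    while c + half_window < env_max:   # slide right until the envelope is covered
        c += step
        centres.add(round(c, 6))
    c = centre
    while c - half_window > env_min:   # slide left until the envelope is covered
        c -= step
        centres.add(round(c, 6))
    return sorted(centres)

# Example: 1 m resolution, 256-pixel samples, 32-pixel overlap, 700 m wide object
print(window_centres(0.0, 700.0, t=256, m=32, r=1.0))
```

With these assumed values, a 700 m wide object is covered by three windows centred at 126, 350, and 574 m; starting from the centre keeps the object as intact as possible in the middle sample.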
Assuming that the red rectangle in Figure 7a is the target O marked by the user, and that the grid obtained after segmentation using the sliding window is shown as the yellow rectangle in the figure, marked as G, the truncation value is computed for each grid, where T i represents the truncation of the i-th grid, S() is the function for calculating area, and G i is the i-th grid. The actual labels of the targets after splitting are shown as the green rectangles in Figure 7b. Different from the pixel-level labeling of semantic segmentation, the object detection task needs to minimize the truncation of the target and ensure, as far as possible, that the target is located in the center of the sample. Therefore, after segmenting the large targets, a sample is made with each object as the center, and the sample may also contain other surrounding targets. In order to improve retrieval efficiency, we use t × r/2 as the radius for retrieval and only consider other targets within this radius. The calculated sample range is then used to crop the input image while recording the pixel position of the target in the sample. We have set up three currently popular object detection metadata formats for users to choose from, namely PASCAL VOC [43], YOLO [4], and KITTI [44]. PASCAL VOC uses an xml file to store xmin, xmax, ymin, ymax, the category, and the truncation value of each object. YOLO uses a txt file to store the category of each target, the normalized coordinates of the target center, and the normalized width and height of the target. KITTI is mainly used for autonomous driving and uses a txt file to store the category, truncation, and bounding box of each target. In addition, because the annotations are recorded as text, users cannot directly judge the quality of the annotations of the samples. We have designed a visualization function to superimpose the bounding box of the target onto the sample while creating samples and annotations, so users can visually browse the annotation results. Image Classification In the image classification task, the entire image has only one value as a label, and there is no specific object position information. Therefore, the biggest difference from the above two tools is that no additional pixel information can be introduced during the splitting process. The processing flow of the image classification tool is shown on the right of Figure 2. First, segment the large target, as shown in Figure 8. Slide to the right and down from the upper left corner of the target, then fill in from right to left or from bottom to top when the distance to the right or bottom edge is less than the size of a sample. This guarantees the integrity of the target without introducing new image information. Objects smaller than the sample are resampled to the sample size. We set up three interpolation methods, Nearest, Bilinear, and Cubic, among which Nearest is simple and fast, but the result is rougher. Cubic generates smoother images, but the computational cost is high. Finally, the samples support arbitrary band combinations and stretching methods, and different types of samples are stored in different folders. Implementation Based on ArcGIS 10.2, we developed LabelRS, an upward-compatible annotation tool for remote sensing images, using Python 2.7. The Python libraries imported by LabelRS mainly include ArcPy, OpenCV, and Pillow. LabelRS has two versions. One is the ArcGIS toolbox.
Its advantage is that it has a visual graphical interface, which is convenient for parameter input and can be quickly integrated into ArcGIS. The other is Python scripts, which facilitate code debugging and integration. This version has higher flexibility and is more suitable for batch data processing environments. The following describes the implementation of the three sub-modules of LabelRS. Semantic Segmentation Tool Dialog box of semantic segmentation tool is shown in Figure 9a. There are 14 input parameters in total, four of which are required. The meaning of each parameter is shown in Table 2. Object Detection Tool Dialog box of object detection tool is shown in Figure 9b. There are 11 input parameters in total, four of which are required. The meaning of each parameter is shown in Table 3. Image Classification Tool Dialog box of image classification tool is shown in Figure 9c. There are 11 input parameters in total, 4 of which are required. The meaning of each parameter is shown in Table 4. Making Water Samples Water is the most common and precious natural object on the surface [45]. Many researchers try to use deep learning methods to extract water bodies [46][47][48]. The boundary of the water body is very complicated, and manual labeling is time-consuming and labor-intensive. Therefore, we combined NDWI and LabelRS to propose an automated production process for water body samples. First, use the red and near-infrared band to calculate the NDWI, and then use the OTSU algorithm [49] to determine the segmentation threshold of water and non-water bodies. Filter out the interference of non-water objects such as farmland and mountain shadows through the area threshold. The finally obtained vector of water bodies can be input into the semantic segmentation tool to make water body samples. We used the GaoFen-2 (GF-2) satellite images, which have a spatial resolution of 4 m and contain four bands as red, green, blue, and near-infrared. The Beijing-Tianjin-Hebei region and Zhenjiang, Jiangsu Province were selected as the experimental areas. The climate types and land covers of these two regions are completely different. Due to the unique sensitivity of water to the near-infrared band, we chose the near-infrared, red, and green bands of GF-2 as the output wavebands of the sample. The sample size was set to 256. The gray level transformation method was Maximum Contrast, and the band stretching method was PTS. The experiment was carried out on a desktop computer, and the CPU was Intel Core i7-6700 3.40 GHz with 32 GB RAM. Finally, some samples made using semantic segmentation tools are shown in Figure 10. It can be seen that LabelRS combined with NDWI can segment the water body area well. The water body boundaries in the generated label are very detailed and smooth. Table 5 shows the processing time for a different task, and we can find that LabelRS is very efficient. The average time to produce a single multispectral remote sensing sample is 1-2 s. Making Dam Samples Dams are important water conservancy infrastructures with functions such as flood control, water supply, irrigation, hydroelectric power generation, and tourism. We chose the same experimental areas and data source as in the previous section. Due to the similarities between dams and bridges in geometric, spectral, and texture features, we treat bridges as negative samples. First, we manually marked the locations of dams and bridges in ArcGIS and saved them in a vector file. Then the object detection tool was used to make samples. 
The sample size was set to 512, and the metadata format was PASCAL VOC. In order to perform data augmentation, both true-color and false-color composite samples are generated, as shown in Figure 11. To visualize the labeling effect, the bounding boxes of dams and bridges have been drawn in different colors in the figure. Figure 12 is an example of PASCAL VOC annotation. Making Land Cover Classification Samples The classification of images at the image level instead of the pixel level means that we do not need to know the details of the distribution of the objects in the image. It is widely used in land use classification. We used Huairou District and Changping District in Beijing as experimental areas and selected GF-2 and GF-6 images as the main data sources. Figure 13 shows the main land use types in some parts of the experimental area. It can be seen that the main land cover classes include forest, water, buildings, and farmland. We first manually drew different features in ArcGIS software and then used the image classification tool of LabelRS to produce classification samples. The sample size was set to 128. The samples obtained are shown in Figure 14. Conclusions In response to the current lack of labeling tools for remote sensing images, we developed LabelRS based on ArcGIS and Python. Unlike normal images, remote sensing images have more bands, higher pixel depth, and larger breadth. The targets on remote sensing images also have different sizes and diverse distribution patterns. LabelRS has overcome these difficulties to a certain extent. It solves the problem of densely distributed targets by merging elements and divides large targets through sliding windows. A variety of band stretching, resampling, and gray level transformation algorithms are set to solve the problem of image spectral combination and pixel depth conversion. In addition, on the basis of conventional samples, unique spatial information is added to realize seamless conversion between natural samples and geographic samples. Our tool can assist researchers in making their own deep learning samples, which can reduce the burden of data preprocessing and the reliance on existing public samples, and ultimately helping researchers use deep learning techniques to solve specific target detection and recognition tasks. LabelRS also have certain limitations. (1) The object detection sub-tool does not support a rotating bounding box for the time being. (2) LabelRS currently rely on ArcPy scripts. Later we will use the GDAL library to achieve full open source. LabelRS's current limitations are identified and will be the base for future developments. Finally, we propose some suggestions on parameter settings. The first is how to choose adaptive tools. Large-size and irregular objects are not suitable for object detection because the rectangular bounding box may not effectively cover the target and will introduce a lot of interference. In this case, the semantic segmentation tool is more appropriate. The road map for selecting the tools is shown in Figure 15. The second is the sample size. The sample size for object detection can be set larger to avoid the target from being truncated. The sample size of image classification should be set as small as possible, because the larger the rectangle, the more non-label types of pixels may be introduced. The last is the output format. At present, most users are more accustomed to processing ordinary JPEG or PNG images. 
Because of this, LabelRS provides several useful data stretching functions to meet this demand. But in some cases, we prefer users to choose TIFF as the output format to avoid stretching the remote sensing images. For example, for a water body with uniform color, the image after stretching may look strange, which is caused by incorrect histogram truncation. In future research, we will continue to improve our code to make LabelRS easier to understand, easier to use, and easier to integrate.
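As a concrete illustration of the percentage truncation stretch (PTS) and of the histogram-truncation caveat just mentioned, the sketch below shows one plausible percent-clip implementation with NumPy. It is not the LabelRS code itself: the percentile-based choice of c and d and the 8-bit linear rescale are assumptions that merely mirror the description in Section 2.

```python
# Generic percent-clip ("percentage truncation") stretch to 8-bit, for illustration.
# LabelRS is built on ArcPy; this standalone version only mirrors the idea of
# truncating the histogram tails at minP and maxP before linear rescaling.
import numpy as np

def percent_clip_stretch(band, min_p=2.0, max_p=98.0):
    """Linearly rescale a single band to uint8 between its min_p and max_p percentiles."""
    band = band.astype(np.float64)
    c, d = np.percentile(band, [min_p, max_p])   # truncation thresholds
    if d <= c:                                   # degenerate (nearly uniform) band
        return np.zeros_like(band, dtype=np.uint8)
    stretched = (band - c) / (d - c) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

# Hypothetical 16-bit band: a uniform water body plus a few bright outliers
band = np.full((100, 100), 1200, dtype=np.uint16)
band[0, :5] = 60000
print(percent_clip_stretch(band).min(), percent_clip_stretch(band).max())
```

For the nearly uniform water-body band in this example, the two percentiles collapse and the stretched output degenerates, which is exactly why TIFF output without stretching can be the better choice in such cases.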
6,417.6
2021-05-24T00:00:00.000
[ "Environmental Science", "Computer Science" ]
Stability of orthotropic plates . The solution to the problem of the stability of a rectangular orthotropic plate is described by the numerical-analytical method of boundary elements. As is known, the basis of this method is the analytical construction of the fundamental system of solutions and Green’s functions for the differential equation (or their system) for the problem under consideration. To account for certain boundary conditions, or contact conditions between the individual elements of the system, a small system of linear algebraic equations is compiled, which is then solved numerically. It is shown that four combinations of the roots of the characteristic equation corresponding to the differential equation of the problem are possible, which leads to the need to determine sixty-four analytical expressions of fundamental functions. The matrix of fundamental functions, which is the basis of the transcendental stability equation, is very sparse, which significantly improves the stability of numerical operations and ensures high accuracy of the results. An analysis of the numerical results obtained by the author’s method shows very good convergence with the results of finite element analysis. For both variants of the boundary conditions, the discrepancy for the corresponding critical loads is almost the same, and increases slightly with increasing critical load. Moreover, this discrepancy does not exceed one percent. It is noted that under both variants of the boundary conditions, the critical loads calculated by the boundary element method are less than in the finite element calculations. The obtained transcendental stability equation allows to determine critical forces both by the static method and by the dynamic one. From this equation it is possible to obtain a spectrum of critical forces for a fixed number of half-waves in the direction of one of the coordinate axes. The proposed approach allows us to obtain a solution to the stability problem of an orthotropic plate under any homogeneous and inhomogeneous boundary conditions. Introduction The development level of production at the present stage is characterized by the widespread introduction of new technologies for the manufacture of high-strength materials with orthotropic (orthogonally anisotropic) properties. Such materials include fiberglass; composite materials reinforced with sequentially alternating layers of fibers in two mutually perpendicular directions; glued wood plates; sheet rolled metals, in which anisotropy begins to appear upon transition to the plastic stage of work, etc. The widespread use of materials with anisotropic properties has given rise to large-scale studies in the field of mechanics of anisotropic structures and, in the first place, plates. In many industries, designs in the form of plates made of orthotropic materials with three planes of symmetry of elastic properties are widely used. Under certain conditions, the operation of such plates is accompanied by the appearance of compressive stresses in the median plane, which can lead to a loss of stability and bearing capacity of the plate. Determining the critical load on a plate presents serious mathematical difficulties not only for orthotropic, but also for isotropic plates. In well-known monographs and reference books, only the stability problem of a rectangular plate with a hinged support along the contour is solved [1][2][3][4]. 
Analysis of recent research The stability problem of an isotropic rectangular plate loaded on two opposite edges by forces distributed according to a linear law was first solved by I. G. Bubnov and S. P. Timoshenko [5]. For an orthotropic plate, this problem was solved by S. G. Lehnitsky [6]. All these classical solutions were obtained for the case of simply supported edges, in the form of double trigonometric series. Among later works, we note the papers of I. E. Harik [7,8], the first of which sets out a numerical-analytical method for the analysis of orthotropic rectangular plates subject to uniform, linearly varying and piecewise-continuous in-plane loads; the solution procedure is based on the classical method of separation of variables. The second proposes an analytical method for solving the problem of elastic bending of orthotropic rectangular plates under different boundary conditions. A generalization of all the results obtained in this direction was made by F. Bloom and D. Coffin in an interesting handbook [9], which is, in our opinion, the most comprehensive review of theoretical methods for calculating the stability of thin plates. There are very few contemporary works devoted to the stability of anisotropic plates. We note the paper [10], where the stability problem of an anisotropic plate is solved (but again with hinged support along the contour), and the work [11], where the stability problem is solved for pure bending of an orthotropic plate in which two opposite edges are free and the two other edges are articulated; the finite difference method was used there. The works of foreign authors rely primarily on numerical methods, most often the finite element method and its modifications. The paper [12] considers the analysis of vibrations and stability of thick orthotropic plates using finite elements based on the Trefftz hybrid formulation. The Trefftz-type finite element method (TFEM) is used by C. Young [13] to solve some potential problems in an orthotropic medium. The boundary element method in its classical form was used in [14]. There, fundamental solutions are obtained for orthotropic thick plates with allowance for transverse shear strain, boundary integral equations adapted to arbitrary boundary conditions are formulated, and examples of numerical implementation are given. Research aim The aim of this research is to obtain a solution of the stability problem of a rectangular orthotropic plate by the numerical-analytical boundary element method (NA BEM) under arbitrary boundary conditions and to verify the results against finite element analysis. Materials and methods Problems of this kind can be solved by numerical methods, such as the finite element method, the finite difference method, the R-function method, etc., but it is advisable to verify the results by an analytical method; the NA BEM appears to be such a method. As is known [15], the basis of this method is the analytical construction of the fundamental system of solutions and Green's functions for the differential equation (or system of equations) of the problem under consideration. To account for particular boundary conditions, or contact conditions between the individual elements of the system, a small system of linear algebraic equations is compiled, which is then solved numerically. In this case, no restrictions are imposed either on the boundary conditions or on the nature of the external load.
Note that the method is strictly mathematically justified, since it uses fundamental solutions of differential equations; therefore, taking into account the initially accepted hypotheses, it allows exact values of the desired quantities of the problem to be obtained. Research results The differential stability equation of an orthotropic plate within the framework of the Kirchhoff-Love hypothesis (Fig. 1) can be written as
$$D_x \frac{\partial^4 W}{\partial x^4} + 2H \frac{\partial^4 W}{\partial x^2 \partial y^2} + D_y \frac{\partial^4 W}{\partial y^4} = q(x,y) + N_x(y)\frac{\partial^2 W}{\partial x^2} + 2N_{xy}\frac{\partial^2 W}{\partial x \partial y} + N_y(x)\frac{\partial^2 W}{\partial y^2}, \qquad (1)$$
where the bending stiffnesses D_x, D_y and the effective torsional stiffness H are expressed in the usual way through the elastic constants and the plate thickness h (each is proportional to h³). Here E_x, E_y are the moduli of elasticity in the axis directions; G is the shear modulus; h is the plate thickness; μ_xy, μ_yx are the Poisson's ratios; W(x, y) is the deflection amplitude; N_x(y), N_xy, N_y(x) are the forces in the middle plane; and q(x, y) is the transverse load amplitude. The stability equation (1) is a partial differential equation of the fourth order. The function W(x, y), which is the solution of this equation, depends on two variables. The transition from a two-dimensional problem to a one-dimensional one, as required by the algorithm of the NA BEM, can be accomplished by applying the Kantorovich-Vlasov variational method. Let us expand the deflection W(x, y) into the functional series
$$W(x, y) = \sum_i X_i(x)\, W_i(y). \qquad (2)$$
The functions X_i(x) must be chosen such that they describe as accurately as possible the shape of the deflected surface of the plate in the direction of the ox axis. Deflection curves of a beam that has the same support conditions as the plate in the direction of the ox axis meet this requirement. Two methods are known for selecting the lateral deflection distribution functions X_i(x): static and dynamic [15]. When using the static method, the deflection of the beam is determined by a static load. This load should be such that symmetric and skew-symmetric forms of the deflection curve alternate. The functions X_i(x) are represented in the form of power polynomials that are easy to differentiate, integrate and evaluate without the use of complex programs. When using the dynamic method, beam deflections are represented by its natural vibration modes. In the static method, the functions X_i(x) must be constructed for each load and the corresponding beam reactions; in the dynamic method, it is enough to change only the values of the natural frequencies, which is very convenient. We retain one term of the series (2), which, as shown in [15], is sufficient to obtain a result of acceptable accuracy, i.e. W(x, y) = X(x) W(y) (3). Substituting (3) into (1) gives equation (4). Multiplying both sides of (4) by X and integrating over [0; l_1], where l_1 is the plate dimension in the x direction (Fig. 1), yields an ordinary differential equation (5) with coefficients A, B, K, C. These coefficients can be calculated in any mathematical program with almost any accuracy. From the introduced notation it follows that N_y(x) can be any function of x, while N_x(y) and N_xy should be piecewise constant functions of y, since otherwise equation (5) becomes a differential equation with variable coefficients. Dividing all the terms of equation (5) by A gives equation (6). The solution of the corresponding Cauchy problem allows us to determine the fundamental functions, the form of which depends on the roots of the characteristic equation (7). Consider the practically important case when N_xy = 0. In this case f = 0 in equation (7), and the roots of the characteristic equation are expressed in terms of two parameters r and s. The form of the fundamental functions is determined by the relation between r and s, which depends on the fixing conditions of the longitudinal edges of the orthotropic plate. Four variants of this relation are possible.
One of these variants corresponds to complex roots of the characteristic equation. In every case, satisfying the boundary conditions leads to the transcendental stability equation (8), whose basis is A*, a square matrix of values of fundamental orthonormal functions with compensating elements describing the topology of the system. The roots of equation (8) form the spectrum of critical forces of the plate in question. Let us look at some examples. We will determine the first three critical loads and the three corresponding buckling modes of a plate made of orthotropic material under two boundary conditions: hinged support along the entire contour (option 1) and rigid fixing of the plate on three sides with the fourth side free (option 2) (Fig. 2). To assess the accuracy of the results, the plate was modeled in ANSYS [16]. The calculation of the plates by the finite element method under the two variants of the boundary conditions showed good convergence of the results obtained by the two methods (Table 1). The first three buckling modes are given in Table 2. Conclusions Thus, the stability problem for an orthotropic rectangular plate leads to four possible combinations of the roots of the characteristic equation of the problem, and, therefore, the complete solution of the problem is determined by 64 analytical expressions of the fundamental functions. The matrix of fundamental functions, which is the basis of the transcendental stability equation, is very sparse, which significantly improves the stability of numerical operations and ensures high accuracy of the results. An analysis of the numerical results obtained by the author's method shows very good convergence with the results of finite element analysis. For both variants of the boundary conditions, the discrepancy for the corresponding critical loads is almost the same and increases slightly with increasing critical load. Moreover, this discrepancy does not exceed one percent. It should be noted that, for both variants of the boundary conditions, the critical loads calculated by the boundary element method are lower than those from the finite element calculations. The obtained transcendental stability equation allows one to determine the critical forces both by the static method and by the dynamic one. From this equation it is possible to obtain a spectrum of critical forces for a fixed number of half-waves in the direction of one of the coordinate axes. For example, one half-wave in the direction of the Ox axis and many half-waves in the direction of the Oy axis (Fig. 1), two half-waves in the direction of the Ox axis and many half-waves in the direction of the Oy axis, etc., depending on the magnitude of the coefficients A, B, K, C. The proposed approach allows us to obtain a solution to the stability problem of an orthotropic plate under any homogeneous and inhomogeneous boundary conditions.
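To make the four root combinations mentioned above concrete in purely numerical terms, the sketch below classifies the roots of a biquadratic characteristic equation of the form λ⁴ + pλ² + q = 0 (the case N_xy = 0, so the odd-power terms vanish). The coefficients are arbitrary illustrative values; the sketch does not reproduce the author's expressions for r and s or the fundamental functions themselves.

```python
# Illustrative classification of the roots of a biquadratic characteristic equation
#   lambda^4 + p * lambda^2 + q = 0   (N_xy = 0, so odd powers vanish).
# Coefficients are arbitrary; the author's expressions for r and s are not reproduced.
import numpy as np

def classify_roots(p, q):
    roots = np.roots([1.0, 0.0, p, 0.0, q])
    real = np.isclose(roots.imag, 0.0, atol=1e-6)
    n_real = int(real.sum())
    n_distinct = len(set(np.round(roots, 6).tolist()))
    if n_real == 4:
        return "four real roots" + (" (repeated)" if n_distinct < 4 else "")
    if n_real == 0:
        return "four complex roots"
    return "two real and two complex roots"

for p, q in [(-5.0, 4.0), (2.0, 5.0), (-4.0, 4.0), (3.0, -4.0)]:
    print(p, q, "->", classify_roots(p, q))
```

Each printed case corresponds to a different analytical form of the fundamental functions and hence to a different block of the sixty-four expressions mentioned in the conclusions.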
2,920.4
2020-01-01T00:00:00.000
[ "Mathematics" ]
Thermal characterisation of the cooling phase of post-flashover compartment fires The main characteristics of the cooling phase of post-flashover compartment fires are studied using a simplified first-principles heat transfer approach to establish key limitations of more traditional methodologies (e.g., Eurocode). To this purpose, the boundary conditions during cooling are analysed. To illustrate the importance of a first-principles approach, a detailed review of the literature is presented followed by the presentation of a simplified numerical model. The model is constructed to calculate first-order thermal conditions during the cooling phase. The model is not intended to provide a precise calculation method but rather baseline estimates that incorporate all key thermal inputs and outputs. First, the thermal boundary conditions in the heating phase are approximated with a single (gas) temperature and the Eurocode parametric fire curves, to provide a consistent initial condition for the cooling phase and to be able to compare the traditional approach to the first-principles approach. After fuel burnout, the compartment gases become optically thin and temperatures decay to ambient values, while the compartment solid elements slowly cool down. For simplicity, convective cooling of the compartment linings is estimated using a constant convective heat transfer coefficient and all linings surfaces are assumed to have the same temperature (no net radiative heat exchange). All structural elements are assumed to be thermally thick. While these simplifications introduce quantitative errors, they enable an analytical solu-tion for transient heat conduction in a semi-infinite solid that captures all key heat transfer processes. Comparisons between the results obtained using both approaches highlight how, even when considering the same fire energy input, the thermal boundary conditions according to the Eurocode parametric fire curves lead to an increase energy accumulated in the solid after fuel burnout and a delay in the onset of cooling. This is not physically correct, and it may lead to misrepresentation of the impact of post-flashover fires on structural behaviour. Introduction and background Ensuring the structural stability and integrity of a building in case of fire is of the utmost importance for the safety of its occupants and firefighters as well as for other fire safety objectives, such as property protection and business continuity.Traditional prescriptive design methodologies for structural systems exposed to fire are based on the concept of the standard fire curve as the main design scenario for postflashover compartment fires [1].This fire exposure is characterised by a monotonically increasing temperature-time curve, deemed to represent a worst-case scenario for non-combustible construction materials during the growth and fully-developed phases of a fire [2,3].However, the past two decades of research have highlighted the need to adopt a holistic performance-based methodology for the design of structures that ensures structural integrity and stability.Thus, it is necessary to deliver a performance analysis of the structure using a more realistic thermal exposure until complete fuel burnout [4].This approach allows then to analyse the structural behaviour of load-bearing systems during all phases of a fire: growth, fully-developed, decay, and cooling. 
Given that structural design seeks to optimize materials usage, the literature highlights the importance of understanding the behaviour of the structure during the fire decay and cooling phases. Once the fuel has been consumed, the structure continues to evolve thermally and, given common structural optimization practices, thermal evolution after the heating phase can lead to failure. Therefore, the cooling of structural systems is relevant, independently of the construction materials used. With regards to concrete structures, during a fire, the temperatures inside concrete members continue to evolve after the period of maximum gas temperature. This means that the highest temperatures in the reinforcement might occur during the fire decay or cooling phase [5]. Furthermore, concrete can experience an overall further loss of strength while cooling [6,7]. These effects depend on many variables, including heating rates. Studies have shown that the effects of the fire on the structure are potentially more critical for fast-growing fires (steep temperature gradients) than for slowly growing ones [8], thus similar differences could be expected for fast or slow cooling. It is also expected that these consequences may be more critical for structural elements with 3- or 4-side fire exposure (i.e., beams and columns) [9][10][11]. Steel structures typically experience large variations in the force distributions within a structural frame during a fire event [12]. During the heating phase, steel connections usually suffer large forces due to thermal expansions and deformations of longitudinal elements, while during cooling high tensile stresses challenge steel connections due to the thermal contraction of the same longitudinal elements [13]. The specific characteristics of the cooling phase can therefore impact the behaviour of a steel structure in these latter stages of a fire event. Finally, the fire decay and cooling phases can also challenge wooden structural systems because, when the contents fire extinguishes, the heat wave continues to propagate within the timber and, accordingly, the load-bearing capacity of the structural system can, potentially, continue to decrease [14][15][16][17][18][19]. The problem is more critical for timber structures because wood irreversibly loses its mechanical properties at relatively low temperatures, compared to traditional construction materials like steel and concrete. For example, at 100 °C a typical softwood column has a reduction of compressive capacity of about 75 %, compared to its capacity at ambient temperature [20], and the zero-strength layer within timber cross-sections can extend to areas at temperatures as low as 90 °C [19,21]. A problem, specific to timber, is the influence of internal temperature gradients on self-extinction [22]. Conditions during fire decay and cooling can delay self-extinction, enhancing the potential for challenging behaviour such as delamination. Consequently, for all construction materials, structural failure can occur during the fire decay and subsequent cooling, even hours after a fire is extinguished [10]. The main challenge is related to the fact that, after fire extinction, this hazard cannot be easily detected, and structural collapse may occur with little or no warning. This situation is a critical scenario for fire brigade interventions. This is what happened in Switzerland in 2004, when an underground car park collapsed after fire extinguishment, claiming the lives of seven firefighters [23].
Historically, most research efforts have been directed towards comprehending the thermal effects of a fully-developed fire during heating, which leads to the highest heat fluxes and temperatures and is commonly considered the most challenging phase for load-bearing systems. It is also common to find large-scale compartment fire test results where the fire decay and cooling phases are not even characterised or reported. Due to safety concerns, in many cases the fire is manually extinguished with water, rapidly cooling all structural systems [24]. As a result, engineering tools aimed at estimating the thermal conditions imposed on load-bearing systems in the event of a fire focus on the maximum temperature achieved by the structure during the fully-developed phase of the fire, and little interest has been devoted to the fire decay and cooling phases [25,26]. Given the recognition that structural behaviour in fire needs to be optimized and explicitly assessed, burnout needs to be characterised; therefore, various models that also include the fire decay and cooling phases have been developed and included in structural fire engineering design practices [27]. Existing computer models solve the conservation equations to estimate the fire conditions in post-flashover compartments and have the capacity to include burnout and the subsequent cooling phase [27]: from simplified single- or two-zone models (e.g., OZone [28,29], CFAST [30], and B-RISK [31]) to higher-complexity computational fluid dynamics (CFD) software [32] (e.g., Fire Dynamics Simulator - FDS [33]). These models are in some cases coupled with a finite-element model for structural analysis (e.g., SAFIR [34] and OpenSEES [35]). Despite their physical basis, all these models impose strong simplifications and approximations while remaining complex to use. Many of these approximations and simplifications reside within inputs and are often hidden under the complexity of the modelling strategy. Simpler approaches therefore remain necessary and are currently widely used for design purposes. The most widely adopted methodology to characterise natural fire conditions for structures is a heat transfer analysis that uses as input the temperature histories provided by the Eurocode parametric fire curves (EPFC) [36], which were developed starting from the Swedish fire curves [37][38][39][40]. This method offers analytical expressions to generate the temperature-time history of a fire as a function of the fuel load density and the ventilation factor, as first described by Thomas [25]. The thermo-physical properties of the linings materials also form part of the definition of these curves. However, in its current formulation, the cooling phase is substantially simplified into a linear decay relationship, following constant cooling rates as prescribed in the 1975 edition of the ISO 834 standard, which are not based on fundamental physical principles or a comprehensive research study [38,41]. This approach can lead to a potentially unrealistic definition of the cooling phase [42,43]; nevertheless, there is no detailed quantitative or qualitative evaluation of the nature or magnitude of the potential errors introduced through this simplification.
Over the past decades, several other engineering approaches to estimate the thermal conditions imposed by natural fires on structural elements have also been proposed. These methodologies rely on different assumptions and simplifications. Examples are the formulations given by the BFD temperature-time curves, obtained by Barnett through an empirical regression analysis of full-scale experiments [44][45][46], and the iBMB parametric fire curves, implemented in the German national annex of EN 1991-1-2. In the latter case, the heating curves offer a method similar to the Eurocode parametric fire curves, but the cooling phase is defined as a parabolic decay curve [47,48]. No substantial evidence for the choice of mathematical representation is provided. Other methods of various complexities and based on different sets of experimental data can also be found in other studies [49][50][51][52]. A comparison of different approaches is shown in Fig. 1 (Fig. 1: comparison between experimental results (Test 8, Cardington fire tests [53]) and various methodologies to estimate the temperature-time curve of post-flashover compartment fires; for purposes of illustration, the curves were shifted to achieve flashover at the same instant; while it is recognised that flashover is a complex process, the instant where flashover occurs is defined as when the temperature reaches 550 °C [54]). Fig. 1 offers an example of the several temperature-time curves that can be obtained for a specific post-flashover compartment fire, and it compares the modelled temperature histories to those obtained experimentally from a large-scale fire test [53]. All the graphs have been shifted to have the same flashover time, defined at a smoke layer temperature above 550 °C [54]. While Fig. 1 seems to demonstrate agreement between models and experiments, this is purely based on temperature measurements and does not describe the heat transfer environment. Furthermore, the lack of detail on the radiative heat exchange corrections for the thermocouples makes it difficult to assess whether the temperatures can be used as input for heat transfer calculations. This was made evident by Welch et al. [55], who showed the importance of these corrections when the gas was optically thin. Overlooking or highly simplifying the fire decay and cooling phases can affect the accuracy of any heat transfer calculation to the structure and can therefore potentially hide dangerous failure modes. Comprehending and properly considering the fire decay and cooling phases can therefore significantly improve performance-based design methodologies that aim to deliver fire-safe structures. This study aims to provide an assessment of the different factors influencing the thermal boundary conditions for a structure in the event of a fire. The focus is on the decay and cooling phases. This study does not aim at producing a precise method of assessment; instead, it attempts to analyse the problem with simple tools that are based on fundamental principles, enabling a transparent discussion of assumptions and limitations. Thus, the current research study describes the principles and characteristics of natural fires in post-flashover compartments, considering both the heating phase and the cooling phase. Based on energy conservation equations, a simplified numerical model is introduced to approximate the thermal conditions imposed on structural elements within an enclosure during the cooling phase and to define appropriate thermal boundary conditions. Simplifications and assumptions are discussed in detail, enabling a clear understanding of the potential impact of each simplification and assumption on the thermal evolution of the structure. The presented methodology aims at highlighting the importance of approaching the problem of the cooling phase from a first-principles perspective, suggesting the use of analytical solutions and various simplifications to properly treat the convective and radiative heat transfer.
Phases of post-flashover compartment fires A natural fire within a building enclosure is typically composed of a growth, a fully-developed, a decay, and a cooling phase [54,56]. After fire ignition, the growth phase is characterised by a gradual increment of the fire heat release rate, and the average compartment temperature and heat fluxes are relatively low, as the fire is localised in the vicinity of its origin. If the conditions are met, at flashover all combustible items in the compartment ignite simultaneously and flames appear to fill essentially the entire volume. There is a rapid increase in the fire heat release rate and compartment temperature. Depending on the compartment and fuel characteristics, the fully-developed phase of the fire can typically be associated with a quasi-constant heat release rate (ventilation- or fuel-controlled) and a homogeneous temperature. Regarding the stability and integrity of load-bearing elements, structural fire engineering usually focuses on post-flashover fires because, after flashover, the thermal attack caused by the fire is significant. Consequently, the mechanical strength and stiffness of construction materials and structural systems can be significantly affected, and the stability and integrity of load-bearing elements can be compromised [57]. The pre-flashover and the fully-developed phases traditionally represent the heating phase of the structure during a fire event. After that, the fire starts decreasing, leading to the fire decay and cooling phases. The fire decay phase and the cooling phase are often mixed up in the literature. However, they refer to two slightly different periods of the fire. This distinction has been recently underlined and associated with the time-history of the fire heat release rate [56]. Within the fire decay phase, there is still a "fire"; therefore, during this phase there is still some flaming combustion taking place in the compartment, hence the fire heat release rate is different from zero. The decay phase is characterised by a progressive transformation of the fire from a fully-developed fire, where heat generation from the fire is a critical term in the energy equation, to a fire where heat generation can be neglected and eventually disappears from the energy equation. In this phase, structural elements can still heat up as long as they receive a positive heat flux; therefore, the fire decay phase can still be part of the heating phase for the structure [36,56]. On the other hand, in the cooling phase, the fire is extinguished and therefore there is no heat release. After burnout, the total amount of energy within the compartment control volume decreases (i.e.
the compartment gases and solids are losing heat to the surrounding environment). Nevertheless, the compartment elements still exchange heat between them. Structural elements, as any other component within the compartment, cool down according to their material properties and the conditions in the surrounding environment (e.g., compartment gases and linings). This phenomenon primarily depends on the characteristics of the compartment (e.g., geometry and opening) and its elements (e.g., linings). In some models available in the literature, the fire decay phase is assumed to start when 70-80 % of the total fuel load has been consumed and, after this point, the fire heat release rate decreases linearly to zero [28,29,36,41,58]. However, there is no conclusive data on the amount of heat being released in the decay phase. The transition is highly fuel-dependent, and the available literature offers little evidence on the decay period of the heat release rate of post-flashover compartment fires [56]. Therefore, as in other simplified models (e.g., the Eurocode parametric fire curves, discussed in Section 7.1), this study assumes that the end of the fire fully-developed phase corresponds to complete fuel burnout [56]. Hence, all the fuel is assumed to fully combust during the fully-developed phase, the fire heat release rate instantaneously drops to zero, and no fire decay phase is considered. From a structural point of view, this is generally considered to represent a more critical case for traditional load-bearing elements because it leads to a longer fully-developed phase and, therefore, a higher maximum temperature and longer exposure to higher temperatures. For consistency with future work, this research study exclusively refers to the cooling phase as the phase after the fully-developed phase (without a fire decay phase). Compartment energy balance In a fire event, the gas phase enclosed by the compartment can be considered as a control volume of constant volume. While making the control volume the size of the compartment represents a very strong simplification that eliminates all terms associated with the distribution and transfer of energy within the compartment, the detail of Equation (1) is sufficient for the purposes of this study. As illustrated in Fig. 2, the energy conservation equation for the compartment enclosure is usually described as follows [57,59]: dU/dt = Qfire + Qflow,in − Qflow,out − Qwall − Qrad (1), where dU/dt [W] is the rate of energy change within the control volume, Qfire [W] is the total heat release rate (HRR) within the enclosure by the fire, Qflow,in and Qflow,out [W] are the energy entering and leaving the control volume per unit time by flows through the opening, Qwall [W] represents the rate of heat transferred to (+) or from (−) the enclosure boundaries (load-bearing or not), and Qrad [W] represents the radiative heat per unit time transferred through the opening. The described terms play different roles in the energy balance within the compartment. Their order of magnitude is very diverse, and their signs can vary depending on the heat gains or heat losses. For the purposes of this study and for simplification, the opening is assumed to be much smaller than the surface area of the inner boundaries and, accordingly, the radiative heat transfer from the hot gases through the opening (Qrad) can be assumed to be negligible (less than 10 % for the presented case). Furthermore, while radiation losses through the openings might not be negligible (large openings, optically thick gas phase, etc.), for small openings, and when integrated over the duration of the fire, it has been estimated to be approximately 3 % of the total energy released [58]. Thus, the assumption is limited but not incorrect, and given that the heating of the compartment elements is the result of a process of heat transfer integrated over time, it is reasonable to neglect this term. Fig. 2. Energy balance within the fire compartment (control volume, CV).
Heating phase The fire growth phase and in particular the fire fully-developed phase are usually considered the main heating phases for load-bearing structural systems. As shown in Fig. 3, the energy contribution of the fire heat release rate (Qfire) and the heat exchange at the compartment opening (Qflow) and linings (Qwall) define the compartment fire dynamics, and they must be analysed and quantified in detail. Accordingly, Equation (1) during the heating phase can be re-written as follows: dU/dt = Qfire + Qflow,in − Qflow,out − Qwall (2). By adopting simplifying assumptions, the various terms can be estimated, and the differential equation can be solved. A classic simplification formulates that the heat release rate by the fire and the advective heat transfer flowing outwards through openings (Qflow,out) dominate the fire dynamics within a compartment, with Qflow,in generally considered negligible (compared to Qflow,out) [60]. These assumptions and the additional simplification of a homogeneous control volume allow the thermal conditions imposed on compartment elements during the heating phase to be defined by a single temperature evolving in time. Thomas [61] suggests that the maximum gas phase temperature (attained at steady state) is the conservative compartment temperature that should be used to quantify heat transfer to the compartment elements. This approach is typically valid for relatively small compartments, but the assumption of homogeneous temperature distribution is no longer valid for large compartments, particularly if the aspect ratio deviates from cubic compartments. Many examples of non-homogeneous compartment temperatures have been reported and reviewed by Stern-Gotfried et al. [62]. A more recent example has been reported by Gupta et al. [63], who describe the mechanisms leading to heterogeneities. For the estimation of the heat transfer rate to the compartment boundaries (Qwall), it is necessary to address the heat transfer from the gas phase to the solid. Heat transfer to the compartment elements occurs by convection and radiation. Several experimental studies have measured the different heat transfer components in compartment fires. Most studies use small (order of 1 m) compartments with small openings. A notable example is the experiments by Veloo and Quintiere [64]. Larger compartment fire experiments with larger openings were reported by Lennon and Moore [53], and the heat transfer fields were analysed by Welch et al.
[55]. Their results were consistent with those of Veloo and Quintiere [64]. Detailed methodologies to assess both forms of heat transfer have been developed for zone models that include multiple compartments [65], but they remain restricted to small compartments with small openings. The only generalised model that includes all compartment geometries was developed by Jowsey [66], who established the different heat transfer regimes covered through a broad range of velocities and optical thicknesses. Other simplified methodologies include the adiabatic surface temperature formulation. The adiabatic surface temperature converts radiative and convective heat transfer into a single temperature. This conversion requires a single temperature for the sources of heat along with defined heat transfer coefficients (e.g., a convective heat transfer coefficient) [67]. When assuming an optically-thick gas medium during the heating phase, a single gas temperature defines both radiative and convective heat transfer from the compartment gases. Thus, the heat flux to compartment elements in a post-flashover compartment fire is generally associated with the temporal evolution of the compartment gas temperature (T g). Cooling phase Energy considerations as shown in Equation (2) can also be expressed for the cooling phase, as shown in Fig. 4. The exchange of gases at the compartment opening (i.e., inflow and outflow) plays a key role in the cooling phase, and the difference (Qflow,out − Qflow,in) has to be considered. In addition, the compartment linings gradually cool down and the heat transfer Qwall flows from the linings into the compartment gases (i.e., it has a negative value). If the compartment is once again assumed to have a homogeneous temperature, for the cooling phase, Equation (1) can be written as follows: dU/dt = Qflow,in − Qflow,out − Qwall (3). As for the heating phase, the presented differential equation can be solved by adopting different assumptions and simplifications. The compartment gas temperature is primarily governed by advection into and out of the enclosure and the thermo-physical material properties of the compartment linings. During the heating phase, heat exchange by the flow at the opening is dominated by the static pressure difference between the compartment (hot) and the external environment (cold). In this case, the mass flow rate through a vertical opening (and the resulting energy transfer) can be estimated using simple formulas, and typical flow velocities are in the order of a few meters per second [59]. During the cooling phase, the temperature difference between the hot compartment linings and the cold air results in natural convection. By considering buoyancy as the dominant force, the velocity of these convective flows can be approximated as √(gH), taking the compartment height H [m] as the characteristic length and g as the gravitational acceleration (9.81 m/s²). Considering typical compartment heights in the order of a few meters, the characteristic velocity can again be estimated to be of the order of a few meters per second. This corresponds to a characteristic residence time (t k [s]) in the order of a few seconds [68]. Therefore, the convective cooling of the control volume is typically a very rapid phenomenon.
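As an illustration (not part of the original study), the order-of-magnitude argument above, together with the solid-phase time scale discussed in the next paragraph, can be reproduced with a few lines of Python; the compartment height, thermal diffusivity and heating duration used below are assumed, representative values only.

```python
import math

# Illustrative time-scale comparison (assumed, representative values only).
g = 9.81          # gravitational acceleration [m/s^2]
H = 3.0           # assumed compartment height [m]
alpha = 5.0e-7    # assumed thermal diffusivity of a concrete-like lining [m^2/s]
t_heating = 3600  # assumed duration of the heating phase [s]
n = 4             # multiplication factor for the thermal penetration depth

# Gas phase: buoyancy-driven velocity ~ sqrt(g*H) and residence time ~ H / v.
v_gas = math.sqrt(g * H)                 # characteristic velocity [m/s]
t_gas = H / v_gas                        # characteristic residence time [s]

# Solid phase: penetration depth L_th = n*sqrt(alpha*t) and time ~ L_th^2 / alpha.
L_th = n * math.sqrt(alpha * t_heating)  # thermal penetration depth [m]
t_solid = L_th ** 2 / alpha              # characteristic conduction time [s]

print(f"gas velocity ~ {v_gas:.1f} m/s, gas residence time ~ {t_gas:.1f} s")
print(f"penetration depth ~ {L_th * 1000:.0f} mm, solid cooling time ~ {t_solid / 3600:.1f} h")
```

For these assumed values the gas-phase residence time is well below a second, while the solid cools over tens of hours, which is the separation of time scales exploited in the remainder of the paper.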
The compartment elements gradually cool down. The rate of cooling depends on the individual element's physical and thermal properties, as well as on the convective velocity. A characteristic time for this process can be estimated from the expression of the thermal propagation depth (L th [m]) [69]. Accordingly, the characteristic time (t k [s]) is proportional to L th²/α, where α [m²/s] is the thermal diffusivity of the compartment linings. For typical construction materials, this value can be estimated to be in the order of hours. This confirms that, after the flames quench, the entire cooling process of the compartment elements is much slower than the cooling of the gases in the control volume. After a rapid transition period, the smoke inside the compartment is released, and fresh air continuously enters the enclosure through the opening and leaves carrying energy from the compartment elements. The gases inside the compartment quickly become optically thin, and radiative exchange between the gas and the solids can be considered negligible. A possible simplification, given the short time scales associated with advection, is to assume that the gas temperature is that of ambient (T g ≈ T a). This eliminates the need for Equation (3) and allows Qwall to be estimated simply using a convective heat transfer coefficient and the radiative exchange between the surfaces of the enclosure. This will be explored in Section 5.2. As in the heating phase, the concept of adiabatic surface temperature [67] can be applied to lump the thermal boundary conditions into a single temperature. However, in the cooling phase, the theoretical adiabatic surface temperature cannot be readily interpreted as corresponding to a physical gas temperature because of the significantly different contributions of convection and radiation. Thermal boundary conditions and compartment fire tests The literature review highlighted that engineering tools and models for natural fires are generally based on the outcomes of large-scale fire tests. In fire tests, thermocouples of various types and sizes are typically employed with the intention of measuring the evolution of the gas temperature within the compartment during heating and cooling, without a clear consideration of the distinction between convective and radiative heat exchanges. However, as described in Section 3.2, during the cooling phase, there is a clear difference between the compartment's gas temperature and the linings' surface temperature. This distinction is not captured by current measurements because the thermocouples are strongly influenced by radiation from the hot compartment linings (see Fig. 5). During cooling, the gas phase is optically thin, so to obtain the gas phase temperature, significant corrections for radiation are necessary [55,70,71]. Without a precise correction, thermocouples provide an estimation of the combined effects of the gas phase temperature and the radiative heat being transferred from the surface of the compartment linings. The effect of radiative exchange with the linings increases in importance when thermocouples of larger characteristic dimensions are used, like in the BRE Cardington fire tests [53,55] or in the case of plate thermometers. Confusion on the definition of the thermal boundary conditions during the cooling phase has nevertheless prevailed. The cooling phase is normally characterised by a parabolic/exponential decreasing branch matched to measured temperatures, see Fig.
1 [39,53,[72][73][74][75]. Gas temperatures measured using thermocouples have led to various formulations of the cooling phase in engineering models like the Swedish fire curves [39,40]. These temperatures have been interpreted as an opaque gas phase temperature for heat transfer calculations; nevertheless, there is no evidence of appropriate corrections for radiative exchange between the thermocouple and the linings. In contrast, when a radiative correction has been implemented [55], it has been demonstrated that the correction is negligible when the gas phase is optically thick but extremely important when it is optically thin, as in the cooling phase. The lack of comprehensive research studies on this topic has allowed highly simplified models to prevail despite being frequently criticised [41][42][43]. Simplified numerical model Based on the considerations and energy balance equations presented in Section 2 on the heating and cooling phases of natural fires, a simplified numerical model is formulated here. This model aims at estimating the thermal conditions inside a compartment (as for any other internal compartment element) during a natural fire, with a special focus on the evolution of the cooling phase. In particular, special attention is paid to the temperature evolution of the inner gases and the linings surface within the compartment. Heating phase Within the scope of this research study, the fire heating phase (growth and fully-developed) is estimated according to the Eurocode parametric fire curves, currently the most widely adopted methodology, where the temperature history can be calculated starting from a few input parameters, namely the fuel load density, compartment geometry and characteristics (i.e., ventilation and linings' thermal inertia) [4,5,7,8,10,11,14,21,36,43]. As a result, a temperature-time curve of the compartment gases (T g) during the heating phase is obtained. In the case of the Eurocode parametric fire curves, this temperature can be used as a thermal boundary condition. This is appropriate because, for optically thick smoke, including the effect of the radiation coming from the heated compartment linings is not necessary, and a single gas temperature controls both convection and radiation heat transfer to the compartment elements [55]. The boundary condition can therefore be defined as Qwall = A w [h c (T g − T w) + ε w σ (T g⁴ − T w⁴)], where A w [m²] is the compartment linings surface, ε w [−] is the linings emissivity, and σ is the Stefan-Boltzmann constant (5.67 × 10⁻⁸ W/m²K⁴). Assuming an optically thick smoke, the compartment gases emissivity is not included in the expression, as it is assumed equal to 1. The constant value of 35 W/m²K for natural fire models recommended by the Eurocode is used for the convective heat transfer coefficient h c [36]. This value is the object of intense discussion, underlining the fact that the low velocities assumed in the compartment during the fire fully-developed phase should lead to lower values (values above 25 W/m²K are typically considered to relate to forced convection) [54]. However, given the high temperatures and radiation-dominated heat transfer, the convective heat transfer coefficient has a limited impact during the heating phase.
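As an illustration of this boundary condition, the short Python sketch below evaluates the net surface heat flux per unit area for assumed gas and lining temperatures. The functional form (convection plus grey-surface radiation with unit gas emissivity) follows the expression reconstructed above; the emissivity of 0.8 is the value used later in the case study, and the example temperatures are arbitrary.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant [W/m^2K^4]

def heating_flux(T_g, T_w, h_c=35.0, eps_w=0.8):
    """Net heat flux [W/m^2] from optically thick gases at T_g to a lining at T_w.

    Temperatures in kelvin. h_c = 35 W/m^2K is the Eurocode value for natural
    fire models; eps_w is the lining surface emissivity (0.8 assumed, as in the
    later case study). The gas emissivity is taken as 1 (optically thick smoke)."""
    q_conv = h_c * (T_g - T_w)                       # convective component
    q_rad = eps_w * SIGMA * (T_g ** 4 - T_w ** 4)    # radiative component
    return q_conv + q_rad

# Example: gas at 900 degC, lining surface at 600 degC (assumed values).
print(f"{heating_flux(900 + 273.15, 600 + 273.15) / 1000:.0f} kW/m2")
```

For these assumed temperatures the radiative component is several times larger than the convective one, which is why the exact value of h c matters little during heating.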
The end of the fully-developed phase (t max [s]) is a function of the compartment fuel load density (q″f [MJ/m²]), the opening factor (O [m^0.5]), and the resulting burning rate (ṁ″f [kg/s]). For ventilation-controlled fires, the burning rate can be calculated from the opening factor using Kawagoe's correlation [54]. The opening factor (O [m^0.5]) is usually expressed as O = A o √H o / A t, where H o [m] and A o [m²] are the compartment opening height and surface area, and A t [m²] is the total surface area of the compartment enclosure. Under the assumption that the total fuel load is consumed during the heating phase (fully-developed fire) and a steady-state burning rate, an expression can be derived for the commonly used time to burnout, essentially the ratio of the total fuel energy to the steady heat release rate, where ΔH c [MJ/kg] is the fuel effective heat of combustion. In order to consider a consistent heating period, it is important to introduce these equations. However, this part of the solution will not be implemented any further, since similar empirical expressions can be found in various methodologies, like the Eurocode parametric fire curves [36,41]. Using these thermal boundary conditions, the evolution of the surface and in-depth temperatures of the compartment linings can be estimated until the end of the heating phase (here burnout, since all fuel is assumed consumed in the heating phase). The temperatures are calculated using a simplified finite-difference conductive heat transfer model (explicit scheme) [69]. (Fig. 5. Schematisation of using a thermocouple to estimate the fire conditions during the cooling phase.) Cooling phase Given the description of the cooling phase in Section 3.2, the compartment environment is assumed to be an optically thin, smoke-free environment, and the temperature of the gas phase is assumed to be that of ambient (T a, i.e. outside). The temperatures of all compartment surfaces evolve due to convective cooling by the gas inside the compartment, as well as by radiation exchange between the various compartment elements. The boundary condition for a specific compartment element (j), of surface temperature T w,j and area A w,j, for the cooling phase can thus be written as Qwall,j = A w,j [h c (T a − T w,j) + Σ i F ij ε σ (T w,i⁴ − T w,j⁴)] (8), where F ij [−] is the view factor between the specific compartment element (j) and another compartment element (i), and ε [−] is the effective emissivity [69]. The total value can be estimated by summing the contribution of all the single elements within the compartment. This formulation assumes that the view factor to the outside is small. With regards to the conduction heat transfer, the same simplified finite-difference conductive heat transfer model can be used as in the heating phase. However, in the case of compartment elements characterised by different thermo-physical properties, the heat transfer Qwall,j in the cooling phase needs to be estimated for each element.
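Since the extracted text does not reproduce the burnout expression itself, the sketch below implements the ratio described above (total fuel energy over steady heat release rate) under stated assumptions: a commonly cited Kawagoe coefficient of about 0.09 kg/(s·m^2.5), a wood-like effective heat of combustion of 17.5 MJ/kg, and an assumed 2 m opening height; none of these values are taken from the paper.

```python
def opening_factor(A_o, H_o, A_t):
    """Opening factor O = A_o * sqrt(H_o) / A_t  [m^0.5]."""
    return A_o * H_o ** 0.5 / A_t

def time_to_burnout(q_f, A_f, A_o, H_o, dH_c=17.5, k_kawagoe=0.09):
    """Rough time to burnout [s] for a ventilation-controlled fire.

    q_f: fuel load density per unit floor area [MJ/m^2]; A_f: floor area [m^2];
    A_o, H_o: opening area [m^2] and height [m]; dH_c: effective heat of
    combustion [MJ/kg] (17.5 assumed, wood-like); k_kawagoe: commonly cited
    Kawagoe coefficient [kg/(s*m^2.5)] (assumed value). The expression simply
    divides the total fuel energy by the steady heat release rate."""
    m_dot = k_kawagoe * A_o * H_o ** 0.5   # burning rate [kg/s]
    Q_dot = m_dot * dH_c                   # heat release rate [MW]
    return q_f * A_f / Q_dot               # burnout time [s]

# Example loosely based on the later case study (7.5 x 7.5 x 3 m, O = 0.04 m^0.5).
A_f, A_t = 7.5 * 7.5, 2 * 7.5 * 7.5 + 4 * 7.5 * 3.0   # floor and total enclosure area [m^2]
A_o, H_o = 0.04 * A_t / 2.0 ** 0.5, 2.0                # opening sized (assumed 2 m high) to give O = 0.04
print(f"O = {opening_factor(A_o, H_o, A_t):.3f} m^0.5")
print(f"t_burnout ~ {time_to_burnout(720, A_f, A_o, H_o) / 60:.0f} min")
```

For these assumed inputs the estimate is of the order of one hour, broadly consistent with the heating-phase duration quoted later for the case study, but it should be read only as an order-of-magnitude check.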
Different from the heating phase, the estimation of the convective heat transfer coefficient (h c) plays a key role in the quantification of the thermal boundary conditions during the cooling phase. Convective heat transfer is of primary importance because it controls the cooling of the compartment linings, and therefore the heat fluxes within the compartment. Typical values for free convection are in the range of 5-25 W/m²K [54]. Considering the compartment walls as vertical hot flat surfaces subjected to natural free convection, empirical correlations based on the Nusselt number can be used to obtain a first approximation of the convective heat transfer coefficient [69,76]. It is certain that more refined approaches can be followed; nevertheless, the precision by which the heat transfer coefficient is estimated is not the subject of this study. Further refinements could follow but would require a more detailed characterisation of the regimes being explored, as has been done by multiple authors for several specific regimes [55,63,64,66]. Assuming a characteristic length of 1-4 m (taken as the compartment height) and surface temperatures of 200-1200 °C, Fig. 6 shows that the convective heat transfer coefficient has limited variability for these conditions (6.4-7.6 W/m²K). Due to recirculation flows, higher velocities and convective heat transfer coefficients could be expected in the enclosure. However, for simplicity, the convective heat transfer coefficient for the cooling phase is set equal to 7 W/m²K [77]. With regards to the radiative heat transfer, the simplified model assumes that the rapid cooling of the compartment gases at fuel burnout leads to a transparent (optically thin) environment inside the enclosure. Thus, the surfaces are free to exchange heat. Compartment surfaces can have different linings with different thermal properties. This can result in different surface temperatures and thus radiative heat exchange between surfaces. Any precise calculation of the temperature histories within structural elements will require the inclusion of these property differences. This study does not aim to deliver such a level of precision, but rather an illustration of the impact of convective cooling. Therefore, as a simplification, all the compartment linings will be assumed to be made of the same material and, given the assumed uniform gas temperature and convection coefficient, will therefore follow identical temperature evolutions. Accordingly, the second term in the right-hand side of Equation (8) can be completely eliminated. However, if the analysis focuses on the thermal evolution of a compartment element that is highly sensitive to the temperature of the surrounding surfaces, the radiation exchange must be included in the calculation. This aspect is discussed in detail in Section 7.2.
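One possible way to obtain such a first approximation (an illustration; the paper does not state which correlation was used) is the Churchill-Chu correlation for a vertical plate in free convection, sketched below with assumed air properties roughly representative of a film temperature of about 600 K.

```python
def h_natural_convection_vertical(T_s, T_inf, L,
                                  nu=5.0e-5, k_air=0.046, Pr=0.69):
    """Churchill-Chu correlation for a vertical plate in free convection.

    T_s, T_inf in kelvin, L = characteristic length [m]. The air properties
    (kinematic viscosity nu [m^2/s], conductivity k_air [W/mK], Prandtl number
    Pr) are assumed values for air at a film temperature of ~600 K; a fuller
    implementation would evaluate them at the actual film temperature."""
    T_film = 0.5 * (T_s + T_inf)
    beta = 1.0 / T_film                                        # ideal-gas expansion coefficient [1/K]
    Ra = 9.81 * beta * abs(T_s - T_inf) * L ** 3 * Pr / nu ** 2  # Rayleigh number (Gr * Pr)
    Nu = (0.825 + 0.387 * Ra ** (1 / 6) /
          (1 + (0.492 / Pr) ** (9 / 16)) ** (8 / 27)) ** 2
    return Nu * k_air / L                                      # convective coefficient [W/m^2K]

# Example: 3 m high wall at 600 degC cooling to 20 degC ambient air.
print(f"h_c ~ {h_natural_convection_vertical(873.15, 293.15, 3.0):.1f} W/m2K")
```

For a 3 m high wall at 600 °C this returns roughly 7 W/m²K, i.e. within the 6.4-7.6 W/m²K range quoted above, which supports the use of a constant value in the simplified model.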
Case study The presented numerical model is applied to estimate the thermal conditions from a natural fire, with a specific focus on the cooling phase. A case study is defined: a square compartment of 7.5 × 7.5 m² in plan, 3 m in height, with an opening factor of 0.04 m^0.5, a fuel load density per unit floor area of 720 MJ/m², and compartment linings with a thermal inertia of 1160 J/m²s^0.5K (a typical value for lightweight concrete), a surface emissivity of 0.8, and a thickness of 0.20 m. All compartment linings are the same, so the heating and cooling rates are the same as well, considering uniform convection coefficients and 1D heat transfer. This case is chosen to provide a heating phase of 1 h similar to the standard fire curve according to the Eurocode parametric fire curves methodology (scaling factor Γ equal to 1) [36]. To solve the simplified numerical model, the spatial discretisation of the finite-difference conductive heat transfer model inside the linings is set equal to 1 mm and the time step to 0.01 s; it has been confirmed that these settings result in numerical stability (an illustrative sketch of such a solver is given after the parametric study below). The ambient temperature is specified as 20 °C. With the described assumptions and input parameters, the numerical model enables the evaluation of the temperature-time histories of the compartment gases and linings. These outcomes are shown in Fig. 7, which also reports the ISO 834 standard fire curve and the corresponding Eurocode parametric fire curve. In addition, Fig. 8 reports the calculated heat fluxes of various natures between the compartment gases and linings, and Fig. 9 shows the in-depth temperature profiles within the compartment linings during heating and cooling. With regards to the heating phase, the gas and linings surface temperatures rapidly increase until they reach a maximum value, achieved at fuel burnout. The gas temperature is higher than the surface temperature, but they follow a very similar trend, defined by the thermo-physical properties of the linings material (i.e., its thermal inertia). In this phase, there is a significant net heat flux entering the compartment linings from the hot gases (q″net), where the radiative component (q″rad) dominates the heat transfer compared to the convective component (q″conv). The net heat flux absorbed at the linings surface (q″net) is conducted in depth into the material (q″cond), so these fluxes are equal (with opposite sign) at all times. At fuel burnout, the gas temperature is set to ambient conditions, while the linings surface temperature gradually decreases. This temperature-time curve is characterised by decreasing cooling rates (high for elevated temperatures and low for lower temperatures, see Fig. 7 after 60 min), and it is determined by the convective cooling of the compartment linings surface and their thermo-physical properties. For this example, and given that all the linings are the same, the net radiative exchange between them is zero. The radiative heat flux between the gases and linings (q″rad) also decreases to zero, while all the heat conducted through the linings surface (q″cond) is transferred to the compartment gases due to surface convective cooling. Fig.
8 highlights how, in the heating and cooling phase, the direction of heat transfer (i.e., signs positive/negative) is opposite.Due to the strong assumption of full fuel consumption in the heating phase (no gradual fire decay phase) and consequent simplification of compartment gases at ambient temperature during the cooling phase, there is a significant discontinuity between the two phases, but the simplified numerical model provides a correct phenomenological description of the compartment conditions during cooling.The results could certainly be improved by including a transient phase that considers the gradual decrease of the fire heat release rate, as well as the gradual cooling of the compartment gases. Parametric study To investigate the influence of various assumptions and input parameters on the numerical model, a parametric study is carried out focusing on the cooling phase.Fig. 10 presents the results obtained from the parametric study on the evolution of the compartment gases and linings surface temperatures with varying convective (cooling) heat transfer coefficient (h c ), opening factor (O), linings' thermal inertia (b), and fuel load density (q ″ f ).Fig. 10 highlights how a higher convective (cooling) heat transfer coefficient produces a faster surface cooling.However, the difference between 5 and 10 W/m 2 K is quite limited.On the other hand, Fig. 10 underlines that the other variables affect the cooling phase in a more significant manner, particularly because they have an important impact on the heating phase.This emphasises how there is a strong relationship between the heating and the cooling phase, and the two phases cannot be fully decoupled.For instance, a higher compartment opening factor generally creates a higher maximum gas temperature at fuel burnout, but a shorter fully-developed fire with a higher heating rate: this leads to a higher compartment linings temperature at the end of the heating phase, reduced thermal penetration depth and faster cooling.As regards the compartment linings material, higher thermal inertia does not affect the fully-developed fire duration, but it results in lower maximum gas temperature (higher convection losses at boundaries), a lower heating rate, and lower compartment linings temperature at burnout.However, in all cases, the surface temperature follows temperature trends similar to each other, even if they start cooling at significantly different temperatures.This is related to the thermal inertia because, similarly to the heating phase, the linings material responds to surface thermal conditions changes according to its thermal inertia: the lower the thermal inertia, the faster the surface responds to a gas temperature change. Finally, a higher compartment fuel load density prolongs the heating phase with the same heating rate, and it leads to a higher maximum gas temperature and compartment linings temperature at burnout.Accordingly, the surface cooling is slower due to the deeper heat penetration.A similar effect can be also created by a larger compartment size, if the compartment opening factor is fixed. 
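To make the case-study workflow of Section 6.1 concrete, the sketch below shows one possible explicit finite-difference implementation. It is an illustration, not the authors' code: the lining conductivity, density and specific heat are assumed values chosen only to reproduce the stated thermal inertia of about 1160 J/m²s^0.5K, the heating branch is the published EN 1991-1-2 parametric expression with Γ = 1, and the grid and time step are coarser than the 1 mm and 0.01 s used in the paper so that the sketch runs quickly.

```python
import numpy as np

# Assumed lining properties chosen to match b = sqrt(k*rho*c) ~ 1160 J/m^2 s^0.5 K.
k, rho, cp = 0.8, 1600.0, 1050.0        # [W/mK], [kg/m^3], [J/kgK] (assumed values)
alpha = k / (rho * cp)                  # thermal diffusivity [m^2/s]
eps, sigma = 0.8, 5.67e-8               # surface emissivity, Stefan-Boltzmann constant
h_heat, h_cool = 35.0, 7.0              # convective coefficients, heating / cooling [W/m^2K]
T_amb = 20.0                            # ambient temperature [degC]
t_burnout = 3600.0                      # end of the heating phase [s]

def T_gas(t):
    """Compartment gas temperature [degC]: EN 1991-1-2 heating branch with
    Gamma = 1 (assumed implementation), ambient after burnout."""
    if t >= t_burnout:
        return T_amb
    ts = t / 3600.0                     # t* in hours (Gamma = 1)
    return 20 + 1325 * (1 - 0.324 * np.exp(-0.2 * ts)
                          - 0.204 * np.exp(-1.7 * ts)
                          - 0.472 * np.exp(-19 * ts))

# 1D explicit finite-difference grid through the 0.20 m lining (coarser than the paper).
dx, dt = 0.002, 0.05
x = np.arange(0.0, 0.20 + dx, dx)
T = np.full(x.size, T_amb)
assert alpha * dt / dx ** 2 < 0.5       # explicit stability criterion for interior nodes

t, t_end = 0.0, 3 * 3600.0
while t < t_end:
    Tg = T_gas(t)
    h = h_heat if t < t_burnout else h_cool
    # Surface flux: convection always; gas radiation only while the smoke is optically thick.
    q = h * (Tg - T[0]) + (eps * sigma * ((Tg + 273.15) ** 4 - (T[0] + 273.15) ** 4)
                           if t < t_burnout else 0.0)
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx ** 2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    Tn[0] = T[0] + dt / (rho * cp * dx / 2) * (q - k * (T[0] - T[1]) / dx)  # half-cell surface node
    Tn[-1] = Tn[-2]                     # adiabatic unexposed face
    T, t = Tn, t + dt

print(f"surface temperature after 3 h: {T[0]:.0f} degC")
```

The same loop, run with the paper's finer discretisation and with the radiative exchange term of Equation (8) when linings differ, would reproduce the kind of results discussed above; here it only illustrates the structure of the calculation.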
Comparison with the eurocode parametric fire curve methodology The majority of research studies available in the literature that focus on the decay phase and cooling phase effects of the post-flashover compartment fires on structural elements typically define the thermal boundary conditions by the Eurocode parametric fire curves [4,5,7,8,10,11,14,21,43].The effect of imposing different boundary conditions in the cooling phase becomes evident if the simplified numerical model is compared to the Eurocode parametric fire curves methodology [36]. It is important to highlight that, despite a different common belief, the Eurocode parametric fire curves methodology assumes that the total fire heat is released during the fully-developed phase, therefore the end of this phase corresponds to the beginning of the cooling phase, without any decay phase [56].Indeed, the duration of the fire fully-developed phase is calculated considering the total fuel load, in analogy with Kawagoe's theory [54].Accordingly, the temporal evolution of the fire heat release rate underlying the presented simplified model and the Eurocode parametric fire curves is the same. This section provides a comparison between the model and the Eurocode approach to illustrate any potential differences: the case study discussed in Section 6.1 is solved according to the two methodologies, given identical heating phases. Fig. 11 first evidences the considerable difference between the evolution of the compartment gases (T g ) and linings surface temperatures (T w ) during the cooling phase following the two methodologies.The linings surface temperature for the simplified model reproduces a classical non-linear cooling down evolution, while the parametric curve shows that linings and gas phase temperature do not differ much.Indeed, the evolution of the linings surface temperature (T w ) according to the Eurocode parametric fire curves methodology was calculated using the prescribed Eurocode thermal boundary conditions starting from the evolution of the compartment gases (T g ) [36].Similarly, the in-depth temperature profiles within the compartment linings (Fig. 12) and the conductive heat fluxes at the linings surface (Fig. 13) differ significantly.Based on the defined conditions for the cooling phase in the simplified numerical model, the fuel burnout (60 min) corresponds to the beginning of the compartment gases and linings cooling.This can also be confirmed by the positive conductive heat fluxes at the compartment linings surface (heat transfer from solid into gases).In contrast, the Eurocode parametric fire curves methodology imposes thermal boundary conditions that lead to negative heat fluxes for an additional 40 min (until about 100 min).Indeed, for this period following the end of the heating phase, the compartment gas temperature is still higher than the linings surface temperature, causing a net heat flux into the solid.Only when T g < T w , the compartment linings commence the cooling process with the surrounding environment, yielding positive conductive heat fluxes (refer to Fig. 13).Consequently, the two different thermal boundary conditions lead to an important difference in the in-depth temperature distributions. 
To investigate the effects of the thermal boundary conditions defined according to the two methodologies on the thermal energy of the compartment elements, the evolution of the total in-depth thermal energy (per unit area) accumulated within the compartment linings was calculated as E(t) = ∫0^d ρ c p ΔT(x, t) dx, where ρ is the mass density [kg/m³], c p is the specific heat capacity [J/kgK], and ΔT(x, t) [K] is the time-varying in-depth temperature rise within the thickness of the solid (d [m]). For simplicity, and in analogy with the case study described in Section 6.1, the material properties are kept constant. Fig. 14 underlines how, during the cooling phase, the total thermal energy differs significantly between the two methodologies. With regards to the simplified model, the end of the heating phase corresponds to the beginning of the thermal energy decrease, as the model assumes fuel burnout and convective cooling after this point. On the contrary, in the case of the Eurocode parametric fire curves, the end of the heating phase does not constitute the end of thermal gain for the solid. For these specific conditions, there is an additional thermal gain in the system during the cooling phase, which lasts until the conductive heat flux at the compartment linings surface becomes positive, hence only once T g < T w. Given that the fire has burnt out, this is physically incorrect and represents an unnecessary over-dimensioning of the thermal load. In the Eurocode case, the thermal state (i.e., total in-depth thermal energy) at the end of the heating phase is reached again at about 150 min, creating a "cooling delay" of about 1.5 h compared to the simulation results from the simplified model. After this point, the cooling is faster than in the simplified model, due to the contribution of the imposed radiation losses and the larger convective (cooling) heat transfer coefficient (35 W/m²K vs. 7 W/m²K). This analysis highlights the impact of the definition of the thermal boundary conditions and the significant differences between the two approaches. It has been shown how, according to the Eurocode parametric fire curves methodology, which does consider fuel burnout at the end of the heating phase, the end of the heating phase does not correspond to the end of thermal gain within the solid. By defining these thermal boundary conditions, the thermal energy in the solid continues to grow for a significant amount of time. This leads to a delay in the onset of cooling of the compartment elements and, possibly, an overestimation of the impact of post-flashover fires. Indeed, if the duration of the heating phase is estimated assuming that all the fuel within the compartment is consumed at the end of the heating phase (i.e., burnout), it is incorrect to consider an additional thermal gain during the cooling phase. It is recalled that the present model: (i) does not consider a decay phase, thus releasing all the energy contained in the combustible materials during the heating period; (ii) assumes the absorptivity of the gas to be zero and its temperature to be equal to ambient during the cooling phase; and (iii) fully captures the radiative heat exchange between surfaces, assuming a uniform temperature for each surface. Because of these simplifications, the calculated temperature evolution of the structural elements will contain errors, but there is no heat added into the compartment after completion of fuel burnout, thus avoiding a non-physical heat flux into the structure.
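As an illustration of this bookkeeping, the sketch below evaluates the stored-energy integral by trapezoidal integration of an in-depth temperature profile; the material properties and the example profile are assumptions, not values from the paper.

```python
import numpy as np

def stored_energy(T_profile, x, rho=1600.0, cp=1050.0, T_ref=20.0):
    """Total in-depth thermal energy per unit area [J/m^2] stored in a lining.

    T_profile: in-depth temperatures [degC] at node positions x [m]; rho and cp
    are assumed constant material properties, and T_ref is the initial (ambient)
    temperature, so the integrand is the temperature rise dT(x, t)."""
    dT = np.asarray(T_profile, dtype=float) - T_ref
    # trapezoidal integration of rho * cp * dT over the lining thickness
    return rho * cp * float(np.sum(0.5 * (dT[1:] + dT[:-1]) * np.diff(x)))

# Example with an assumed linear 600 -> 20 degC gradient over the first 50 mm of a 0.20 m lining.
x = np.linspace(0.0, 0.20, 201)
T = np.where(x < 0.05, 600.0 - (600.0 - 20.0) * x / 0.05, 20.0)
print(f"E = {stored_energy(T, x) / 1e6:.1f} MJ/m2")
```

Evaluating the same integral on the profiles produced by the two methodologies at each time step yields curves of the kind compared in Fig. 14.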
Including the radiative heat exchange In the presented simplified model, no radiative heat exchange has been estimated between the compartment elements during the cooling phase because, for simplicity, equal surface temperatures were assumed for all compartment surfaces. To obtain a more realistic result, the radiation feedback and heat transfer during the cooling phase should be included, particularly considering how the various compartment elements (and surfaces) would cool down and exchange heat through radiation within the enclosure. To do so, the thermal boundary conditions described in Equation (8) (final term) must be solved considering the view factor, temperature, and area of each single element within the compartment. Thus, the gas phase and the radiative heat transfer between the various compartment surfaces (e.g., structural elements and linings) need to be determined in a coupled manner. This process might be computationally intensive, as it requires solving the entire coupled heat transfer problem at each instant in time (i.e., surface and in-depth temperatures, as well as thermal boundary conditions, for each element within the compartment). This approach might prove impractical for complex building geometries and compartments composed of multiple linings materials. It is therefore useful to explore a simplified approximation for the evolution of the cooling compartment surfaces in order to estimate and incorporate the radiative heat exchange. Given that convective and radiative heat transfer are much faster than the conductive cooling of the solid, convection and radiation can be treated as steady processes, while conductive cooling of the solid remains the only transient process. Convection has already been decoupled by assuming a constant convective heat transfer coefficient, while the quasi-steady radiative heat exchange can be estimated from a representative temperature evolution of the compartment linings surface. An analytical expression of the cooling branch can be obtained by adjusting the well-known analytical solution for transient conduction in a semi-infinite solid with convective thermal boundary conditions through a variable transformation [69]. The analytical solution is typically used for the heating or cooling of semi-infinite solids, given a convective heat transfer coefficient, an ambient temperature, and an initial temperature. In this case, the solution is altered to estimate the surface cooling of the compartment linings, starting from a given convective heat transfer coefficient and in-depth temperature profile (i.e., that achieved at the end of the heating phase). As in the numerical model, the convective (cooling) heat transfer coefficient can be set equal to 7 W/m²K. On the other hand, the thermal gradient within the compartment linings at the end of the heating phase is defined as a function of the thermal penetration depth (L th [m]), L th = n √(α t k) (10), where n [−] is a multiplication factor for the estimation of the thermal penetration depth (typically 4 for most applications [69]), α [m²/s] is the thermal diffusivity of the compartment linings, and t k [s] is the characteristic time, assumed equal to the duration of the heating phase [69]. The extremes of the thermal gradient are defined by the compartment linings surface temperature at the end of the heating phase (maximum temperature) and the ambient temperature (see Fig.
15).To obtain a simple analytical solution, the thermal gradient is assumed to be linear to avoid a second-order derivation of the transformation equation different from zero.The transformed solution is obtained by removing the convective cooling contribution from the initial thermal gradient estimated at the end of the heating phase.The derivation of the analytical solution and the variable transformation are explained in detail in Appendix A. Accordingly, starting from the compartment linings surface temperature at the end of the heating phase and knowing the thermophysical properties of the compartment linings, the thermal gradient at the end of the heating phase can be estimated according to the thermal penetration depth L th (Equation ( 10)).Starting from this gradient, the thermal gradient within the compartment linings is decreased by adding the convective cooling contribution estimated according to the analytical solution with variable transformation (refer to Appendix A).This methodology produces an approximation of the surface cooling of the compartment linings, as shown in Fig. 16.Different values of n produce various levels of accuracy compared to the temperature curve obtained from the numerical model.This aspect is closely related to the approximation of the thermal gradient within the compartment linings at the end of the heating phase.As highlighted in Fig. 15, this thermal gradient acts as a starting point for the approximation of the in-depth temperatures during cooling, and it directly depends on n because it defines the thermal penetration depth, hence its slope.Fig. 16 evidences how higher values of n produce slower cooling due to the deeper thermal penetration depth and a less steep thermal gradient within the solid.The choice of the value n is key because it generates an over-or underestimation of the temperature evolution of the compartment linings surface and this choice strongly depends on the application of the presented methodology.For structural fire engineering problems, the overestimation of the thermal exposure generally produces a conservative solution.However, it is also important to estimate realistic thermal conditions and to consider which temperature ranges are relevant for the construction material and structural system under analysis.For instance, the case study presented in Fig. 16 (n = 2) offers a good approximation of the cooling branch for temperatures above 350 • C. In general, this methodology offers an analytical approximation for the surface cooling of the compartment linings, which can be used to estimate the radiative heat exchange to an element inside the compartment (Equation ( 8)) during the cooling phase of a post-flashover compartment fire. 
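A minimal sketch of this analytical approximation, as read from Appendix A, is given below; the material properties are assumed values consistent with the case-study thermal inertia, and the sign convention for the effective temperature T* is chosen so that the transformed problem cools (the extracted appendix is ambiguous on this point).

```python
import math

def surface_cooling(t, T_s_max, T_a=20.0, h_c=7.0,
                    k=0.8, rho=1600.0, cp=1050.0, n=2, t_k=3600.0):
    """Approximate lining surface temperature [degC] a time t [s] after burnout.

    Sketch of the transformed semi-infinite-solid solution as read from Appendix A:
    the in-depth gradient at burnout is linear over L_th = n*sqrt(alpha*t_k), and the
    transformed problem starts from a uniform T_s_max with an effective ambient T*.
    k, rho and cp are assumed values consistent with a thermal inertia of ~1160."""
    alpha = k / (rho * cp)
    L_th = n * math.sqrt(alpha * t_k)
    T_star = T_a - (k / h_c) * (T_s_max - T_a) / L_th   # effective ambient temperature
    arg = h_c * math.sqrt(alpha * t) / k
    # classical surface solution for a semi-infinite solid with surface convection
    theta = 1.0 - math.exp(arg ** 2) * math.erfc(arg)
    return T_s_max + (T_star - T_s_max) * theta

# Example: lining surface at 800 degC at burnout (assumed), after 1 h of heating.
for minutes in (15, 30, 60):
    print(f"{minutes:>3} min after burnout: {surface_cooling(minutes * 60, 800.0):.0f} degC")
```

As noted above, the result is sensitive to the choice of n, and the approximation should only be trusted in the temperature range relevant to the structural material under analysis.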
Conclusions The current study focused on describing the main characteristics of the cooling phase of post-flashover compartment fires, evidencing the differences in the manner the thermal boundary conditions are expressed when contrasted with the heating phase.The heating phase is typically characterised by optically thick smoke with high temperatures and the thermal boundary conditions can be correctly approximated by a single (gas) temperature.After fuel burnout, the compartment gases quickly become optically thin and tend to ambient temperature, while the compartment linings gradually cool down.The different characteristic time scales of convective cooling of the compartment, convective cooling of the solid, radiative heat exchange between solid surfaces and conductive heat transfer within the compartment require a more detailed mathematical representation of the thermal boundary conditions.Accordingly, a simplified numerical model was formulated to estimate the thermal conditions during the cooling phase.The heating phase is defined according to the Eurocode parametric fire curves methodology, while the thermal boundary conditions during the cooling phase are defined by a detailed analysis of all heat transfer modes and simplifications consistent with the different time scales.A constant convective (cooling) heat transfer coefficient can be defined to characterise natural convective cooling of the compartment linings, while the evolution of the linings surface temperature characterises radiative heat exchange between the compartment linings and the construction element under analysis.The time-history of the linings surface temperature can be also approximated transforming the well-known analytical solution for transient conduction in a semi-infinite solid. The application of the numerical model evidenced the importance of defining the thermal boundary conditions of the compartment structure during the cooling phase.Moreover, a parametric study underlined how the heating and the cooling phases are closely related and cannot be fully decoupled.Indeed, key parameters (convective heat transfer coefficient, opening factor, linings material and fuel load density) that have an important impact on the heating phase also affect the cooling phase in a significant manner. The simplified model suggests various simplifications to estimate the convective and radiative heat transfer during the cooling phase of postflashover compartment fires.In particular, it shows how the system starts cooling immediately after burnout.In contrast, the thermal boundary conditions (based on a single temperature) recommended by the Eurocode parametric fire curves methodology lead to energy and temperature increase in the solid beyond fuel burnout and therefore an unphysical delay in the onset of cooling of the compartment elements.This can possibly cause an over-estimation of the impact of postflashover fires on load-bearing elements. 
The presented simplified approach was proposed to provide a first example of how the cooling phase of post-flashover compartment fires can be thermally characterised. Indeed, the present study does not provide a distinction between the decay phase and the cooling phase of post-flashover compartment fires, and it actually delivers an averaged representation of the different surface temperatures of a compartment at the limit where all combustible materials are consumed during the heating phase. This is the case because it relies on two important simplifications: the end of the fully-developed phase is assumed as fuel burnout and therefore as the beginning of the cooling phase, and during the cooling phase the compartment gases are assumed to have an absorptivity of zero and to remain at ambient temperature. This is the reason for the evident discontinuity in the compartment gas temperature evolution at the (sudden) transition between the heating and cooling phase. Increasing the complexity and challenging the model assumptions (e.g. introducing a decay phase for the fire heat release rate) will be addressed in future work, also investigating existing experimental data from large-scale compartment fire tests.

In the compartment energy balance, the terms are: the rate of energy change within the control volume [W]; Q_fire [W], the total heat release rate (HRR) within the enclosure by the fire; Q_flow,in and Q_flow,out [W], the energy entering and leaving the control volume per unit time by flows through the opening; Q_wall [W], the rate of heat transferred to (+) or from (−) the enclosure boundaries (load-bearing or not); and Q_rad [W], the radiative heat per unit time transferred through the opening.

Fig. 6. Convective heat transfer coefficient h_c as a function of characteristic length and surface temperature.
Fig. 7. Evolution of the compartment gases (T_g) and linings surface temperatures (T_w) obtained from the simplified model, compared to the standard fire curve (T_SFC) and the corresponding Eurocode parametric fire curve (T_EPFC).
Fig. 9. In-depth temperature profiles within the compartment linings at different instants.
Fig. 11. Compartment gases (T_g) and linings surface temperatures (T_w) according to the simplified model and the Eurocode parametric fire curve methodology.
Fig. 12. In-depth temperature profiles within the compartment linings at different instants according to the simplified model and the Eurocode parametric fire curve methodology.
Fig. 15. Comparison between the in-depth temperature profiles within the compartment linings during cooling obtained using the numerical model (continuous lines) and the analytical approximation (n = 2, dashed lines).
Fig. 16. Surface cooling of compartment linings for different n obtained using the analytical approximation (dashed lines).

APPENDIX A. Analytical solution [69]

General transient conduction problem. Boundary conditions for a semi-infinite solid with surface convection (convective heat transfer coefficient h_c and ambient temperature T_a) and initial temperature T_i:
\[ t = 0:\; T = T_i; \qquad t > 0,\ x = 0:\; -k\,\frac{\partial T}{\partial x} = h_c\,[T_a - T(0,t)]; \qquad t > 0,\ x \to \infty:\; T = T_i \]
Analytical solution for the in-depth temperature evolution T(x, t) at time t and depth x:
\[ \frac{T(x,t) - T_i}{T_a - T_i} = \mathrm{erfc}\!\left(\frac{x}{2\sqrt{\alpha t}}\right) - \exp\!\left(\frac{h_c x}{k} + \frac{h_c^2 \alpha t}{k^2}\right)\mathrm{erfc}\!\left(\frac{x}{2\sqrt{\alpha t}} + \frac{h_c\sqrt{\alpha t}}{k}\right) \]
Analytical solution for the surface temperature evolution T(0, t) at time t:
\[ T(0,t) = T_i + (T_a - T_i)\left[1 - \exp\!\left(\frac{h_c^2 \alpha t}{k^2}\right)\mathrm{erfc}\!\left(\frac{h_c\sqrt{\alpha t}}{k}\right)\right] \]

Variable transformation. Analytical solution for the semi-infinite solid with surface convection, with the temperature field subtracted by a known function f(x), which represents the initial thermal condition (i.e., the in-depth thermal gradient achieved at the end of the heating phase):
\[ \tilde{T}(x,t) = T(x,t) - f(x) \]
Boundary conditions of the function f(x): f = 0 at x = 0, and f = T_a − T_{s,max} far from the exposed surface. The function f(x), with its first and second derivatives, is defined based on the ambient temperature T_a, the maximum surface temperature T_{s,max} (achieved at the end of the heating phase) and the characteristic thermal penetration depth L_th (i.e., it defines the in-depth thermal gradient achieved at the end of the heating phase):
\[ f(x) = \frac{T_a - T_{s,max}}{L_{th}}\,x, \qquad \frac{\partial f}{\partial x} = \frac{T_a - T_{s,max}}{L_{th}}, \qquad \frac{\partial^2 f}{\partial x^2} = 0 \]
The characteristic thermal penetration depth L_th is defined based on the material thermal diffusivity α, a characteristic time t_k (heating time) and a multiplication factor n (discussed in Section 7.2):
\[ L_{th} = n\,\sqrt{\alpha\, t_k} \]
Derivation of the boundary condition for t > 0 and x = 0 (using f(0) = 0):
\[ -k\,\frac{\partial \tilde{T}}{\partial x} = h_c\,[T_a - \tilde{T}(0,t) - f(0)] + k\,\frac{\partial f}{\partial x} = h_c\,[T^{*} - \tilde{T}(0,t)] \]
Boundary conditions of the transient conduction problem with the transformed variable:
\[ t = 0:\; \tilde{T} = T_{s,max}; \qquad t > 0,\ x = 0:\; -k\,\frac{\partial \tilde{T}}{\partial x} = h_c\,[T^{*} - \tilde{T}(0,t)] \ \ \text{with} \ \ T^{*} = T_a + \frac{k}{h_c}\,\frac{T_a - T_{s,max}}{n\sqrt{\alpha t_k}}; \qquad t > 0,\ x \to \infty:\; \tilde{T} = T_{s,max} \]
Analytical solution for the in-depth temperature evolution at time t and depth x:
\[ \frac{\tilde{T}(x,t) - T_{s,max}}{T^{*} - T_{s,max}} = \mathrm{erfc}\!\left(\frac{x}{2\sqrt{\alpha t}}\right) - \exp\!\left(\frac{h_c x}{k} + \frac{h_c^2 \alpha t}{k^2}\right)\mathrm{erfc}\!\left(\frac{x}{2\sqrt{\alpha t}} + \frac{h_c\sqrt{\alpha t}}{k}\right), \qquad T(x,t) = \tilde{T}(x,t) + f(x) \]
Analytical solution for the surface temperature evolution at time t (since f(0) = 0, T(0,t) = \tilde{T}(0,t)):
\[ T(0,t) = T_{s,max} + (T^{*} - T_{s,max})\left[1 - \exp\!\left(\frac{h_c^2 \alpha t}{k^2}\right)\mathrm{erfc}\!\left(\frac{h_c\sqrt{\alpha t}}{k}\right)\right] \]
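As a numerical illustration of the transformed solution reconstructed above, the sketch below evaluates the linings surface temperature during cooling for a few values of n, using erfcx(z) = exp(z²)·erfc(z) for numerical robustness. All inputs (lining properties, convective coefficient, burnout surface temperature, heating time) are illustrative assumptions and not the values of the paper's case study.

```python
# Illustrative evaluation of the transformed analytical solution for the surface
# cooling of the compartment linings (Appendix A); all inputs are assumed values.
import numpy as np
from scipy.special import erfcx   # erfcx(z) = exp(z**2) * erfc(z)

k, rho, c_p = 0.2, 800.0, 1000.0          # assumed lining properties
h_c = 10.0                                # assumed convective cooling coefficient [W/m2K]
alpha = k / (rho * c_p)                   # thermal diffusivity [m2/s]
t_k, T_s_max, T_a = 3600.0, 900.0, 20.0   # heating time [s], burnout surface temperature, ambient [degC]

t = np.array([300.0, 600.0, 1200.0, 1800.0])   # time after burnout [s]
z = h_c * np.sqrt(alpha * t) / k
for n in (1, 2, 3):
    L_th = n * np.sqrt(alpha * t_k)                               # thermal penetration depth
    T_star = T_a + (k / h_c) * (T_a - T_s_max) / L_th             # effective ambient of the transformed problem
    T_surface = T_s_max + (T_star - T_s_max) * (1.0 - erfcx(z))   # T(0,t) = T~(0,t) since f(0) = 0
    print(f"n = {n}: T(0,t) [degC] =", np.round(T_surface, 1))
```

Consistently with the discussion of Fig. 16, a larger n gives a higher effective temperature T* and therefore slower surface cooling.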
13,879.6
2024-05-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Ice Detection Model of Wind Turbine Blades Based on Random Forest Classifier : When wind turbine blades ice up, the output power of the turbine tends to decrease, which motivates the selection of wind speed and power as the two basic variables. Other features, such as the degree of power deviation from the power curve fitted on normal sample data, are then extracted to build the model based on the random forest classifier, with the confusion matrix used for result assessment. The model shows high accuracy and good generalization ability, verified with the data from the China Industrial Big Data Innovation Competition. This study looks at ice detection on wind turbine blades using supervisory control and data acquisition (SCADA) data, and a model based on the random forest classifier is then proposed. Compared with other classification models, the model based on the random forest classifier is more accurate and more efficient in terms of computing capabilities, making it more suitable for the practical application of ice detection. Introduction With the gradual depletion of traditional fossil fuels such as coal, oil, and natural gas, the development and use of new energy sources such as wind power has received increasing attention, making wind power one of the fastest-growing energy sources in the world [1]. In 2017, the newly installed capacity of wind power worldwide reached 52,492 MW, and the cumulative installed capacity reached 539,123 MW. Among them, the newly installed capacity of wind power in China accounted for 37%, and the cumulative installed capacity accounted for 35%. In recent years, the newly installed capacity of wind power has accounted for more than 15% of the total installed capacity, and the cumulative installed capacity has increased steadily. The Global Wind Energy Council (GWEC) predicts that, as costs drop and the market begins to recover at the end of this decade, global wind power installed capacity will increase by more than 50% over the next five years. According to GWEC, as countries around the world develop renewable energy sources to achieve emission reduction targets, wind energy costs continue to decline, and global wind power installed capacity is expected to increase to 840 GW by the end of 2022 [2].
Wind power, however, faces some challenges that restrict its development, with cost being one of the most important issues. According to a study by the United States Department of Energy (DOE), 20% of the revenue growth of wind farms by 2030 will come from improving the working status of wind turbines and reducing maintenance costs. Using an appropriate operation and maintenance strategy to reduce these costs is an important way to increase wind farm income [3]. Land-based wind farms are mostly established in high-altitude mountainous areas. These regions experience low temperature and high humidity, which makes it easy for wind turbine blades to form varying degrees of icing, especially in winter. Shutdown events caused by wind turbine blade icing seriously threaten the normal operation of wind power plants. Wind turbine blade icing can cause power loss, mechanical failure, equipment failure, and safety issues [4]. The freezing of wind turbine blades changes the aerodynamic performance of the blades, which leads to power generation loss. Equally, the uneven distribution of ice on the blade changes the original mass distribution, making the wind turbine run unstably and causing damage to the blade in varying degrees, which not only leads to huge economic losses but also poses serious safety risks [5]. Therefore, a reliable detection method for icing wind turbine blades is very important, especially for early-stage icing detection. Many standards and guidelines have been developed by the IEC (International Electrotechnical Commission), and they are very helpful for analyzing turbine faults [6]. For the problem of ice detection on the blades, the existing methods mainly use the mechanism of icing to conduct theoretical analysis and research and to establish a physical model of icing; then, according to the monitoring data, they judge whether the wind turbine blades are frozen at the current moment. Davies et al. studied three methods of creating a power threshold curve to distinguish the ice growth period from the non-icing period, to identify the power loss caused by icing [7]. Wang et al. proposed a numerical simulation method for three-dimensional wind turbine blade icing and compared it with experimental results to verify the effectiveness of the method [8]. Shu et al. studied the characteristics of blade icing and the effect of icing severity on the power characteristics of wind turbines under natural icing conditions [9]. Blasco et al. performed a quantitative analysis of the power loss of a representative 1.5 MW wind turbine under various icing conditions, attempting to reduce the losses of wind farms in cold regions by formulating some control strategies [10]. Based on the analysis of supervisory control and data acquisition (SCADA) data, Li et al. proposed a method for the detection of blade icing based on logistic regression [11]. Aral et al. proposed and demonstrated Phase-based Motion Estimation (PME) and a motion magnification algorithm to perform non-contact structural damage detection of a wind turbine blade [12]. Yu et al.
developed a simple method to detect damage in wind turbine blades based on a discrete mathematical model, using changes in natural frequencies combined with a fluid-structure analysis [13]. The above research generally requires placing additional sensors on the wind turbine blades. Disadvantages such as inconvenient practical application and increased wind farm operation and maintenance costs prevent these methods from being widely used in practice. Vibration signal analysis [14] and SCADA system data analysis are two different approaches: the former pays more attention to the analysis of the equipment mechanism, while the latter focuses on the data themselves. Both have their own advantages. Wind turbine blades work at high altitude, where it is inconvenient to measure the vibration acceleration signal offline, and the volume of online acceleration signals is large, which makes them inconvenient to transmit and store. Therefore, more and more researchers are committed to using SCADA data to predict and diagnose wind turbine faults. The SCADA system is the most widely used and technologically advanced data acquisition and monitoring system in the fault diagnosis of wind power equipment [14][15][16][17][18][19]. This system collects environmental and operating parameters of wind power equipment, which can fully characterize the operational status of the wind turbine, and it is increasingly used for data modeling and analysis to mine information on equipment faults, blade icing detection, etc. [20][21][22][23][24]. When early blade icing is detected by the model in this paper, the warning information is fed back to the wind farm owners (or managers, or controllers). At this point, the wind farm has not yet experienced a serious accident, so the early warning gives them time to deal with the icing of the wind turbine blades. During this time, they can reasonably arrange when to take measures such as deicing the blades, to prevent loss and damage due to severe icing. This paper analyzes the SCADA data of a wind farm, combines mechanism analysis and data analysis of wind turbine icing to extract features that are sensitive to icing, and then uses a random forest-based classification algorithm to achieve the detection of wind turbine blade icing. The first section introduces the related theories of wind turbine blade icing and the research ideas of this paper; the second section introduces the related theories of the model based on the random forest classifier and the model assessment method selected for this paper; the third section is the data preprocessing, which extracts the characteristics that are sensitive to early icing of wind turbine blades by analyzing the SCADA data; the fourth section is the optimization of the model based on the random forest classifier and the comparison with the results of other classifiers; and the last section is the conclusion.
Theory and Process of Icing Icing is a physical phenomenon with a complete and specialized theory and research system. Blade icing of wind turbines is a type of atmospheric icing. The international standard ISO 12494:2017 [25] describes in detail the definition, scope, classification, principle, characteristics, and effects of such icing. For wind turbines, atmospheric icing refers to the process by which water in the air, including water droplets, rain, drizzle, snow, and other forms, freezes onto or adheres to objects exposed to the atmosphere under certain atmospheric conditions. There are three forms of blade icing: cloud ice, sedimentation ice, and accumulation of frost. Cloud ice refers to icing condensed from sub-cooled water droplets floating in clouds; sedimentation ice refers to icing caused by freezing rain or wet snow under low-temperature conditions; frost accumulation refers to the direct phase change of water vapor. The icing process usually occurs at low temperatures. Among them, cloud ice and sedimentation ice are more common in wind turbine icing, and once they occur, they have a serious impact on the wind turbine and cause more damage. Ice Detection Method Ice detection analysis on blades is generally composed of several parts, such as physical principle analysis, icing process analysis, feature extraction, detection model establishment, and result presentation. This paper adopts the blade detection model construction process based on random forest classification shown in Figure 1. Severe icing during the actual operation of the wind turbine is easily detected, but automatic deicing by the wind turbine deicing system is still a challenge. However, the icing of the wind turbine blade is a slow process. In the early days of icing, the impact on the wind turbine is generally small and difficult to detect. Besides, early icing causes certain changes to the shape of the blades, which cause water droplets in the atmosphere to stick to and freeze at the surface of the blades. Eventually, the probability of serious icing is greatly increased. The treatment of early icing is easier and has less impact on the wind turbine, and it provides a certain early warning of the occurrence of severe icing. Therefore, the detection of early icing is very important. Random Forest Classifier The random forest [26] is a machine learning algorithm first published in 2001 by Breiman, L., which combines the bagging ensemble learning theory [27] proposed in 1996 with the stochastic subspace method proposed by Ho, T. in 1998 [28]. This model adopts bootstrapping re-sampling technology to randomly select, with replacement, n samples from the original training sample set N to generate a new training sample set with which a decision tree is trained. The above steps are repeated to generate m decision trees, which form a random forest. The classification result for new data is based upon the score formed by the votes of the classification trees. Its essence is an improvement of the decision tree algorithm, with multiple decision trees merged together. The establishment of each tree depends on the independently extracted samples. Figure 2 shows the basic structure of the random forest classifier.
The classification ability of a single tree may be small, but after randomly generating many decision trees, the statistics of the classifications given by each tree to a test sample are used to obtain the most likely class. The steps of the basic construction process of a random forest are listed below; a minimal sketch of this procedure is given after the list.
1. Use the bootstrapping method to randomly select n samples from the original training set, m times, to generate m training sets.
2. For the newly generated m training sets, train m decision tree classification models.
3. For a single tree, every time a new node is split, select the best split based upon the information gain, the information gain ratio, or the Gini index.
4. Split each tree according to step 3 until the training samples are correctly classified at a certain node or the maximum depth of the tree is reached.
5. Organize the resulting multiple decision trees into the random forest classifier; the final classification results are determined by voting.
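The following toy sketch reproduces steps 1-5 with scikit-learn decision trees as the base learners, on synthetic data; it is only an illustration of the construction procedure and not the implementation used in this paper.

```python
# Toy illustration of steps 1-5: bootstrap sampling, m decision trees, majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=6, random_state=0)  # synthetic stand-in data

m, trees = 20, []
for _ in range(m):
    idx = rng.integers(0, len(X), size=len(X))          # step 1: bootstrap sample with replacement
    tree = DecisionTreeClassifier(max_features="sqrt",  # step 3: random feature subset at each split
                                  max_depth=25)         # step 4: grow until pure or maximum depth
    trees.append(tree.fit(X[idx], y[idx]))              # step 2: one tree per bootstrap training set

votes = np.stack([t.predict(X) for t in trees])         # step 5: every tree votes on each sample
y_pred = (votes.mean(axis=0) >= 0.5).astype(int)        # majority vote decides the class
print("toy forest training accuracy:", (y_pred == y).mean())
```

In practice the same construction is available directly as sklearn.ensemble.RandomForestClassifier, which is what the later tuning sketch uses.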
The randomness of each tree comes from the sampling of the training set and from the way in which a subset of the features is selected when splitting to form a new node. The random forest does not need to be pruned, almost no overfitting occurs, and it has good tolerance to noise and outliers, high stability, and strong generalization ability. In addition, the random forest is suitable for parallel computing, and even for large samples and high-dimensional data it has a high training speed and achieves efficient calculation. This paper used a model based on the random forest classifier to identify early icing data from normal data, to achieve the goal of predicting early icing failure and then to determine whether there would be an icing failure in the next period. Model Assessment Method The confusion matrix [29] is a classical method for evaluating the results of classification models. Table 1 shows the confusion matrix representation, where: TP indicates the proportion of all actual icing samples predicted to be icing samples; TN indicates the proportion of all actual non-icing samples predicted to be non-icing samples; FP indicates the proportion of all actual non-icing samples predicted to be icing samples; FN indicates the proportion of all actual icing samples predicted to be non-icing samples. In addition, based upon the confusion matrix, the precision of the test results and the recall rate are further assessed to evaluate the model classification results [26]. Data Sources and Introduction In this paper, the test data are drawn from the first China Industrial Big Data Innovation Competition [30], which contains the SCADA data of two wind turbines in a wind farm provided by Goldwind for predicting icing failures of blades. The SCADA data of each wind turbine contain 28 variables, such as the time stamp, operating condition parameters, environmental parameters, and status parameters. The acquisition time was two months, with a sample size of about 580,000. Table 2 shows the statistical information of the SCADA data; more detailed information on the SCADA data can be seen in the Appendix A, Table A1. In addition, the organizers of the event conducted preliminary processing on the data: severely frozen data were removed, which makes the data discontinuous, and the data were standardized, so the physical meaning of the original data is lost. Standardization means making the mean of every variable in the data 0 and the variance 1. The contest organizer has already set the labels for the data, icing and non-icing (owing to the authority of the data owner and the supervisor, the accuracy of the data labels is guaranteed, so the basis for judging whether the data are frozen or not is also credible); we only need to process the data that have already been tagged.
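Referring back to the model assessment method described above, the sketch below computes TP, TN, FP, FN and the derived precision and recall for binary icing labels (1 = icing). It uses raw counts rather than proportions, and the example labels are arbitrary.

```python
# Minimal confusion-matrix, precision and recall computation for binary icing labels.
import numpy as np

def confusion_matrix_binary(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))   # actual icing predicted as icing
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))   # actual non-icing predicted as non-icing
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))   # actual non-icing predicted as icing
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))   # actual icing predicted as non-icing
    return tp, tn, fp, fn

tp, tn, fp, fn = confusion_matrix_binary([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0])
precision = tp / (tp + fp)   # share of predicted icing samples that are truly icing
recall = tp / (tp + fn)      # share of true icing samples that are detected
print(f"TP={tp} TN={tn} FP={fp} FN={fn}  precision={precision:.2f}  recall={recall:.2f}")
```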
Features Extraction Some indicators in the raw data given by the contest organizers are sensitive to icing, and some indicators are almost unrelated to icing. So, the first step of this paper on the data is to pick out the icing-sensitive indicators from the raw data indicators. However, relying solely on these indicators does not identify early icing data from non-icing data well, so this paper further processed the data and obtained some better indicators of icing and non-icing. In general, the screening and supplementation of indicators achieve a better characterization of early icing with fewer features, which not only reduces the running time of the model but also gives better results. This section introduces the process of data preprocessing, including the screening of basic features and the construction of other features, with some figures that make it more intuitive to judge whether a feature makes it easier to distinguish between icing and non-icing. Extracting features, especially quantitative features [31], is essential for the fault diagnosis of equipment. On the one hand, because the inertia of the wind turbine blade reduces the correlation between the instantaneous power and the instantaneous wind speed, taking the average value of the data over a certain time span can reduce the inertial effect to some extent. On the other hand, in the original data, about 8 samples are collected every minute, but because the data provider has deleted some data, the sample interval in the data is not fixed. So, the data are resampled in one-minute intervals; the specific process is as follows. According to the timestamp, the SCADA data are grouped minute by minute, and then the mean of each group of samples is taken as the new sample characteristic:
\[ V = \frac{1}{n}\sum_{i=1}^{n} \mathrm{wind\_speed}_i \qquad (3) \]
where V is the average wind speed (the new sample characteristic) and n is the number of wind_speed samples collected in the minute. The average power P and the other new variables are obtained in the same way as in Equation (3). Then the data are filtered. 1. Filter unspecified data. In the given raw data set, the data cover normal sample data, icing sample data, and other unspecified data, according to the status tag. Unspecified data will affect the classification of normal/icing sample data because of the uncertainty of its information, so it is classified as invalid data. 2. Filter the samples to those below 80% of full power. Taking wind turbine 21# as an example, the comparison of the original instantaneous power-wind speed scatter plot (shown in Figure 3) with the processed average power-wind speed scatter plot (shown in Figure 4) shows that icing status data do not exist when the wind turbine is at more than 80% of full power. In other words, the wind turbine power cannot reach 80% of full power after the blades freeze. Keeping only the samples below 80% of full power makes it easier to identify early icing data.
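The following is a sketch of the one-minute resampling, filtering and normalization steps just described, written with pandas. The file path, column names and the rated power value are hypothetical placeholders and not the actual competition data schema.

```python
# Sketch of the preprocessing described above: one-minute averaging, filtering, normalization.
import pandas as pd

raw = pd.read_csv("turbine_21_scada.csv", parse_dates=["timestamp"])   # hypothetical file/columns

# Group the irregularly spaced samples into one-minute bins and average them
# (Equation (3)): V is the mean wind speed of each minute, P the mean power.
per_min = (raw.set_index("timestamp")
              .resample("1min")
              .agg({"wind_speed": "mean", "power": "mean", "label": "max"})
              .dropna())

per_min = per_min[per_min["label"].isin([0, 1])]            # 1. drop unspecified data
rated_power = 1500.0                                        # hypothetical rated power [kW]
per_min = per_min[per_min["power"] <= 0.8 * rated_power]    # 2. keep samples below 80% of full power

for col in ("wind_speed", "power"):                         # min-max normalization to [0, 1]
    per_min[col] = (per_min[col] - per_min[col].min()) / (per_min[col].max() - per_min[col].min())
```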
In the following figures, the green points correspond to the normal state of the wind turbine and the red points to the icing state. The blue lines in Figures 3 and 4 represent the dividing lines at 80% of full power. After filtering out the unspecified data, keeping the samples below 80% of full power, and normalizing the remaining data (scaling it from 0 to 1), the average power and average wind speed are plotted as a scatter plot (shown in Figure 5). Wind turbines are devices that convert wind energy into mechanical energy and then into electrical energy, and wind speed and power are regarded as the two basic features of icing prediction. When the blades freeze, the shape and aerodynamic characteristics of the blades change, reducing the power output. Therefore, when the wind turbine blades freeze, the relationship between the output power and the wind speed changes. In the non-icing condition, the wind turbine operates according to the wind turbine power characteristic curve in the normal mode (the green part of Figure 5). After icing forms, the actual operation state of the wind turbine deviates, and the power cannot reach the rated power. Using the normal-state sample data, with the abnormal points eliminated, the power characteristic curve of the wind turbine is fitted to obtain a baseline model of the power characteristic curve [32], and this model is then used to predict the output power at the corresponding wind speed. The baseline model obtained by curve fitting is shown in Figure 6. From Figure 6, the icing samples deviate more from the baseline model than the normal samples, which motivates the construction of another feature of icing prediction that can distinguish them better: the degree of deviation of the output power (Equation (4)), where P_real is the actual measured output power and P_pre is the output power estimated from the actual wind speed and the power curve. After calculating the degree of deviation by Equation (4), to facilitate visual observation of whether the variable is helpful for model classification, the relationship between the degree of deviation and the average wind speed is drawn, as shown in Figure 7. As can be seen from Figure 7, more red dots (icing samples) are distinguished from green dots (non-icing samples). In the early stage of icing, the operation state of the wind turbine is similar to the normal state, and it is difficult to separate the icing state from the normal state. However, the detection of early icing conditions is a very important process, and the healthy operation of the wind turbine unit is of utmost importance: it can minimize the loss to the unit due to icing on the blades. Because icing is a cumulative process, instantaneous characteristics such as the wind speed, the power, and the degree of deviation make it difficult to fully characterize icing conditions, especially in the early icing stage. Therefore, it is necessary to analyze the evolution of the icing process and extract features that can characterize icing changes to better distinguish early icing conditions and achieve early icing prediction.
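A sketch of the baseline power-curve fit and of a power-deviation feature is given below, continuing from the hypothetical per-minute table of the earlier sketch. The polynomial order and the exact form of the deviation (Equation (4) is not reproduced in this text) are assumptions made only for illustration.

```python
# Sketch: fit a baseline power curve on normal samples, then compute a deviation feature.
import numpy as np

normal = per_min[per_min["label"] == 0]                             # non-icing samples only

coeff = np.polyfit(normal["wind_speed"], normal["power"], deg=3)    # assumed low-order polynomial baseline
p_pre = np.polyval(coeff, per_min["wind_speed"])                    # predicted output power from wind speed

# Assumed deviation definition (relative shortfall of measured power w.r.t. the baseline);
# the paper's Equation (4) may differ in its exact normalization.
per_min["deviation"] = (p_pre - per_min["power"]) / np.maximum(p_pre, 1e-6)
```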
This paper mainly extracts features of early icing based upon the characteristics of the degree of deviation. The icing process of the wind turbine contains a certain periodicity, which calls for a serialization of the original data. The average rate of change ∆C of the degree of deviation at the corresponding time in each time segment is calculated as
\[ \Delta C = \frac{C_t - C_{t_1}}{t - t_1} \]
where ∆C represents the average rate of change of the current degree of deviation; C_t represents the current degree of deviation; C_{t_1} represents the degree of deviation at the previous moment; and t − t_1 represents the time span (t is the current time after digitization and t_1 is the previous time after digitization). Then, according to the sliding window method, with ten minutes as the window length and one minute as the moving step, the maximum value maxC of the degree of deviation and the cumulative value sum∆C of ∆C within the 10 min before the current time are obtained. First, this paper selects two basic features from the 28-dimensional features in the given data, and then adds four additional features based upon the mechanism of wind turbine operation and icing. Finally, Table 3 presents the six groups of icing prediction features obtained. Classification Model Optimization This paper mainly adjusts two important parameters of the model based on the random forest classifier to optimize the model: the number of trees and the maximum depth of the trees. 70% of the sample data of wind turbine 15# is divided into the training set and 30% into the test set; the number of decision trees and the maximum depth of a tree in the random forest classifier are adjusted, and the output of the model is then calculated. The confusion matrix is chosen as the evaluation index, and the calculation results are shown in Table 4. From Table 4, the selected random forest model parameters are: 20 trees and a maximum depth of 25. Test Results After the optimization of the model based on the random forest classifier in the previous section, the parameters of the model based on the random forest classifier are determined.
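A minimal sketch of the parameter tuning just described (number of trees and maximum depth, evaluated with the confusion matrix on a 70/30 split) is given below. The candidate grid values and feature columns are placeholders, and the per_min table is the hypothetical one from the earlier sketches.

```python
# Sketch of the random-forest parameter tuning (number of trees, maximum depth) on a 70/30 split.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

feature_cols = ["wind_speed", "power", "deviation"]           # placeholder feature set
X, y = per_min[feature_cols].values, per_min["label"].values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for n_trees in (10, 20, 50):
    for depth in (15, 25, None):
        model = RandomForestClassifier(n_estimators=n_trees, max_depth=depth, n_jobs=-1,
                                       random_state=0).fit(X_tr, y_tr)
        tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
        print(f"trees={n_trees:3d} depth={str(depth):>4}:  TP={tp} TN={tn} FP={fp} FN={fn}")
```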
The next four groups of tests concern the classification results of the model based on the random forest classifier for the data of wind turbines 15# and 21#. The training and test set details and the classification results of each test are as follows. In addition, in the following tables, the running time indicator refers to the total time the model takes to train on and predict the data on the experimental computer. In Test No. 1, 70% of the sample data of wind turbine 15# was divided into the training set and 30% of the sample data into the test set. The results are shown in Table 5. Subsequently, the classification results of different classification models are also compared. This paper selects the logistic regression classifier, the GBDT (Gradient Boosting Decision Tree) classifier, and the random forest classifier for comparison. In these tests, 70% of the sample data of wind turbine 15# is the training set, and 30% of the sample data is the test set. Test No. 5 used a logistic regression classification model; after optimization, the classification threshold was set to 0.86, with the results shown in Table 9. The GBDT classification model was used in Test No. 6, and, after model optimization, the classification result is presented in Table 10. The precision and recall of the six tests were computed separately, and the obtained results are shown in Table 11. As is well known, for a classification model, when both precision and recall have high values at the same time, without considering other factors, the model is thought to have a better performance. From the comparison of the test results, the following summary can be drawn: 1. Considering precision and recall at the same time, the results of the model based on the random forest classifier are better than those of the other models (as shown in Table 11), so the model based on the random forest classifier has high accuracy in ice detection of wind turbine blades and can identify the early failure. 2. The trained model based on the random forest classifier still performs well on a new test set (as the results of Tests No. 3 and No. 4 show), so the model has good generalization ability and strong adaptability to new sample data. 3. The data of wind turbine 15# are more than the data of 21#. From the results of these tests (Tests No. 1, No. 2, No. 3, and No. 4), when focusing only on precision and recall without considering the running time, increasing the sample size of the training set improves the classification performance of the model. 4. Considering the running time of these models, the model based on the random forest classifier has a shorter running time than the logistic regression and GBDT classification models (as the results of Tests No. 5 and No. 6 show), so the model based on the random forest classifier has a more efficient calculation capability. Conclusions To detect wind turbine blade icing, a model based on the random forest classifier was proposed. The model, with high accuracy and good generalization ability, was verified with the data of the China Industrial Big Data Innovation Competition. 1. The features extracted in this paper reflect the characteristics of icing failure well. The two basic variables of wind speed and output power from SCADA data, together with further instantaneous and statistical features extracted from the power curve, can be used to characterize early icing on wind turbine blades. The various models selected in this paper achieve good results on this data set, so the approach has high practical application value for ice detection in wind farms, as reflected in the effective prevention of severe ice formation on the blades, the increase of wind farm profit, and the reduction of safety accidents. 2. The model based on the random forest classifier has very high accuracy and good generalization ability for ice detection on wind turbine blades, and it is used to identify early icing.
Compared with other classification models, the model based on the random forest classifier has higher accuracy and is more efficient in terms of computing capabilities, making it more suitable for the practical application of ice detection.
Figure 1. Flow chart of model construction.
Figure 2. Basic structure of the random forest classifier.
Figure 5. Average power-wind speed scatter plot with the data above 80% of full power removed.
Figure 7. Relationship between degree of deviation and average wind speed.
Table 2. Statistical information of supervisory control and data acquisition (SCADA) data.
Table 3. Features of ice detection on wind turbine blades.
Table 4. Results of the optimization of the model based on the random forest classifier.
Table 5. Result of Test No. 1 (running time: 27.0 s). In Test No. 2, 70% of the sample data of wind turbine 21# is divided into the training set and 30% into the test set, as shown in Table 6.
Table 6. Result of Test No. 2 (running time: 14.2 s). Test No. 3 takes all the sample data of wind turbine 21# as the training set and the sample data of wind turbine 15# as the test set, as shown in Table 7.
Table 7. Result of Test No. 3 (running time: 38.1 s). Test No. 4 takes all the sample data of wind turbine 15# as the training set and the sample data of wind turbine 21# as the test set, as shown in Table 8.
Table 11. Precision and recall of the tests.
8,622.6
2018-09-25T00:00:00.000
[ "Environmental Science", "Engineering" ]
Biologically plausible information propagation in a complementary metal-oxide semiconductor integrate-and-fire artificial neuron circuit with memristive synapses Neuromorphic circuits based on spikes are currently envisioned as a viable option to achieve brain-like computation capabilities in specific electronic implementations while limiting power dissipation given their ability to mimic energy-efficient bioinspired mechanisms. While several network architectures have been developed to embed in hardware the bioinspired learning rules found in the biological brain, such as spike timing-dependent plasticity, it is still unclear if hardware spiking neural network architectures can handle and transfer information akin to biological networks. In this work, we investigate the analogies between an artificial neuron combining memristor synapses and rate-based learning rule with biological neuron response in terms of information propagation from a theoretical perspective. Bioinspired experiments have been reproduced by linking the biological probability of release with the artificial synapse conductance. Mutual information and surprise have been chosen as metrics to evidence how, for different values of synaptic weights, an artificial neuron allows to develop a reliable and biological resembling neural network in terms of information propagation and analysis. Introduction Neuromorphic technologies have been designed to support large-scale spiking neural networks (SNNs) encompassing bioinspired mechanisms. Unlike conventional artificial intelligence systems, these networks base their activity on the transfer of binary units (spikes) through synaptic contacts. These latter in turn can undergo persistent changes of their strength upon specific patterns of stimulation. The modifications expressed by synaptic contacts following the induction of long-term plasticity [1,2] can be reliably reproduced by memristors [3][4][5], which can be designed to change their conductance according to their past activity [6]. In this respect, recent advancements in memristive device technology development have brought them closer to full integration in standard complementary metal-oxide semiconductor (CMOS) platforms, which is per se a tough challenge, as these devices must fulfill very stringent requirements for integration with current integrated circuits. Among these requirements are integration densities of up to 1 gigabyte mm −2 , writing voltages <3 V, switching energy <10 pJ, switching time <10 ns, writing endurance >10 10 cycles (or full potentiation/depression cycles), dynamic range >10, and low conductance fluctuations over time if no bias is applied (<10% for >10 years) [7]. Notably, some memristive devices have fulfilled such stringent criteria [7,8], but they still exhibit high manufacturing costs despite the simplicity of individual memristive cells due to the need for additional elements (series transistor, selector, or resistor) and to specific beyond-CMOS back-end-of-line interconnects. Still, these devices are gradually triggering the interest of the semiconductor industry and are currently considered front-runners in the race to realize a CMOS-compatible cost-effective synaptic element for hardware SNNs. Interestingly, several network architectures have been developed to embed in hardware the bioinspired learning rules that are required to exploit SNN functionalities, such as spike timing-dependent plasticity, rate-based plasticity, and the Bienenstock-Cooper-Munro learning rule [6]. 
Nevertheless, while a significant amount of work has been published in this domain, it is still unclear if currently proposed neuromorphic hardware architectures for SNNs have the capacity to handle and transfer information in a way that resembles what happens in the corresponding biological networks. In a neuronal microcircuit, in fact, the information is exchanged between neurons in the form of input spike series that are conveyed as output temporal spike series. Therefore, the amount of transferred information can be estimated by looking at the input-output relationship, which is computed by analyzing the stimulus patterns and the neural responses. This dependency can be formalized in several ways, like tuning [9,10], gain [11], and selectivity curves [12,13], and these approaches allow us to provide quantitative estimates of the information content independently from the neural code. The language employed by neurons to communicate can be cracked by adopting parameters of communication and information theory [14]. Among these, mutual information (MI) has been already adapted to neuroscience to estimate the information transmitted by circuits [15], neurons [16], or single synapses [17] without specific knowledge of neural code semantics. MI is directly derived from the response and noise entropy [18], which are correlated to the variability of responses to different inputs or to the same input. Given this premise, the calculation of MI allows us to evaluate the capacity of a neuronal system to separate different inputs, and thus transmitting information [19,20]. For this reason, MI has been used consistently in neuroscience to show the modalities of information propagation in biological neural networks. On the other hand, much of the effort in neuromorphic electronics has been devoted to the design, development, and implementation of artificial CMOS [21] or memristive [22] neurons and either CMOS or memristive synapses [23][24][25] in circuits that embed specific learning rules and electrophysiological properties, paying less or no attention to the overall performance of the system under investigation from the standpoint of information transmission. In fact, understanding whether the currently proposed artificial SNNs can at least qualitatively replicate the extreme efficiency with which biological networks handle and transfer information represents an important step toward the development of brain-inspired and ultra-low-power artificial processing systems. In this work, we provide for the first time an in-depth analysis of the information transmission in an SNN that encompasses CMOS leaky integrate-and-fire (LIF) neurons and memristive synapses by focusing on how MI is transmitted through the network. Specifically, we focus on a simplified network, with the neuron circuit mimicking a cerebellar granule cell (GC), found to be the optimal benchmark to calculate MI [26]. In fact, in the case of the cerebellar GC, the input-output combination is particularly convenient, given the small number of dendrites (four) and the limited amount of inputs received, compared to the thousands of contacts received, for instance, by cortical and hippocampal pyramidal neurons. In addition, besides the few dendrites, GCs when activated respond with a limited number of spikes (typically two or less [27]) confined in a narrow time window regulated by synaptic inhibition [28]. 
This peculiarity reduces the complexity of calculations and suggests the use of this microcircuit as a model to investigate changes in the transmission properties by internal or external agents. We compare how MI changes not only with specific stimuli and input patterns, but also how it evolves with changes induced by altering synaptic strength (i.e. upon learning), finding a striking qualitative resemblance with the results found experimentally in biological networks [29,30] and in simulations with biologically realistic neurons [26]. This paper is organized as follows. In section 2, we illustrate the methods used to compute specific quantities related to information propagation. In section 3, we report the details of the synaptic device used in this study, clarifying how the latter were characterized and modeled; the details of the electronic neuron model are given as well. In section 4, we provide the details of the proposed artificial network and of the analogies and differences as compared to its biological counterpart. In section 5, results are reported and discussed. Conclusions follow. Methods Information theory has been extensively used in neuroscience to estimate the amount of information transmitted within neuronal circuits [14,16,20,26], where a set of input stimuli can be correlated with output responses to estimate the information conveyed by neurons. The level of correlation primarily depends on the input variability, which, in turn, is expanded by the number of afferent fibers. In the central nervous system, there is a large variability in the number of input synapses a neuron can receive: from a few units (cerebellar GCs [29]) to hundreds of thousands (200 000 in cerebellar Purkinje cells [30]). In terms of information transfer, only neurons with limited fan-in connections can be efficiently analyzed, avoiding the explosion of combinations according to the input space size. Following our recent work [31], we have simulated an artificial architecture composed of individual neurons with only four synaptic inputs, and the level of correlation was estimated by first dividing the neuronal responses into temporal bins, which were digitized depending on the presence of a spike. This discretization allows a spike train to be converted into a binary word of length N = T/∆t (where ∆t is the temporal bin and T is the spike train duration) containing only digital labels (where '0' means no spike and '1' means spike). Neurons respond to input stimuli with a variety of binary words, generating a neuronal vocabulary that can be explored by varying the input stimuli. The larger the vocabulary, the richer the conveyed information. However, efficient communication is ensured by a correlation between input stimuli and output words. In information theory, two factors determine the amount of information a neuron conveys about its inputs, namely the response entropy (i.e. the neuronal vocabulary size) and the noise entropy (i.e. the reliability of responses when stimuli are given). The quantity that considers these two factors simultaneously by subtracting the noise entropy from the response entropy is Shannon MI, which is measured in bits and can be calculated through the following equation:
\[ MI = \sum_{s} p(s) \sum_{r} p(r|s)\,\log_2\frac{p(r|s)}{p(r)} \]
where r and s are the response and the stimulus pattern, respectively; p(r) and p(s) are the probabilities that r and s occur within a single acquisition. Finally, p(r|s) is the probability of obtaining the response pattern r given the stimulus pattern s.
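As a toy illustration of this estimate, the sketch below builds synthetic stimulus-response pairs (responses as 3-bin binary words) and computes MI from the empirical probabilities p(s), p(r) and p(r|s). The stimulus set, the word length, and the noise level are arbitrary assumptions and not data from this study.

```python
# Toy MI estimate from empirical probabilities of binary response words.
import numpy as np
from collections import Counter, defaultdict

rng = np.random.default_rng(0)
stimuli = rng.integers(0, 4, size=5000).tolist()           # four hypothetical stimulus patterns
preferred = {0: "000", 1: "001", 2: "011", 3: "111"}       # preferred response word per stimulus
responses = [preferred[s] if rng.random() < 0.8            # reliable response 80% of the time,
             else format(rng.integers(0, 8), "03b")        # otherwise a random 3-bin word (noise)
             for s in stimuli]

n = len(stimuli)
count_s, count_r = Counter(stimuli), Counter(responses)
joint = defaultdict(Counter)
for s, r in zip(stimuli, responses):
    joint[s][r] += 1

mi = 0.0
for s, row in joint.items():
    for r, c in row.items():
        p_joint = c / n                     # p(s, r)
        p_r_given_s = c / count_s[s]        # p(r|s)
        mi += p_joint * np.log2(p_r_given_s / (count_r[r] / n))
print(f"estimated MI = {mi:.2f} bits")
```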
MI is intrinsically an average property of all inputs, and it can be interesting to decompose MI into a single stimulus contribution (stimulus specific surprise (SSS)) or even a single spike contribution (surprise per spike (SpS)). These two quantities can be computed as: Experimental issues associated to high data dimensionality limit the estimation of all the probabilities in the MI formula. Estimating the conditional entropy requires determining the response probability given any input stimulus. If the neural response shows sufficiently low variability, the response probability can be assessed with a tractable amount of data [26,31]. Synaptic devices and experiments According to the configuration adopted for biological experiments [31] and simulations with biologically realistic neurons [26], we investigated how information is propagated in a cerebellar GC-like artificial CMOS neuron with four memristor-based synaptic inputs. In this respect, we run circuit simulations using Cadence Virtuoso software, in which the response of the artificial CMOS neuron was abstracted by using a Verilog-A behavioral description of its constituent building blocks (as specified in section 3.2), while the characteristics of the memristive elements (i.e. the artificial synapses) were carefully reproduced by means of a compact model developed internally, i.e. the UNIMORE resistive random access memory (RRAM) compact model [32]. The latter is a physics-based compact model supported by the results of advanced multiscale simulations [33] that has been shown to reproduce both the quasi-static and dynamic behavior of different memristor technologies with a single set of parameters [34] and considers the intrinsic device stochastic response, thermal effects, and random telegraph noise [35]. Specifically, the memristive elements adopted in this study are commercially available C-doped self-directed channel (SDC) memristors by Knowm [36], available in a dual in-line package. These devices were chosen because, to the best of the authors' knowledge, they are the only commercially available packaged RRAM devices to date. This choice allows us to show that MI propagates through an SNN with CMOS LIF neurons and memristive synapses akin to what happens in biological networks, and that such behavior can be achieved with available commercial-grade RRAM devices, requiring no specific advancements in technology development. As shown in figure 1(a), the SDC memristor consists of a stack composed of W/Ge 2 Se 3 /Ag/Ge 2 Se 3 /SnSe/ Ge 2 Se 3 :C/W, where Ge 2 Se 3 :C is the active layer [36]. During fabrication, the three layers below the top electrode are mixed and form the Ag source [36]. The SnSe layer acts as a barrier to avoid Ag saturation in the active layer and is responsible for the production of Sn ions and their migration into the active layer during the initial operation of the device (typically addressed as 'forming'), which promotes Ag agglomeration at specific sites [36]. The details of the mechanism at the basis of the resistive switching in these devices are available in [36]. To fully capture the behavior of these devices in circuit simulations, we carefully calibrated the parameters of the UNIMORE RRAM compact model against experimental data, as elucidated in figure 1. The electrical measurements were performed using the Keithley 4200-SCS. 
To analyze and then model the behavior of the memristors, we performed a sequence of quasi-static I-V measurements by applying voltage sweeps from −0.8 to 0.4 V with a current compliance of 10 µA enforced by the Keithley 4200-SCS. These measurements drive the device to a low resistive state (LRS) with a SET operation (V > 0) and to a high resistive state (HRS) with an ensuing RESET operation (V < 0). Results are shown in figure 1(b) (red traces) and reveal that the RESET curves are characterized by an abrupt transition from the LRS to the HRS and a strong cycle-to-cycle variability of the switching voltage, while the SET operation is associated with a more predictable and gradual transition from HRS to LRS. Then, to experimentally evaluate the synaptic functionality of the memristors (i.e. the capability to respond to spike-like voltage stimuli rather than to quasi-static voltage sweeps), we designed a suitable pulsed voltage sequence (figure 1(c)), which gradually drives the device toward higher or lower resistance (or, equivalently, conductance) states. In this experiment, a 10 kΩ resistor was connected in series with the device to prevent accidental current overshoots, because the Keithley 4200-SCS does not support the enforcement of current compliance when performing pulsed tests. (The series resistor can then be removed in the actual circuit implementation.) The device was initially driven into the LRS by means of 20 rectangular pulses (V = 0.6 V; T = 100 µs, initial set). Then, long-term depression (LTD) and long-term potentiation (LTP) were obtained by applying trains of 20 depression pulses (V = −0.2 V; T = 10 µs) followed by 20 potentiation pulses (V = 0.55 V; T = 30 µs). To evaluate the transition smoothness, each potentiation or depression pulse is followed by a small reading pulse (V_READ = 50 mV; T_READ = 50 µs) that is used to retrieve the evolution of the resistance values during LTD and LTP. Figure 1(c) reports the resistance evolution for 15 identical depression-potentiation cycles, revealing that a smooth and reproducible analog synaptic behavior is achievable with these devices. Although SDC memristors are ion-conducting devices that change their resistance due to the movement of Ag+ ions within the device structure [36], their behavior is well replicated (figures 1(b) and (d), black traces) by the modulation of an equivalent conducting filament (CF) barrier (figure 1(e)) [32-35], which is the typical behavior of filamentary memristive devices. The barrier thickness (x in figure 1(e)) is in fact directly correlated with the memristor conductance, which represents the synaptic strength. Further details of the compact model and the extracted parameters for this technology are reported in [6]. Neuron model and simulations Figure 2(a) shows the model of the LIF neuron [6] supporting a rate-dependent plasticity rule on the synaptic memristive devices that was designed and simulated in this work. In this neuron model, the input terminal (see neuron input in figure 2(a)) is kept at virtual ground, and input spikes from presynaptic neurons are integrated onto a capacitor. When the voltage across the capacitor passes a threshold, an output spike is generated at the neuron output, and after a predefined delay (i.e. T spike delay in figure 2(a)), the capacitor is discharged to reset the system to its initial state. The rate at which the capacitor charges depends both on the rate of the input spikes and on the presynaptic strength.
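The pulsed LTD/LTP protocol can be mimicked with a deliberately simple phenomenological update rule; the sketch below is not the UNIMORE compact model, just an assumed saturating-update toy model illustrating how trains of depression and potentiation pulses move a conductance between two bounds, as in figure 1(c).

```python
# Toy phenomenological synapse: each pulse nudges the conductance toward a bound.
G_MIN, G_MAX = 2e-6, 15e-6   # assumed conductance bounds (siemens)
ETA = 0.15                   # assumed update fraction per pulse

def apply_pulse(g, potentiate):
    """Move the conductance toward G_MAX (potentiation) or G_MIN (depression)."""
    target = G_MAX if potentiate else G_MIN
    return g + ETA * (target - g)

g = 10e-6      # start near the LRS reached by the initial-set pulses
trace = []
for _ in range(15):                      # 15 depression-potentiation cycles
    for _ in range(20):                  # 20 depression pulses
        g = apply_pulse(g, potentiate=False)
        trace.append(g)
    for _ in range(20):                  # 20 potentiation pulses
        g = apply_pulse(g, potentiate=True)
        trace.append(g)
```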
Because the neuron is leaky, in the absence of input spikes the capacitor discharges with an appropriate time constant. This feature mirrors the fact that biological neurons have a leaky membrane, which makes the membrane potential decay back toward rest when no spikes arrive at their inputs. The effect of the time constant value in the LIF model was already evaluated in [37], in which it is clearly shown that the presence of leakage provides enhanced noise robustness of SNNs while introducing a trade-off discussed therein. The synaptic plasticity mechanism implemented in the neuron model is purely rate based and depends only on the rate of the presynaptic stimulation. Thus, a high (low) rate of presynaptic stimulation leads to the potentiation (depression) of the associated synapse. In the adopted neuron model, this learning rule is implemented by appropriately designing the shape of the spike (see figure 2(c)), so that each presynaptic spike results in a small potentiation of the associated synaptic memristor device, and by introducing a back spike, which results in a small synaptic depression; see figure 2(c). It is worth noting that this back spike only serves the purpose of properly implementing the rate-based learning rule; it is not related to the firing activity of the postsynaptic neuron and does not appear at its output. When firing the back spike from its input terminal, the postsynaptic neuron also outputs a pair of complementary Dep control signals that are connected to the gates of two MOSFET devices (see figures 2(a) and (c)), which disconnect the synapses from their presynaptic neurons and connect their bottom electrodes to ground; see figure 2(c). When the neuron fires the back spike, the propagation of information from the presynaptic neurons is therefore temporarily disabled. The occurrence of simultaneous presynaptic spikes and back spikes is minimized by modeling the time interval between the back spikes as a random variable following a Poisson distribution (λ = 1 s was used in the simulations) and by designing a back spike with a short duration. Because the back spikes do not propagate any information, their shape can be designed with some degree of flexibility and can be adjusted as required. The back spike used in the simulation is shown in figure 2(c) and has a pulse width of 300 µs, which is much shorter than the presynaptic spike. The mean time interval between successive back spikes determines the characteristic of the implemented rate-based learning rule. As shown in figure 2(b), a stimulation rate ν0 exists at which potentiation and depression effects balance out, leading to no average change in synaptic strength. Presynaptic stimulation rates higher than ν0 result in a net synapse potentiation, while lower stimulation rates result in a net depression, as shown in figure 2(b). Although the spikes used in this work were designed to provide a system response on a timescale similar to that of biological neurons and to be compatible with the employed memristor technology, it is worth noting that the pulse shape and the parameters of the neuron circuit (e.g. threshold, integrator time constant) can be scaled appropriately to adapt the system response to possible application requirements, highlighting the flexibility of the electronic implementation of the bioinspired neural network.
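To make the LIF dynamics and the rate-based rule concrete, here is a compact behavioral simulation sketch; the leak, threshold, and update constants are illustrative assumptions, not the values used in the Cadence/Verilog-A design, and the back-spike gating of the inputs is reduced to its net effect (a small depression of every synapse).

```python
import numpy as np

DT = 1e-4            # simulation step (s), assumed
TAU_LEAK = 20e-3     # membrane leak time constant (s), assumed
V_TH = 1.0           # firing threshold (arbitrary units), assumed
DW_POT = 1e-3        # potentiation per presynaptic spike, assumed
DW_DEP = 5e-3        # depression per back spike, assumed
BACK_RATE = 1.0      # mean back-spike rate (Hz), assumed

def simulate(rates, weights, t_total=2.0, seed=1):
    """rates[i]: Poisson rate (Hz) of presynaptic input i; weights: synaptic weights."""
    rng = np.random.default_rng(seed)
    v, out_spikes = 0.0, 0
    for _ in range(int(t_total / DT)):
        v -= (v / TAU_LEAK) * DT                      # leaky integration
        for i, rate in enumerate(rates):
            if rng.random() < rate * DT:              # a presynaptic spike arrives
                v += weights[i]                       # charge the capacitor
                weights[i] += DW_POT                  # each input spike potentiates a little
        if rng.random() < BACK_RATE * DT:             # a back spike occurs
            weights = [w - DW_DEP for w in weights]   # each back spike slightly depresses all synapses
        if v >= V_TH:                                 # threshold crossing: fire and reset
            out_spikes += 1
            v = 0.0
    return out_spikes, weights

n_out, w = simulate(rates=[20.0, 2.0, 2.0, 2.0], weights=[0.05] * 4)
```

With these placeholder constants, the high-rate input accumulates more potentiation events than depression events (net potentiation), while the low-rate inputs experience the opposite, which is the qualitative behavior around ν0 described above.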
Simulated network To understand whether the artificial SNN features information transmission properties akin to those found in biological systems, we implemented a circuit that resembles the GC morphology, shown schematically in figure 3(a), as well as the spike digitization principle used in [26,31]. GCs are typically studied because they constitute more than half of the neurons in the brain and, in particular, they present an exceptionally low number of synapses (four on average) [38,39], which makes them well suited to a lean electronic implementation. In neuroscience experiments, the stimulation is typically performed by applying spike trains through the mossy fibers (MFs) [40], and the experiment time is divided into temporal bins, where the presence of a spike is coded as logic 1 (0 otherwise). Figure 3(b) reports the schematic of the implemented artificial neuron, with the related memristor-based synapses emulating the structure on the left. Each synapse may receive up to four spikes over time, i.e. the experiment time during which the inputs are delivered to the network is composed of four temporal bins. In [26,31], time bins of 10 and 6 ms were used to stimulate the input and digitize the output train, respectively. Due to the technological constraints of our memristors and to design choices, spikes with a longer duration are needed, which leads us to define time bins of 50 and 10 ms for input and output, respectively, with no loss of generality. As in [31,36], we used four time bins on four inputs for the stimulation (as in figure 3(b)), giving 2^(Nbin·Ninput) = 2^(4·4) = 65 536 possible input combinations and a total stimulation period of 50 ms × 4 = 200 ms. Spike stimulations (10 ms long) are applied at a random time (jitter) within each time bin (50 ms long), consistently with the idea of digitizing random input spike trains and with the need to introduce into an otherwise deterministic artificial network the stochastic features observed in the biological counterpart. Specifically, in vitro experiments on GCs consist of applying specific (coded) stimuli through the neuron's MFs and repeating the experiments several times for each stimulus to sample the intrinsic neuron variability, which leads to a stochastic output. To reproduce the same stochastic response of the neuron using a deterministic neuron model, in circuit simulations each stimulus is delivered with a Poisson-distributed delay (jitter) inside each time bin, thereby mimicking the fact that the stimuli are provided by four other presynaptic neurons. In this study, we aimed at quantifying how theoretical quantities related to information propagation through the network are affected by learning (i.e. plasticity of the synaptic weights) in order to confirm analogies between the biological and artificial frameworks. To do so, it is imperative to understand how synaptic strength is represented in the two frameworks. Indeed, although in biological experiments the synaptic efficacy (weight) is measured in terms of the release probability p, a stochastic parameter quantifying the synaptic strength [26], in memristor-based neuromorphic networks the weight is represented by the memristor conductance. In fact, in biological synapses the efficacy is increased (decreased) by applying LTP (LTD) theta burst trains, which, in our approach, results in an increased (decreased) memristor conductance.
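The input encoding described above (four inputs, four 50 ms bins per input, a 10 ms spike jittered inside each active bin) can be sketched as follows; the bin and spike durations are taken from the text, while the jitter is simplified to a uniform placement inside the bin rather than the Poisson-distributed delay used in the circuit simulations.

```python
import itertools
import numpy as np

N_INPUTS, N_BINS = 4, 4
BIN_MS, SPIKE_MS = 50.0, 10.0

def all_stimuli():
    """All 2^(4*4) = 65 536 binary stimulus patterns, one 4-bit word per input."""
    words = list(itertools.product([0, 1], repeat=N_BINS))
    return list(itertools.product(words, repeat=N_INPUTS))

def spike_times(stimulus, rng):
    """Map a binary pattern to jittered spike onset times (ms) for each input."""
    times = []
    for word in stimulus:
        onsets = [b * BIN_MS + rng.uniform(0, BIN_MS - SPIKE_MS)
                  for b, bit in enumerate(word) if bit]
        times.append(onsets)
    return times

rng = np.random.default_rng(0)
stimuli = all_stimuli()
print(len(stimuli))                      # 65536
print(spike_times(stimuli[12345], rng))  # one jittered realization of a pattern
```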
Therefore, in this study, we simulated the system for different values of memristor conductance, and we examined how MI, SSS, and SpS are affected by synaptic plasticity. To keep consistency with what was observed in [26], in which the release probability was equal for all four MF synapses at the GC (i.e. any permutation of the four inputs was equivalent), we reduced the number of different stimuli from 65 536 (all possible input combinations) to 3876, making the four synaptic inputs equal. Also, to repeat simulations for different values of memristor conductance while following bioinspired protocols, the memristor conductance values used in the simulations (and related to p) were increased between consecutive simulations by applying specifically designed LTP theta bursts (as depicted in figure 4(a)) and consequently updating the synaptic weights based on the conductance variations (figure 4(b)). For each synaptic weight, the 3876 stimuli are then delivered 15 times through the inputs and ranked on the basis of their relative SpS. As the goal of this study was to understand how information propagates through an artificial network when the synaptic strength values are in a fixed configuration, we excluded the possible influence of synaptic weight variations induced by the input transmission by disabling the plasticity mechanism in the Verilog-A model of the memristor during the application of the input spike trains, leaving it enabled only during the application of the theta bursts in figure 4. This implies that the results concerning the information transfer analysis, reported in the following section, can be considered valid regardless of the specific device employed to represent the synaptic weights. Naturally, such a device needs to show potentiation and depression capabilities to fulfill the role of a synaptic element in an SNN. Nevertheless, information propagation through the network will show the features discussed in the following regardless of the technological specifications and of the peculiar learning features of the synaptic device. Results and discussion It is now possible to look at the effect of synaptic plasticity on a spiking neuron with memristive synapses by analyzing the key quantities related to information transfer, such as entropy, MI, and surprise, when stimulating the network as described in section 4. Information transfer analysis Shannon MI provides a mathematical framework to quantify the amount of information transmitted by a neuron during neural stimulation. Because our aim was to investigate whether the envisaged neuromorphic architecture could reliably reproduce neuronal performance, we explored the dependence of MI on the synaptic efficacy (i.e. the memristor conductance). In analogy with [26,31], the reduced number of inputs allowed us to calculate MI and, as explained in section 2 and shown in figure 3, we first digitized the spike trains and then chose a controlled set of stimuli S. Second, responses r were detected when stimuli with known a priori probabilities p(s) were repeatedly presented. Once all the data were collected, the corresponding conditional probabilities p(r|s) and the probability distribution of responses averaged over the stimuli, p(r), were estimated.
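The reduction from 65 536 ordered patterns to 3876 stimuli follows from treating the four inputs as interchangeable, i.e. counting multisets of size 4 drawn from the 16 possible 4-bit input words; the snippet below verifies the count.

```python
from itertools import combinations_with_replacement, product
from math import comb

words = list(product([0, 1], repeat=4))                    # 16 possible 4-bit input words
unordered = list(combinations_with_replacement(words, 4))  # inputs treated as interchangeable
print(len(unordered), comb(16 + 4 - 1, 4))                 # 3876 3876
```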
MI was computed with equation (1), and because in biological systems its value has been shown to change with variations of the release probability [26], we investigated the relationship between MI and the memristor conductance, which in our assumption is the equivalent of the synaptic efficacy. Figure 5 shows the correlation between the calculated MI and the memristor conductance values. The overall information transfer is enhanced upon an increase in synaptic efficacy (memristor conductance), in accordance with expectations. Furthermore, upon visual inspection, the dependence of MI on the memristor conductance revealed a good correlation with the corresponding p-MI curve (p: release probability) obtained with both real biological and simulated neurons (see figures 2(b) and 3(b) in [26]). These results, besides demonstrating the validity of this approach in mimicking neuronal information transfer, also indirectly support the close relationship between the release probability p and the memristive conductance. We were also interested in identifying the stimuli that were best encoded by the electronic neuron. The stimulus-specific contribution to the MI (SSS; equation (2)) and the SpS (equation (3)) were therefore computed, allowing us to identify the most informative set of stimuli (figure 6). Specifically, for a given value of synaptic strength, the network was stimulated with the 3876 inputs, which were then ranked by descending SpS (black curves in figure 6). The same procedure was replicated for different memristor conductance values (2.6, 6.2, and 13.6 µS), and the results are reported in figures 6(b) and (c). We initially focused on the results obtained with the lowest memristor conductance value. As shown in figure 6(c), we identified the stimuli with the highest and lowest SpS, respectively (i.e. the blue and red markers on the bottom black curve in figure 6(c)). We then tracked how these specific stimuli changed their ranking when the simulations were repeated after synaptic potentiation (achieved by means of theta bursts, as explained in section 4). Both markers moved in qualitative agreement with what was reported in [26], which is also reported in figure 6(a) for convenience. Although the stimuli with the highest and lowest SpS at the lowest memristor conductance did not coincide with those found in [26] at the lowest p value, the qualitative trend was the same. In addition, we verified that the same trend is obtained when tracking exactly those stimuli in our simulations, as reported in figure 6(b). Both cases confirmed the expected trends, demonstrating the reliability of an artificial memristor-based neuro-synaptic circuit in quantifying the information content of a specific spike train given a determined network strength. Discussion The results shown thus far confirm the similarities between a neuromorphic microcircuit composed of a neuron with a limited number of synapses endowed with a rate-based learning rule and its biological counterpart. Bioinspired experiments have been reproduced by assuming a dependency between the release probability and the conductance of an electronic synapse. Three parameters, namely MI, SSS, and SpS, have been computed for different synaptic weights, with the aim of analyzing the ability of the implemented neuron to retrieve and quantify the information content of a specific stimulus based on the synaptic strength and input sparseness.
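The ranking-and-tracking procedure behind figure 6 amounts to sorting the stimuli by SpS at each conductance value and following the positions of selected stimuli; a minimal sketch is given below, assuming the SpS values are already available as arrays indexed by stimulus (the random data are placeholders).

```python
import numpy as np

def rank_by_sps(sps):
    """Return stimulus indices sorted by descending SpS, and each stimulus's rank."""
    order = np.argsort(-sps)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(sps))
    return order, ranks

# sps_by_g[g]: SpS of every stimulus at conductance g (placeholder random data)
rng = np.random.default_rng(0)
sps_by_g = {g: rng.random(3876) for g in (2.6e-6, 6.2e-6, 13.6e-6)}

order_low, _ = rank_by_sps(sps_by_g[2.6e-6])
best, worst = order_low[0], order_low[-1]      # markers picked at the lowest conductance
for g, sps in sps_by_g.items():
    _, ranks = rank_by_sps(sps)
    print(g, ranks[best], ranks[worst])        # track how the two markers move with potentiation
```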
Despite the differences between the neuromorphic and the biological neuron, such as (a) the noise/variability level, (b) the stochastic mechanisms underlying the opening of ion channels, or (c) the stochastic processes involved in neurotransmitter release, these results demonstrate, from an information-transmission perspective, that artificial neurons can be adopted as elements performing complex computational tasks such as those performed by biological neurons. This proof of principle is a first milestone in the development of advanced neuronal networks with performance compatible with brain circuits, given their capability to compute sparse and temporally uncorrelated information. Furthermore, differently from conventional hardware, neuromorphic electronic circuits [41] can be designed to operate with limited power consumption and in multiple time domains, depending on the circuit architecture. These advantages, deriving from an electronic implementation of biologically plausible SNNs (e.g. multiple timescales and reduced area and power consumption), prove remarkably useful for different applications. Conclusions In this work, we investigated, from a theoretical perspective on information propagation, the analogies between an artificial neuron combining memristive synapses with a rate-based learning rule and the response of a biological neuron. Bioinspired experiments have been reproduced by linking the biological release probability p with the conductance of the artificial synapse. MI, SSS, and SpS have been computed for different synaptic weights, with the aim of analyzing the ability of the implemented neuron to retrieve and quantify the information content of a specific stimulus based on the synaptic strength. The results highlight that such an artificial neuron allows the development of a reliable neural network that resembles its biological counterpart in terms of information analysis. The advantages deriving from an electronic implementation (e.g. timescale and area) provide a remarkably useful tool for different applications. Data availability statement The data cannot be made publicly available upon publication because they are not available in a format that is sufficiently accessible or reusable by other researchers. The data that support the findings of this study are available upon reasonable request from the authors.
6,994.4
2023-04-21T00:00:00.000
[ "Computer Science", "Biology" ]
A Node Influence Based Label Propagation Algorithm for Community Detection in Networks The label propagation algorithm (LPA) is an extremely fast community detection method and is widely used in large-scale networks. In spite of the advantages of LPA, the issue of its poor stability has not yet been well addressed. We propose a novel node influence based label propagation algorithm for community detection (NIBLPA), which improves the performance of LPA by improving the node order of label updating and the mechanism of label choosing when more than one label is contained by the maximum number of nodes. NIBLPA can obtain more stable results than LPA since it avoids the complete randomness of LPA. The experimental results on both synthetic and real networks demonstrate that NIBLPA maintains the efficiency of the traditional LPA algorithm and, at the same time, has superior performance to some representative methods. Introduction In recent years, complex networks have been widely used in many fields, such as social networks, World Wide Web networks, scientist cooperation networks, literature networks, protein interaction networks, and communication networks [1,2]. Extensive studies have shown that complex networks have the property of communities (modules or clusters), within which the interconnections are close, but between which the associations are sparse. This property reflects an extremely common and important topological structure of complex networks, and it is very important for understanding the structure and function of complex networks. A great number of community detection algorithms have been proposed in recent decades, including modularity optimization algorithms [3-5], spectral clustering algorithms [6-8], hierarchical partition algorithms [9,10], label propagation algorithms (LPA) [11,12], and information theory based algorithms [13]. Among them, LPA is by far one of the fastest community detection algorithms. The complexity of the LPA algorithm is nearly linear time, and the design of the algorithm is simple, all of which have drawn considerable attention to LPA from numerous scholars [14-17]. However, it still has a number of shortcomings; for example, the community detection results are unstable. In this paper, we propose a novel node influence based label propagation algorithm for community detection in networks (NIBLPA), improving the performance of the traditional LPA algorithm by fixing the node sequence of label updating and changing the label choosing mechanism when more than one label is contained by the maximum number of nodes. Firstly, NIBLPA calculates the node influence value of each node as a measure of node importance in the network and fixes the node updating sequence in descending order of node influence value; secondly, NIBLPA repeats the label propagation process until the community structure of the network is detected. During each label updating step, when more than one label is returned with the maximum number of nodes, instead of randomly selecting one label, we introduce the label influence into the label computing formula to reselect the label from the set of labels with the same maximum number of nodes, which improves stability. Finally, NIBLPA groups all nodes with the same label into a community. Extensive experimental studies using various networks demonstrate that our algorithm NIBLPA can get better community detection results compared with the state-of-the-art methods.
The rest of this paper is organized as follows. Section 2 introduces the related work, including the traditional label propagation algorithm and the k-shell decomposition method. In Section 3, we introduce the main idea and the detailed process of our algorithm. The experimental results on various networks in Section 4 confirm the effectiveness of the algorithm. The conclusion is given in Section 5. Related Work A complex network can be modeled as a graph G = (V, E), where V = {v1, v2, . . . , vn} is the set of nodes, E = {e1, e2, . . . , em} represents the edges between nodes, and n and m represent the number of nodes and edges in the network, respectively. Each edge in E corresponds to a pair of nodes in V. The label of vi is denoted as li, N(vi) represents the neighborhood set of vi, and ki is the degree of node vi. Label Propagation Algorithm for Community Detection in Networks. In 2007, Raghavan et al. [11] applied the label propagation algorithm (LPA) to community detection; the main idea of LPA is to use the network structure as the guide to detect community structures. LPA starts by giving each node a unique label, such as an integer or a letter, and in every iteration each node changes its label to the one carried by the largest number of its neighbors. If more than one label is contained by the same maximum number of its neighbors, one of them is selected at random. In this repeated process, dense groups of nodes converge to a common label, and nodes with the same label are grouped into the same community. The label updating rule is

li = arg max_l |N^l(vi)|,   (1)

where N^l(vi) represents the set of neighbors of vi with label l. For a weighted graph G, the weight of the edge between vi and vj is denoted as wij, and the label updating rule becomes

li = arg max_l Σ_{vj ∈ N^l(vi)} wij.   (2)

However, the algorithm cannot guarantee convergence after several iterations. When the algorithm uses synchronous updating of the node labels (during the t-th iteration, a node adopts its label based only on the labels of its neighbors at the (t − 1)-th iteration), oscillations will occur in bipartite or nearly bipartite graphs. As shown in Figure 1, the labels on the nodes oscillate between two values in a bipartite graph. Therefore, Raghavan et al. [11] proposed asynchronous updating, where a node in the t-th iteration updates its label based partly on the labels of neighbors that have already been updated in the current iteration and partly on the labels from the (t − 1)-th iteration of neighbors that have not yet been updated, in order to avoid the oscillation of labels. The design of the label propagation algorithm is simple and easy to understand. The process of the algorithm is presented in Algorithm 1. In large networks with a huge number of nodes, each run may produce a different division because of the randomness of the LPA algorithm. Among the candidate solutions, it is difficult to determine which is optimal, so the stability issue of LPA needs to be settled. The k-Shell Decomposition Method. There are many measures commonly used to calculate node importance, such as degree centrality [21], clustering coefficient centrality [22], and betweenness centrality [23]. Degree and clustering coefficient can only characterize local information of networks. The complexity of computing betweenness is very high due to the need to calculate shortest paths. Kitsak et al.
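A minimal asynchronous LPA in Python, close to the unweighted rule (1); it uses networkx only to hold the graph, and the random visiting order and random tie-break are exactly the sources of instability the paper targets.

```python
import random
from collections import Counter

import networkx as nx

def lpa(G, max_iter=100, seed=0):
    """Plain asynchronous label propagation (unweighted rule (1))."""
    rng = random.Random(seed)
    labels = {v: v for v in G}                    # each node starts with a unique label
    for _ in range(max_iter):
        nodes = list(G)
        rng.shuffle(nodes)                        # random update order (instability source 1)
        changed = False
        for v in nodes:
            counts = Counter(labels[u] for u in G[v])
            if not counts:
                continue
            best = max(counts.values())
            candidates = [l for l, c in counts.items() if c == best]
            new = rng.choice(candidates)          # random tie-break (instability source 2)
            if new != labels[v]:
                labels[v], changed = new, True
        if not changed:
            break
    return labels

G = nx.karate_club_graph()
print(Counter(lpa(G).values()))                   # community sizes found in one run
```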
[24] pointed out that nodes with a large k-shell value are very important for spreading dynamics on networks. A k-shell is a maximal connected subgraph of G in which every vertex's degree is at least k. The k-shell value of node v, denoted by ks(v), indicates that node v belongs to the k-shell but not to any (k + 1)-shell. The k-shell decomposition method is often used to identify the core and periphery of networks. It starts by removing all nodes with only one link, until no such nodes remain, and assigns them to the 1-shell. In the same manner, it recursively removes all nodes with degree 2 (or less), creating the 2-shell. The process continues, increasing k, until all nodes in the network have been assigned to a shell. The shells with higher indices lie in the network core. The k-shell decomposition method can be efficiently implemented with a linear time complexity of O(m), where m is the number of edges in the network. The k-shell decomposition method is illustrated in Figure 2 on a simple network that can be divided into three different shells. Our Method Although the asynchronous updating method can avoid the oscillation of labels, it still has many limitations. As nodes are not updated simultaneously, the updating order of nodes has a crucial impact on the stability and the quality of the results. The randomness of LPA in selecting one label when more than one label is contained by the maximum number of nodes also makes the results unstable. We analyze traditional LPA on a toy sample network in Figure 3 [25]. There are two communities in the network. The numbers inside the nodes represent their labels. Assume that v1, v2, and v3 already share the same label 2, while v4, v5, and v6 still have unique labels. If we update v4 first and it randomly chooses label 2 as its new label, and we then update v6 before v5, all nodes end up classified into the same community. On the other hand, if node v4 chooses label 6 and we then update node v5 before v6, the output will correspond to the correct communities. As seen from the above analysis, LPA is very sensitive to the node updating order and to the label choosing method. In this section we propose solutions to overcome the issues discussed above and improve the traditional LPA algorithm. The Basic Idea. In the new algorithm, we choose the asynchronous updating method to avoid the oscillation of labels shown in Figure 1. However, the randomly determined label updating order of nodes affects the stability of the algorithm. We should order the nodes based on their importance for the network, and the more important nodes should be updated earlier. A node with a large k-shell value is located in the core of the network. However, in a network there may be many nodes with the same k-shell value, so we cannot rank the nodes effectively by the k-shell value alone. In general, a node with more connections to neighbors located in the core of the network is more important for the network. Inspired by these previous studies, we propose a novel centrality measure that considers the k-shell value and degree of the node itself as well as its neighbors' k-shell values. The node influence NI(i) of node vi is defined as follows: where α is a tunable parameter from 0 to 1, which is used to adjust the effect of its neighbors on the centrality of node vi. We choose the node influence value as the measure of node importance, so we arrange the nodes in descending order of node influence value. The fixed node updating sequence makes the algorithm more stable.
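The k-shell values themselves can be obtained directly with networkx's core-number routine, which implements the same peeling procedure; since the exact node-influence formula is not recoverable from this text, the sketch below stops at the ingredients it combines (a node's own k-shell value and degree, and its neighbors' k-shell values) and uses a simple stand-in ordering.

```python
import networkx as nx

G = nx.karate_club_graph()

ks = nx.core_number(G)          # k-shell (core) number of every node, O(m) peeling
deg = dict(G.degree())

# One ingredient that the node influence combines with ks(i) and deg(i):
# the k-shell values of each node's neighbors.
neighbor_ks_sum = {v: sum(ks[u] for u in G[v]) for v in G}

# Stand-in ordering by (k-shell, degree); NIBLPA instead orders by NI(i),
# which also weights the neighbor term by the tunable parameter alpha.
order = sorted(G, key=lambda v: (ks[v], deg[v]), reverse=True)
print(order[:5])
```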
The other random factor causing the instability of LPA is that, when the number of labels with the maximum number of nodes is more than one, the algorithm randomly selects one of these labels to assign to the node. Instead of randomly selecting one of the labels contained by the maximum number of nodes, we improve the label updating formula using the information of the label influence. The label influence LI(l) of label l on node vi is computed as follows: The new label updating formula is changed as follows: where Lmax denotes the set of labels that are simultaneously contained by the maximum number of nodes. When multiple labels are simultaneously contained by the maximum number of nodes, we recalculate the value of the labels contained by the greatest number of nodes according to (5) and choose the label with the maximum value to assign to node vi. The Steps of the NIBLPA Algorithm. The main steps of NIBLPA include initialization, iteration, and community division; NIBLPA can be described as Algorithm 2. We implement NIBLPA on the toy sample network in Figure 3 with α = 1. The decimals outside the nodes are the node influence values. Using our method on this network, the node updating sequence is fixed in descending order of node influence (ties ranked by node IDs). The label propagation process is shown in Figure 4. Firstly, we update the label of node v1. We annotate v1 with a set of tuples (c, n, LI(c)), where c is a label contained by its neighbors, n represents the number of its neighbors having label c, and LI(c) is an optional value recalculated by (5) when multiple labels are contained by the maximum number of neighbors. As shown in Figure 4(a), v1 has three neighbors and they all have different labels from each other; the set of tuples is {(2, 1, 1.833), (3, 1, 1.667), (4, 1, 1.667)}, so we choose label 2 as its new label. Then node v3 is next. After the label updating of v1, two neighbors of v3 share label 2 and only one carries label 6, so we relabel v3 with label 2, as shown in Figure 4(b). The subsequent label propagations of v4 and v6 proceed in the same way as those of v1 and v3. Now only v2 and v5 are not yet updated and, as shown in Figure 4(c), all of their neighbors already carry the same labels as themselves, so we do not need to relabel them. After only one iteration using this method, we get the final solution, which contains two communities exactly matching the ground truth. Since there is no randomness, the outcome is deterministic and perfect. Time Complexity. The time complexity of the algorithm is estimated below; n is the number of nodes, and m is the number of edges. In Algorithm 2, step (1) is the initialization, which assigns a unique label to each node in the network, li(0) = i, and step (2) calculates the node influence value of each node and arranges the nodes in descending order of NI, storing the result in a vector. (1) The time complexity of the initialization for all nodes is O(n). (2) The time complexity of calculating the node influence values of all nodes is O(m), and the time complexity of ranking the nodes in descending order of NI is O(n log(n)). (3) Each iteration of label propagation consists of two parts: (i) the time complexity of the normal label updating is O(m); (ii) the time complexity of recalculating the labels based on (5), if necessary, is O(m). (4) The time complexity of assigning the nodes with the same label to a community is O(n). Phase (3) is repeated, so the time complexity of the whole algorithm is 2 × O(n) + (2 × T + 1) × O(m) + O(n log(n)), where T is the number of iterations and is a small integer.
Experimental Studies This section evaluates the effectiveness and the efficiency of our algorithm. We compare the performance of NIBLPA with LPA, KBLPA, and CNM, where KBLPA is an improved LPA variant that fixes the node updating sequence in descending order of k-shell value. All the simulations are carried out on a desktop PC with a Pentium Core2 Duo 2.8 GHz processor and 3.25 GB memory under Windows 7. We implemented our algorithm in the Microsoft Visual Studio 2008 environment. Datasets. In this section, we choose two types of synthetic networks and eight real networks for the experiments. According to the generation rules of Clique-Ring networks, we construct four Clique-Ring networks of different sizes. The parameters are shown in Table 1. LFR Benchmark Networks. LFR benchmark networks [27,28] are currently the most commonly used synthetic networks in community detection. Networks can be generated according to users' needs by changing the parameters listed in Table 2: the maximum degree, the exponent of the degree distribution, the exponent of the community size distribution, the mixing parameter for the topology, and the minimum and maximum community sizes. We generate six groups of LFR benchmark networks, and all the networks share a common maximum degree of 50. Each group contains nine networks with the mixing parameter mu ranging from 0.1 to 0.9, and the networks within a group also share their respective values of the remaining parameters. The other parameters are set to their default values. The details are shown in Table 3. Real Networks. We also run experiments on eight well-known real networks, including Zachary's karate club network, the Dolphins social network, and the American College Football network. The detailed information of each network is shown in Table 4. Evaluation Criteria. In this paper, we use modularity (Q) [2], the F-measure [29], and normalized mutual information (NMI) [30] as the evaluation criteria, which are currently widely used in measuring the performance of network clustering algorithms. Computing the F-measure and NMI requires knowing the true community structure of the network, while modularity does not. For synthetic networks, since the ground truth of the community structure is known, we use both the F-measure and NMI on the Clique-Ring networks and LFR benchmark networks to evaluate the results of community detection. Since the underlying class labels of most real networks are unknown, we can only adopt modularity as the evaluation criterion on some real networks and use both NMI and modularity on those with a known community structure. Modularity is defined as

Q = (1/(2m)) Σ_{i,j} [A_ij − (ki kj)/(2m)] δ(ci, cj),   (6)

where m represents the number of edges in the network; A is the adjacency matrix of the network (if node i and node j are directly connected, A_ij = 1; otherwise, A_ij = 0); ci and cj, respectively, denote the community label of node i and node j, and if ci = cj, then δ(ci, cj) = 1, else δ(ci, cj) = 0. The F-measure is defined as

F = 2 × precision × recall / (precision + recall),   (7)

where precision and recall are written as in (8): precision = |T ∩ C| / |C| and recall = |T ∩ C| / |T|, in which T is the set of node pairs (i, j) such that nodes i and j belong to the same class in the ground truth, C is the set of node pairs that belong to the same cluster generated by the evaluated algorithm, and T ∩ C represents the intersection of the node pairs of the ground truth and of the clustering result. Normalized mutual information (NMI) compares a community detection result generated by the evaluated algorithm with the ground truth community structure and is normalized with respect to the number of nodes n in the network. Experimental Results and Analysis.
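For readers who want to reproduce these criteria quickly, modularity and NMI are available in common Python libraries, and the pairwise F-measure is easy to assemble by hand; this is a generic illustration, not the authors' original Visual Studio implementation, and the example partition is a placeholder.

```python
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import modularity
from sklearn.metrics import normalized_mutual_info_score

def pair_f_measure(true_labels, pred_labels, nodes):
    """F-measure over node pairs placed together in the ground truth (T) vs the clustering (C)."""
    T = {p for p in combinations(nodes, 2) if true_labels[p[0]] == true_labels[p[1]]}
    C = {p for p in combinations(nodes, 2) if pred_labels[p[0]] == pred_labels[p[1]]}
    inter = len(T & C)
    precision, recall = inter / len(C), inter / len(T)
    return 2 * precision * recall / (precision + recall)

G = nx.karate_club_graph()
truth = {v: G.nodes[v]["club"] for v in G}    # ground-truth factions of the karate club
pred = {v: 0 if v < 17 else 1 for v in G}     # placeholder partition, not a real detection result

parts = [{v for v in G if pred[v] == c} for c in set(pred.values())]
print(modularity(G, parts))
print(normalized_mutual_info_score([truth[v] for v in G], [pred[v] for v in G]))
print(pair_f_measure(truth, pred, list(G)))
```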
In this section, the synthetic and real networks are used to test the effectiveness of NIBLPA in comparison with traditional LPA, KBLPA, and CNM. LPA and KBLPA are run 100 times and the average value is used as the result because of the randomness of these algorithms. We compare the stability of the algorithms by analyzing the fluctuation range of all the results. Table 5 shows the comparative results of the four algorithms on four different Clique-Ring networks, and for each instance the best results are presented in boldface. The F-measure and NMI of LPA and KBLPA are given in the form of the average value ± the maximum difference between any single result and the average value. The Experiments on Clique-Ring Networks. It can be seen from Table 5 that on the Clique-Ring networks, which have a special structure, NIBLPA exactly detects the correct communities, and CNM finds the right community structure on the first three networks. However, on network C4 the result of CNM is much worse than the others because modularity has the resolution limit problem. The average F-measure of the KBLPA algorithm is the lowest among LPA, KBLPA, and NIBLPA on the four networks, and the average NMI of KBLPA is the lowest on most of the four networks except C4. These results illustrate that fixing the node sequence by descending k-shell value at each step of label propagation does not by itself yield good results. The instability of KBLPA is caused by the randomness of selecting a label when multiple labels are simultaneously contained by the greatest number of nodes. The Experiments on LFR Benchmark Networks. The twelve panels in Figure 6 show the NMI and F-measure of the four algorithms on the six groups of LFR benchmark networks (N1-N6). The abscissa represents the mixing parameter mu from 0.1 to 0.9. The ordinate in the left panels is the NMI of the results, and the ordinate in the right panels is the F-measure. The twelve panels in Figure 6 show that, with the increase of mu, the network structure becomes more and more complex and the four algorithms cannot detect the community structure effectively. Especially when mu is larger than 0.5, the NMI and F-measure decrease quickly. But generally, the performance of NIBLPA is better than that of the other three algorithms. Although NIBLPA does not guarantee the best performance, it returns stable, unique, and satisfactory results. It can also be seen in Figure 6 that the fluctuation range of the NMI and F-measure of the LPA algorithm is large. KBLPA is also relatively stable, but its results are worse than those of LPA and NIBLPA. On these complex networks, the CNM algorithm cannot detect the network structure effectively, and it generally finds fewer communities than the ground truth. The Experiments on Different Sizes of Networks. In order to compare the time efficiency of the algorithms, we generate 10 LFR benchmark networks whose sizes range from 1,000 to 10,000 nodes, with the other parameters the same (average degree 10, maximum degree 50, minimum community size 10, maximum community size 50, and mu = 0.1). The time consumption of the four algorithms on the 10 LFR benchmark networks is shown in Figure 7. From Figure 7, it is observed that the four algorithms use more and more time as the size of the networks increases, and CNM uses the most time. When the number of nodes is larger than 5000, CNM cannot obtain the community structure because of the limit of computer memory. From Figure 7(b), one can note that when the number of nodes is greater than 7000, the time consumption of NIBLPA is less than that of LPA.
To some extent, we can say NIBLPA is more suitable for community detection on large-scale networks. The Experiments on Real Networks. The eight real-world networks shown in Table 4 are commonly employed in the community detection literature, and the first four networks have known ground truth community structures. Therefore, we compare the modularity and the normalized mutual information NMI on the first four networks and only compare the modularity on the last four networks. Table 6 shows the experimental results on the eight real networks, and for each instance the best Q and NMI are presented in boldface. It can be seen from Table 6 that on all the real networks besides R7 (Blog) and R8 (PGP), the modularity of NIBLPA is higher than that of the other three algorithms. At the same time, the NMI of NIBLPA on the first four networks is the best. The stability of KBLPA is better than that of LPA, but the modularity and NMI of KBLPA are worse than those of LPA on almost all of the networks. On the large PGP network, CNM cannot detect the community structure. In general, NIBLPA obtains better and more stable results than the other three algorithms. Instance Analysis. We compare the community structure detected by NIBLPA when NMI achieves its maximum with the true community structure of Dolphins. Figure 8(b) is a community detection result of NIBLPA on Dolphins. Comparing these two figures, the assignment of DN63 and SN90 by NIBLPA is inconsistent with the real structure. From the topology of Dolphins, we can see that DN63 has two adjacent nodes that belong, respectively, to the two communities; DN63 has five neighbors, and the NIBLPA algorithm assigns it to the community to which most of its neighbors belong. The modularity of the Dolphins real community structure is lower than that of the result of NIBLPA, which suggests that the community division produced by NIBLPA is a reasonable result. Parameter Selection. There is only one parameter in the NIBLPA algorithm, the tunable parameter α. In order to analyze its impact, we run NIBLPA with different values of α on synthetic networks and compare the NMI to analyze the effect of the parameter on the algorithm. In this way, we can investigate under which α NIBLPA achieves the best results. We generate five LFR benchmark networks with the average degree ranging from 10 to 50, and all the networks share the common parameters of 1000 nodes, a maximum degree of 50, a minimum community size of 10, a maximum community size of 50, and mu = 0.1. Figure 9 shows the results of NIBLPA on these networks. As can be seen in Figure 9, under different values of α the value of NMI changes considerably. However, for each network there is an optimal α under which the NIBLPA method achieves the largest NMI. Moreover, on each network, the first local maximum is generally the best result. Conclusion This paper presents a node influence based label propagation algorithm for community detection in networks. The algorithm first calculates the node influence value for each node and ranks the nodes in descending order of node influence value. During each label updating process, when more than one label is contained by the maximum number of nodes, we introduce the label influence value into the label updating formula to improve stability. After the algorithm converges, nodes with the same label are divided into a community. This algorithm maintains the advantages of the original LPA algorithm. Moreover, it can get stable community detection results by avoiding the randomness of label propagation.
Through experimental studies on synthetic and real networks, we demonstrate that the proposed algorithm performs better than some of the current representative algorithms.
5,864.6
2014-06-04T00:00:00.000
[ "Computer Science" ]
Young’s Modulus Measurement of Metal Wires Using FBG Sensor A novel Young's modulus measurement scheme based on fiber Bragg gratings (FBGs) is proposed and demonstrated experimentally. In our method, a universal formula relating the Bragg wavelength shift to Young's modulus is derived, and strain is applied to the metal wires using the static stretching method. The Young's moduli of copper wires, aluminum wires, nickel wires, and tungsten wires are measured separately. Experimental results show that the FBG sensor exhibits high measurement accuracy, and the measurement errors relative to the nominal values are less than 1.0%. The feasibility of the FBG test method is confirmed by comparing it with the traditional charge coupled device (CCD) imaging method. The proposed method could find potential application in material selection, especially where the size of metal wires is very small and strain gauges cannot be used. Introduction The fiber Bragg grating (FBG) sensor has been used in a large number of sensing applications due to its high sensitivity, light weight, small size, immunity to external electromagnetic disturbance, and ability to function in harsh environments [1]. The basic principle of the FBG sensor is that the Bragg wavelength shifts with changes of external environment parameters due to the thermo-optic effect and the strain effect. FBG sensors exhibit all of the benefits associated with other optical fiber sensors, especially their ability to be multiplexed [2,3]. Many different FBG sensors have been developed for measuring strain [4], temperature [5,6], pressure [7], refractive index [8,9], curvature [10], pressure [11], and shock stress [12]. Young's modulus is an important physical parameter of solid materials, which describes a material's resistance to elastic deformation. Accurately measuring the Young's modulus of solid materials is crucial to material selection for civil engineering, machine design, and the development of new materials. Current methods for the measurement of Young's modulus can be divided into optical methods and electrical methods [13,14]. The optical lever, charge coupled device (CCD) imaging systems, and position sensitive detectors (PSD) are often used in optical methods, while strain gauges and Hall position sensors are mainly applied in electrical methods at present. Recently, an engineering measurement of Young's modulus change based on a 3-point bending test was reported. In that measurement, Young's modulus is determined from the deformation of the specimen by using three laser displacement meters with a precision of 1 µm. The strains at 30% and 5% of the maximum load are used for calculating Young's modulus [15]. However, this method requires a relatively complex system. In our previous work, we demonstrated measuring the Young's modulus of metal beams by using fiber Bragg gratings (FBGs) based on the three-point bend testing method [16]. In this paper, we propose and demonstrate experimentally a high-accuracy Young's modulus measurement of metal wires by using the FBG sensor based on the static stretching method. Besides the different testing methods, another important difference is that the test object is a small metal wire with a diameter on the order of 0.1 mm, for which strain gauges cannot be used.
The experimental results obtained with the FBG sensor exhibit high measurement accuracy, and the measurement errors relative to the nominal values are less than 1.0%. The feasibility of the FBG test method is also confirmed by comparing it with the traditional CCD imaging method. Experimental setup and measurement principle The scheme of the experiment is shown in Fig. 1. The light from an amplified spontaneous emission (ASE) broadband light source (20 dBm, 1525 nm-1565 nm) is launched into the FBG via an optical circulator. The reflected spectrum of the FBG, which carries the tensile deformation information of the wire, is then recorded by an optical spectrum analyzer (OSA, YOKOGAWA, AQ6370C) with a resolution of 0.02 nm. Because the diameter of the wire is only on the order of 0.1 mm, the conventional FBG attachment method is unsuitable for our measurement [11]. In order to ensure that the strain of the wire is transferred to the FBG as fully as possible, the two ends of the FBG are fixed on the wire using two heat-melt tubes (HMTs) with a commercial splicer (Fujikura FSM-60S). One end of the wire to be measured is fixed, and the other end carries a tray to which scales can be added or removed to realize static stretching. It is well known that the Bragg wavelength of an FBG satisfies [16]

λ_B = 2 n_eff Λ,   (1)

where λ_B is the Bragg wavelength, n_eff is the effective refractive index of the core of the FBG, and Λ is the period of the FBG. According to this equation, the Bragg wavelength shifts when the period or the effective refractive index changes. All the measurements are carried out in an air-conditioned laboratory where the temperature is set at 25 ℃. Consequently, the impact of temperature change on the Bragg wavelength shift is negligible. The shift of the Bragg wavelength with the axial strain of the FBG can then be expressed as [16]

Δλ_B / λ_B = {1 − (n_eff^2 / 2) [P12 − ν (P11 + P12)]} ε_zB,   (2)

which can be written compactly as

Δλ_B / λ_B = k_z ε_zB,   (3)

where P11 and P12 are the Pockels coefficients of the photoelastic tensor, of which the values are 0.12 and 0.27, respectively [18]. The Poisson coefficient ν of the optical fiber is typically 0.17, and the typical strain sensitivity k_z is 0.784 [19]. ε_zB is the strain loaded on the FBG. It is obvious from (3) that the relative shift of the Bragg wavelength is linearly related to the axial strain. When an axial stress is imposed on the wire by adding scales to the tray, the tensile elongation of the wire is transmitted to the FBG. Therefore, the axial strain of the wire can be obtained by measuring the relative Bragg wavelength shift. According to Hooke's law, when the wire undergoes elastic deformation under an external force, the system satisfies

ε_z = F / (S E),   with S = π d^2 / 4,   (4)

where S, ε_z, and E are the cross-sectional area, axial strain, and Young's modulus of the wire, respectively; d is the diameter of the wire, and F is the axial force applied on the wire. It must be pointed out that the gravity force of the scales is shared by the wire and the FBG when the diameter of the tested metal wire is comparable to that of the FBG, and the axial force applied on the FBG is half of the gravity force. In other words, the axial force F in (4) is approximately equal to half of the gravity force of the scales. Because the two ends of the FBG are fixed on the wire by the HMTs, the tensile elongations of the FBG and the wire between the two HMTs are equal. This indicates that the strain loaded on the FBG is equal to that on the wire, i.e. ε_zB = ε_z.
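As a quick numerical check of the strain-sensitivity factor appearing in (2)-(3), the snippet below evaluates k_z from the quoted photoelastic constants; the effective refractive index is not stated in the text, so a typical value of about 1.456 for a silica fiber core is assumed.

```python
# Strain sensitivity k_z = 1 - (n_eff^2 / 2) * [P12 - nu * (P11 + P12)]
P11, P12 = 0.12, 0.27   # Pockels coefficients quoted in the text
NU = 0.17               # Poisson coefficient of the fiber
N_EFF = 1.456           # assumed effective refractive index (not given in the text)

p_e = (N_EFF ** 2 / 2) * (P12 - NU * (P11 + P12))
k_z = 1 - p_e
print(round(k_z, 3))    # about 0.784, consistent with the value quoted from [19]
```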
Therefore, Young's modulus of the wire can be expressed as

E = 4 k_z λ_B F / (π d^2 Δλ_B).   (5)

In the experiment, once the ratio (or its reciprocal) of the applied axial force F and the shift of the Bragg wavelength Δλ_B is measured, the Young's modulus of the wire can be calculated. Young's modulus measurement of copper wires The shift of the FBG reflected spectrum with the applied axial force was first measured without the wire. The laboratory temperature was kept at 25 ℃, controlled by an air conditioner, during the whole experimental process; therefore, the effect of temperature on the shift of the Bragg wavelength Δλ_B can be neglected. The corresponding results are shown in Fig. 2. Young's modulus of a copper wire is then measured, and the feasibility and repeatability of the method are also experimentally validated. In the experiments, a section of 1 m long copper wire with a diameter of 0.18 mm is used. The nominal Young's modulus in the data sheet is 110.00 GPa. The Bragg wavelength of the FBG is 1549.934 nm at a room temperature of 25 ℃, the reflectivity is 92.5%, and the spectral width (3 dB) is about 0.18 nm. The ratio of the applied axial force F (half of the gravity force of the scales) and the shift of the Bragg wavelength Δλ_B is measured by applying an axial force F from 0 N to 1.86 N in steps of about 0.30 N. The results are plotted in Fig. 3. The measured FBG reflected spectra show a uniform spacing change with an increase in the applied axial force [see Fig. 3(a)], which agrees well with the theoretical analysis. To reduce the measurement error, we take the average of the elongation and restoration direction (adding and reducing scales) results as the final results. The averaged Bragg wavelength shifts with the loaded axial force are shown in Fig. 3(b). Obviously, the Bragg wavelength shift Δλ_B is proportional to the applied axial force F. A linear fit of the data gives a slope of 0.349, with an uncertainty of 0.012. Young's modulus of the copper wire can then be obtained by substituting the slope value into (5), which gives 110.35 GPa. The relative error between our measured value and the nominal Young's modulus is 0.32%. As a comparison, Young's modulus is measured by a CCD imaging method (ZW-YM-1; the magnification of the reading microscope is 25 times, the division value is 0.05 mm, and the line pair of the CCD is 420 lines/mm) under the same conditions. The Young's modulus of the copper wire measured in this way is 113.68 GPa, and the relative error to the nominal value is 3.34%. The relative difference between the two methods is 3.02%, which indicates that the FBG test method is feasible. In order to evaluate the stability and repeatability of the proposed method, we performed 10 experimental measurements on copper wires by adding or reducing scales in the tray. The distribution of the relative errors is plotted in Fig. 4. The solid dots in Fig. 4 represent the average value of the adding-scales process and the reducing-scales procedure. It can be seen from Fig. 4 that the mean relative error measured by the FBG sensor is 0.41%, and the smallest value reaches 0.27%. In the following experimental results, all the measured shifts of the FBG reflected spectra with the change of the applied axial force correspond to the elongation direction, and the Bragg wavelength shifts with the loaded axial force are the average values of the results in the two directions.
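A small sketch of the data reduction implied by (5): fit the Bragg wavelength shift against the applied axial force and convert the slope into Young's modulus. The force-wavelength pairs below are placeholders for illustration, not the measured values reported above.

```python
import numpy as np

K_Z = 0.784                 # strain sensitivity of the FBG
LAMBDA_B = 1549.934e-9      # unstrained Bragg wavelength (m)
D = 0.18e-3                 # wire diameter (m)

# Placeholder data: axial force (N) and Bragg wavelength shift (m)
F = np.array([0.0, 0.31, 0.62, 0.93, 1.24, 1.55, 1.86])
dlam = np.array([0.00, 0.085, 0.17, 0.26, 0.34, 0.43, 0.51]) * 1e-9

slope, _ = np.polyfit(F, dlam, 1)                    # d(lambda_B)/dF in m/N
E = 4 * K_Z * LAMBDA_B / (np.pi * D ** 2 * slope)    # equation (5) with F/dlam replaced by 1/slope
print(E / 1e9, "GPa")
```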
Measuring Young's modulus of other metal wires In order to verify that the proposed scheme can be used to measure the Young's modulus of various metal wires, we separately measured the Young's modulus of aluminum wires, nickel wires, and tungsten wires using the proposed FBG method. In the experiments, different axial force ranges were loaded for the different metal wires to ensure a sufficient wavelength shift of the FBG. The tungsten wire showed the least deformation, and thus the largest axial force range, from 0 N to 3.60 N, was loaded on the tungsten wire in the experiment. The nominal Young's moduli of the aluminum wires, nickel wires, and tungsten wires in the data sheets are 69.00 GPa, 210.00 GPa, and 340.00 GPa, respectively. The measured results are shown in Fig. 5. The FBG reflection spectra in Figs. 5(a), 5(b), and 5(c) correspond to the aluminum wires, nickel wires, and tungsten wires, respectively. The corresponding Bragg wavelengths of the FBG in response to the variation of the loaded axial force are shown in Fig. 5(d). The measured FBG reflected spectra for the three kinds of wires also show a uniform spacing change with an increase in the applied axial force, which agrees well with the theoretical analysis. The slopes of the Bragg wavelength response curves for the aluminum wire, nickel wire, and tungsten wire are 0.61 [red dotted line in Fig. 5(d)], 0.31 [green dotted line in Fig. 5(d)], and 0.12 [blue dotted line in Fig. 5(d)], respectively. It must be pointed out that a relatively expensive OSA is used for FBG Bragg wavelength demodulation in our experiments, but it can be replaced by other lower-cost interrogation systems, such as fiber Fabry-Pérot interferometers [20], wavelength division couplers [21], or unbalanced Mach-Zehnder fiber interferometer methods [12], which would make the FBG sensor more suitable for engineering application in material selection. Conclusions In conclusion, a universal formula based on the FBG sensor has been derived to measure the Young's modulus of various metal wires. The Young's moduli of copper wires, aluminum wires, nickel wires, and tungsten wires were measured using the FBG sensor according to the formula. Compared with the CCD imaging method, the FBG method shows higher precision. All the relative errors of the values measured by the FBG sensor with respect to the nominal values are less than 1.0%. The stability and repeatability of the proposed method have also been verified. The proposed scheme, with its excellent performance, may find extensive application in material selection in the fields of civil engineering, machine design, and the development of new materials.
2,987.6
2019-01-16T00:00:00.000
[ "Engineering", "Physics", "Materials Science" ]
Contribution of the Potassium Channels KV1.3 and KCa3.1 to Smooth Muscle Cell Proliferation in Growing Collateral Arteries. Collateral artery growth (arteriogenesis) involves the proliferation of vascular endothelial cells (ECs) and smooth muscle cells (SMCs). Whereas the proliferation of ECs is directly related to shear stress, the driving force for arteriogenesis, little is known about the mechanisms of SMC proliferation. Here we investigated the functional relevance of the potassium channels KV1.3 and KCa3.1 for SMC proliferation in arteriogenesis. Employing a murine hindlimb model of arteriogenesis, we found that blocking KV1.3 with PAP-1 or KCa3.1. with TRAM-34, both interfered with reperfusion recovery after femoral artery ligation as shown by Laser-Doppler Imaging. However, only treatment with PAP-1 resulted in a reduced SMC proliferation. qRT-PCR results revealed an impaired downregulation of α smooth muscle-actin (αSM-actin) and a repressed expression of fibroblast growth factor receptor 1 (Fgfr1) and platelet derived growth factor receptor b (Pdgfrb) in growing collaterals in vivo and in primary murine arterial SMCs in vitro under KV1.3. blockade, but not when KCa3.1 was blocked. Moreover, treatment with PAP-1 impaired the mRNA expression of the cell cycle regulator early growth response-1 (Egr1) in vivo and in vitro. Together, these data indicate that KV1.3 but not KCa3.1 contributes to SMC proliferation in arteriogenesis. Introduction Arteriogenesis, which is defined as the growth of pre-existing arteriolar connections into functional arteries compensating for the loss of an artery due to occlusion [1], particularly involves the proliferation of vascular endothelial cells (ECs) and smooth muscle cells (SMCs). The driving force for arteriogenesis is increased fluid shear stress [2,3]. This mechanical force, which can be sensed directly by ECs, has recently been shown to be linked to local activation of collateral ECs. By mediating the release of extracellular RNA (eRNA) from ECs, eRNA promotes the binding of vascular endothelial growth factor A (VEGFA) to VEGF receptor 2 (VEGFR2) [4], thereby promoting local vascular EC proliferation Bavarian Animal Care and Use Committee (ethical approval code ROB-55.2-1-54-2532-73-12 and ROB-55.2Vet-2532.Vet_02-17-99) and carried out according to the guidelines of the German law for protection of animal life. Mice at the age of 6 to 10 weeks were anesthetized with a combination of 0.5 mg/kg medetomidine (Pfister Pharma), 5 mg/kg midazolam (Ratiopharm GmbH), and 0.05 mg/kg fentanyl (CuraMED Pharma). Arteriogenesis was induced by right femoral artery ligation (FAL, occlusion (occ)), whereas the left femoral artery was sham operated (Figure 1) as previously described in [30]. Photographs of superficial collateral arteries in mouse adductor muscles. Photographs were taken 7 days after induction of arteriogenesis by femoral artery ligation (left picture) or sham operation (right picture). Mice were perfused with latex to better visualize collateral arteries. Pre-existing collaterals appear very fine and straight (arrows, right picture). Seven days after induction of arteriogenesis, grown collateral arteries show a typical corkscrew formation with increased vascular caliber size (arrows, left picture). 
Scale bar 5 mm To block potassium channels, mice were treated either with the selective K V 1.3 channel blocker (5-(4-phenoxybutoxy)psoralen (PAP-1, 40 mg/kg/d, intraperitoneally (i.p), Sigma-Aldrich) [31], or the selective K Ca 3.1 channel blocker TRAM-34 (120 mg/kg/d, i.p., Alomone Labs) [27], dissolved in peanut oil, at doses previously described [27,31]. The treatments started 4 h before the surgical procedure. Moreover, to uphold constant blood levels of the blockers, mice received two doses per day, one in the morning and one in the afternoon. When mice were treated with BrdU (Sigma-Aldrich), they received a single dose (1.25 mg/d dissolved in phosphate buffered saline (PBS), i.p.) starting directly after the surgical procedure. Laser Doppler Perfusion Measurements and Tissue Sampling The laser Doppler perfusion measurements were performed as described in [4]. In brief, hindlimb perfusion was measured using the laser Doppler imaging technique (Moor LDI2-IR, LDI 5061 and Moor Software 3.01, Moor Instruments) under temperature-controlled conditions (37 • C), and perfusion was calculated by right to left (occlusion (occ) to sham) flux ratios. Prior to tissue sampling for (immuno-) histology, mice were perfused with an adenosine buffer (1% adenosine, 5% bovine serum albumin (BSA), both from Sigma-Aldrich dissolved in PBS, PAN Biotech, pH 7.4) for maximal vasodilation followed by perfusion with 3% paraformaldehyde (PFA, Merck) dissolved in PBS, pH 7.4, for cryoconservation, or 4% PFA for paraffin embedding [2]. For qRT-PCR analyses, mice were perfused with latex flexible compound (Chicago Latex) to visualize superficial collateral arteries (see also Figure 1) for dissection. After isolation, superficial collateral arteries were snap frozen on dry ice and stored at −80 • C until further investigations [9]. Cell Culture Mouse primary artery smooth muscle cells (catalog number C57-6081, CellBiologics) were cultured in a SMC growth medium (SMCGM, CellBiologics) containing insulin and the growth factors fibroblast growth factor 2 (FGF-2) and epidermal growth factor (EGF) together with 20% fetal calf serum (FCS, PAN). For serum starvation, cells were cultured in Dulbecco's modified Eagle´s medium (DMEM, Thermo Fisher Scientific) with 1% FCS for 24 h. Thereafter, negative controls were stimulated with medium containing 2% FCS, positive controls with 10% FCS. Histology, Immunohistology, Proliferation Assay, and Immunocytochemistry Giemsa staining on paraffin fixed tissue samples was performed according to standard procedures, and slices were analyzed using an Axioskop 40 microscope (Carl Zeiss AG). BrdU staining of paraffin fixed tissue sections was performed with a BrdU detection kit (BD Pharmingen) according to the manufacturer´s procedure using the same microscope for evaluating the tissue sections. To investigate the proliferation of mouse primary artery SMCs, a BrdU proliferation assay kit (Roche) was used according to the manufacturer´s instructions. In brief, mouse primary artery SMCs were seeded in a 96-well plate overnight, and after serum starvation in DMEM containing 1% FCS for 24 h, the mouse primary artery SMCs were cultured in DMEM with 10% FCS and treated with or without PAP-1 or TRAM-34, respectively, together with 10 mM BrdU. Cell proliferation was assessed by colorimetry with an Infinite F200 ELISA reader (TECAN). 
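The perfusion readout described above reduces to a right-to-left (occluded/sham) flux ratio per animal and time point; the minimal helper below simply formalizes that division. The function name, the data layout (one flux value per measurement day), and the example numbers are hypothetical and not taken from the study.

```python
from typing import List

def perfusion_recovery(occ_flux: List[float], sham_flux: List[float]) -> List[float]:
    """Right-to-left (occluded/sham) laser Doppler flux ratios per time point.

    Each list holds one flux reading per measurement day for one mouse;
    a ratio approaching 1.0 indicates full reperfusion recovery.
    """
    if len(occ_flux) != len(sham_flux):
        raise ValueError("occluded and sham series must have equal length")
    return [occ / sham for occ, sham in zip(occ_flux, sham_flux)]

# Example with made-up numbers: pre-ligation, day 3, and day 7 after ligation
print(perfusion_recovery([480.0, 160.0, 310.0], [500.0, 495.0, 505.0]))
```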
For immunofluorescence staining, cryofixed tissue sections (10 µm) were stained with a rabbit anti-K V 1.3 (catalog number APC-101) or a rabbit anti-K Ca 3.1 (catalog number APC-064) antibody (both from Alomone Labs), followed by a goat anti-rabbit IgG Alexa Fluor 488-conjugated antibody (catalog number 711-545-153, Jackson ImmunoResearch), together with a Cy3-conjugated mouse anti-αSM-actin antibody (catalog number C6198, Sigma-Aldrich) and an Alexa Fluor 647-conjugated rat anti-CD31 antibody (catalog number 102515, BioLegend), followed by DAPI counterstaining (catalog number 62248, Thermo Fisher Scientific). Images were taken with an Axio Imager 2 fluorescence microscope equipped with an Axiocam ICc 5 camera and Axiovert software (Carl Zeiss), or with an LSM 880 confocal laser scanning microscope equipped with an Airyscan module (Carl Zeiss) using ZEN black software for image acquisition. Imaging analysis of K V 1.3 or K Ca 3.1 expression in αSM-actin-positive SMCs or CD31-positive ECs was performed using the ZEN blue software. For the colocalization analysis, the ZEN colocalization tool was used (Carl Zeiss AG). The three-dimensional (3D) projection surface reconstructions of the images were done using the Imaris software (Bitplane). Statistical Analyses Statistical analyses were performed using the GraphPad software PRISM6. All data are stated as means ± SEM. Results were tested for normality, and statistical analyses were performed as specified in the figure legends. Results were considered statistically significant at p ≤ 0.05. K V 1.3 and K Ca 3.1 are Localized in Collateral Arteries Employing a murine hindlimb model of arteriogenesis, we investigated whether the potassium channels K V 1.3 and K Ca 3.1 were expressed in adductor collateral arteries. Immunofluorescence imaging revealed that K V 1.3 and K Ca 3.1 labelling strongly colocalized with αSM-actin, a marker for SMCs, but weakly with CD31, which is a marker for ECs (Figures 2 and 3). [Figure legend fragment: Tissue sections were stained with an antibody against K Ca 3.1 (green), together with the SMC marker αSM-actin (red), the EC marker CD31 (grey), and DAPI (blue); (b,c) scatterplots showing the colocalization analysis. Quadrant 4 (lower left) represents pixels with low intensity levels in both channels, green and red (b) or green and grey (c); these pixels are referred to as background and are not taken into consideration for the colocalization analysis. Quadrant 1 represents pixels with high green and low red intensities, and Quadrant 2 represents pixels with high red and low green intensities. Quadrant 3 represents pixels with high intensity levels in both green and red (b) or green and grey (c); these pixels are considered to be colocalized. A bright-field image is also displayed. Scale bar 20 µm.]
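The quadrant scheme used for the colocalization scatterplots (background in Quadrant 4, single-positive pixels in Quadrants 1 and 2, colocalized pixels in Quadrant 3) can be restated as a small NumPy routine. The original analysis used the Zeiss ZEN colocalization tool, so the function below, its fixed thresholds, and the toy images are only an illustrative stand-in, not the software actually employed.

```python
import numpy as np

def quadrant_colocalization(green: np.ndarray, red: np.ndarray,
                            green_thr: float, red_thr: float) -> dict:
    """Classify pixel pairs into the four scatterplot quadrants.

    Quadrant 4 (low/low) is background and excluded; Quadrant 3 (high/high)
    counts as colocalized, mirroring the analysis described in the legend.
    Thresholds are arbitrary here; ZEN derives them interactively.
    """
    g, r = green.ravel(), red.ravel()
    q4 = (g <= green_thr) & (r <= red_thr)   # background
    q1 = (g > green_thr) & (r <= red_thr)    # green only
    q2 = (g <= green_thr) & (r > red_thr)    # red only
    q3 = (g > green_thr) & (r > red_thr)     # colocalized
    analyzed = np.count_nonzero(~q4)
    return {
        "q1": int(q1.sum()), "q2": int(q2.sum()), "q3": int(q3.sum()),
        "colocalized_fraction": q3.sum() / analyzed if analyzed else 0.0,
    }

# Toy 8-bit channels for demonstration only
rng = np.random.default_rng(0)
print(quadrant_colocalization(rng.integers(0, 256, (64, 64)),
                              rng.integers(0, 256, (64, 64)), 40, 40))
```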
Bright field image is also displayed. (c) 3D projection surface rendering is showing the localization of the K V 1.3 with the labelling CD 31 and αSM-actin on the panel (c) right lower position. Blockade of K V 1.3 But Not of K Ca 3.1 Impaired Arteriogenesis by Inhibiting Collateral SMC Proliferation To investigate the functional relevance of the potassium channels for arteriogenesis, the K V 1.3 channel was blocked with PAP-1, and the K Ca 3.1 channel with TRAM-34. The laser Doppler perfusion measurements revealed that both treatments significantly interfered with reperfusion recovery after femoral artery ligation (Figure 4). To quantify the effects of channel blockade on vascular cell proliferation, we performed immunohistochemical analyses of the proliferation marker BrdU in transversal sections of collateral arteries at day 7 after induction of arteriogenesis. The results showed that both treatment with the K V 1.3 blocker PAP-1 and with K Ca 3.1 blocker TRAM-34 did not interfered with EC proliferation in growing collaterals. However, the PAP-1 treatment significantly reduced SMC proliferation, an effect that was not observed when mice were treated with TRAM-34 (Figure 5a-c). During the transition from the synthetic to the proliferative phase, the mRNA expression level of αSM-actin has been shown to be downregulated 12h after induction of arteriogenesis [9], and confirmed in the present study by qRT-PCR analyses (Figure 5d). Interestingly, in TRAM-34 treated mice, the expression level of αSM-actin was comparable to that of the control mice at 12 h after induction of arteriogenesis, however, it was significantly increased in PAP-1 treated mice at the same point in time (Figure 5e). K V 1.3 and K Ca 3.1 Blockade Inhibits Mouse Primary Artery SMCs Proliferation In Vitro To gain further insights into the role of the potassium channels on SMC proliferation, we performed in vitro investigations on mouse primary artery SMCs. Immunocytological analyses showed that K V 1.3, as well as K Ca 3.1, are localized perinuclear in mouse primary artery SMCs. Somehow, weaker signals were seen in the cytoplasm and at the cytoplasmic membrane ( Figure 6). To analyze the effects of K V 1.3 and K Ca 3.1 blockade on SMC proliferation in in vitro mouse, primary artery SMCs were treated with different concentrations of PAP-1 (0.1, 1, and 5 µM) or TRAM-34 (10, 100, and 500 µM), respectively. Interestingly, in in vitro, both PAP-1 and TRAM-34 treatments interfered with SMC proliferation, as shown by the BrdU incorporation assay (Figure 7). K V 1.3 Blockade Repressed the Expression of FGFR-1, PDGFR-ß, and Egr1 in Mouse Primary Artery SMCs In Vitro and During Arteriogenesis In Vivo Receptor tyrosine kinases such as FGFR-1 and PDGFR-ß are well described for their relevance in SMC proliferation. Our qRT-PCR results on the expression level of Fgfr1 and Pdgfrb provided evidence that treatment of mouse primary artery SMCs with the K V 1.3 channel blocker PAP-1 significantly interfered with the expression of both growth factor receptors, whereas the treatment with the K Ca 3.1 channel blocker TRAM-34 showed no significant influence (Figure 8a). Moreover, in collateral arteries 12 h after induction of arteriogenesis, a significant downregulation was evident for both Fgfr1 and Pdgfrb when K V 1.3 was blocked with PAP-1, while treatment with the K Ca 3.1 blocker TRAM-34 showed no significant effect (Figure 8b). 
To further investigate the relevance of the K V 1.3 potassium channel for SMC proliferation in vitro and during arteriogenesis in vivo, qRT-PCR analyses were performed on the cell cycle regulator Egr1. Our results evidenced that blocking K V 1.3 with PAP-1 in vitro, as well during arteriogenesis in vivo, significantly interfered with the mRNA expression of Egr1 (Figure 8c,d). Discussion The process of arteriogenesis mainly involves the proliferation of ECs and SMCs. Whereas the mechanisms relevant for EC proliferation are relatively well defined, little is known about the mechanism involved in SMC proliferation. Using a murine hindlimb model of arteriogenesis, here, we report that the voltage-gated potassium channel K V 1.3, but not the Ca 2+ -gated potassium channel K Ca 3.1, is of importance for SMC proliferation in collateral arteries. Selectively blocking K V 1.3 with PAP-1 resulted in a reduced perfusion recovery (Figure 4), which was associated with reduced numbers of proliferating SMCs ( Figure 5). More in-depth in vivo and in vitro studies demonstrated a role for K V 1.3 in the expression of the tyrosine kinase receptors Fgfr1 and Pdgfrb, as well as of the transcriptional regulator Egr1 (Figure 8), all relevant for proper SMC proliferation in arteriogenesis. Elevated shear stress is the driving force for arteriogenesis [2,3]. This mechanical stress can be sensed by ECs but not by SMCs. The mechanisms how this mechanical force is translated into biochemical signals resulting in endothelial proliferation have been described in [4]. However, little is known about the mechanisms triggering SMC proliferation in arteriogenesis. To address this point, we decided to study the relevance of the potassium channels K v 1.3 and K Ca 3.1, which have been shown to play a role in smooth muscle proliferation in other experimental settings and processes [20,21]. Our immunohistological investigations demonstrated that both K v 1.3 and K Ca 3.1 are mainly localized in SMCs of collateral arteries of murine hindlimbs (Figures 2 and 3). To investigate the relevance of K V 1.3 for arteriogenesis, we performed blocking studies employing PAP-1, described as a selective K V 1.3 blocker [31]. The laser Doppler perfusion measurements evidenced a significant reduction in perfusion recovery when mice were treated with PAP-1 ( Figure 4). Moreover, our histological results showed a significant reduction of proliferating SMCs but not ECs in growing collateral arteries ( Figure 5). Proliferating SMCs are characterized by a reduced expression of the contractile marker αSM-actin a [34], which has been demonstrated by our group for the process of arteriogenesis [9] and confirmed in the present study ( Figure 5). Blocking K V 1.3 during collateral artery growth, however, interfered with the downregulation of αSM-actin ( Figure 5). To investigate whether the reduced proliferation rate of SMCs in arteriogenesis was directly related to K V 1.3 blockade in SMCs, but not in other cells such as leukocytes, which also play an important role in arteriogenesis [18,35,36], we analyzed the proliferative behavior of primary murine SMCs under K V 1.3 blockade in vitro. Our results revealed a correlation between the concentration of PAP-1 in culture medium and the inhibition of mouse primary artery SMC proliferation (Figure 7), attributing K V 1.3 with a role in SMC proliferation. 
Together, our data suggest a direct correlation between the blockade of the potassium channel K V 1.3 and the inhibition of SMC proliferation during the process of arteriogenesis. Our data are in line with results from Cidad et al., who demonstrated an inhibition of femoral artery SMC proliferation when K v 1.3 was blocked pharmacologically with PAP-1 or with Margatoxin or when K v 1.3 was knocked down by siRNA treatment [22]. Previous results have shown that SMC proliferation in arteriogenesis is dependent on the activation of FGFR-1. Already in 2003, we had demonstrated that FGFR-1, which was expressed in SMCs but not in ECs, was upregulated in the early phase of arteriogenesis, i.e., within the first 24 h after induction of collateral artery growth by femoral artery ligation, and that blocking this tyrosine kinase receptor with polyanetholsulfonic acid (PAS) interfered with the process of arteriogenesis [10]. A parallel study showed that a combined treatment of rodents with the cognate ligand of FGFR-1, namely FGF-2, and the cognate ligand of PDGFR-β, namely PDGF-BB, significantly promoted the process of arteriogenesis [6]. In that study, an upregulation of PDGF receptors by FGF-2 was suggested and was later confirmed by Zhang et al. [37]. PDGF-BB has been described as a potent inducer of the synthetic phenotype of a SMC and has been shown to act synergistically with FGF-2 to induce the downregulation of contractile genes such as αSM-actin during vascular SMC proliferation [38]. In particular, it has been demonstrated that PDGF-BB activates FGFR-1 via engaging PDGFR-β, thereby mediating the downregulation of αSM-actin and smooth muscle 22α (SM22-α) expression. The PDGFR-β/PDGF-BB and FGFR-1/FGF-2 signaling pathways have also been effectively described to promote the upregulation of the transcription factor Egr1 [8], the expression of which was regulated in opposition to that of contractile genes, and which we have found to mediate cell cycle progression in arteriogenesis [9]. In the present study, we found that treatment of mice with the K v 1.3 blocker PAP-1 during the process of arteriogenesis resulted in a downregulation of Fgfr1, Pdgfrb, and Egr1 (Figure 8), whereas αSM-actin was upregulated ( Figure 5). Accordingly, all genes are regulated in the opposite way as described for proper arteriogenesis [9,10]. Although one could speculate that the impaired expression of Fgfr1 and/or Pdgfrb could be responsible for the impaired expression of their downstream genes, i.e., Egr1 and αSM-actin, this is somehow unlikely as all genes show a hampered expression at the same point of time. Therefore, we wondered if another factor could be involved in K v 1.3 mediated gene expression. Interestingly, in silico analyses (data not shown) revealed several binding sites for the transcription factor specificity protein 1 (Sp1) in the promoter regions of Fgfr1, Pdgfrb, and Egr1. However, whether Sp1 is indeed involved in potassium channel K v 1.3 mediated gene expression remains to be determined by further studies. Our data indicate that K v 1.3 plays a major role in SMC proliferation, especially in the process of arteriogenesis, by influencing signal transduction cascades associated with the expression of the growth factor receptors Fgfr1 and Pdgfrb and their downstream genes being involved in phenotype switch and cell cycle regulation. 
In contrast to our findings regarding PAP-1 administration, treatment of mice with the K Ca 3.1 selective blocker TRAM-34 did not influence vascular SMC proliferation or differential gene expression in growing collaterals in vivo ( Figures 5 and 8). The laser Doppler perfusion measurements, however, evidenced a reduced perfusion recovery after femoral artery ligation ( Figure 4). Interestingly, K Ca 3.1 has been shown to be upregulated by fluid shear stress [39], the driving force for arteriogenesis [2,3]. Moreover, it has been demonstrated by blocking studies employing TRAM-34 in vivo, that K Ca 3.1 plays a role in EC proliferation during angiogenesis (Grigic, Eichler 2005) and in SMC proliferation, e.g., during atherogenesis [27]. Our study, however, revealed that K Ca 3.1 is not involved in EC or in SMC proliferation in collateral artery growth ( Figure 5). Of course, one could argue that the dose of TRAM-34 used in the present study was not high enough to block K Ca 3.1 in vivo, but identical dosages were shown to be effective in hampering vascular cell proliferation in a model of intima hyperplasia [25] and atherosclerotic lesions in mice [27]. Together, these data indicate that the mechanisms of SMC proliferation in the different pathophysiological situations are diverging. Indeed, it has been shown by Bi et al. in vitro [20] that K Ca 3.1 mediated SMC proliferation blocked by TRAM-34 was not associated with any change in expression of Pdgfrb, supporting the data of the present investigations ( Figure 8). As our laser Doppler perfusion measurements revealed a reduced perfusion recovery upon femoral artery ligation (Figure 4), which was not associated with a reduced collateral artery cell proliferation ( Figure 5), we hypothesize that K Ca 3.1 could overtake a function in EDHF-mediated collateral vasodilation, a well described function of this potassium channel [29,40]. However, further studies are necessary to prove this hypothesis. A similar effect on reduced perfusion recovery upon femoral artery ligation has been described for nitric oxide synthase 3 (NOS3)-deficient mice, also attributing nitric oxide a role in vasodilation during arteriogenesis [41]. In terms of K Ca 3.1, it could be interesting to know that the Ca 2+ -channel transient receptor potential cation channel, subfamily V, member 4 (TRPV4) has previously been shown to play a role in arteriogenesis by promoting vascular cell proliferation [42]. TRPV4 is described as shear stress sensitive channel which plays an important role in the regulation of vascular tone by modulating intracellular Ca 2+ levels [43]. However, TRPV4 has also been shown to promote collateral artery growth in several animal models [42,44,45]. It has been suggested that upon activation of this receptor, a first increase in intracellular Ca 2+ levels could result in EDHF-mediated vasodilation, whilst a prolonged raise could activate transcription factors causing vascular cell proliferation [42,45]. It is tempting to speculate that K Ca 3.1 is involved in this EDHF mediated vasodilation, but further studies are necessary to investigate this assumption. Conclusions From our investigations, we conclude that the potassium channel K v 1.3, but not K Ca 3.1, contributes to SMC proliferation in arteriogenesis by controlling the expression of growth factor receptors, as well as their downstream genes relevant for phenotype switch and cell cycle progression.
5,125.4
2020-04-01T00:00:00.000
[ "Medicine", "Biology" ]
Oncolytic Vaccinia Virus Augments T Cell Factor 1-Positive Stem-like CD8+ T Cells, Which Underlies the Efficacy of Anti-PD-1 Combination Immunotherapy Oncolytic virotherapy has garnered attention as an antigen-agnostic therapeutic cancer vaccine that induces cancer-specific T cell responses without additional antigen loading. As anticancer immune responses are compromised by a lack of antigenicity and chronic immunosuppressive microenvironments, an effective immuno-oncology modality that converts cold tumors into hot tumors is crucial. To evaluate the immune-activating characteristics of oncolytic vaccinia virus (VACV; JX-594, pexastimogene devacirepvec), diverse murine syngeneic cancer models with different tissue types and immune microenvironments were used. Intratumorally administered mJX-594, a murine variant of JX-594, potently increased CD8+ T cells, including antigen-specific cancer CD8+ T cells, and decreased immunosuppressive cells irrespective of tissue type or therapeutic efficacy. Remodeling of tumors into inflamed ones by mJX-594 led to a response to combined anti-PD-1 treatment, but not to mJX-594 or anti-PD-1 monotherapy. mJX-594 treatment increased T cell factor 1-positive stem-like T cells among cancer-specific CD8+ T cells, and anti-PD-1 combination treatment further increased proliferation of these cells, which was important for therapeutic efficacy. The presence of functional cancer-specific CD8+ T cells in the spleen and bone marrow for an extended period, which proliferated upon encountering cancer antigen-loaded splenic dendritic cells, further indicated that long-term durable anticancer immunity was elicited by oncolytic VACV. Introduction The efficacy of oncolytic virotherapy for cancer was originally thought to be dependent on selective lysis of infected cancer cells; therefore, efficient cancer targeting and extensive intratumoral virus replication and propagation were considered limiting factors for cancer treatment [1]. However, subsequent studies have shown that oncolytic viruses (OVs) effectively induce immunogenic cancer cell death and release tumor-associated antigens (TAAs), suggesting that OVs evoke systemic immune responses against cancer antigens [2,3]. Therefore, OVs have been proposed as ideal agents for activating cancer-specific T cell responses, converting them into immunologically hot tumors, which is a prerequisite for the currently prevailing immune checkpoint blockade (ICB) therapies, such as anti-programmed cell death protein 1 (PD-1)-blocking antibodies, and eventually synergizing them with these antibodies [4,5]. Indeed, oncolytic herpes simplex virus, which was the first United States Food and Drug Administration-approved treatment for advanced melanoma, showed LLC (Lewis lung carcinoma, murine lung cancer cells of C57BL/6 origin), MB49 (mouse bladder-49, murine bladder cancer cells of BALB/c origin), MC38 (murine carcinoma-38, murine colon cancer cells of C57BL/6 origin), CT26 (colon tumor-26, murine colon cancer cells of BALB/c origin), and 4T1 (murine metastatic triple-negative breast cancer cells of BALB/c origin) cancer cell lines were obtained from the American Type Culture Collection (ATCC) (Manassas, VA, USA). As a cancer-specific immune response model, we established an LLC cell line expressing ovalbumin (OVA) (LLC-OVA) by transfecting the gene encoding cytosolic whole OVA protein into the LLC cell line using a lentiviral expression vector. 
Cancer cell lines were maintained in a complete culture medium consisting of either RPMI 1640 or Dulbecco's modified Eagle's medium (DMEM), 10% v/v fetal bovine serum (FBS; Gibco-BRL, Gaithersburg, MD, USA), and 1% v/v antibiotic/antimycotic under sterile conditions at 37 • C in a 5% v/v CO 2 atmosphere. Oncolytic Virus mJX-594, a mouse variant of JX-594, was propagated and provided by SillaJen, Inc. (Seoul, Korea). mJX-594 is a Western Reserve strain of VACV encoding murine GM-CSF in the vaccinia TK gene locus under the control of the p7.5 promoter. To amplify the virus, the host cell HeLa was infected with the virus at 0.02 MOI (multiplicity of infection) for 48 h and the infected cells were lysed in hypotonic lysis buffer. The infected cell lysate was filtered to eliminate host cell debris and concentrated by 36% sucrose cushion centrifugation. The virus was stored at −80 • C. Tumor Models and Treatment Regimens To generate tumor models, 2 × 10 5 cancer cells were implanted by subcutaneous injection into the right flank fat pads of wild-type C57BL/6 and BALB/c mice. When the tumor volumes reached 50-60 mm 3 , mice with size-matched tumors were randomly assigned to the experimental groups, followed by intratumoral injection of either vehicle (phosphate-buffered saline, PBS), 1 × 10 7 , or 5 × 10 7 plaque-forming units (pfu) of mJX-594, three or four times at 3-day intervals. Vehicle and virus were prepared in a volume of 40 µL per tumor burden for one mouse. Tumor growth was monitored every 2-3 days until the end of the experiment (tumor volume ≤ 1500 mm 3 ). Tumor volumes were calculated as (width × width × length)/2. To assess the proliferative activity of cancer antigen-specific memory CD8 + T cells, LLC-OVA cancer-bearing mice were intratumorally treated with either vehicle or 5 × 10 7 pfu mJX-594 on days 0, 3, and 6, and primary tumors were resected 25 days after the first mJX-594 treatment to improve survival. Three days before analysis, 3 × 10 6 splenic DCs loaded with either vehicle or OVA 357-364 peptide (SIINFEKL) were injected intravenously into the mice. Thereafter, the mice were twice administered 10 mg/kg of EdU (5-ethynyl-2deoxyuridine; Invitrogen, Carlsbad, CA, USA) on 2 consecutive days. DCs were prepared by incubation with 100 nM peptide for 2 h at 37 • C just prior to injection. Twenty-four hours after the second EdU labeling, EdU incorporation was measured by staining the cells isolated from the mice using a Click-iT™ EdU Flow Cytometry Assay Kit (Invitrogen) according to the manufacturer's instructions. Statistical Analysis Statistical analysis was performed using GraphPad Prism (version 8.0; GraphPad Software Inc., San Diego, CA, USA). Group comparisons of tumor growth were carried out by two-way analysis of variance (ANOVA) with Bonferroni correction. For analyzing flow cytometry data, FlowJo software (version 10.8.1; FlowJo) was used. The two-tailed unpaired t-test was used to evaluate differences in immunogenicity between groups. Data are expressed as means with standard error of the mean (SEM), and p < 0.05 was taken to indicate statistical significance. 
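The tumor-volume formula and the size-matched randomization described above are simple enough to sketch in a few lines. The volume formula follows the text; the eligibility window, the round-robin assignment after shuffling, and all names are assumptions made for illustration rather than the authors' exact procedure.

```python
import random

def tumor_volume_mm3(width_mm: float, length_mm: float) -> float:
    """Tumor volume as (width x width x length) / 2, as used in the study."""
    return (width_mm * width_mm * length_mm) / 2.0

def assign_groups(mouse_ids, volumes, groups, lo=50.0, hi=60.0, seed=1):
    """Randomly assign mice whose tumors fall in the 50-60 mm^3 window.

    Size-matched randomization as described in the text; the shuffle-then-
    round-robin scheme is an assumption, not the authors' exact procedure.
    """
    eligible = [m for m, v in zip(mouse_ids, volumes) if lo <= v <= hi]
    random.Random(seed).shuffle(eligible)
    return {m: groups[i % len(groups)] for i, m in enumerate(eligible)}

vols = [tumor_volume_mm3(w, l) for w, l in [(4.7, 5.1), (4.5, 5.4), (3.9, 6.0)]]
print(vols)
print(assign_groups(["m1", "m2", "m3"], vols,
                    ["vehicle", "mJX-594 low", "mJX-594 high"]))
```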
Differential Treatment Efficacy of mJX-594 in Syngeneic Murine Cancer Models For better evaluation of the immunological characteristics of oncolytic VACV, we analyzed the treatment efficacy of intratumoral administration of mJX-594 using several well-characterized murine cancer cells, including orthotopically transplanted cancers (4T1, EMT6 and B16-F10) and subcutaneously transplanted cancers (LLC, LLC-OVA, MC38, CT26, MB49 and RENCA). Two different doses of mJX-594 (high dose: 5 × 10 7 pfu, low dose: 2 × 10 7 pfu) were administered intratumorally, three times at 3-day intervals, starting when the primary tumor mass reached 50-60 mm 3 ( Figure 1A). According to the therapeutic efficacy of mJX-594, murine cancer cells were classified into the following subgroups: more than half of the tumors were in complete remission (CR) after high-dose treatment (EMT6 breast cancer); tumor regression was evident after low-dose treatment and some tumors achieved CR through high-dose treatment (B16-F10 melanoma, RENCA renal cell carcinoma, and MB49 bladder cancer); dose-dependent tumor regression was observed but CR was not achieved through high-dose treatment (LLC non-small-cell lung cancer and MC38 colon cancer); and there was no efficacy of low- or high-dose treatment (CT26 colon cancer and 4T1 breast cancer) ( Figure 1, Supplementary Figure S1). Marked CD8 + T Cell Recruitment in the Tumor following mJX-594 Treatment To elucidate the differential efficacy and characteristic mechanisms of action of mJX-594 in murine cancer models, we performed multi-color flow cytometry analysis of tumor-infiltrating leukocytes. Both the innate and adaptive arms of immune cells were analyzed 3 days after the last injection of mJX-594 ( Figure 2A). The relative percentages of each subset of immune cells among tumor-infiltrating leukocytes (CD45 + ), and the absolute numbers of each subset per milligram of tumor tissue, were determined. All of the murine cancers used in this study contained high proportions (>50%) of immunosuppressive myeloid cells, such as myeloid-derived suppressor cells (MDSCs) and tumor-associated macrophages, and the relative percentages of MDSCs and macrophages differed among cancer cells ( Figure 2D,E). T cells were the minority population among tumor-infiltrating leukocytes. Moreover, compared to human cancers, all of the murine cancers in this study were immune desert or immune-excluded phenotypes, with few cells in or around the tumor tissue ( Figure 2C,E).
At 3 days following the third inoculation of mJX-594, the total CD3 + T cell numbers in the tumor tissue increased by at least fourfold, irrespective of therapeutic efficacy in vivo, even with low-dose treatment ( Figure 2B and Supplementary Figure S2). After high-dose treatment, the relative percentage of CD3 + T cells among CD45 + leukocytes increased by 2.1-7.8 fold, and the cell numbers per milligram of tumor tissue increased by 2.6-26.0 fold ( Figure 2B and Supplementary Figure S2). The increase in intratumoral CD8 + T cells was more dramatic. The relative percentage of CD8 + T cells among CD45 + leukocytes increased by 3.1-19.6 fold, and the cell numbers per milligram of tumor tissue increased by 2.5-72.0 fold ( Figure 2B and Supplementary Figure S2). A marked increase in the CD8 + T cell population in the tumor was a key advantage of oncolytic VACV treatment. There was an increase in CD8 + T cells in all of the cancers tested, irrespective of clinical efficacy, tissue type, or sensitivity to ICB. Regarding the immunosuppressive cells recruited by the tumor, each cancer cell had a different immunosuppressive population (typically myeloid cells, such as MDSCs and tumor-associated macrophages, and regulatory T cells) [24]. All of the cancers tested exhibited decreases in the absolute numbers and relative percentages of immunosuppressive myeloid cells after mJX-594 treatment, irrespective of therapeutic efficacy ( Figure 2 and Supplementary Figure S2). A dramatic increase in intratumoral CD8 + T cells and a decrease in myeloid cells were confirmed by immunofluorescence microscopy. Although we cannot accurately predict whether a cancer will respond to mJX-594 therapeutically based on their tumor immune profiles before treatment, we know that mJX-594 treatment markedly increases intratumoral CD3 + T cells and (more profoundly) CD8 + T cells, and simultaneously decreases immunosuppressive cells such as MDSCs and macrophages in the TME. In summary, oncolytic VACV potently activates and recruits T cells, especially CD8 + T cells, in the tumor and renders tumor immune microenvironments more favorable for T cell responses when combined with ICB. Cancer-Specific CD8 + T Cells as Well as Vaccinia Virus-Specific CD8 + T Cells Are Efficiently Activated and Recruited to Tumor Tissue As the administration of VACV equates to an induced viral infection, which triggers effective antiviral cytotoxic T cell responses, we postulated that the increased T cell responses, especially CD8 + T cell responses, seen following intratumoral mJX-594 treatment may have included anticancer-specific CD8 + T cell responses. To pinpoint the cancer antigen-specific CD8 + T cell response, we used the LLC-OVA cancer model, which expresses cytosolic a "whole sequence" OVA protein that serves as a cancer neo-antigen. We evaluated neoantigen (OVA)-specific CD8 + T cell responses using the K b -SIINFEKL dextramer. The VACV-specific CD8 + T cell response was evaluated using the K b -TSYKFESV tetramer, which detected an epitope from B8R encoding a secreted protein with homology to the interferon gamma receptor [25]. mJX-594 (5 × 10 7 pfu) was administered intratumorally, three times at 3-day intervals, starting when the primary LLC-OVA tumor reached 50-60 mm 3 ( Figure 1A). Tumor-infiltrating lymphocytes were analyzed on day 3 after the last virus administration (i.e., 9 days after the first virus treatment), during which the CD8 + T cell response was maximal. 
mJX-594 treatment increased OVA-specific CD8 + T cell expression, as well as VACV-specific CD8 + T cell expression, more than 10-fold in terms of the percentage and 30-50-fold in terms of the number of cells per milligram of tumor tissue, compared to vehicle treatment ( Figure 3A-C). These results show that mJX-594 effectively infected and destroyed cancer cells, and released cancer antigens in an immunogenic manner, thus efficiently activating and recruiting cancer antigen-specific CD8 + T cells into the tumor tissue. Anti-PD-1 Combination Therapy Increases Treatment Efficacy and Intratumoral Infiltration of T Cells in Two Murine Cancer Models As mJX-594 markedly increased intratumoral CD8 + T cells in all syngeneic murine cancer cells tested irrespective of their therapeutic efficacy, mouse strain, or tumor immune microenvironment before inoculation, and also increased cancer antigen-specific CD8 + T cells in the tumor, we considered mJX-594 to be among the most powerful reagents for converting immune desert or immune-excluded cancers into inflamed ones, thus making them susceptible to ICB therapies, such as anti-PD-1 and anti-programmed death-ligand 1 (PD-L1) therapies. We tested 4T1 cancer cells, which are resistant to anti-PD-1/anti-PD-L1 antibody therapy [24], and found that they did not respond to mJX-594 monotherapy at a high dose, but nevertheless exhibited an increase of intratumoral CD8 + T cells (by 6.7-fold) following mJX-594 treatment ( Figure 2E and Supplementary Figure S2). Single therapy with either high-dose mJX-594 or anti-PD-1 antibody did not have significant tumor-reducing effects in 4T1-bearing mice. Combined treatment with mJX-594 and anti-PD-1 had anti-cancer effects on the murine triple-negative breast cancer 4T1 (Supplementary Figure S3A-C). These results showed that mJX-594 treatment sensitizes one of the most resistant murine cancer cells to anti-PD-1 antibody therapy. To confirm the synergistic effect of mJX-594 and anti-PD-1 antibody, we used the MC38 murine colon cancer model, and combined low-dose mJX-594 and anti-PD-1 antibody therapy. Low-dose mJX-594 single therapy achieved 37% inhibition, while anti-PD-1 single therapy did not achieve a significant reduction.
Combination therapy achieved tumor growth inhibition of 69% ( Figure 4A-C). Increased numbers of CD8 + T cells in the tumor tissue, as induced by mJX-594 treatment, were confirmed by immunofluorescence microscopy on days 9 and 15 of the first mJX-594 injection ( Figure 4A). Vehicle treatment was associated with few CD8 + T cells in tumor tissue, and anti-PD-1 antibody single therapy slightly increased CD8 + T cell expression. However, single therapy of mJX-594 dramatically increased CD8 + T cell infiltration into the tumor tissue, and combined anti-PD-1 antibody further increased these cell populations ( Figure 4D,E). A dramatic reduction of the CD11b + myeloid population, the majority of which comprised MDSCs ( Figure 2E), by mJX-594 single therapy and combination therapy with anti-PD-1 antibody was observed and anti-PD-1 single therapy did not significantly decrease these cell populations ( Figure 4D,E). Reduction of the CD31 area ( Figure 4D,E) reflected the well-known tumor-associated endothelial cell damage induced by VACV [26]. Among Cancer Antigen-Specific T Cells, TCF1 + Stem-like CD8 + T Cells Are Increased by mJX-594 and Further Increased by Anti-PD-1 Antibody Combination Treatment As the production of TCF1 + stem-like CD8 + T cells is related to protective immunity following cancer vaccination [16], the prognosis of the patient [12], and putative future memory T cells [27], we evaluated whether stem-like CD8 + T cells were increased by mJX-594 treatment. We inoculated LLC-OVA cancer cells into the subcutaneous region and applied 5 × 10 7 Pfu mJX-594 intratumorally three times at 3-day intervals, starting when the primary tumor mass reached 50-60 mm 3 . Anti-PD-1 antibody was intravenously injected, starting on day 3 of mJX-594 inoculation, at 3-day intervals a total of three times. Tumor growth was inhibited by 55% by high-dose treatment of mJX-594 but was not inhibited by anti-PD-1 antibody monotherapy. A synergistic tumor-inhibiting effect of the combined treatment was also seen ( Figure 5A). Tumor-infiltrating leukocytes and splenocytes were analyzed 3 days after the last injection of mJX-594 (day 9 after the first injection). The mJX-594 treatment was highly effective for CD8 + T cell differentiation. KLRG1 + CD8 + T cells were markedly increased by mJX-594 treatment and further increased by anti-PD-1 combination treatment in both the tumor and spleen ( Figure 5B-D). Anti-PD-1 monotherapy did not appreciably increase effector cell expression. Both KLRG1 + IL-7Rα − short-lived effector cells and KLRG1 + IL-7Rα + memory precursor effector cells were increased ( Figure 5B,C). GZMB + CD8 + T cells were also prominently increased by mJX-594 treatment and further increased by anti-PD-1 combination treatment in the spleen ( Figure 5E,F). TCF-1 + PD-1 + stem-like CD8 + T cells were also increased by mJX-594 treatment in the tumor and the spleen 9 days after the injection and further increased by anti-PD-1 combination treatment in the spleen ( Figure 5H-J). On day 30 following the last injection (36 days after the first injection), increased percentages of TCF-1 + PD-1 + stem-like CD8 + T cells were noted in both the spleen and bone marrow ( Figure 5K-N). K b -SIINFEKL + OVA-specific CD8 + T cells in the tumor were increased by mJX-594 and further increased by combined treatment involving anti-PD-1 (Supplementary Figure S4B). 
Among K b -SIINFEKL + CD8 + T cells, KLRG1 + effector T cells were also increased in both the tumor and spleen by mJX-594 treatment (Supplementary Figure S4C,D). TCF-1 + PD-1 + stem-like K b -SIINFEKL + CD8 + T cells were also increased by mJX-594 treatment in the tumor and spleen on day 9 (Supplementary Figure S4E,F). On day 36 after the first injection, increased percentages of TCF-1 + PD-1 + stem-like K b -SIINFEKL + CD8 + T cells were noted in both the spleen and bone marrow (Supplementary Figure S4G,H). [Figure 4E legend fragment: percentage of the area infiltrated with leukocytes relative to the whole tumor, calculated using ImageJ software on days 9 and 15 after the first mJX-594 treatment; * p < 0.05, two-tailed Student's t-test.] Cancer Neoantigen-Specific CD8 + T Cells That Survive for Extended Periods in the Spleen and Bone Marrow Proliferate in Response to Antigen-Loaded DCs In Situ We showed that TCF-1 + PD-1 + stem-like CD8 + T cell expression was increased in terms of both total and cancer neoantigen-specific CD8 + cells on day 9 after the first viral infection, and it remained elevated on day 36. To confirm the functional capacity of the cancer-specific stem-like/memory-like T cells in the lymphoid organs in situ following mJX-594-mediated cancer treatment, we utilized the LLC-OVA cancer model. We subcutaneously inoculated LLC-OVA cancer cells into mice, followed by intratumoral treatment with 5 × 10 7 pfu of mJX-594 (five times at 3-day intervals, starting when the primary tumor mass reached 50-60 mm 3 ). To detect functional stem-like/memory-like T cells in specific organs, we used antigen-loaded splenic DCs for in situ activation of local antigen-specific T cells. As systemically injected DCs localize in the bone marrow and secondary lymphoid organs [28], we loaded SIINFEKL peptide (OVA peptides 357-364) onto splenic DCs, which were then injected intravenously. Proliferating K b -SIINFEKL-specific CD8 + T cells were evaluated using a CD45 congenic marker, the K b -SIINFEKL dextramer, and the EdU click chemistry method [29] on days 39 and 60 after the first mJX-594 injection ( Figure 6). We found that cancer neoantigen-specific T cells in the spleen and bone marrow proliferated in response to SIINFEKL-loaded splenic DCs ( Figure 6). These results indicated that cancer neoantigen-specific T cells were present in the secondary lymphoid organs and bone marrow for a protracted period and were also functional (as they responded to antigen-loaded splenic DCs). Figure 6. Long-lived cancer neoantigen-specific CD8 + T cells proliferated in response to antigen-loaded dendritic cells in situ. (A) Schematic representation of the experimental schedule. LLC-OVA cells were subcutaneously implanted into C57BL/6 mice. When the tumor volume reached 50-60 mm 3 , the mice were intratumorally injected with 5 × 10 7 pfu mJX-594 five times at 3-day intervals. Intraperitoneal injection of 3 × 10 6 splenic dendritic cells (sDCs) that had been loaded with either vehicle or 100 nM SIINFEKL was performed on day 36 or day 57 after the first mJX-594 injection. In vivo proliferation of cancer neoantigen-specific memory-like CD8 + T cells in response to antigen-loaded sDCs was measured by incorporation of EdU (10 mg/kg) after labeling the cells twice (once per day on day 1 and day 2 before analysis) and analyzed by K b -SIINFEKL dextramer staining and the Click-iT™ EdU Flow Cytometry Assay Kit (K b -SIINFEKL + EdU + ). (B,C) Representative flow cytometric plot (B) and absolute number (C) of proliferating EdU + cells among K b -SIINFEKL dextramer-positive cells. Pooled data from two experiments are shown (n = 4-6 per group). * p < 0.05, ** p < 0.005. BM, bone marrow; EdU, 5-ethynyl-2′-deoxyuridine; i.p., intraperitoneal; LLC-OVA, LLC expressing OVA; OVA, ovalbumin; pfu, plaque-forming units; sDC, splenic dendritic cells. Discussion In this study, we evaluated the immune-modulating characteristics of oncolytic VACV using murine syngeneic cancer models. We found that oncolytic VACV Pexa-vec (mJX-594) potently increased CD8 + T cells, including cancer antigen-specific CD8 + T cells, and decreased immunosuppressive cells in tumors irrespective of therapeutic efficacy (Figure 2). These findings indicate that mJX-594 was highly effective at transforming immune desert and immune-excluded tumors into inflamed ones and at modulating the immunosuppressive TME. These typical assets of mJX-594 were ideal for combination therapy involving anti-PD-1/anti-PD-L1, which only targets inflamed tumors, in which treatment reinvigorates exhausted T cells in the TME [30]. Indeed, mJX-594 led to a response to combination anti-PD-1 blocking antibody treatment in one of the most resistant murine cancer cell lines, which did not respond to mJX-594 single therapy or anti-PD-1 single therapy (Supplementary Figure S3) [24]. In the TME and cancer-bearing hosts, the functional capacity of cancer-specific effector CD8 + T cells is compromised by the expression of several negative immune checkpoint molecules on their surfaces, which leads to their exhaustion [8,9]. Among these dysfunctional cancer-infiltrating T cells, which are quite similar to the exhausted cells seen in chronic viral infection, some subpopulations were still functional, and the number of these subpopulations correlated with overall survival in several types of human cancers [12,14]. The cells in these subpopulations proliferated following anti-PD-1 therapy and differentiated into effector T cells that controlled the tumor and maintained homeostasis, thus preventing further tumor progression [12,31]. The phenotype of these cells was TCF1 + PD-1 + , and they are referred to as either stem-like or progenitor-exhausted T cells [32].
The proliferation of stem-like T cells is positively correlated with the clinical outcomes and overall survival of patients [12,14,33], and a recent study showed that the efficacy of neoantigen cancer vaccines was correlated with stem-like T cell generation [16]. In this study, mJX-594 treatment was quite efficient for increasing cancer antigen-specific stem-like CD8 + T cells in the tumor tissue and secondary lymphoid tissue, such as the spleen and bone marrow. Moreover, the quantity of these stem-like T cells correlated with the therapeutic efficacy of mJX-594 and was further increased by (and correlated with) combination therapy involving anti-PD-1 ( Figure 5 and Supplementary Figure S4). Thus, mJX-594 acted as an antigen-agnostic vaccine platform, which could activate and expand the population of cancer-specific stem-like CD8 + T cells in association with the therapeutic efficacy of this treatment modality. In acute viral infection, TCF1 + T cells expressed following infection serve as early memory precursor cells, which eventually differentiate into permanent memory T cells [27]. Memory T cell generation is the ultimate goal of cancer immunotherapy and therapeutic cancer vaccines for the prevention of recurrence of cancer in the future [34]. However, as in chronic viral infection, this ordinary differentiation process involving the formation of permanent memory T cells following activation is compromised in cancer [34,35]. In this study, mJX-594 treatment helped normalize the TME, which allowed for differentiation of functional effector and TCF1 + stem-like CD8 + T cells. In addition, we observed an increase in the expression of cancer-specific TCF1 + stem-like CD8 + T cells following mJX-594 treatment of the spleen and bone marrow for an extended period. In murine and human tumor models, stem-like T cells were found in the TME [12,14,31], and some reports have indicated that tertiary lymphoid structures within the tumor tissue are the main sites for these T cells [36]. In chronic viral infection, stem-like CD8 + T cells are evident in the secondary lymphoid organs and respond to anti-PD-1 antibody by proliferating and differentiating into effector cells [11,37]. Regarding the location of permanent memory T cells, the bone marrow and secondary lymphoid organs are the sites of the central memory T cells, whereas the target tissues are effector memory T cells and the recently characterized resident memory T cells [38][39][40]. The existence of cancer-specific stem-like CD8 + T cells in the secondary lymphoid organs noted in this study is relevant to these previous reports, and in terms of long-term survival of functional cancer antigen-specific T cells in the spleen and bone marrow, which proliferated upon encountering cancer antigen-loaded splenic DCs 2 months after cancer treatment (Figure 6), indicating that mJX-594 treatment elicits long-term anticancer immune responses. Conclusions Intratumorally administered oncolytic VACV (mJX-594, a murine variant of JX-594, Pexa-vec) potently increased CD8 + T cell proliferation, including of cancer antigen-specific CD8 + T cells, and decreased immunosuppressive cells irrespective of tissue type or therapeutic efficacy. Remodeling of tumors into inflamed ones by mJX-594 led to a response to combined anti-PD-1 treatment but not to mJX-594 or anti-PD-1 monotherapy. 
Among cancer-specific CD8 + T cells, mJX-594 treatment increased the expression of TCF1 + stem-like T cells, while anti-PD-1 combination treatment further increased their expression, which was important for therapeutic efficacy. The presence of functional cancer-specific CD8 + T cells in the spleen and bone marrow for an extended period, where these cells proliferated upon encountering cancer antigen-loaded splenic DCs, further indicates that long-term anti-cancer immunity is elicited by oncolytic VACV.
6,595.4
2022-03-30T00:00:00.000
[ "Biology", "Medicine" ]
MARVEL analysis of high-resolution rovibrational spectra of 13 C 16 O 2 A set of empirical rovibrational energy levels, obtained through the MARVEL (measured active rotational-vibrational energy levels) procedure, is presented for the 13 C 16 O 2 isotopologue of carbon dioxide. This procedure begins with the collection and analysis of experimental rovibrational transitions from the literature, allowing for a comprehensive review of the literature on the high-resolution spectroscopy of 13 C 16 O 2 , which is also presented. A total of 60 sources out of more than 750 checked provided 14,101 uniquely measured and assigned rovibrational transitions in the wavenumber range of 579–13,735 cm⁻¹. This is followed by a weighted least-squares refinement yielding the energy levels of the states involved in the measured transitions. Altogether 6318 empirical rovibrational energies have been determined for 13 C 16 O 2 . Finally, estimates have been given for the uncertainties of the empirical energies, based on the experimental uncertainties of the transitions. The detailed analysis of the lines and the spectroscopic network built from them, as well as the uncertainty estimates, all serve to pinpoint possible errors in the experimental data, such as typos, misassignment of quantum numbers, and misidentifications. Errors found in the literature data were corrected before including them in the final MARVEL dataset and analysis. | INTRODUCTION Carbon dioxide is a well-known trace species in the Earth's atmosphere; the recent increase in its concentration is associated with human activity and is involved in climate change. 1 Most of the spectral regions corresponding to the main isotopologue, 12 C 16 O 2 , are optically thick, meaning that increases in the atmospheric concentration lead only to a logarithmic increase in the radiative forcing associated with the so-called greenhouse effect (see, e.g., Reference 2). In the atmosphere of Earth, about 1.1% of CO 2 is in the form of the 13 C 16 O 2 isotopologue. 3 The corresponding spectral lines are not optically thick, increasing the importance of this isotopologue as a greenhouse gas. While 13 C is only a minor constituent in the Earth's atmosphere, the 13 C to 12 C abundance ratio is known to vary significantly in the Universe. There are observations which suggest that in places the ratio might be as high as one third. 4 The detection of CO 2 in the atmospheres of hot Jupiter exoplanets, 5 including a recent one on WASP-39b by the James Webb Space Telescope, 6 has been made using low- to medium-resolution transit spectroscopy, which cannot distinguish between different isotopologues. Thus, these observations are unable to provide information on isotopic ratios. However, high-resolution, cross-correlation studies performed from the ground have been shown to be capable of distinguishing different carbon isotopologues; for example, a pioneering study of CO in the exoplanet TYC 8998-760-1 b suggested that the abundance of 13 C was more than 30% of that of 12 C.
High-resolution studies of exoplanet spectra require especially accurate line positions. There are line lists available for hot CO2, [8-11] but these are generally not of sufficient accuracy for use in cross-correlation studies for high-temperature objects. The most practical means of improving the accuracy of theoretical line lists is the introduction of experimental/empirical energy-level data, facilitating the development of improved line positions. Supplying empirical rovibrational energy levels for 13C16O2 was one of the motivations of the present project.
The MARVEL (measured active rotational-vibrational energy levels) [12-15] procedure, based on the theory of spectroscopic networks (SN), [16-18] is able to provide such highly accurate empirical energy levels. For this, we need a dataset of experimental line positions, which we created for the carbon dioxide isotopologue 16O13C16O (636 in HITRAN parlance). MARVEL determines the SN representing all interconnecting rotational-vibrational energy levels and, based on an inversion process, yields empirical energy levels with appropriate uncertainties. Besides providing these data, a MARVEL analysis is able to identify incorrect quantum number assignments, overly optimistic uncertainty values, mistaken attributions, and many other types of errors. The empirical rovibrational energies can be used to check and improve existing theoretical models, as well as line lists, for example those generated within the ExoMol project. 19,20 The joint utilization of the best empirical and theoretical data provides both completeness and the most accurate predictions of transition frequencies; see References 21 and 22 for examples. Our critical evaluation of the existing empirical line positions also helps to identify spectral regions where more high-resolution experiments are needed.
| The MARVEL procedure
A MARVEL project begins by gathering, analyzing, and validating assigned lines of high-resolution spectra. Attributes of each line, besides its position, include a unique label for the upper and lower states, a measurement uncertainty value, and a unique tag identifier. The lines are then used to build a spectroscopic network, whereby each state corresponds to a node, and the nodes are linked by the observed transitions. 16 This representation of the spectroscopic measurement results allows the determination of empirical energy-level values, together with their uncertainty estimates. The transitions form a well-connected network, with most transitions linked to the ground state via various paths. However, this connection is not always possible using experimental data alone. The missing lines may result in fragmentation of the principal component(s) of the SN. As a result, the consistency of the lines of the floating components with the rest of the data cannot be established. This is the reason why such lines remain unvalidated at the end of a MARVEL analysis. Since MARVEL is not based on a particular quantum-chemical model, it will "validate" forbidden or incorrect transitions when they are not in conflict with the rest of the data. For this reason, it is important to check for such transitions while building up the MARVEL input dataset.
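The inversion step described above, turning a set of measured transition wavenumbers into empirical energy levels, amounts to a weighted least-squares problem on the spectroscopic network. Below is a minimal, self-contained sketch of that idea using made-up toy transitions; it is not the actual MARVEL code and it omits the uncertainty propagation and floating-component handling that a real analysis needs.

```python
import numpy as np

# Toy transition list: (upper_label, lower_label, wavenumber in cm-1, uncertainty in cm-1).
# Labels and values are illustrative, not real 13C16O2 data.
transitions = [
    ("A", "GS", 600.0000, 0.0010),
    ("B", "GS", 1250.0000, 0.0010),
    ("B", "A",  650.0005, 0.0020),   # redundant path -> least-squares averaging
]

# Index the states; the ground state ("GS") energy will be pinned at zero.
states = sorted({s for t in transitions for s in t[:2]})
idx = {s: i for i, s in enumerate(states)}
n = len(states)

# Each measured line contributes one equation: E_upper - E_lower = wavenumber.
A = np.zeros((len(transitions) + 1, n))
b = np.zeros(len(transitions) + 1)
w = np.zeros(len(transitions) + 1)
for k, (up, lo, nu, unc) in enumerate(transitions):
    A[k, idx[up]], A[k, idx[lo]] = 1.0, -1.0
    b[k], w[k] = nu, 1.0 / unc
# Extra row pins the ground-state energy to zero with a large weight.
A[-1, idx["GS"]], b[-1], w[-1] = 1.0, 0.0, 1e6

# Solve the weighted least-squares problem for the empirical energies.
E, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
for s in states:
    print(f"E({s}) = {E[idx[s]]:.4f} cm-1")
```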
| Quantum numbers and selection rules
There are two conventions in general use for assigning quantum numbers to the vibrational states of the linear molecule CO2. The standard, so-called Herzberg notation is based on the harmonic-oscillator (HO) picture and uses four vibrational quantum numbers, (v1, v2^ℓ2, v3), where v1, v2, and v3 describe the symmetric stretch, bend, and antisymmetric stretch of the molecule, respectively, while ℓ2 denotes the angular momentum associated with the bending mode and can take values of v2, v2 − 2, v2 − 4, …, 1 or 0. Complications induced by the well-known Fermi resonance between the ν1 and 2ν2 states of CO2 led to the introduction of the so-called AFGL (Air Force Geophysics Laboratory) notation, 3,23,24 which is adopted here.
The AFGL notation groups the vibrational states into Fermi polyads and uses five vibrational quantum numbers, (v1 v2 ℓ2 v3 r), where r is the Fermi-resonance ranking index. In this notation, the Fermi polyads are determined by v1, ℓ2, and v3; 25 v2 is always equal to ℓ2, and r takes values from 1 to v1 + 1. To the best of our knowledge, there is no unambiguous conversion between the Herzberg and the AFGL conventions; hence, we used data from multiple datasets to match the Herzberg notation to AFGL. Since the first release of the HITRAN database, 23 it has been emphasized that when using older notations the order of some energy levels can change from one CO2 isotopologue to another, as shown by the work of Amat and Pimbert. 26
In addition to the vibrational quantum numbers, two further quantum numbers are required to label the rovibrational states of CO2: the quantum number J, describing the overall rotation of the molecule, which takes values of J ≥ ℓ2, and the parity p, for which we use the rotationless parity denoted by e and f. 27 Unlike the vibrational quantum numbers, J and p are rigorously conserved (they are exact quantum numbers). For our MARVEL procedure we employ the AFGL notation, and each rovibrational state has the label (J v1 v2 ℓ2 v3 r p). States with ℓ2 = 0 all have parity e. In principle, states with ℓ2 > 0 can be both e and f, but due to the Pauli principle half of the rotational levels are missing; for a level to be present, (J + v3 + ℓ2 + p), where p = 0 for e and 1 for f states, must be even. It is standard to use the point-group D∞h labels to denote levels; in this notation Σ, Π, Δ, … represent ℓ2 = 0, 1, 2, ….
When building up the MARVEL input file, we tested for incorrectly labelled transitions in the dataset to ensure the correctness of all the lines. As part of this procedure, compliance with the dipole selection rules was checked. For the 13C16O2 isotopologue, these include vibrational, rovibrational, and rotational selection rules. 25 These selection rules, as well as the Pauli-principle constraint, were all used to verify the labels of the experimental transitions obtained from the literature.
| Resonances in CO2
Various types of resonances affect the infrared spectra of carbon dioxide, such as Fermi, Coriolis, and ℓ-type resonances. 25,28 These resonance effects complicate the energy-level labeling due to the occurrence of overlapping transitions, contributing to the complexity of the spectral patterns. Figure 1 shows the considerable effect Fermi-type resonances have on the experimental spectrum.
FIGURE 1  Distribution of the Fermi ranking index, r, across the experimental spectral region covered, illustrating the overlapping of transitions.
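The parity and Pauli constraints quoted in the quantum-numbers discussion above are simple enough to encode directly, which is essentially what the label checks described later rely on. A small illustrative sketch (not the authors' code):

```python
def state_allowed(J: int, v3: int, l2: int, parity: str) -> bool:
    """Check the rotationless-parity / Pauli rule quoted above.

    parity: 'e' or 'f' (rotationless parity); p = 0 for e, 1 for f.
    A state is allowed when (J + v3 + l2 + p) is even, J >= l2,
    and l2 = 0 (Sigma) states can only carry parity e.
    """
    if J < l2:
        return False
    if l2 == 0 and parity == "f":
        return False
    p = 0 if parity == "e" else 1
    return (J + v3 + l2 + p) % 2 == 0

# Example: which rotational levels of an l2 = 1, v3 = 0 state exist as e or f?
for J in range(1, 5):
    print(J, [pal for pal in ("e", "f") if state_allowed(J, 0, 1, pal)])
```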
The MARVEL procedure is able to find discrepancies in the energy-level labels and can find transitions misassigned due to various types of resonances. Nevertheless, a comparison with the assignments present in the NASA Ames-2021 variational line list 29 and the effective-Hamiltonian Carbon Dioxide Spectroscopic Databank (CDSD) line list 30 was performed. In accordance with recent studies that analyzed some of these effects, 31,32 our study verifies many of the reassignments made. We also make further reassignments, not suggested previously (vide infra).
| Beat frequencies
Beat-frequency measurements involving 13C16O2 transitions are available in the literature. 34-44 A beat frequency is the result of mixing two frequencies. This kind of measurement is often made using the heterodyne technique, which gives very accurate results. However, unlike absolute frequencies, which form the input of the standard MARVEL procedure, beat frequencies do not correspond to a specific line position and a pair of lower and upper states. Beat-frequency measurements connect four energy levels using just one frequency, which represents the difference between two transition frequencies. In an absolute frequency measurement, the frequency ν corresponds to the difference between two energy levels, ν ∝ E2 − E1. For a beat-frequency measurement, the frequency νb corresponds to the difference between four energy levels, νb ∝ (E4 − E3) − (E2 − E1). 15,45 Initial tests, which included beat frequencies as extra data in the MARVEL process, were found to lead to ill-conditioned matrices even when the four levels involved in the measured beats were well determined by the standard SN. Ill-conditioning occurs when the beat-frequency measurements have a lower uncertainty than the standard measurements determining the energy levels involved. Of course, this is precisely the case when beat-frequency data are of real interest.
| Data collection
The present study started by collecting and analyzing literature sources that discuss high-resolution rovibrational spectra of carbon dioxide. More than 750 sources were analyzed and given a tag.
TABLE 1  Experimental sources of 13C16O2 rovibrational transitions, the wavenumber range they span, the numbers of lines, uncertainty information, and the labelling scheme used in each source. Data from four sources 47-50 were excluded from our final dataset; reasons for their exclusion are given in Section 3.4.
| Dataset construction
First, we constructed a dataset comprising the most reliable data, that is, self-consistent data which also have low experimental uncertainty. This master dataset included nearly 70% of our gathered data; the remaining 30% also included conflicting data, which had to be analyzed carefully, line by line. For this purpose, we developed a code that automates the MARVEL input procedure and detects lines that produce conflicts with the master dataset. Such lines are referred to as "bad lines". A bad line is not necessarily incorrect; it simply shows a lack of self-consistency in the SN assembled, and the problem could be due to errors present in other lines. Our first attempts to use this code produced over a thousand bad lines. These lines were then carefully analyzed in order to minimize their number. A large portion of the bad lines were due to misassignments; the rest were due to typos, illegal transitions, and possible misidentification of the isotopologue or molecule.
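The "bad line" screening described above can be mimicked with a simple residual test: a candidate line is flagged when its measured wavenumber disagrees with the energy difference of its two states by more than a few combined standard uncertainties. A toy sketch with hypothetical numbers and an arbitrary 3-sigma threshold (not the code used in this work):

```python
import math

# Provisional empirical energies (cm-1) and uncertainties from a master dataset.
energies = {"GS": (0.0, 0.00001), "A": (600.0000, 0.0005), "B": (1250.0005, 0.0008)}

# Candidate lines: (upper, lower, measured wavenumber, measurement uncertainty).
candidates = [
    ("B", "A", 650.0004, 0.0010),   # consistent with the network
    ("B", "A", 650.0450, 0.0010),   # conflicts -> flagged as a "bad line"
]

def is_bad_line(up, lo, nu, unc, k=3.0):
    """Flag a line whose residual exceeds k times the combined uncertainty."""
    (e_up, u_up), (e_lo, u_lo) = energies[up], energies[lo]
    residual = nu - (e_up - e_lo)
    combined = math.sqrt(unc**2 + u_up**2 + u_lo**2)
    return abs(residual) > k * combined

for line in candidates:
    print(line, "BAD" if is_bad_line(*line) else "ok")
```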
After reducing the number of bad lines to only 36, which were excluded from the final calculations, we began analyzing the uncertainties suggested by MARVEL. At that point, our SN contained floating components comprising around 300 transitions. 44 lines from CDSD-2019 30 were used to link the floating components with the main component, reducing the number of lines in the floating components to less than 100. The remaining floating components are too fragmentary to link to the main network; doing so would require including many more semiempirical lines than was deemed reasonable. The lines from CDSD-2019 were given an uncertainty of 0.0005 cm−1.
Our final dataset contains rovibrational transitions collated from the sources given in Table 1. Of the 20,754 experimental transitions gathered, only 14,101 are unique. In fact, 10,665 transitions were measured only once, while there are 5 and 32 transitions measured 10 and 9 times, respectively. The principal component of our final SN contains 20,641 transitions; the other transitions form floating components. The experimentally measured transitions involve 6520 states; we were able to determine absolute energies for 6318 of them. The 13C16O2 lines we have gathered cover the region from 579 to 13,735 cm−1. Figure 2 illustrates the distribution of the collected data, using two vertical axes to help appreciate the amount of experimental data acquired compared to HITRAN2020. 51
FIGURE 2  Coverage of the transition data obtained from literature sources (see Table 1 for more details). The blue columns follow the left vertical axis; each column covers a region of 25 cm−1. In the background, the spectrum from HITRAN2020 51 is given in orange, with the right vertical axis being the line intensity.
| Comments on literature sources used
67Hahn: 52 This source provides the same bands twice in two sets of tables, with the second set of tables switching the assigned branch. Our analysis shows that the first table (Table 1 of that source) provides the correct branch. No data from the other tables were taken.
68ObRaHaMc: 53 Provides two sets of transitions, recorded at two different laboratories. The two datasets were included with distinct tags.
78BaLiDeRa: 56 This source uses Herzberg's notation. While updating the notation to AFGL, we had to split the 0311e-0310e band into 11112e-11102e and 30001e-11102e. This could be an assignment issue or a result of the difference in notation.
82BaRiSmRa: 57 Uses Herzberg's notation. While updating the notation to AFGL, we had to split the 0311e-0110e band into 11112e-01101e and 30001e-01101e, and split the 3000-0110 band into 11112e-01101e and 30001e-01101e. This could be an assignment issue or a result of the difference in notation.
00TaPeTeLe: 58 Contains two transitions connected to an energy level, 25 2 0 0 5 2 e, not present in Ames-2021, 29 although this level is present in HITRAN2020. 51
08PePeCa: 32 Contains two lines which we suggest reassigning; correctly assigns 45 lines misassigned in 04DiMaRoPe, 59 9 of which were already corrected in 08PeDeLiKa, 31 and also correctly assigns 34 lines misassigned in 08PeDeLiKa. 31
BeDeSuBr: 60 Contains 15 lines for which we suggest reassignments.
| Comments on literature sources not used
45NiYa: 47 The claimed uncertainty of this source is 0.07 cm−1. It covers the region 2243-2329 cm−1, which is well covered by other, higher-resolution studies.
66GoMc: 48 The lines provided are identical to those in a previous publication from the same group. 61
82EsHuVa: 49 The lines provided are identical to those in another publication.
24ZhQuReHu: 50 The data show large discrepancies with other sources. Although it covers the region 912-937 cm−1, which is not covered by any other source, no new energy levels are determined by this source.
| Relabeling of states
For the sake of unifying the notation of the energy levels across the entire dataset, we had to update the labels of 5149 lines collected from 21 sources; see Table 1. During the update, we found many lines whose assignment disagreed with the rest of the dataset. To check the assignments, these lines were compared to lines present in the Ames-2021 29 line list. We propose corrections for 148 misassigned lines, 18 of which are reported with the updated assignment within experimental accuracy for the first time. These lines are listed in Table 2.
TABLE 2  Reassignments of lines suggested in this study. The J value and parity are given for the lower state. † These are duplicate lines.
| Dataset of empirical energy levels
The 14,101 unique rovibrational transitions gathered yielded 6318 empirical rovibrational energy levels for 13C16O2. Table A1 in the Appendix summarizes the vibrational bands which could be determined based on the set of measured rovibrational transitions. Figure 3 illustrates the distribution of the number of transitions used for the determination of each energy level; as usual, this is a heavy-tailed degree distribution. Figure 4 illustrates the uncertainty distribution of our empirical energy levels. While the overall uncertainty is satisfactory, more high-resolution measurements are needed to eliminate the outliers and to expand the data coverage.
FIGURE 4  Uncertainty distribution of the empirical rovibrational energy levels of this study. Our average uncertainty is 0.0024 cm−1, with 179 outliers having an uncertainty above 0.01 cm−1.
| Beat frequency comparisons
While we were unable to extend our SN with the available beat-frequency measurements, we were able to compare the originally measured frequencies with frequencies computed using our empirical (MARVEL) energy levels. Table 3 summarizes the results obtained and contains a comparison with the beat frequencies of Reference 33. As seen there, (a) in most cases MARVEL can reproduce the measured frequencies within the MARVEL uncertainties, computed from the uncertainties of the empirical energy levels, and (b) the MARVEL uncertainties are significantly larger than the uncertainties of the beat-frequency measurements. These observations show the importance and the utility of including beat-frequency data in a spectroscopic network.
TABLE 3  Comparison of beat-frequency measurements from Reference 33, in MHz, with results of the present study. (a) The beat frequency is given as the frequency of the sequence transition minus the frequency of the reference transition. (b) MP = MARVEL-predicted frequency. (c) MU = uncertainty of the MARVEL-predicted frequency. (d) The difference between the measured and the MARVEL-predicted frequency.
| Comparison with line lists
A comparison of our data with the available line lists shows good overall agreement with both the CDSD-2019 30 and the Ames-2021 29 data. It should also be noted that our data show better agreement with CDSD-2019 than with Ames-2021. Figure 5 shows systematic differences between the Ames-2021 and the CDSD-2019 data and illustrates the good agreement between our dataset and CDSD-2019. The outliers highlighted in the figure were determined using only a single transition; we tried to find more experimental data in these regions, but none was found. Figure 6 compares the energy-level coverage, as a function of J, between our dataset and that of Ames-2021. Evidently, we need a lot more experimental data for the region above 10,000 cm−1.
| SUMMARY
This paper describes a comprehensive analysis of the high-resolution rovibrational spectroscopy literature available for the second most abundant isotopologue of carbon dioxide, 13C16O2. All the assigned transitions, altogether from 60 literature sources, have been extracted and verified using appropriate selection rules, the MARVEL algorithm, and a comparative analysis against available line lists. These extensive comparisons were performed to ensure the validity of the labelling of the states involved in the measured transitions. The conventions in use for the labelling of CO2 lines were briefly reviewed, as in several cases a conversion had to be performed. The validated data cover the wavenumber range 579-13,735 cm−1. Our detailed analysis reveals (a) areas in the spectrum where there is a lack of data, (b) numerous inconsistencies in the vibrational assignment of some of the measured transitions (we report 18 possible errors for the first time), and (c) conflicting labels of higher-energy levels between experimental data 54,58 and theoretical line lists.
A comparison between our energy levels and those of Ames-2021 29 and CDSD-2019 30 shows significantly better agreement with CDSD-2019, highlighting the importance of fitting theoretical models using the available experimental data. Further research is being carried out in our groups to analyze more isotopologues of CO2. Work is also underway to explore methods of including the beat-frequency data in the MARVEL analysis procedure; initial attempts to do this show that there are numerical difficulties with ill-conditioned matrices, which will need to be overcome before this can be done usefully.
APPENDIX
TABLE A1  Vibrational bands of 13C16O2.
FIGURE 3  Number of transitions used for determining the energies of each state.
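For readers who want to reproduce the kind of comparison reported in Table 3, the sketch below shows how a beat frequency and its uncertainty could be predicted from empirical energy levels, assuming independent level uncertainties; all labels and numbers are hypothetical, not taken from the paper's dataset.

```python
import math

# Hypothetical empirical (MARVEL) energies and 1-sigma uncertainties, in cm-1.
# A beat frequency compares two transitions, so four levels are involved.
levels = {
    "low1": (0.0000, 0.00002), "up1": (960.9500, 0.00030),
    "low2": (1.0300, 0.00002), "up2": (962.0100, 0.00030),
}
C = 29979.2458  # MHz per cm-1, to express the predicted beat in MHz

def transition(up, lo):
    return levels[up][0] - levels[lo][0]

# Beat frequency = (sequence transition) - (reference transition), as in Table 3.
beat_cm = transition("up2", "low2") - transition("up1", "low1")
# Uncertainty propagation assuming independent level uncertainties.
unc_cm = math.sqrt(sum(levels[k][1] ** 2 for k in levels))
print(f"predicted beat: {beat_cm * C:.3f} MHz +/- {unc_cm * C:.3f} MHz")
```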
4,912.6
2024-01-08T00:00:00.000
[ "Chemistry", "Physics" ]
Recent findings and applications of biomedical engineering for COVID-19 diagnosis: a critical review ABSTRACT COVID-19 is one of the most severe global health crises that humanity has ever faced. Researchers have restlessly focused on developing solutions for monitoring and tracing the viral culprit, SARS-CoV-2, as vital steps to break the chain of infection. Even though biomedical engineering (BME) is considered a rising field of medical sciences, it has demonstrated its pivotal role in nurturing the maturation of COVID-19 diagnostic technologies. Within a very short period of time, BME research applied to COVID-19 diagnosis has advanced with ever-increasing knowledge and inventions, especially in adapting available virus detection technologies into clinical practice and exploiting the power of interdisciplinary research to design novel diagnostic tools or improve the detection efficiency. To assist the development of BME in COVID-19 diagnosis, this review highlights the most recent diagnostic approaches and evaluates the potential of each research direction in the context of the pandemic. Introduction In December 2019, a novel coronavirus causing a severe pneumonia disease was first detected in patients in Wuhan, China and expeditiously spread throughout the world soon after. In March 2020, the World Health Organization (WHO) declared the coronavirus disease 2019 outbreak caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV -2) as a global pandemic. Within 19 months, SARS-CoV-2 has been transmitted to almost all countries in the world and has infected more than 203 million people as of August 10 th , 2021. The disease has been responsible for over 4 million deaths worldwide [1]. The pandemic has reduced global economic growth from −4.5 to −6.0% in 2020, with a partial recovery of 2.5% to 5.2% projected for 2021. Global trade is estimated to have fallen by 5.3% in 2020, but is projected to grow by 8.0% in 2021 [2]. COVID-19 is highly contagious and tends to easily transmit among close contacts via exposure to infectious respiratory fluids including very fine respiratory droplets and aerosol particles produced from breath, coughs, and sneezes. The extremely high transmission rate of COVID-19 has posed a high risk to the community and put enormous pressure on healthcare systems [3,4]. COVID-19 symptoms are typically high fever, dry cough, sore throat, and difficulty breathing that appear within 2-14 days after the incubation period and it may overlap with influenza or common cold. A pressing need has arisen to rapidly and accurately identify virus carriers to protect the public's health [5]. Diagnostics play a central role in the containment of COVID-19, as it allows for identification, isolation, and contact tracing of the virus carriers, as well as rapid implementation of measures to stop the spreading of virus [6][7][8][9][10]. Easily missed by conventional symptom screening, however, asymptomatic people may become major virus spreaders [11]. Therefore, highly sensitive and specific SARS-CoV-2 detection methods have always been great demand [12,13]. In 2020, the global market for COVID-19 diagnostic services was valued at $60.3 billion and in 2021 it was predicted to reach $84.4 billion and $195.1 billion by 2027 [14]. There are various methods for detecting COVID-19, including immunoassays, protein assays, viral plaque assays, hemagglutination assays, viral flow cytometry, etc [15][16][17][18]. 
Conventionally, the majority of pathogen diagnostics have been developed for laboratory-based detection, including immunoassay-based and reverse transcriptase-polymerase chain reaction (RT-PCR)-based methods [19]. However, the time-consuming conventional diagnostic procedures were soon overwhelmed by the unmatched rate of infection and hospitalization. Therefore, more and more point-of-care testing (POCT) and rapid methods have been exploited to support medical decision-making or self-health monitoring, including isothermal nucleic acid amplification techniques (iNAAT) [20], clustered regularly interspaced short palindromic repeats (CRISPR)-based methods [21-23], biosensors, and microfluidic devices [24]. Automated artificial intelligence models have also been proposed to facilitate high-throughput and consistent diagnosis. These technologies have developed rapidly while the pandemic is ongoing. Here, we highlight the most recent progress of these biomedical engineering (BME) technologies applied to COVID-19 diagnosis, and provide insights into how the research direction in this field has shifted in response to practical demands for disease surveillance and personal healthcare.
Laboratory-based immunoassay methods
SARS-CoV-2 infection stimulates the humoral immune system to produce specific antibodies, including immunoglobulin A, M, and G (IgA, IgM, IgG) [25-27], which appear in patient blood with specific kinetics. This information has guided the development of immunoassays, mainly the enzyme-linked immunosorbent assay (ELISA) and the chemiluminescent immunoassay (CLIA), for SARS-CoV-2 detection, tracing, and seroprevalence studies (Figure 1).
Figure 1. General COVID-19 diagnostic workflow using molecular testing (NAAT, iNAAT, and immunoassay-based detection). (1) Sample collection methods; (2) types of samples; (3) sample processing or pre-treatment; (4) test reaction and result reading. The methods illustrated are the most commonly used for COVID-19 diagnosis, and the alternatives in each step are mostly interchangeable, except that blood samples are not used for NAAT and iNAAT techniques, and extracted RNA is not used for immunoassay detection. The image was created with BioRender.com.
ELISA was commonly used at the early stage of the pandemic as a qualitative or semi-quantitative method to detect humoral anti-SARS-CoV-2 immunoglobulins or virus antigens. It uses an immobilized SARS-CoV-2 antigen (Ag) to capture its cognate humoral antibody (Ab). The Ag-Ab binding is usually detected by a secondary Ab labeled with an enzyme (normally horseradish peroxidase, HRP) that catalyzes a color-change reaction. The titer of host immunoglobulins or virus antigens (indirect ELISA and direct ELISA, respectively) can be determined by measuring the colorimetric changes on a microplate reader. Among the initial attempts, Amanat et al. used purified recombinant S protein (with modifications) and its receptor-binding domain (RBD) to develop an indirect ELISA to detect IgM and IgG in serum, revealing the strongest binding reactivity for the full-length S protein and a correlation between ELISA titers and virus neutralization [28]. Peterhoff et al. developed an ELISA using SARS-CoV-2 RBD as the antibody-catching Ag, achieving high sensitivity (92-98%) and specificity (99.3%) for IgA, IgG, and IgM in serum at >10 days after a positive PCR [29].
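To make the quantitative side of the ELISA readout concrete: converting optical-density readings into antigen or antibody concentrations is commonly done by fitting a four-parameter logistic (4PL) standard curve and inverting it for unknown samples. The sketch below uses hypothetical calibration data and is not taken from any of the cited studies.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, d, c, b):
    """Four-parameter logistic: a = lower asymptote, d = upper asymptote,
    c = inflection point (EC50-like), b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical calibration standards (ng/mL) and their OD450 readings.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
od   = np.array([0.08, 0.15, 0.40, 0.95, 1.70, 2.10])

params, _ = curve_fit(four_pl, conc, od, p0=[0.05, 2.2, 2.0, 1.0], maxfev=10000)

def od_to_conc(y, a, d, c, b):
    """Invert the fitted 4PL curve to estimate concentration from a sample OD."""
    return c * (((a - d) / (y - d)) - 1.0) ** (1.0 / b)

sample_od = 1.2
print(f"estimated concentration: {od_to_conc(sample_od, *params):.2f} ng/mL")
```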
RBD ELISA for testing IgG was found to be sufficient for COVID-19 diagnosis with high sensitivity and specificity (88% and 98%, respectively) [30]. Since the IgM positive predictive value (PPV) was insufficient, IgA specificity was low, and their presence in the blood was short-lived [31-33], neither IgM nor IgA is a reliable marker for ELISA. However, combined detection of IgA/IgM/IgG was suggested as the most sensitive assay to detect SARS-CoV-2 [31]. Interestingly, Kyosei et al. proposed a de novo system for SARS-CoV-2 antigen detection by coupling a spike protein (S1)-detecting sandwich ELISA with thionicotinamide adenine dinucleotide (thio-NAD) cycling. By adding 10 min of thio-NAD cycling to the ELISA procedure and measuring the S1 concentration on a plate reader at OD405, a high sensitivity of 10^4 viruses per reaction can be achieved [34].
ELISA can be used in low-cost settings. It is easy to perform and to automate, but it is a time-consuming process (2-5 h) and can easily be contaminated. That makes another alternative, CLIA, more suitable when a faster turn-around time (1-2 h) is required. CLIA is similar to ELISA, but instead of using an enzyme, it uses a luminophore to conjugate the secondary Ab, so that specific Ag-Ab binding triggers light or fluorescent emission. A few studies found that ELISA sensitivity is similar to or slightly better than that of CLIA in detecting humoral Ab or viral Ag [35,36]. Ma et al. used highly purified RBD to make a set of CLIA kits for detecting RBD-specific IgA/IgM/IgG, reaching 96.8-98.6% sensitivity and 92.3-99.8% specificity, and combined the IgA/IgG detection kits to boost the sensitivity and specificity to 99.1% and 100%, respectively [37]. For both ELISA and CLIA, automated, high-throughput detection systems, such as the Diazyme DZ-Lite 3000 Plus (Diazyme Laboratories, USA) [38] or MAGLUMI CLIA (Snibe, China) [39], have demonstrated high sensitivity, specificity, and the capacity to process multiple samples simultaneously. It is noteworthy that wide variation in performance among commercially available ELISA/CLIA kits has been reported [40-44]; thus, validating the assays before use is critical. Wu et al. showed that the combination of antibody detection and existing RT-PCR greatly enhanced SARS-CoV-2 detection, from 48.1% (RT-PCR alone) to 72.2% [45]. Based on the process of SARS-CoV-2 infection and the production of specific antibody responses, a laboratory-based IgG and IgM immunoassay would be the most effective immunoassay method for COVID-19 diagnosis.
Rapid detection tests (RDTs) for POCT
Even though it shares the same working principle as ELISA, an RDT is formatted into a portable cassette or dipstick so that the test can be performed at the POC or at home (Figure 1). SARS-CoV-2 RDTs can indirectly detect the virus through humoral antibodies (IgM, IgG, IgA), referred to as Ab-RDTs, or directly detect a surface antigen of the virus, referred to as Ag-RDTs. For convenient result reading, the RDT is engineered as a lateral flow immunoassay (LFA) device, comprising a nitrocellulose membrane contained in a plastic housing, an immobilized Ab, and a labeled Ag/Ab (usually conjugated with colloidal gold). The presence of a target molecule is indicated by a color band that appears on the test line. RDT kits are inexpensive, very simple to use without training or laboratory equipment, and usually have a short time-to-result of 10-15 minutes.
Therefore, RDT has become one of the most widely used methods for SARS-CoV-2 detection, especially for POCT screening and personal use. However, their use in clinical diagnosis is limited to certain circumstances, mostly depending on the stage of disease progression, viral loads, and viral prevalence [46]. A systematic meta-analysis by Ghaffari et al. on 62 commercially available serological (antibody-detecting) test kits (ELISA, CLIA, RDT) revealed a wide range of sensitivity variation (almost 0% to 100%), while most of the kits exhibited >90% specificity [47]. Noticeably, most of the worst performance came from RDT kits. It was also confirmed that these serological kits are effective in later periods of the disease progression [47]. In a meta-analysis, Bastos et al. found that the overall sensitivity of serological immunoassays was significantly higher at least 3 weeks from illness onset (69.9-98.9%) as compared to the results from the first week (13.4-50.3%) [48]. Even though these serological detection kits are not sufficiently sensitive at the early stage of infection, they are important tools to investigate past infection [28] and provide information on how the virus spreads in a community [49].
Ag-RDT kits are designed the same way as Ab-RDT kits, except that the targets are viral surface proteins and the virus is detected directly. Ag-RDT assays were shown to be better screening tools than Ab-RDT kits [50]. However, the performance of commercial Ag-RDT kits can be vastly different [51-53]. A clinical evaluation of 122 SARS-CoV-2 Ag-RDT kits with the European Conformity (CE) mark reported a wide range of performance: 78.7% of the kits exhibited a sensitivity >75% on samples with high viral loads, while only 19.8% of the kits showed a sensitivity >75% for medium viral loads [54]. With qRT-PCR as the gold standard, Ag-RDT sensitivity was significantly different between the symptomatic (80-96.52%) and asymptomatic groups (37-71.43%) [50,55,56], but its positive predictive value (PPV) was in better agreement with viral cultivability [57], and its sensitivity was only slightly lower than that of qRT-PCR as long as the virus isolated from the sample was cultivable [58]. The sensitivity of an Ag-RDT kit was found to drop dramatically from 86.5% to 53.8% after 7 days from illness onset [59]. Since low viral loads (Ct > 30) are linked to low viral culture positivity or infectivity [60,61], the proper use of Ag-RDT kits is to detect infectious cases. Nevertheless, for screening mixed symptomatic and asymptomatic groups, serial testing with a minimal 3-day interval between tests can also increase the sensitivity of Ag-RDT to over 98% [58]. Also, for community screening, the short sample-to-answer time and the repeat testing of Ag-RDTs were demonstrated to be more important than the sensitivity [62].
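Two pieces of arithmetic underlie much of the discussion above: how prevalence drives the predictive values of an imperfect test, and how serial testing raises effective sensitivity. The sketch below illustrates both with hypothetical numbers; the independence assumption behind the serial-testing formula is optimistic for real repeat tests.

```python
def ppv_npv(sensitivity: float, specificity: float, prevalence: float):
    """Positive/negative predictive values from test characteristics and prevalence."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

def serial_sensitivity(single_test_sensitivity: float, n_tests: int) -> float:
    """Combined sensitivity when a person is called positive if any of n
    independent repeat tests is positive (an optimistic approximation)."""
    return 1 - (1 - single_test_sensitivity) ** n_tests

# Illustrative numbers only, not from the cited studies:
ppv, npv = ppv_npv(sensitivity=0.80, specificity=0.98, prevalence=0.01)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%} at 1% prevalence")
print(f"3 serial tests at 60% each: {serial_sensitivity(0.60, 3):.1%} combined sensitivity")
```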
In order to facilitate the rapid sharing of SARS-CoV-2 sequences, a data-sharing service hosted by the Global Initiative on Sharing All Influenza Data (GISAID) was introduced (https://www.epicov. org). Numerous efforts have been made by scientists worldwide to optimize qRT-PCR procedures and produce commercial SARS-CoV-2 diagnostic kits to support disease surveillance at hospitals, healthcare centers, and in the community. As instructed by the protocols published in the WHO guideline, various SARS-CoV-2 genomic targets, including structural genes, N (nucleocapsid), RdRp (RNA polymerase), S (spike protein), E (envelope protein), Orf1ab (replication complex), and a non-structural gene, nsp14 [63], were used for the amplification. According to Corman et al., qRT-PCR protocols with the E gene or RdRp gene were shown to produce the best results with a limit of detection (LOD) of 3.2 to 5.2 RNA copies per reaction [64]. A comparative study was also reported the high analytical sensitivity of qRT-PCR using Corman E gene and CDC N1 primer-probe sets (LOD = 6 RNA copies per reaction) [65]. However, quickly after the initial outbreak, growing evidence showed that the mutations occurred in the SARS-CoV-2 genome were prone to significantly reducing the sensitivity of available qRT-PCR procedures [66][67][68][69]. Based on 31,421 genomic sequences shared on GISAID as of July 23 rd , 2020, Wang et al. found that virtually all the recommended primer sites have undergone mutations and the N gene primers and probes covered most of the mutated spots [70]. Later evidence of mutations of E and N genes hinted at the escape of the virus from qRT-PCR detections [71][72][73]. Interestingly, the most common mutation was found to be cytosine-to-uracil type, which was caused by a strong mRNA editing mechanism catalyzed by apolipoprotein B mRNA-editing enzyme, catalytic polypeptide-like (APOBEC) cytidine deaminase during its involvement in the innate immune host response [74,75]. These findings emphasized the need of developing more multiplex assays for COVID-19 diagnosis. As the first commercial multiplex qRT-PCR for SARS-CoV-2, QIAstat-SARS, while targeting both E and RdRp genes, achieved a LOD of 1 RNA copy per µl and very high percent agreement (97%) with WHO RT-PCR assay [76]. Moreover, in silico analysis of PCR performance with known virus variants was highly recommended for proper adjustments of the optimal cycle threshold depending on the changes in the amplification curve [77]. When the co-infection of SARS-CoV-2 and influenza viruses has become more frequent and increase the risk of severity and mortality of COVID-19 patients [78], multiplex qRT-PCR assays for the simultaneous detection of SARS-CoV-2 and influenza A/B also become necessary [79,80]. Studies have identified the presence of SARS-CoV-2 in the respiratory tract (sputum, nose, bronchoalveolar lavage fluid (BALF) [81], nasopharynx and oropharynx [82], etc.), gastrointestinal tract (stool, anal swab [83], etc.), even in the retina [84], olfactory mucosa [85] and brain [86] of COVID-19 patients. However, only a limited number of specimen types can be used for qRT-PCR detection. Based on the studies on qRT-PCR sensitivity and specificity varied in specimen type, nasopharyngeal (NP) swab has widely been used upper respiratory tract specimen, sputum is for the lower respiratory tract sampling [81,, 87,, 88], while oropharyngeal (OP) swab was not recommended due to its lower positive rate. 
However, the disadvantages of using NP swabs are the discomfort caused to the person being swabbed and the risk of complications, such as broken swabs or nasal bleeding [89]. Alternatively, a combined nasal/OP swab can be used to provide excellent sensitivity while relieving the stress of NP swab shortages [90]. Although recognized later, saliva, given the high sensitivity and specificity of saliva-based qRT-PCR (84.2-95.2% and 98.9%, respectively), has become an appealing noninvasive alternative to NP swabs because it is easy and painless for self-sampling, child-friendly, and safer for healthcare workers [91-95]. It was even proposed as the gold-standard sample for COVID-19 diagnosis [96] and was shown in practice to perform similarly to NP swab-based RT-PCR [97].
Even though qRT-PCR is usually considered the gold standard for COVID-19 diagnosis, it has shown several critical limitations in practice as a result of pre-analytical and analytical vulnerabilities, including erroneous sampling, low assay accuracy, unrecognized mutations, and an incomplete understanding of viral load kinetics [98,99]. In early reports, the immature development of NAAT technology for SARS-CoV-2 detection could be blamed for the moderate clinical sensitivity of qRT-PCR assays (71-82.2%) [100-103]. However, after more than a year of intensive development and optimization, the clinical performance of qRT-PCR has not markedly improved [104], especially for screening in population-based and hospital settings [105,106]. Modifications to the qRT-PCR procedure have been proposed and demonstrated to improve the overall capacity, reduce turn-around time and cost, or adapt the system to POCT settings, such as employing patient-collected swabs and saline gargles [107] or saliva [108,109], unextracted clinical samples [110-114], or portable miniature PCR workstations [115-117]. Noticeably, a novel approach to PCR, namely viability RT-qPCR, employed platinum chloride to treat NP swab samples and prevent the amplification of SARS-CoV-2 RNA in free form or from virions with damaged capsids, thus detecting only the RNA associated with intact virions, indicating infectivity [118]. This method is a suitable tool to ascertain infectiousness without the need to perform virus culture, to avoid false positives caused by contaminating RNA from the environment, and to identify noninfectious, prolonged RNA shedding from patients. Strategy-wise, pooling or group testing was also suggested as an attractive low-cost tactic to screen a large population with low virus prevalence, saving 69-93% of the cost without reducing the detection efficiency [119,120]. Alternatively, serial testing has been demonstrated to be effective in improving the clinical sensitivity of qRT-PCR to above 90% [58,100,103].
The remaining challenge to qRT-PCR is the limited capacity to precisely process a large number of samples simultaneously [98]. While automated, high-throughput qRT-PCR systems such as cobas 6800/8800 (Roche Molecular Systems, USA) [121], Alinity m2000 (Abbott Molecular, USA) [122], GeneXpert Xpress (Cepheid, USA) [123], etc., can partly resolve the analytical and capacity problems [124], they cannot help reduce human-related errors in the pre-analytical steps [98]. Also, most of these systems either require high investment or have low-throughput capacity. Therefore, further improvements are still needed to adapt PCR to the pandemic circumstances.
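The cost savings quoted for pooled (group) testing can be reproduced with the classic two-stage Dorfman scheme, in which one test is run per pool and only positive pools are retested individually. The sketch below assumes a perfectly accurate assay and independent infections, so it is an upper bound on the savings rather than a model of any cited protocol.

```python
def dorfman_expected_tests_per_person(prevalence: float, pool_size: int) -> float:
    """Expected number of tests per person under two-stage Dorfman pooling:
    one test per pool, plus individual retests for every positive pool.
    Assumes a perfectly sensitive/specific assay and independent infections."""
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    return 1 / pool_size + p_pool_positive

# Illustrative only: savings depend strongly on prevalence and pool size.
for prev in (0.001, 0.01, 0.05):
    best = min(range(2, 33), key=lambda n: dorfman_expected_tests_per_person(prev, n))
    e = dorfman_expected_tests_per_person(prev, best)
    print(f"prevalence {prev:.1%}: pool of {best} -> {e:.3f} tests/person "
          f"({1 - e:.0%} saved vs individual testing)")
```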
Isothermal nucleic acid amplification testing (iNAAT) methods iNAATs are alternatives to conventional PCR and are usually designed for POCT diagnosis, in which the nucleic acid amplification is performed at a constant temperature by avoiding thermal denaturing of the double-strand DNA (dsDNA) template ( Figure 1). Among the iNAAT methods, loop-mediated isothermal amplification (LAMP), developed by Notomi et al. (2000) [125], has been the most frequently used one. This method utilizes a DNA polymerase with strand displacement activity (usually Bst DNA polymerase) and 4-6 primers that recognize 6-8 distinct regions on the target DNA sequence. The whole process requires only incubation at 60-65°C for less than 1 hour, producing 10 9 copies of a target sequence. Amplified products can be conveniently visualized with various dyes, such as phenol red, hydroxy naphthol blue, leuco crystal violet (LCV), SyBr Green, or by coupling the reaction with an LFA strip. The addition of reverse transcriptase to the LAMP assay (RT-LAMP) allowed for the detection of viral RNA at LOD of 5-10 copies per reaction, even without RNA extraction [126]. Most reports achieved the clinical sensitivity and specificity of RT-LAMP within 75-100% and 98.7-100%, respectively, while the LOD ranged from 1 to 304 copies per reaction [127][128][129][130][131][132][133][134]. Another advantage that makes LAMP fit for POCT is the use of lyophilized reagents without sacrificing quality [135], which expands the kit shelf-life to years at 4°C or several weeks at room temperature [136,137]. Nevertheless, LAMP performance is heavily dependent on its custom design and might not yet be comparable to qRT-PCR in some cases, as it was reported to be reliable up to the viral load equivalent of Ct (cycle threshold) < 30 [131], which was in line with the observations from other groups [138,139]. Another iNAAT option for SARS-CoV-2 diagnosis is recombinase polymerase amplification (RPA), which was invented by Piepenburg et al. [140]. In this method, Bsu DNA polymerase I (large fragment) is used to extend the 3 termini of two oligonucleotides (primers). The strand hybridization of primer-ssDNA is mediated by a recombinase (T4 uvsX) and other proteins (T4 gp32, T4 uvsY). RPA normally operates at 37-42°C and takes only 10 minutes to complete the amplification. The fast process, the smaller number of primers required, the low working temperature, and the versatility of targeting multiple sequences simultaneously have made RPA an excellent alternative to PCR and LAMP. Based on RT-RPA, Xia et al. introduced an one-pot, 30-min WEPEAR (whole-course encapsulated procedure for exponential amplification from RNA) procedure for simultaneously detecting N and S genes of SARS-CoV-2 at the LOD of 1 RNA copy per reaction [141]. A clinical evaluation of RT-RPA for SARS-CoV-2 detection showed the sensitivity, specificity, and LOD of 98%, 100%, and 7.7 RNA copies/µl, respectively, which was comparable to qRT-PCR (5 copies/µl) [142]. Nicking enzyme-assisted amplification reaction (NEAR), or nicking enzyme-assisted amplification (NEAA) relies on a nicking endonuclease (NE, such as Nt.BstNBI, Nb.BtsI, and Nb.BsrDI), in addition to a strand-displacing DNA polymerase (Bst DNA polymerase) [143]. NEAR circumvents the need for a thermal denaturing dsDNA template by using NE to recognize a specific dsDNA sequence covered by the primer region and introduce a nick site on one strand, exposing its 3 end for elongation. 
A typical NEAR takes 15-30 minutes at 54-58°C to complete and is extremely efficient in target amplification. However, NEAR is not as popular as LAMP or RPA due to its tendency to produce nonspecific products [144]. Despite that disadvantage, NEAR was soon adopted into a commercial diagnostic tool, the ID Now™ system (Abbott, USA), in which various diseases can be detected within 5 minutes directly from clinical samples. Even though ID Now™ is widely used in the USA, contradicting evaluations of its performance for SARS-CoV-2 diagnosis have been reported. Most clinical reports showed 54.8-94% positive agreement between ID Now™ and qRT-PCR based platforms [145][146][147][148][149][150] and some performance may be caused by errors in specimen preparation or improper handling of the machine. In addition, Tu et al. reported that the high diagnostic value from this system can be achieved with symptomatic patients [151]. The diagnostic value of iNAATs is usually compared to qRT-PCR or other conventional diagnostic methods, thus it is difficult to justify the relative performance of iNAATs to each other. So far, only a few studies have directly compared iNAATs for detecting a specific target. Tran et al. found that RT-LAMP is superior to the other two iNAATs that utilize Bst DNA polymerase for detecting SARS-CoV-2, cross-priming amplification (CPA), and polymerase spiral reaction (PSR) with a 20-40 times lower LOD value [135]. Naveen and colleagues showed that the LOD of RT-LAMP was equal or one order of magnitude lower than that of RT-RPA in detecting two ginger-infecting viruses [152] and cardamom vein clearing virus [153]. These data support the conclusion that LAMP is currently the most suitable iNAAT for SARS-CoV-2 diagnosis. With the recent demonstrations of using alternative specimens, including saliva [154,155], and exhaled breath samples (by a face mask-based collector) [156], RT-LAMP has been transformed to adapt better to the POC diagnostic settings. CRISPR-based diagnostics CRISPR and CRISPR-associated proteins (Cas) systems are prokaryotic RNA-mediated immune systems that prevent bacteriophage infection and plasmid transfer [157][158][159]. CRISPR is divided into two classes, Class 1, which includes groups I, III, IV, and Class 2, which includes groups II, V, and VI, and further categorized into more than 30 subgroups [160]. In which, Cas9 (formerly Csn1) represents subgroup II-A and is the most widely used Cas nuclease for genome editing in a wide range of organisms and cell types [161][162][163][164]. The method has been used as an antimicrobial agent for the removal of bacterial pathogens [165][166][167] and viruses including HIV-1 [168,169], human papillomavirus [170], hepatitis B virus [170,171], and SARS-Cov-2 [172]. In Class 2, there is also a nuclease in the VA subgroup, Cas12a (formerly Cpf1) isolated from Francisella novicida. It has a different mechanism of action than Cas9, with the ability to use a single crRNA molecule to find the target sequence and cut the target sequence at two staggering sites. In addition, Cas12a also exhibits collateral nuclease/cleavage activity, which is capable of cutting nonspecific single-stranded DNA fragments immediately upon binding to the target sequence [173]. These features make Cas12a a more favorable tool for application in the specific detection of DNA/RNA sequences. 
Other types of Cas nucleases are also beginning to be exploited for nucleic acid detection purposes, including Cas13a (formerly known as C2c2, belonging to subgroup VI-A) and Cas13b (formerly known as C2c6, belonging to group VI-B1) [174]. With their ability to recognize RNA, Cas13a and Cas13b were used for the first time in RNA editing [175,176]. Using their nonspecific single-stranded RNA cleavage (RNase) activity, these two nucleases and Cas12a were used, respectively, in the nucleic acid detection kits Specific High-Sensitivity Enzymatic Reporter UnLOCKing (SHERLOCK) [177], SHERLOCK v2 [178], and DNA endonuclease-targeted CRISPR trans reporter (DETECTR) [173]. So far, SHERLOCK has been reported to detect different pathogens at ng or pg concentrations of DNA or RNA, such as Zika virus with a titer as low as 2.1 attomolar (aM) from clinical samples containing Escherichia coli and Pseudomonas aeruginosa [179]. Additionally, the CRISPR-Cas13a-based system was shown to identify single nucleotide polymorphisms in humans as well as to discriminate between antibiotic-resistant strains of Klebsiella pneumoniae with high specificity [179]. SHERLOCK v2 was developed for multiplex detection of nucleic acids in a single reaction chamber at target concentrations in the attomolar (aM) range. Integrating SHERLOCK v2 signal amplification with LFA, the SHERLOCK v2 paper-based test can detect as little as 2 aM of a nucleic acid target (acyltransferase gene) after 90 min, with less background and increased signal intensity [21]. The SHERLOCK system was further modified into miSHERLOCK (minimally instrumented SHERLOCK), a low-cost, self-contained POCT device that uses crude saliva samples and requires less than 1 hour of sample-to-answer time [180]. Despite the excellent efficiency, the number of reports available for CRISPR-based diagnostics (CRISPR-Dx) for viruses, bacteria, mutations, and SNPs is still limited [181]. Figure 2 depicts the workflow for real-time CRISPR-Cas13a-based detection of SARS-CoV-2 from clinical samples.
Broughton et al. developed a CRISPR-Cas12-based LFA for the detection of SARS-CoV-2 in less than 40 min [182]. In this work, they tested 36 patients with SARS-CoV-2 and 42 patients with other respiratory infections. It was found that the CRISPR-Cas12-based assay performed on par with the RT-PCR assay, as it reached 100% positive and negative predictive agreement. Similarly, Ding et al. developed an All-In-One Dual CRISPR-Cas12a (AIOD-CRISPR) assay for the detection of SARS-CoV-2 [182]. They targeted the nucleoprotein-encoding gene and found the results were consistent with the RT-PCR assay. This CRISPR-based tool was inexpensive to produce and required only 20 minutes of time-to-result using clinical samples [182]. In another attempt, Chen et al. coupled LAMP and CRISPR-Cas12a for the rapid diagnosis of SARS-CoV-2 [183].
Figure 2. CRISPR-Cas13a-based detection of SARS-CoV-2. Nasopharyngeal and oropharyngeal specimens are collected via sterile swabs. The collected sample is then diluted in an appropriate buffer, followed by a few heating steps. The heating steps release ssRNA from the virus and facilitate the deactivation of any nuclease present in the sample. Following the heating steps, the viral RNA is subjected to RT-RPA for the amplification of target sequences in the form of cDNAs, which are in turn transcribed by T7 RNA polymerase. The accumulated amplification products of the targeted RNA sequence are provided to the Cas13a-based detection assay. Cas13a recognizes the T7-transcribed RNA sequences if an appropriate guide RNA (gRNA) is present. This leads to the activation of Cas13a and its nonspecific RNase activity, resulting in the nonspecific cleavage of the fluorophore-ssRNA-quencher complex. The fluorescence emitted by the fluorophore can be quantified via spectroscopy, indicating the concentration of the ssRNA template. Alternatively, the cleaved reporter molecule can be detected via a paper LFA. The image was created with BioRender.com.
With the help of a smartphone and 3D-printed equipment, the virus could be detected by the naked eye, which is a great advantage for POCT. SARS-CoV-2 RNA was detected within 40 minutes, with a high sensitivity of 20 SARS-CoV-2 RNA copies per sample. Additionally, Huang et al. developed a CRISPR-Cas12a-gRNA complex and fluorescent probes to detect nucleic acid produced by RT-PCR or RT-RPA [184]. It was found that, with the aid of the CRISPR-Cas12 system, SARS-CoV-2 was detected within 50 minutes, with a LOD of 2 copies per nasal swab. More recently, Li et al. established a CRISPR-based LFA for POCT of SARS-CoV-2 that can detect 10-100 virus RNA copies/μL from clinical samples [185]. The system was further improved by developing easy-readout and sensitivity-enhanced (ERASE) strips to reach a LOD of 1 copy/μL. The method was then used for testing 649 clinical samples, achieving 90.67% positive predictive agreement and 99.21% negative predictive agreement. Similarly, Yang et al. coupled Cas13a with a universal, autonomous, enzyme-free hybridization chain reaction (HCR) by designing a cleavage reporter assay [186]. Once Cas13a found its target sequences, it triggered the downstream HCR circuits. They designed three guide RNAs (gRNAs) targeting the SARS-CoV-2 S, N, and Orf1ab genes and succeeded in detecting the target sequences within 1 h at attomolar-level sensitivity [186]. Even though CRISPR systems are mainly used for genome editing, this growing body of evidence has demonstrated their value in boosting the performance of iNAAT detection, making iNAATs more suitable for POC settings.
Microfluidic devices and biosensors for SARS-CoV-2 diagnostics
Microfluidics is an exponentially growing field of engineering and has shown a rather large number of applications in a wide range of areas, such as rapid diagnostics, biomedical therapy, organ culture, 3D culture, in vitro toxicity testing, nucleic acid extraction and amplification, drug delivery, single-cell analysis, and many more [171,187-192]. This technique is based on the precise manipulation of micro-scale fluids in micro-channels. It has been widely used and has shown a number of distinctive advantages, including rapid sample processing, assay controllability, portability, millimeter-scale design, multi-tasking capability, low-volume assays, and low cost, in comparison to other conventional platforms. In particular, microfluidic devices have demonstrated high practical and diagnostic value in the field of rapid, POC pathogen detection, such as assays targeting parasites and viruses [193-196]. Isolation of nucleic acids is a critical step in a NAAT/iNAAT workflow, but it can also be time-consuming, costly, and tedious. The product quality and efficiency of the isolation step can be inconsistent between batches or labs.
Therefore, as aforementioned, automatic, high-throughput extraction and detection devices can facilitate the whole diagnostic procedure, from obtaining the clinical sample to reading results. Brassard et al. designed a microfluidic device for the extraction of DNA from blood samples which helped reduce the time and chemical expense for the extraction [197]. Similarly, Geissler et al. established a microfluidic device for performing the whole process of bacteria identification for E. coli O157: H7 from cell lysis, multiplex PCR amplification, to on-chip hybridization with fluorescent gene markers [198]. More recently, Sullivan et al. [199] designed microfluidic devices for the purification of nucleic acids directly from blood samples using isotachophoresis (ITP), which was directly used for POCTs. Qiu et al. introduced a fully disposable heat capillary tube without an electric supply for DNA amplification, in which PCR reagents were repeatedly passed through different temperature zones [200]. The device allowed a single-step nucleic acid dipstick assay for visualizing DNA amplification by the naked eye. It achieved the sensitivity of 1.0 TCID 50 /mL for detecting H1N1 within 35 minutes and was suitable for instrument-free diagnosis in remote areas [200]. Under the burden of COVID-19 pandemic, the combination of microfluidics and the available diagnostic methods has provided timely upgrades to the available diagnostic procedures. A semiautomatic high-throughput microfluidic device was developed for measuring in parallel the anti-SARS-CoV-2 IgG/IgM levels in 50 serum samples and achieved a sensitivity of 95% with a specificity of 91% [201]. An Opto-microfluidic sensing platform based on gold nanospikes was developed for the detection of antibodies in 1 μL of human plasma within 30 minutes. This label-free platform reached a relatively low LOD of 0.08 ng/mL for serological testing of anti-SARS-CoV-2 antibodies presented in diluted blood plasma samples [202]. Another highly sensitive and specific portable microfluidic immunoassay system was engineered for on-site and simultaneous detection of IgG/ IgM/Antigen of SARS-CoV-2 within 15 minutes [203]. Lately, Ramachandran et al. designed an electric field-driven microfluidic device for CRISPR-based detection of SARS-CoV-2 within 35 minutes from contrived and clinical nasopharyngeal swab samples [204]. Besides microfluidic devices, an urgent need has been arisen for POC diagnosis of COVID-19 that has motivated the invention of a portable, low-cost biosensors, especially electrochemical immunosensors. Mavrikou et al. utilized membraneengineered mammalian cells electroinserted with SARS-CoV-2 Spike S1 antibody to detect the binding of SARS-CoV-2 onto the membrane via measuring changes in membrane potential [205]. The results were obtained within 3 minutes with 93% accuracy as compared to RT-PCR [205]. SARS-CoV-2 nucleocapsid protein (N) can be alternatively detected by its cognate antibody grafted on a gold-coated microcantilever surface at the LOD of 100 viral copies/mL or 0.71 ng/ml [206]. A portable, disposable electrochemical sensor made from molecularly imprinted polymers (MIPs) was capable of detecting SARS-CoV-2 N protein with a LOD of 15 fM [207]. 
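Many of the biosensor studies above report a limit of detection (LOD). One common way such numbers are obtained is from a linear calibration curve together with the 3-sigma-of-the-blank convention, sketched below with hypothetical data; the cited sensors may use different calibration models and criteria.

```python
import numpy as np

# Hypothetical calibration of an electrochemical immunosensor: analyte
# concentration (fM) versus sensor signal (arbitrary units). The numbers are
# illustrative, not taken from the cited sensors.
conc   = np.array([0.0, 10.0, 25.0, 50.0, 100.0, 200.0])
signal = np.array([0.52, 1.10, 1.98, 3.45, 6.60, 12.9])
blank_replicates = np.array([0.50, 0.55, 0.49, 0.53, 0.51, 0.54, 0.48])

# Linear calibration: signal = slope * conc + intercept
slope, intercept = np.polyfit(conc, signal, 1)

# 3-sigma estimate of the LOD: the smallest concentration whose signal exceeds
# the blank by three standard deviations of the blank measurements.
lod = 3 * blank_replicates.std(ddof=1) / slope
print(f"sensitivity (slope): {slope:.3f} units/fM, LOD ~ {lod:.2f} fM")
```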
Relying on changes in the volatile organic compounds (VOCs) in exhaled human breath to indicate SARS-CoV-2 presence, a portable electronic nose (GeNose C19) was fabricated with a metal oxide semiconductor gas sensor array and supported by machine learning models to detect SARS-CoV-2, reaching a sensitivity and specificity of 86-94% and 88-95%, respectively [208]. So far, graphene has been demonstrated to be an excellent material for developing biosensors owing to its high conductivity, stability, and specific surface area. However, due to its lack of reactive chemical groups, it is usually functionalized with nanoparticles. For example, to develop the LEAD (Low-cost Electrochemical Advanced Diagnostic) system, Lima and colleagues first treated a graphite pencil electrode (GPE) with glutaraldehyde solution, then coated the GPE with AuNPs functionalized with cys, and finally mixed it with a solution consisting of N-(3-dimethylaminopropyl)-N-ethylcarbodiimide hydrochloride (EDC), N-hydroxysuccinimide (NHS), and the human angiotensin-converting enzyme 2 (ACE2) receptor, enabling the immobilization of ACE2 on the GPE surface. This device can directly capture SARS-CoV-2 in clinical samples (saliva and NP swabs stored in VTM) and detect as little as 229 fg/mL of S protein by measuring the signal suppression of a redox probe, [Fe(CN)6]−3/−4, upon S protein-ACE2 binding [209]. It can be manufactured for only $1.5, requires 6.5 minutes of sample-to-answer time, and displays 100% sensitivity and specificity using saliva specimens [209]. Alternatively, Nguyen et al. functionalized graphene with 1-pyrenebutyric acid N-hydroxysuccinimide ester (PBASE) for immobilizing an anti-SARS-CoV-2 spike RBD antibody [210]. SARS-CoV-2 in artificial saliva, down to 3.75 fg/mL, was recognized upon binding to the immobilized antibody by observing changes in graphene's phononic response via Raman spectroscopy [210]. A crumpled graphene field-effect transistor (FET)-based biosensor immobilized with N- and S-protein antibodies was shown to detect N and S proteins at an extremely low LOD (1 aM), surpassing ELISA sensitivity [211]. Other than electrochemical immunosensors, a SARS-CoV-2 biosensor can be integrated with microfluidic devices and iNAAT technologies as well. A face-mask-integrated SARS-CoV-2 sensor was made to collect breath-borne viruses accumulating under the mask and detect their RNA by activating lyophilized Cas12a SHERLOCK reagents embedded on a paper-based microfluidic device [212]. This portable, personal testing device built on previous work on breath sampling technologies, paper-based biosensors, and LFA for visual monitoring of the results, allowing for a LOD of 500 IVT (in vitro transcribed) RNA copies per reaction [212]. Altogether, microfluidic devices and biosensors have shown great potential for adapting lab-based pathogen diagnostics to POC and low-cost settings while maintaining detection efficiency. However, most of these products and procedures have not been validated in large-scale clinical trials to confirm their practical use. Artificial intelligence-assisted diagnostics The ability to make fast and accurate decisions has been a vital factor affecting the capacity of COVID-19 diagnostic systems to cope with extremely high testing volumes. With the limited clinical sensitivity of qRT-PCR demonstrated at the initial stage of the pandemic, chest computed tomography (CT) and chest X-ray (CXR) were shown to efficiently support qRT-PCR and improve the overall accuracy. 
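Before moving on to imaging, it is worth noting how the fg/mL- and attomolar-level LODs quoted for these biosensors are usually obtained: not by observing single binding events, but from a calibration curve, commonly via the convention LOD = 3.3 x sigma_blank / slope. The sketch below applies that convention to synthetic calibration data; it is a generic recipe, not the exact procedure of any study cited above.

```python
# A common way biosensor LODs are estimated: fit a linear calibration curve
# (signal vs. concentration) and take LOD = 3.3 * sigma_blank / slope.
# The concentrations and signals below are synthetic, purely for illustration.
import numpy as np

conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])          # pg/mL, synthetic standards
signal = np.array([0.02, 0.11, 0.21, 0.40, 0.82, 1.59])  # arbitrary sensor units
blank_replicates = np.array([0.018, 0.022, 0.025, 0.016, 0.020])

slope, intercept = np.polyfit(conc, signal, 1)
sigma_blank = blank_replicates.std(ddof=1)

lod = 3.3 * sigma_blank / slope   # IUPAC-style estimate of the detection limit
loq = 10.0 * sigma_blank / slope  # quantification limit, same convention
print(f"slope={slope:.3f} signal/(pg/mL), LOD~{lod:.3f} pg/mL, LOQ~{loq:.3f} pg/mL")
```

Because the estimate scales with the blank noise and inversely with the slope, improving either the signal transduction (slope) or the blank reproducibility directly lowers the reported LOD.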
Compared to PCR, chest CT is easier to perform, faster, and more standardized and consistent, as most COVID-19 patients exhibit typical radiographic features, including ground-glass opacity (GGO), crazy-paving pattern, pleural effusions, and consolidation [213]. Moreover, a chest CT scan can be used to assess the severity of symptomatic patients [214]. However, chest CT has a major drawback of relatively low specificity (25-80%) [215], causing misinterpretation of infections caused by other pathogens, and thus cannot be used as ground truth. The use of artificial intelligence (AI) is a promising approach to solve this problem, reducing the workload for radiologists and improving the overall accuracy of radiography-based diagnosis. An online processing strategy was exploited by Saba's group by developing six models (two machine learning (ML) models, two transfer learning (TL) models, and two deep learning (DL) models) for classifying COVID-19 pneumonia (CoP) and non-COVID-19 pneumonia (NCoP). They demonstrated 74.58-99.63% accuracy and 0.74-0.99 AUCs (areas under the ROC curve) with less than 2 s of inference time [216]. Another online server can distinguish COVID-19 patients from bacterial pneumonia patients and healthy people with a recall (sensitivity) of 93% and a PPV of 86% while extracting main lesion features such as GGO to assist doctors' decisions [217]. Due to the limited number of annotated radiographs, transfer learning techniques have been used to accelerate training and allow deep CNN models to be trained with relatively small datasets [218,219]. Noticeably, Abbas et al. developed DeTraC (Decompose, Transfer, and Compose), a deep CNN architecture using transfer learning and class decomposition, to achieve a high accuracy and specificity of 98.23% and 96.34%, respectively, with an ImageNet pre-trained CNN model (VGG19) [220]. Transfer learning is extremely beneficial for training on small datasets, but once enough radiographs from positive cases can be collected, pre-training on ImageNet is no longer necessary. In order to develop an automatic COVID-19 prediction model, Chen et al. were able to prospectively collect 46,096 anonymous CT images of 106 COVID-19 inpatients for training using UNet++ [221]. The validation tests on an external dataset achieved a sensitivity and specificity of 98% and 94%, respectively, showing that the DL model performance was on par with expert radiologists and helped reduce the reading time of radiologists by 65% [221]. Shan et al. approached the limitations of the chest CT-based diagnosis procedure differently by building a DL-based automatic segmentation tool to quantify infection volume, dramatically reducing the image delineation time from 1-5 h to 4 minutes while achieving a 91.6% Dice coefficient against manual segmentation [222]. Other than medical computer vision, AI also provides an excellent tool for telediagnosis of COVID-19 via examining cough and breath sounds. An AI developed by Laguarta's group can identify asymptomatic COVID-19 patients with 100% sensitivity and 83.2% specificity [223]. Several crowdsourced annotated datasets of cough sounds are available to support research in this field, such as COUGHVID with over 25,000 recordings [224] and Coswara with recordings from 941 participants [225]. Even though the performance data encourage the use of AI in assisting COVID-19 diagnosis, considerable effort is still needed before it can be realized in clinical practice. 
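As a concrete illustration of the generic transfer-learning step mentioned above, the following minimal PyTorch sketch loads an ImageNet pre-trained VGG19, freezes the convolutional backbone, and trains only a new two-class head (e.g. CoP vs. NCoP). This is not the DeTraC architecture, which additionally performs class decomposition and composition; it only illustrates why a few hundred annotated radiographs can suffice when most parameters are reused.

```python
# Minimal transfer-learning sketch in PyTorch: an ImageNet pre-trained VGG19
# backbone is frozen and only a new 2-class head is trained. Generic
# illustration only; the random tensor below stands in for a CXR batch.
import torch
import torch.nn as nn
from torchvision import models

def build_covid_classifier(num_classes: int = 2) -> nn.Module:
    model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
    for param in model.features.parameters():   # freeze convolutional backbone
        param.requires_grad = False
    # Replace the final ImageNet layer (4096 -> 1000) with a small new head.
    model.classifier[6] = nn.Linear(4096, num_classes)
    return model

model = build_covid_classifier()
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch standing in for CXR images.
images, labels = torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 0, 1])
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy batch loss: {loss.item():.3f}")
```

Freezing the backbone keeps the number of trainable parameters small, which is the main reason such models can be fit without overfitting on the limited annotated radiograph sets described above.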
Gaining clinicians' trust is not just a matter of accuracy, especially in life-and-death decision-making. Therefore, diagnostic interpretation, engagement, and communication between AI and clinicians are crucial to developing practical workflows. Conclusions In response to the emergency of the COVID-19 pandemic, the US Food & Drug Administration (FDA) has used its Emergency Use Authorization (EUA) authority to allow the use of a medicine or testing device without all the evidence that is normally required. By July 23rd, 2021, the FDA had authorized 395 tests and sample collection kits for SARS-CoV-2 detection under EUAs [226]. Noticeably, 53 of these can be used with home-collected specimens and 11 were authorized for at-home testing [226], reflecting an unprecedented trend in the diagnostics market. BME has expanded beyond the borders of applied biological and medical sciences, drawing on interdisciplinary collaborations with mechanical, chemical, physical, and computer engineers, and shifting the focus of diagnostic research to POCT and personal testing solutions while upgrading available tools. In this review, we have summarized one and a half years of innovations in the field of BME research for COVID-19 diagnostics. While immunoassay-based and NAAT-based diagnostic tools have demonstrated their critical role in the quick response to the initial outbreak, the fast spread and persistence of SARS-CoV-2 have continuously forced researchers to seek more versatile (iNAATs), precise (CRISPR), high-throughput (deep learning), cost-saving and personalized (microfluidic devices and biosensors) solutions. Nevertheless, no single method is perfect for controlling the disease. Therefore, the development of each method needs to become more specialized while coordinating with the others, much like the layers of a Swiss cheese model. It is anticipated that in the near future, more and more technologies will reach maturity and become essential parts of the new normal in the era of COVID-19. While some BME technologies such as PCR and ELISA seem to have reached their peak of development, iNAATs and other POCT diagnostics will continue to benefit from interdisciplinary research, and they need to focus more on practical perspectives such as cost optimization, portability, versatility, and environmental friendliness. Beyond dealing with this pandemic, the achievements of BME in this field will provide powerful tools for ensuring health and wellbeing for all, a sustainable development goal established by the United Nations. Research highlights - COVID-19 is one of the most severe global health crises that humanity has ever faced - Biomedical engineering (BME) has been extensively and intensively applied to diagnose COVID-19 - Immunoassay-based and NAAT-based diagnostic tools play a critical role in the quick response to the initial outbreak - More versatile, precise, high-throughput, and cost-saving solutions are needed for later phases of the pandemic - None of the single methods is perfect for controlling the disease - More and more BME inventions need to be developed and will become essential parts of the new normal in the era of COVID-19
9,955.4
2021-01-01T00:00:00.000
[ "Engineering", "Medicine" ]
Flexible inorganic membranes used as a high thermal safety separator for the lithium-ion battery † A flexible SiO2 porous fiber membrane (SF) is prepared by electrospinning followed by calcination in this work. Compared with an organic substrate separator, the SF used as a separator provides an absolute guarantee of battery thermal safety. The porosity of the SF is 88.6%, which is more than twice that of a regular PP separator. The hydrophilic SF shows better electrolyte wetting ability, and its high porosity enables the SF to absorb 633% liquid electrolyte on average, while the lithium-ion conductivity reaches 1.53 mS cm−1. Linear sweep voltammetry testing of PP and SF suggested that SF, with great electrochemical stability, can meet the requirements of lithium-ion batteries. The cyclic and rate performances of batteries prepared with SF are improved significantly. Such advantages of the SF, together with its potential for mass production, make the SF a promising membrane for practical applications in secondary lithium-ion batteries. Introduction Lithium-ion batteries have been regarded as one of the major power sources for electric vehicles (EVs) and for the storage of new energy in a smart grid due to their high energy density and particularly stable cycle life.1-5 The separator is the core component of lithium-ion battery safety, as it prevents direct contact of the cathode and anode while sustaining the free transport of lithium ions in the liquid electrolyte.6,7 Polyolefin porous membranes, particularly polyethylene (PE) and polypropylene (PP) separators, became the most widely used products in the field of lithium batteries due to their high mechanical strengths and excellent chemical and electrochemical stabilities. However, a safety issue, caused by the thermal shrinkage of the polyolefins at higher temperatures, has limited their further application in energy storage systems, especially in hybrid electric vehicle (HEV) areas.8,9 To improve the thermal stability of the separator for the safety of lithium-ion batteries, the polyolefin membranes were modified 10-13 and many new types of separators 14-18 were designed. Among them, the purely inorganic separators prepared with the sintering method could have the advantages of "absolute" thermal stability, strong electrolyte absorption, and no dendrite puncturing problems.19,20 However, their poor flexibility for cell winding assembly limits their practical application. 
22-25 With high thermal, chemical, and electrochemical stabilities and uniform micro-nano pores, they were expected to be potential candidates as separators for high-safety batteries. In this work, polyvinyl alcohol (PVA) was used as the template polymer and tetraethyl orthosilicate (TEOS) was used as the source of SiO2; the PVA/TEOS precursor hybrid fiber membrane was prepared by the electrospinning method. The PVA was removed via a high-temperature sintering process and the basic inorganic microstructure of the precursor hybrid fiber membrane was retained, leading to flexible SiO2 inorganic porous fiber membrane (SF) materials. Such SF used as a separator can guarantee the thermal safety of the batteries. The high porosity of SF obtained via the electrospinning technique allows it to absorb more liquid electrolyte, which further leads to high conductivities, thus improving the cyclic and rate performances of batteries. We believe that these advantages of the SF, together with its potential for mass production, make the SF a promising candidate for practical applications in secondary lithium batteries. The materials used included polyvinyl alcohol (PVA, Aladdin Shanghai), tetraethyl orthosilicate (TEOS, Aladdin Shanghai) and deionized water. All materials were purchased commercially and used without further purification. Preparation of precursor solutions A PVA solution was prepared at a mass fraction of 10% by dissolving PVA in deionized water at 60 °C with vigorous stirring for 5 h. Silica gel was prepared by hydrolysis and polycondensation via the dropwise addition of H3PO4 into TEOS, followed by stirring at room temperature for 4 h. The mass ratio of TEOS : H2O : H3PO4 was 1 : 1 : 0.02. Then, 10 g of PVA solution was dropped slowly into an equivalent weight of silica sol-gel at room temperature and stirred for another 5 h. Thus, the viscous precursor solution used for the electrospinning was obtained. Electrospinning and fabrication of SF The as-prepared gel was loaded into a syringe and the needle of the syringe was connected to a direct current (DC) high-voltage power source. The grounded stainless steel drum, wrapped with aluminum foil and rotated at 40 rpm, was placed 17 cm from the tip of the needle as the collector. The feeding rate of the precursor solution by the syringe pump was 1 mL h−1. The fabrication chamber was kept at a constant temperature (25 °C) and relative humidity (45%). An electric potential of 18 kV between the needle and the stainless steel drum was applied by the high-voltage power source, and the hybrid nanofibrous membranes were obtained on the aluminum foil. The hybrid nanofibrous membranes were heated at a rate of 2 °C min−1 in air until reaching 600 °C and maintained at that temperature for 2 h to remove the organic PVA constituent; the SF was obtained after cooling down to room temperature. Electrode preparation and cell assembly Coin cells were prepared for battery performance tests. A slurry containing 5 wt% acetylene black (Super P), 5 wt% polyvinylidene fluoride (PVDF) and 90 wt% LiMn2O4 (Qingdao Xinzheng Material Co., Ltd, China) in N-methyl pyrrolidone (NMP) was prepared for the cathode of the cells. The PP separator (Nantong Tianfeng New Electronic Materials Co., Ltd.) 
and SF were used as separators for preparing the batteries. Batteries, after being injected with an equivalent weight of electrolyte (1 mol L−1 LiPF6 dissolved in a mixed solution of dimethyl carbonate (DMC), ethylene carbonate (EC) and diethyl carbonate (DEC) with a volume ratio of 1 : 1 : 1), were assembled under argon gas within a glove box (Mbraun, Germany). Characterization of the separators The surface morphologies of the pure PP separator and SF were investigated by scanning electron microscopy (Phenom Pro, Netherlands) at an acceleration voltage of 10 kV. A Bruker AVANCE III spectrometer was used for the solid-state NMR test. An Instron 3300 with a speed of 5 mm min−1 was used for the mechanical strength test. The porosity of the SF and PP separator can be calculated with the n-butanol uptake method by the following equation: Porosity (%) = (M_BuOH/ρ_BuOH) / (M_BuOH/ρ_BuOH + M_m/ρ_P) × 100, where ρ_BuOH and ρ_P represent the densities of n-butanol and the polymer, while M_BuOH and M_m represent the mass of absorbed n-butanol and the mass of the membrane, respectively. A commercial drop shape analysis system (Powereach JC2000C1, Shanghai Zhongchen Digital Technique Equipment Co. Ltd., China) was used for the static contact angle test of the PP separator and SF. The electrolyte uptake of the membranes was calculated by the following equation: Uptake (%) = (W − W0)/W0 × 100, where W0 and W are the weights of the membranes before and after absorbing the liquid electrolyte, respectively (a short numerical sketch of these formulas is given at the end of this article). The linear sweep voltammograms (LSV) of Li/separator/stainless steel (SS) cells were measured at a scan rate of 5 mV s−1 in order to test the electrochemical stability of the PP and SF separators. The ionic conductivities of the PP separator and SF, soaked in liquid electrolyte and sandwiched between two stainless steel electrodes, were investigated by an electrochemical workstation (Solartron, SI-1260, England) in the frequency range of 1 kHz to 100 kHz. Cells with the PP separator and SF were prepared to investigate the cyclic and rate performance using electrochemical test equipment (LAND-V34, Land Electronic, China). To study the cyclic performance of the batteries, the cells were charged to 4.2 V and discharged to 3 V at 1.0C, and the rate performance tests were carried out at current rates of 0.5C, 1.0C, 2.0C, 5.0C, 10.0C and 0.5C. Results and discussion The SEM images of the precursor hybrid fiber membrane and the distributions of fiber diameter and pore size are shown in Fig. 1. It can be seen from the figure that the surface fibers of the hybrid fiber membrane prepared by the electrospinning process are smooth and straight. The fiber diameter distribution chart (Fig. 1b) shows that the fiber diameters are concentrated in a narrow range of 130-200 nm, manifesting the good fineness of the fibers. The pore size of the hybrid fiber membrane is formed by the random layout of the fibers. After disregarding the incomplete pores at the edges of the image, the pore size distribution of the hybrid fiber membranes was calculated (as shown in Fig. 1c and d). The pore size is concentrated at 0.05 μm²; despite this, there still exist pores with sizes larger than 3.00 μm². The PVA component in the precursor hybrid fiber membrane was removed via a high-temperature sintering process to prepare the SF membrane. The SEM image of the SF and the distributions of the fiber diameter and pore size are shown in Fig. 2. It can be seen from Fig. 
2a that the SF prepared by the sintering process retains the microstructure of the hybrid fiber membrane fairly well, with a smooth fiber surface and uniform fiber fineness. Compared with the hybrid fiber membrane, the fibers of SF are slightly curved. According to the fiber diameter chart (Fig. 2b), the diameters of the inorganic fibers are mainly between 150-300 nm. The average diameter of the inorganic fibers increased after the sintering process, suggesting that the fiber itself undergoes physical changes such as nano-void formation during the sintering process, rather than the theoretically expected reduction in size due to the loss of the PVA fraction of the fiber mass. Fig. 2c and d also show that the pore size of the SF is concentrated at 0.04 μm², with more pores having a size between 0.04 μm² and 0.30 μm² and fewer pores having a size above 1.5 μm². The highly localized distribution of pore sizes indicates more uniformity in the pore sizes, which accompanies the physical changes of the fibers. Pore size uniformity is desirable and contributes to the uniformity of the current in the lithium-ion battery, improving the performance of the lithium-ion battery. As shown in Fig. S1,† the pure PP separator shows an interconnected submicron porous structure, which is a typical morphology from the dry process. In comparison, the pores of SF, formed by the direct accumulation of fibers, are large, which, together with the individual fiber characteristics, may be more conducive to absorbing liquid electrolyte and to the rapid conduction of Li+.26,27 The NMR spectra of the SF and precursor hybrid fiber are shown in Fig. 3. It can be seen from Fig. 3 that the precursor hybrid fiber possesses characteristic peaks of silicate species and silica. This shows that during the preparation of the hybrid fiber membranes, TEOS had been partially hydrolyzed to silicic acid and silica, and the concentration of silicate was higher than that of silica.28 The NMR spectrum of SF was consistent with that of silica, indicating that TEOS completely hydrolyzed to SiO2 after being calcined at high temperature, and finally the pure inorganic SiO2 nanofiber membrane was obtained. We also tested the mechanical strength of SF. The relevant data are in Fig. S2.† The uniaxial stretching process used to prepare the PP separator results in a significant mechanical strength decrease in this direction. Experiments show that even compared with the uniaxial stretching direction of the PP separator (7.5 MPa), the tensile strength of SF (3.5 MPa) was still much lower. Obviously, the tensile strength of SF cannot fully meet practical application needs, and we are currently looking for ways to improve its mechanical strength. Good separator flexibility is critical in battery manufacturing, which requires flexibility for assembly processes in both pouch and cylindrical cell configurations.29 As shown in Fig. 4, the SF is coiled around a glass rod to illustrate its good flexibility, and it can also recover its smooth morphology after winding. Fig. 3 The 29Si NMR test of the precursor hybrid fiber membrane and SF. Fig. 4 Flexibility test of the SF separator before and after winding. The SF obtained after the high-temperature sintering process has the advantage of "absolute" thermal stability. To test the thermal stability of the separator under more severe conditions, the separators were ignited by the direct flame of a lighter and the results are shown in Fig. 
5. The PP separator burned up immediately and was completely decomposed in just 1 s. The morphology and flexibility of SF showed negligible changes after 1 min. This flame-resistant property of the SF is very useful for improving the thermal stability of lithium-ion batteries. A separator with good wettability toward the liquid electrolyte can effectively sustain the electrolyte solution and thus facilitate lithium-ion transport between the cathode and the anode.30,31 Therefore, the wettability of the separator with the liquid electrolyte plays an important role in battery performance. The time dependence of the contact angle between the electrolyte and the separator surface reflects the wettability of the separator, which was measured and is shown in Fig. 6. A high contact angle implies low wettability. Following the conventional procedure, a drop of the electrolyte was deposited on the separator surface, and the time-dependent contact angle was measured. As shown in Fig. 6, the contact angle for the PP separator was 70.0° at 1 s and decreased to 55° at 10 s. In contrast, the SF has a contact angle as small as 32° at 1 s, which further decreased to below 10° at 10 s, indicating a significant improvement in the wettability of the SF with the electrolyte. A separator with high porosity and high liquid electrolyte uptake can retain more electrolyte and improve ion transport. The porosity, electrolyte uptake, and ionic conductivity of the PP and SF separators were measured and are summarized in Table 1. The porosities of PP and SF are 43.6% and 88.6%, respectively. The porosity of SF is nearly twice that of the PP. The electrolyte uptakes of the PP and SF membranes are 62.9% and 633%, respectively. The electrolyte uptake of SF is more than 10 times that of the PP. SF, having less weight and higher porosity, shows a significant increase in liquid electrolyte uptake. The ionic conductivities of the PP separator and SF after soaking in the electrolyte solution are 0.80 mS cm−1 and 1.58 mS cm−1, respectively. The ionic conductivity of SF is nearly double that of the PP. The high porosity and high uptake of liquid electrolyte of SF are beneficial to increasing the ionic conductivity. High electrolyte uptake and ionic conductivity also help improve the cyclic and rate performance of lithium-ion batteries. To achieve practical application as a lithium-ion battery separator, it is important to investigate the electrochemical stability; as the LSV results in Fig. 7 indicate, the SF can meet the requirements of lithium-ion batteries in this respect. Cells were charged and discharged for 500 cycles from 3.0 to 4.2 V at a 1C rate for the cyclic performance test. Battery performances with the PP and SF separators are shown in Fig. 8 and 9. As can be seen in Fig. 8, the discharge curves of the traditional LiMn2O4 batteries with PP and SF are almost the same, meaning there is no side effect from the use of the SF membrane. Upon cycling, the LiMn2O4 cathode with the PP separator and the SF separator exhibited a high specific capacity of 110 mAh g−1 at the first cycle and remained stable over 500 cycles. The battery retained 87.5% of its capacity with the SF separator and 80.5% with the PP separator after 500 cycles. The improvement in cyclic performance of the battery with the SF separator was associated with the high electrolyte uptake and better wettability of the separator with the electrolyte. Batteries were charged and discharged at rates of 0.5C, 1.0C, 2.0C, 5.0C, 10.0C and 0.5C for the rate performance test. As Fig. 
10 shows, the discharge capacities of the batteries with the PP separator and the SF separator decreased gradually with increasing rate. The capacities of the batteries displayed no big difference at low rates, and the batteries prepared with SF showed higher capacity at high rates. The improvement in the rate performance of the battery was associated with the high electrolyte uptake and ionic conductivity of the SF separator. Conclusions The SF can maintain the stability of its nano-porous structure even when it is ignited. The as-prepared SF used as a separator can significantly enhance the thermal safety of the batteries, which is critical for the development of electric vehicles and energy storage systems. The porosity, electrolyte uptake and lithium-ion conductivity of the SF are 88.6%, 633% and 1.53 mS cm−1, respectively, which are much higher than those of the PP separator, resulting in improved cyclic and rate performances of batteries. Such functional advantages, together with the potential for mass production, make SF suitable for practical applications in secondary lithium batteries. Fig. 1 (a) SEM image of the precursor hybrid fiber membranes, (b) the fiber diameter distribution chart of the precursor hybrid fiber membranes, (c) the area statistics of the precursor hybrid fiber membrane pore size distribution, (d) the pore size distribution chart of the precursor hybrid fiber membranes. Fig. 2 (a) SEM image of the SF, (b) the fiber diameter distribution chart of the SF, (c) the area statistics of the SF pore size distribution, (d) the pore size distribution chart of the SF. Fig. 7 The linear sweep voltammograms of PP and SF with Li and stainless steel electrodes from 0 V to 6 V. Fig. 6 (a) Contact angle for the PP separator at 1 s, (b) contact angle for the SF separator at 1 s, (c) contact angle of the separators changing with time. Fig. 5 The thermal stability test of (a) the PP separator at 1 second and (b) the SF separator at 1 second and 1 minute. Fig. 9 The cyclic performance of the batteries with the PP separator and the SF separator. Fig. 10 The rate performance of the batteries with the PP separator and the SF separator. Table 1 The electrolyte uptake and ionic conductivity of the separators
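The Characterization section above defines porosity via n-butanol uptake and electrolyte uptake gravimetrically, and Table 1 compares the resulting values for PP and SF. The sketch below restates those formulas as code, together with the standard conversion of an EIS bulk resistance to ionic conductivity (sigma = d / (R_b x A)), which the text uses implicitly but does not spell out. All numerical inputs are invented placeholders, not the authors' measurements.

```python
# The gravimetric formulas from the Characterization section, restated as code.
# All measured values below (masses, thickness, resistance) are invented
# placeholders; only the formulas mirror the ones described in the text.

def porosity_n_butanol(m_buoh, rho_buoh, m_membrane, rho_skeleton):
    """Porosity (%) from n-butanol uptake."""
    pore_volume = m_buoh / rho_buoh
    skeleton_volume = m_membrane / rho_skeleton
    return 100.0 * pore_volume / (pore_volume + skeleton_volume)

def electrolyte_uptake(w_dry, w_wet):
    """Electrolyte uptake (%) relative to the dry membrane weight."""
    return 100.0 * (w_wet - w_dry) / w_dry

def ionic_conductivity(thickness_cm, bulk_resistance_ohm, area_cm2):
    """Ionic conductivity in mS/cm from the EIS bulk resistance (sigma = d / (R_b * A))."""
    return 1000.0 * thickness_cm / (bulk_resistance_ohm * area_cm2)

# Placeholder inputs (not the authors' raw data):
print(f"porosity : {porosity_n_butanol(0.40, 0.81, 0.05, 0.95):.1f} %")
print(f"uptake   : {electrolyte_uptake(w_dry=0.050, w_wet=0.360):.0f} %")
print(f"sigma    : {ionic_conductivity(30e-4, 1.2, 2.0):.2f} mS/cm")
```

The point of the conductivity formula is that, for a fixed cell geometry, a lower bulk resistance measured by impedance spectroscopy translates directly into the higher mS/cm values reported for SF in Table 1.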
4,044.6
2018-01-19T00:00:00.000
[ "Materials Science", "Engineering" ]
Reply to: Animal magnetic sensitivity and magnetic displacement experiments The studies cited in their matters arising article, however, have enormous sample sizes (n = 2996 and n = 17,799), and hence the effects reported might reflect shifts in the mean position of a tight but long-tailed distribution. This way, even a 0.01-degree shift in inclination might be detected hundreds of kilometres away, since the long-tailed distribution's movement might be minimal (since sensitivity is low), yet a statistical trend might be observed given the large sample size (see Fig. 1). Secondly, sensitivity assessed via philopatric accuracy might differ markedly from the actual physical sensitivity of a system. In much the same way that the winning guess of 'number of jelly beans in the jar' is often astonishingly accurate (despite each participant's ability to count beans being relatively poor), repeated estimations of position using a noisy magnetoreceptor can give the impression of accuracy far beyond that predicted by its physical sensitivity. Indeed, a simple simulation using normally distributed noise suggests that a sensor with a sensitivity of 0.6° can achieve a functional sensitivity of 0.05° by measuring the magnetic field instantaneously once a day for 100 days (an increase in accuracy of more than an order of magnitude; see Fig. 1). This could explain the magnetic imprinting observed in birds, where a protracted post-fledging period is almost guaranteed. Such an ability would not, however, explain responses to virtual displacement; these experiments lack the repeated sampling required to improve sensitivity using central tendency. Thirdly, during a virtual magnetic displacement, the subject animal experiences a constant magnetic field. Importantly, there is no opportunity for the animal to move and experience alterations in the field that may allow it to take multiple readings and gain an impression of gradients of change and, therefore, discriminate between minimal signals. If one imagines a test in which someone puts their hand into a bath of warm water for several minutes, removes it, waits several hours, then puts the hand into a new bath of water with a difference of 1 °C, and then determines which bath was hotter, it would be an extremely difficult test. If, instead, there was a single bath of water with one end 1 °C hotter than the other, then by moving the hand back and forth in the bath it would be relatively easy to determine which is the hotter end. In both scenarios the sensitivity of the hand is the same, but the test in the first is far more challenging. The first (difficult) scenario is comparable to the opportunity provided to an animal when it has been virtually magnetically displaced and is required to determine the parameters of the field. In contrast, the second scenario allows for the possibility of sensing gradients of change. All of the citations that Lohmann et al. provide as evidence for higher sensitivity values than those we used are relevant only to animals that were allowed to experience gradients of change of magnetic fields [1-10]. Therefore, they are not relevant to how sensitive an animal may be to a virtual magnetic displacement. 
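The "simple simulation using normally distributed noise" mentioned above can be reproduced in a few lines: averaging n independent noisy readings shrinks the error roughly as sigma divided by the square root of n, so 100 daily readings from a 0.6-degree sensor give a functional error near 0.06 degrees. The sketch below is a minimal re-implementation of that idea, not the authors' code, and unlike their Fig. 1e it ignores local secular variation.

```python
# Minimal re-implementation of the repeated-sampling idea described above: a
# magnetoreceptor with 0.6 deg single-reading noise, sampled once a day for
# 100 days, yields a mean estimate with ~0.06 deg spread (sigma / sqrt(n)).
# Not the authors' code; local secular variation is deliberately omitted.
import numpy as np

rng = np.random.default_rng(0)

SENSOR_SIGMA_DEG = 0.6   # single-reading sensitivity stated in the text
TRUE_INCLINATION = 60.0  # arbitrary "natal" inclination in degrees
N_DAYS = 100
N_SIMULATIONS = 1000

daily_readings = TRUE_INCLINATION + rng.normal(
    0.0, SENSOR_SIGMA_DEG, size=(N_SIMULATIONS, N_DAYS)
)
estimates = daily_readings.mean(axis=1)    # one averaged estimate per simulated animal
functional_error = estimates.std(ddof=1)   # spread of the averaged estimates

print(f"single-reading noise : {SENSOR_SIGMA_DEG:.2f} deg")
print(f"functional error after {N_DAYS} days: {functional_error:.3f} deg "
      f"(theory: {SENSOR_SIGMA_DEG / np.sqrt(N_DAYS):.3f} deg)")
```

The slight difference from the 0.05° quoted in the text is expected, since the published simulation also folds in observatory-derived secular variation rather than pure sensor noise.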
It is, then, unclear from the literature what our expectation of sensitivity to magnetic displacement should be for any taxon. This, in turn, increases the utility of the tool presented in our original manuscript since, given this ambiguity, it is perhaps better to be safe than sorry. Nonetheless, whilst we believe that the parameters presented in our original manuscript represent a 'best case' for magnetic sensitivity (it is hard to imagine, based on other sensory inputs, a sub-1-degree sensitivity, or sensitivity below the daily secular variation of the geomagnetic field), it is perhaps worth considering how virtual displacement might function under an assumption of extremely high sensitivity (far higher than we believe there is evidence for). To that end, our tool demonstrates that even in such instances many possible locations can still exist, spanning a very large geographic area. For example, Fig. 2 shows that even if animals are sensitive to 0.05° of inclination and 20 nT of intensity, there are locations spanning the width of the mid-Atlantic Ocean (locations, for example, that are highly relevant to the movements of loggerhead turtles), and in the Gulf of Mexico and Western Pacific Ocean, that all have the same magnetic values (red dots in Fig. 2). We might conclude, therefore, that (a) an assessment of absolute magnetoreceptive sensitivity requires further research going forward using a different paradigm to behavioural responses to changes in the magnetic field, and (b) that irrespective of sensitivity, extreme care must be taken when conceptualising and interpreting behavioural responses to virtual displacement. Fig. 1 | Magnetic secular variation and philopatry. a-d A schematic demonstrating how slight magnetic secular variation might cause changes in distribution even in animals with low-resolution magnetoreceptors. In Case a, the magnetic field has shifted left slightly, causing a drop in recovery probability at the starred location even though most animals return to the natal site (owing to the tight distribution caused by low magnetic sensitivity). Case b is intermediate, whilst Case c shows the opposite phenomenon; the field has shifted right, and a response can be recorded at the starred site even though the sensitivity of any given animal is very low. e A simulation showing how even noisy sensors can achieve remarkably accurate philopatry (<0.05 degrees) through repeated sampling of the magnetic field. Each point represents 100 simulations where the magnetic field is sampled n times with both sensor error and local secular variation. Whiskers represent standard deviation. Magnetic data are derived from the Hartland Magnetic Observatory (UK). Fig. 2 | Multiple possible virtual displacement locations with high sensitivity assumed. Red dots indicate all the locations that share the same magnetic parameters (magnetic total intensity = 45083 nT, magnetic inclination = 56.42 degrees) with a sensitivity assumption of ±20 nT for magnetic intensity and ±0.05 degrees for magnetic inclination. Magnetic values obtained from the IGRF for May 2020.
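Figure 2 is essentially a grid search: every location whose total intensity and inclination fall within the ±20 nT and ±0.05° tolerances of the target values is flagged. The sketch below shows that logic; because the reply does not name its IGRF tooling, a centred-dipole approximation stands in for the real field model so the example runs on its own, and the 35° "capture site" is arbitrary.

```python
# Sketch of the grid search behind a map like Fig. 2: flag every grid cell whose
# field intensity and inclination both fall within the stated tolerances of the
# target values. A centred-dipole approximation stands in for the IGRF here so
# the sketch runs on its own; the reply itself uses real IGRF values (May 2020).
import numpy as np

B0_NT = 30000.0  # approximate equatorial surface field of a centred dipole

def dipole_field(lat_deg):
    """(total intensity nT, inclination deg) for a centred dipole; crude IGRF stand-in."""
    lam = np.radians(lat_deg)  # treat geographic latitude as magnetic latitude
    intensity = B0_NT * np.sqrt(1.0 + 3.0 * np.sin(lam) ** 2)
    inclination = np.degrees(np.arctan(2.0 * np.tan(lam)))
    return intensity, inclination

def matching_locations(target_f, target_inc, tol_f=20.0, tol_inc=0.05, step=0.25):
    matches = []
    for lat in np.arange(-70.0, 70.0 + step, step):
        f_nt, inc = dipole_field(lat)
        if abs(f_nt - target_f) <= tol_f and abs(inc - target_inc) <= tol_inc:
            # In a dipole model every longitude on this latitude band matches;
            # with the real, non-dipolar IGRF the matches break into patches.
            matches.extend((lat, lon) for lon in np.arange(-180.0, 180.0, step))
    return matches

target_f, target_inc = dipole_field(35.0)  # pretend the target site sits at 35 N
print(f"{len(matching_locations(target_f, target_inc))} grid cells share these values")
```

With a realistic, non-dipolar field model the matching cells form the scattered patches shown in Fig. 2 rather than a continuous latitude band, which is exactly why virtual displacement can be ambiguous even under very generous sensitivity assumptions.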
1,352.4
2024-05-27T00:00:00.000
[ "Biology", "Physics" ]
Editorial: Psychological Models for Personalized Human-Computer Interaction (HCI) The behavior of users in the digital world, such as online shopping or social media activity, is increasingly supported by personalized systems, such as recommender systems (Ricci et al., 2015) and personalized learning. Early work on personalized systems was mainly data-driven, based on behavioral data, such as ratings, likes, and purchases (e.g., Bell et al., 2007). Although these systems are useful for both users and service providers, the main downside is the limited interpretability and explainability of the data. Such limitations in both interpretability and explainability translate in using data without understanding the root-cause of behaviors. Recent work has thus started to adopt a more theory-driven approach by including psychological theories and models to improve personalized systems (see for an overview; Graus and Ferwerda, 2019). These systems take advantage of psychological theories/models, such as emotions (Tkalčič et al., 2013b; Tkalčič and Ferwerda, 2018), personality (Ferwerda et al., 2017; Wu et al., 2018), skills (Ferwerda and Graus, 2018), and culture (Schedl et al., 2017) to explain and predict behaviors of users. This allows for a deeper understanding of users’ behavior, preferences, and needs, which in turn also lead to more generalizable results. Moreover, digital behavior has also been used to infer user traits and characteristics. For example, social media activities have been used to predict personality traits (Skowron et al., 2016) and intelligence, whereas the field of affective computing has been active in devising methodologies for inferring emotional states from digital signals (Tkalčič et al., 2013a). INTRODUCTION The behavior of users in the digital world, such as online shopping or social media activity, is increasingly supported by personalized systems, such as recommender systems (Ricci et al., 2015) and personalized learning. Early work on personalized systems was mainly data-driven, based on behavioral data, such as ratings, likes, and purchases (e.g., Bell et al., 2007). Although these systems are useful for both users and service providers, the main downside is the limited interpretability and explainability of the data. Such limitations in both interpretability and explainability translate in using data without understanding the root-cause of behaviors. Recent work has thus started to adopt a more theory-driven approach by including psychological theories and models to improve personalized systems (see for an overview; Graus and Ferwerda, 2019). These systems take advantage of psychological theories/models, such as emotions (Tkalčič et al., 2013b;Tkalčič and Ferwerda, 2018), personality Wu et al., 2018), skills (Ferwerda and Graus, 2018), and culture to explain and predict behaviors of users. This allows for a deeper understanding of users' behavior, preferences, and needs, which in turn also lead to more generalizable results. Moreover, digital behavior has also been used to infer user traits and characteristics. For example, social media activities have been used to predict personality traits (Skowron et al., 2016) and intelligence, whereas the field of affective computing has been active in devising methodologies for inferring emotional states from digital signals (Tkalčič et al., 2013a). 
RESEARCH TOPIC CONTENT In view of this situation, this Research Topic aimed at collecting state-of-the-art research that supports personalized services with psychological theories/models. In particular, we encouraged the authors to submit original research articles, case studies, reviews, theoretical and critical perspectives, and viewpoint articles on the following topics: (i) Psychological theories/models that explain online behavior (e.g., personality, emotions, cognitive biases and illusions, learning styles, emotional contagion in group settings), (ii) Psychological theories/models to personalize digital interactions (e.g., in user interfaces, recommendations, social robots and chat-bots, e-learning), and (iii) Prediction of psychological models drawing data from digital behavior information resources (e.g., social media, e-commerce, physical activities, online learning, group scenarios). Within this collection we accepted 13 works. In total there were 11 original research articles, one brief research report and one perspective article. The authors' affiliation countries were diverse, including Europe (Germany, Italy, Poland, Austria, Norway, and Sweden), North America (USA and Canada), and Asia (Pakistan, Japan, Malaysia, China, and South Korea). The accepted works are summarized in what follows. In his work, Pan explores how technology acceptance and self-efficacy contribute to the attitude toward technology-based self-directed learning. His results indicate a strong relationship between these factors. Sessa et al. explore how attachment style influences reactions to displeasing messages. Their results indicate that the communication styles of frankness and mitigation are related to attachment styles. The psychological acceptability of utterances was shown to be influenced by social distance in the study conducted by Miyamoto et al. The study conducted by Abbasi et al. investigated the relationship between personality and video game engagement. The results they obtained suggest that openness to experience, extraversion, agreeableness, and conscientiousness positively predict consumer engagement in electronic sports games. Xu and Ye aimed at understanding the personality traits and the motivations of active live streaming viewers as well as their user behaviors in the general population in China. Their results indicate that extraversion was negatively associated with live streaming use, while openness was positively associated. The emotion of schadenfreude, pleasure at another's misfortune, was investigated by Cecconi et al., who found that, in a corpus of social media posts in Italian, a set of hashtags (e.g., #Glistabene, #Benglista, meaning "he deserves it") are strong predictors of schadenfreude. Schürmann and Beckerle propose a framework for designing cognitive models for a given research question. The framework consists of five external and internal aspects related to the modeling process: research question, level of analysis, modeling paradigms, computational properties, and iterative model development. Steichen and Fu found that a user's cognitive style can be inferred from the user's eye gaze while using an information visualization system. Neumayr and Augstein present a systematic survey of personalized collaborative systems. Nordmo et al. investigated the intimate relationship between humans and robots. They found that females expect to feel more jealousy if their partner got a sex robot rather than a platonic love robot. Hulaj et al. 
carried out a study investigating factors that influence performance in video games in terms of matchmaking rating (MMR). They found that perceived competence and autonomy were the only significant predictors of MMR performance beyond matches played. Huifeng and Ha investigated what influences the termination of a customer relationship and found several factors: upkeep, time, benefits, personal loss, and motivation. Research on the relationship between psychopathological personality traits and online hate behavior was conducted by Sorokowski et al. Their results show that high scores on the Psychopathy subscale are significant predictors of posting hateful comments online. AUTHOR CONTRIBUTIONS BF, LC, and MT wrote sections of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.
1,534.4
2021-04-06T00:00:00.000
[ "Psychology", "Computer Science" ]
Endpoint estimates for the maximal function over prime numbers Given an ergodic dynamical system $(X, \mathcal{B}, \mu, T)$, we prove that for each function $f$ belonging to the Orlicz space $L(\log L)^2(\log \log L)(X, \mu)$, the ergodic averages \[ \frac{1}{\pi(N)} \sum_{p \in \mathbb{P}_N} f\big(T^p x\big), \] converge for $\mu$-almost all $x \in X$, where $\mathbb{P}_N$ is the set of prime numbers not larger that $N$ and $\pi(N) = \# \mathbb{P}_N$. converge for µ-almost all x ∈ X, where P N is the set of prime numbers not larger that N and π(N) = #P N . I Let (X, B, µ, T) be an ergodic dynamical system, that is (X, B, µ) is a probability space with a measurable and measure preserving transformation T : X → X. The classical Birkhoff theorem [2] states that for any function f from L p (X, µ) with p ∈ [1, ∞), the ergodic averages converge for µ-almost all x ∈ X. This classical result, among others, motivates studying ergodic averages over subsequences of integers. In this article we are interested in pointwise convergence of the following averages, where P N is the set of prime numbers not larger than N and π(N) = #P N . The problem of ergodic averages along prime numbers was initially studied by Bourgain in [4] where the case of functions belonging to L 2 (X, µ) has been covered. It was extended by Wierdl in [22] to all L p (X, µ), for p > 1, see also [6,Section 9]. However, the endpoint p = 1, was left open for more than twenty years. Following the method developed in [7] by Buczolich and Mauldin, LaVictoire in [13] has shown that for each ergodic dynamical system there exists f ∈ L 1 (X, µ) such that the sequence (A N f : N ∈ N) diverges on a set of positive measure. The purpose of this article is to find an Orlicz space close to L 1 (X, µ) where the almost everywhere convergence holds. We show the following theorem (see Theorem 7.4). Theorem A. For each f ∈ L(log L) 2 (log log L)(X, µ), the limit exists for µ-almost all x ∈ X. In light of the pointwise convergence obtained by Bourgain in [5], see also [16], to prove Theorem A it suffices to show the weak maximal ergodic inequality for functions in Orlicz space L(log L) 2 (log log L)(X, µ). This inequality is deduce from the following restricted weak Orlicz estimate. By appealing to the Calderón transference principle, see [8], Theorem B is deduced from the corresponding result for integers Z with the counting measure and the shift operator. To be more precise, for a function f : Z → C, we define Our main result is following theorem (see Theorem 6.3). Theorem C. There is C > 0 such that for any subset F ⊂ Z of a finite cardinality for all 0 < λ < 1. Theorem C together with ℓ 2 (Z) estimates are sufficiently strong to imply the maximal inequality for all ℓ p (Z) spaces, for p > 1, giving an alternative proof of the Wierld's theorem [22]. Let us now give some details about the proof of Theorem C. Without loss of generality, we may restrict the supremum to dyadic numbers. It is more convenient to work with weighted averages Given t > 0, for each n ∈ N, we decompose the operator M 2 n into two parts A t n and B t n , in such a way that the maximal function associated with A t n has ℓ 1,∞ (Z) norm t f ℓ 1 , whereas the one corresponding to B t n has ℓ 2 (Z) norm exp − c √ t f ℓ 2 . When applied to the distribution function sup n∈N M 2 n (1 F ) > λ , we can optimize both estimates by taking t ≃ log 2 (e/λ). This idea originated to Ch. Fefferman [9], see also Bourgain [3]. 
Ionescu introduced this technique in a related discrete context, see [11]. The decomposition of M 2 n uses the circle method of Hardy and Littlewood. However, to achieve the exponential decay of the error term, due to the Page's theorem, the approximating multiplier has to contain the second term of the asymptotic as well. Thus, the possible existence of the Siegel zero entails that in the neighborhood of the rational point a/q the approximating multiplier L a,q 2 n (· − a/q) depends on the rational number a/q. We refer to Sections 3 and 5 for details. Thanks to the log-convexity of ℓ 1,∞ (Z), the weak type estimates are reduced to showing At this stage we exploit the behavior of the Gauss sums described in Theorem 2.1. Let us emphasize that under the Generalized Riemann Hypothesis we can obtain in Proposition 3.1, and consequently in Theorem 3.2, a better error estimate. However, it is not clear whether one can prove Theorem 6.1 with the bounds proportional to √ t f ℓ 1 . The paper is organized as follows. In Section 2, we collect necessary facts about Dirichlet characters and the zero-free region. Then we evaluate the Gauss sum that appears in the approximating multiplier (Theorem 2.1). Section 3 is devoted to construction of the approximating multipliers. In Sections 5 and 6, we show ℓ 2 and the weak type estimates, respectively. In Section 7, we give two applications of Theorem C. Namely, we show how to deduce the maximal ergodic inequality for functions from ℓ p (Z), (Theorem 7.1). Next we apply the transference principle (Proposition 7.3) and show almost everywhere convergence of the ergodic averages (A N f : N ∈ N) for f ∈ L(log L) 2 (log log L)(X, µ), (Theorem 7.4). Notation. Throughout the whole article, we write A B (A B) if there is an absolute constant C > 0 such that A ≤ CB, (A ≥ CB). Moreover, C stands for a large positive constant which value may vary from occurrence to occurrence. If A B and A B hold simultaneously then we write A ≃ B. The set of positive integers and the set of prime numbers are denoted by N and P, respectively. For G We start by recalling some basic facts from number theory. A general reference here is the book [17]. is called a Dirichlet character modulo q. The simplest example, called the principal character modulo q, is defined as (mn, q) = 1. For each character χ there is the unique primitive character χ ⋆ modulo q 0 for some q 0 | q, such that The character is quadratic if it takes only values {−1, 0, 1} with at least one −1. Recall that, if χ ⋆ is a primitive quadratic character with modulus q 0 , then , and q 0 is square-free, or • 4 | q 0 , q 0 /4 ≡ 2 or 3 (mod 4), and q 0 /4 is square-free. Given a Dirichlet character χ and s ∈ C with ℜs > 1, we define the Dirichlet L-function by the formula In fact, L( · , χ) extends to the analytic function in {z ∈ C : ℜz > 0}. There is an absolute constant c > 0, such that if χ is a Dirichlet character modulo q, then the region contains at most one zero of L( · , χ), which we denote by β q . The zero β q is real and the corresponding character is quadratic. The character having zero in (1) is called exceptional. Since L(β, χ) = 0 implies that L(1 − β, χ) = 0, we may assume that 1 2 ≤ β q < 1. The Gauss sum of a Dirichlet character χ modulo q is defined as where A q = 1 ≤ a ≤ q : gcd(a, q) = 1 , and ϕ(q) = #A q . Let us recall that for each ǫ > 0 there is C ǫ > 0 such that We set τ( χ) = ϕ(q)G( χ, 1). Let us denote by µ the Möbious function, which is defined for q = p α 1 1 . . . p α n n , where p 1 , . . . 
, p n are distinct primes, as and µ(1) = 1. The following theorem plays the crucial role in Section 6. Theorem 2.1. Let χ be a quadratic Dirichlet character modulo q induced by χ ⋆ having the conductor q 0 . For x ∈ Z, we set r = gcd(q, x). Then provided that q/q 0 is square-free, gcd(q/q 0 , q 0 ) = 1 and r | q/q 0 . Otherwise the sum equals zero. Let us observe that the identity (4) together with (2) imply that for any ǫ > 0. Moreover, G( χ, a) 0 entails that q is square-free or 4 | q and q/4 is square-free. A Let us denote by A N the averaging operator over prime numbers, that is for a function f : Z → C we have where P N = [1, N] ∩ P and π(N) = #P N . Since sums over primes are very irregular, it is more convenient to work with By the partial summation, we easily see that , To better understand the operators M N , we use the Hardy-Littlewood circle method. Let F denote the Fourier transform on R defined for any function f ∈ L 1 (R) as To simplify the notation we denote by F −1 the inverse Fourier transform on R or the inverse Fourier transform on the torus T ≡ [0, 1), depending on the context. Let m N be the Fourier multiplier corresponding to M N , i.e., (8) m Then for a finitely supported function f : Z → C, we have . To simplify the notation we write For β < 1, we notice that the operators M β N are not averaging operators. Moreover, by the partial summation and (10), we get Hence, Moreover, Given q ∈ N, and a ∈ A q , we set when there is an exceptional character χ q modulo q and β q is the corresponding zero. Proof. Observe that for a prime p, p | q if and only if (p mod q, q) > 1. Hence, Then, by the partial summation, we obtain Analogously, for any 1 2 ≤ β ≤ 1, we can write By the Page's theorem, there is an absolute constant c > 0 such that for each if there is no exceptional character modulo q, and when there is an exceptional character χ modulo q, and β is the concomitant zero. Therefore, by (15) and (16), we obtain Finally, by the prime number theorem and the proposition follows. Next, we select η : R → R, a smooth function such that 0 ≤ η ≤ 1, and We may assume that η is a convolution of two smooth functions with supports contained in − 1 2 , 1 2 . For s ∈ N 0 , we set η s (ξ) = η 2 4s ξ . We define a family of approximating multipliers, by the formula where R s = a/q ∈ Q ∩ (0, 1] : a ∈ A q , and 2 s ≤ q < 2 s+1 , q is square-free or 4 | q and q/4 is square-free , and R 0 = {1}. We set ν n = s ≥0 ν s n . Theorem 3.2. There are C, c > 0 such that for all n ∈ N 0 and ξ ∈ T, where m N is defined by (8). Proof. Let Q n = exp c 2 √ n where the constant c is determined in Proposition 3.1. By the Dirichlet's principle, there are coprime integers a and q, satisfying 1 ≤ a ≤ q ≤ 2 n Q −1 n , and such that Let us first consider the case when 1 ≤ q ≤ Q n . We select s 1 ∈ N 0 satisfying 2 s 1 +1 < 1 2 2 n Q −2 n ≤ 2 s 1 +2 . For s ≤ s 1 and a ′ /q ′ ∈ R s , with a ′ /q ′ a/q, we have Therefore, by (6) and (11), which implies that For s > s 1 , by (6) we obtain If q is square-free or 4 | q and q/4 is square-free then there is s 0 ∈ N 0 such that a/q ∈ R s 0 , thus Q n ≥ 2 s 0 . By Proposition 3.1, Finally, if q and q/4 are not square-free then by Proposition 3.1, It remains to deal with Q n ≤ q ≤ 2 n Q −1 n . By the Vinogradov's inequality (see [ Therefore, by (6) and (11), n , which entails that If s > s 2 , then by (6), we get hence by (18), and the theorem follows. 
E ℓ 1 In this section we prove that the maximal function associated with kernels (M β 2 n : n ∈ N 0 ) has weak ℓ 1 (Z)norm equidistributed in residue classes. Before embarking on the proof, let us recall two lemmas essential for the argument. e 2πiξ x η s (ξ) dξ The following theorem is the main result of this section. Since η s = η s η s−1 , by Young's convolution inequality and Lemma 4.1, we obtain where the last inequality is a consequence of 1 ≤ Q ≤ 2 2s . Therefore, in view of Lemma 4.2, we immediately get which is the desired conclusion. Essentially the same reasoning as in the proof of Theorem 4.3 leads to the following theorem. ℓ 2 We are now in the position to prove ℓ 2 (Z) boundedness of the maximal function associated to the multipliers (ν s n : n ∈ N). Theorem 5.1. For each ǫ > 0 there is C > 0 such that for all s ∈ N 0 , and any finitely supported function f : Z → C, Proof. We divide the supremum into two parts: 0 ≤ n < 2 s+4 and 2 s+4 ≤ n. Then the following holds true. It remains now to treat supremum over n ≥ 2 s+4 . For each 1 2 ≤ β < 1 we set R β s = a/q ∈ R s : β q = β . and R 1 s = R s . In view of the Landau's theorem [17,Corollary 11.9], there are O(log s) distinct β's. Therefore, it suffices to show the following claim. Let us fix 1 2 ≤ β ≤ 1. We define Observe that the functions x → I(x, y) and x → J(x, y) are Q s periodic where By the Plancherel's theorem, for u ∈ Z Q s , we have 2 −n |u| · η sf (· + a/q) L 2 , because by (11), Therefore, by the triangle inequality Since R s contains at most 2 2(s+1) rational numbers, by the Cauchy-Schwarz inequality we get Observe that Now, by multiple change of variables and periodicity we get Using Theorem 4.4, we can estimate Notice that Since supports of η s (· − a/q) are disjoint while a/q varies over R s , by (6) we get which together with (24) imply (23) and the theorem follows. Corollary 5.4. There are C, c > 0 such that for each t > 0, and any finitely supported function f ∈ Z → C, Proof. Since our assertion follows from Theorem 3.2 and Theorem 5.1. Indeed, by the Plancherel's theorem and Theorem 3.2 we get sup t ≤n On the other hand, by Theorem 5.1, which concludes the proof. W In this section we investigate the weak type estimates for the multipliers Π t n : n ≥ t . Then together with results from Section 5 we deduce Theorem C. Theorem 6.1. There is C > 0 such that for all t > 0 and any finitely supported function f : Z → C, Proof. Let us fix 2 s ≤ q < 2 s+1 for some 1 ≤ s ≤ √ t. Let 1 2 ≤ β ≤ 1. Suppose that χ is a quadratic Dirichlet character modulo q induced by χ ⋆ having the conductor q 0 . We claim that the following holds true. Claim 6.2. There is C > 0 such that for any finitely supported function f : Z → C, The constant C is independent of q, β and χ. What is left now is to prove Claim 6.2. Let r ∈ {1, . . . , q}. For x ≡ r mod q, we have Hence, by Theorem 4.3, we obtain Next, by Young's convolution inequality we get . Now, by Theorem 2.1, we can compute where in the last inequality we have used Lemma 4.2 together with Lemma 4.1. Since (see e.g. [19]) proving the claim and the theorem follows. Theorem 6.3. There is C > 0 such that for any subset F ⊂ Z of a finite cardinality and all 0 < λ < 1, Proof. We start by proving the following statement. Claim 6.4. 
There are C, c > 0 such that for each t > 0, there are two sequences of operators (A t n : n ∈ N) and (B t n : n ∈ N) such that M 2 n = A t n + B t n , and for any finitely supported function f : Z → C, Without loss of generality, we may assume that f is non-negative finitely supported function on Z. For 1 ≤ n < t, we set A t n f = M 2 n f , and B t n f ≡ 0. Since by the prime number theorem, we have Hence, by the Hardy-Littlewood theorem, For t ≤ n, we set and B t n f = M 2 n f − A t n f . In view of Corollary 5.4 and Theorem 6.1, we obtain (27) and (26), respectively, and the claim follows. Now, the theorem is an easy consequence of Claim 6.4. Indeed, given a subset F ⊂ Z of a finite cardinality, for any t > 0, we can write Thus, taking t = (2c) −2 log 2 (e/λ), we get the desired conclusion. In view of (7), Theorem 6.3 entails the following corollary, which is precisely Theorem C. Corollary 6.5. There is C > 0 such that for any subset F ⊂ Z of a finite cardinality and all 0 < λ < 1, A In this section we show two applications of Theorem 6.3 and Corollary 6.5. First, we prove that the restricted weak Orlicz estimates together with strong ℓ 2 bounds are sufficient to get ℓ p maximal inequalities for all 1 < p ≤ 2. Next, we conclude almost everywhere convergence of ergodic averages for functions in some Orlicz space close to L 1 . 7.1. ℓ p theory. Theorem 7.1. For each p ∈ (1, 2] there is C > 0 such that for any function f ∈ ℓ p (Z), 7.2. Pointwise convergence. Let (X, B, µ) be a probability space with a measurable and measure preserving transformation T : X → X. We consider the following averages With a help of the Calderón transference principle from [8] applied to Corollary 6.5, we deduce the following proposition. Proposition 7.3. There is C > 0 such that for any subset A ∈ B, and all 0 < λ < 1, Proof. Fix A ∈ B and x ∈ X. For R > L > 0, we define a finite subset of F ⊂ Z by setting Then for 0 ≤ n ≤ R − N, N ≤ L, Hence, By Corollary 6.5, Since T preserves the measure µ, by integrating with respect to x ∈ X we obtain = C(R + 1)λ −1 log 2 (e/λ)µ(A). We now divide by R and take R approaching infinity to get µ x ∈ X : max 1≤ N ≤L A N 1 A T n x > λ ≤ Cλ −1 log 2 (e/λ)µ(A). Finally, taking L tending to infinity by the monotone convergence theorem we conclude the proof. We are now in the position to show µ-almost everywhere convergence of the ergodic averages (A N f : N) for a function f from the Orlicz space L(log L) 2 (log log L)(X, µ). Let us recall that L(log L) 2 (log log L)(X, µ) consists of functions such that ∫ X | f (x)| log + | f (x)| 2 log + log + | f (x)| dµ(x) < ∞ where log + t = max{0, log t}. The space L(log L) 2 (log log L)(X, µ) is a Banach space with the norm f L(log L) 2 (log log L) = where f * is the decreasing rearrangement of f , that is and φ(t) = log 2 (1 + t) log 1 + log t . Theorem 7.4. There is C > 0 such that for each f ∈ L(log L) 2 (log log L)(X, µ), In particular, for each f ∈ L(log L) 2 (log log L)(X, µ), for µ-almost all x ∈ X. Proof. We first prove the following claim. Since | f (x)| ≤ a j for x ∈ A j , we have Moreover, if j > k then for x ∈ A j and y ∈ A k , we have | f (x)| ≥ | f (y)|. Since µ(A j ) = 2 −j , we get a j 1 [2 − j−1 ,2 − j ) (t). On the other hand, by (33) we have f L(log L) 2 (log log L) = which together with (34) conclude the proof.
5,152
2019-07-10T00:00:00.000
[ "Materials Science" ]
Identification of strongly interacting organic semimetals Dirac- and Weyl point- and line-node semimetals are characterized by a zero band gap with simultaneously vanishing density of states. Given a sufficient interaction strength, such materials can undergo an interaction instability, e.g., into an excitonic insulator phase. Due to generically flat bands, organic crystals represent a promising materials class in this regard. We combine machine learning, density functional theory, and effective models to identify specific example materials. Without taking into account the effect of many-body interactions, we found the organic charge transfer salts (EDT-TTF-I$_2$)$_2$(DDQ)$\cdot($CH$_3$CN) and TSeF-TCNQ and a bis-1,2,3-dithiazolyl radical conductor to exhibit a semimetallic phase in our ab initio calculations. Adding the effect of strong particle-hole interactions for (EDT-TTF-I$_2$)$_2$(DDQ)$\cdot($CH$_3$CN) and TSeF-TCNQ opens an excitonic gap in the order of 60 meV and 100 meV, which is in good agreement with previous experiments on these materials. INTRODUCTION Semimetals have attracted huge attention due to their striking transport properties, analogies to high energy physics phenomena, and potential for functionalization [1][2][3]. Their realization relies on a delicate combination of symmetry, electron-filling, and band ordering enforcing the existence of the nodes in the band structure at the chemical potential while having a vanishing density of states (DOS) at the crossing point [4][5][6]. It has been shown extensively for the case of Dirac semimetals, that under a sufficiently high interaction strength a dynamical mass term can be generated leading to a quantum phase transition into a gapped phase [7][8][9][10]. This quantum phase transition strongly depends on: i) the effective fine structure constant α eff , describing the ratio of the coupling of the Fermion field to its gauge field versus the kinetic energy; ii) the dimension of the system; iii) the number of fermionic flavors. Similar phenomena where also discussed in the case of Weyl semimetals [11,12] and line-node semimetals [5,13,14]. With the goal of identifying experimentally feasible materials to investigate interaction effects in nodal semimetals we focus on organic crystals. Organic crystals typically exhibit strong intramolecular forces and weak intermolecular forces, leading to tiny hopping amplitudes for electrons between molecules and resulting flat electronic bands. The flatness corresponds to a tiny quasiparticle kinetic energy and dominant interaction effects. We focus on the excitonic insulator state occurring when a weakly screened Coulomb interaction between a hole and an electron leads to an electron-hole bound state [15,16]. Even though organics seem to be promising materials for strong interaction effects, we face several major difficulties: i) the search space is massive, e.g., the crystallographic open database stores ≈ 200, 000 crystal structures containing carbon and hydrogen [17]; ii) organics are typically large band gap insulators [18]; iii) complex unit cells and strong correlation effects are challenging for an ab initio description. 
Hence, we apply the following procedure: first, we apply machine learning to narrow down the search space to a computationally feasible set of materials which are predicted to show a tiny band gap; second, using density functional theory we compute the band structures for these materials; third, we select the occurring semimetals, construct effective electronic models and numerically solve a self-consistent Schwinger-Dyson equation to estimate the size of an excitonic gap. RESULTS The general workflow of our study can be summarized as follows. 1. Machine learning. We trained a neural network (the continuous-filter convolutional neural network scheme -SchNet [19]) on 24,134 band gaps of nonmagnetic materials taken from the Organic Materials Database -OMDB [18]. We applied the model to 202,117 materials containing carbon and hydrogen stored in the crystallographic open database -COD [20]. 2. Band structure calculations. We select 414 materials where the band gap is predicted small, but nonzero (0.01 eV ≤ ∆ ≤ 0.4 eV) and perform medium accuracy ab initio calculations using VASP incorporating the effect of spin-orbit interaction (SOI). Note that all calculations stored within the OMDB were peformed without SOI. Out of the 414 materials we found promising features in the band structures for 9 materials. For these 9 materials we performed high-accuracy VASP calculations taking into account structural optimization and SOI. We found the organic charge transfer salts (EDT-TTF-I 2 ) 2 (DDQ)·(CH 3 CN) and TSeF-TCNQ and a bis-1,2,3-dithiazolyl radical conductor which exhibit a semimetallic phase. Based on symmetry and chemistry we determine the relevant mechanisms to protect the nodal features. We will discuss the outcome of the three steps in more detail in the following. Machine Learning According to the OMDB, organic crystals show a mean ab initio band gap of ≈ 3 eV with a standard deviation of 1 eV [18]. Fig. 1(a) shows a comparison of the band gap distribution of the training set with the band gap distribution obtained using machine learning (ML). While the amount of data is much bigger for the materials taken from the COD, the general shape of the histogram of calculated and predicted gaps agrees well, i.e., the ML model successfully reproduced the band gap statistics. Due to the highly complex structures of organic crystals and the relatively small data set, our trained ML model has a large mean absolute error (MAE) of 0.406 eV. Compared to the average band gap of ≈ 3 eV, this value represents a sufficient accuracy. However, as we are interested in the tiny gap regime, the order of the MAE is similar to our acceptance range 0.01 eV ≤ ∆ ≤ 0.4 eV. Nevertheless, the general guidance of our model was sufficient to identify a few final example materials. We note that a more sophisticated prediction scheme reaching a high accuracy on small and complex data sets would significantly advance the outcome of our approach in the future. The ABABAB π-stacked bis-1,2,3-dithiazolyl radical was synthesized by Yu et al. [29]. It crystallizes in the non-centrosymmetryc space group P2 1 2 1 2 1 (19) and orders in a canted antiferromagnetic structure with T N ≈ 4.5 K. It was reported to undergo a spin-flop transition to a field-induced ferromagnetic state at 2 K and a magnetic field strength of H ≈ 20 kOe. For all materials the calculated crossing points occur at the Fermi level with simultaneously vanishing DOS. The selected k-path follows the convention of pymatgen [30]. 
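Since the k-paths quoted above follow the pymatgen convention, the high-symmetry points and path segments can be regenerated directly from a structure file; a minimal sketch is given below ("structure.cif" is a placeholder for any of the COD structures).

```python
# Minimal sketch: reproduce the high-symmetry k-path convention of pymatgen
# for a given crystal structure.  "structure.cif" is a placeholder file name.
from pymatgen.core import Structure
from pymatgen.symmetry.bandstructure import HighSymmKpath

structure = Structure.from_file("structure.cif")
kpath = HighSymmKpath(structure)

print("Space group:", structure.get_space_group_info())
print("High-symmetry points:", kpath.kpath["kpoints"])
print("Path segments:", kpath.kpath["path"])
```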
(EDT-TTF-I2)2(DDQ)·(CH3CN) shows a crossing of 4 bands at the Brillouin zone center (Γ; Fig. 2(d)), exhibiting a tiny splitting of two two-fold band degeneracies away from Γ due to a small magnetization of two central carbon atoms of ≈ 0.65 µB. For (TSeF-TCNQ) we find a vanishing magnetization for all sites within the (TSeF-TCNQ) unit cell. For the π-stacked bis-1,2,3-dithiazolyl radical we find a 2-fold degenerate Weyl node within the ferromagnetic phase along the path ΓX (X = (0.5, 0.0, 0.0), Fig. 2(f)). A comparison with the metallic band structure of the nonmagnetic phase and the fully gapped band structure of an antiferromagnetic phase can be found in the supplementary material.

Protection of the nodes

To understand the nature of the crossings observed in the electronic band structures we discuss the symmetry protection of the nodes. In Fig. 3 we show the molecule-resolved partial DOS of (EDT-TTF-I2)2(DDQ)·(CH3CN). While there are in total four (EDT-TTF-I2)2 molecules and two DDQ and CH3CN molecules in the unit cell, the inversion symmetry present in the crystal enforces pairwise degenerate contributions to the DOS. As can be verified, the main contribution to the DOS around the Fermi energy stems from (EDT-TTF-I2)2. Due to the specific stacking structure, molecules in charge transfer salts are known to undergo a transition into a dimerized electronic state, where molecular orbitals of pairs of molecules bind significantly more strongly to each other than to other molecules in the crystal [24,31,32]. In other words, we can introduce three energy scales: i) the hopping τ_αβ of electrons between atoms α, β within a molecule; ii) the hopping t_µν of electrons between molecules µ, ν within a molecular dimer; iii) the hopping s_ij of electrons between different dimers i, j. In the limit s_ij ≪ t_µν ≪ τ_αβ, the problem can be separated. Assume that the Hamiltonian describing an EDT-TTF-I2 molecule reveals an electronic ground state with an s-like molecular orbital. Then the Hamiltonian describing the dimerization can be written as Ĥ = τ Ψ̂† σ_x Ψ̂ with Ψ̂ = (ĉ1, ĉ2), where ĉ†_i (ĉ_i) creates (annihilates) an electron in molecule i. Hence, the two eigenstates are given by φ± = (ĉ1 ± ĉ2)/√2 with the respective energies ±τ. In the crystalline unit cell these dimer states can be used to construct a basis in which the φ± are even (+) and odd (−) with respect to inversion symmetry. Furthermore, taking the centers of mass of the (EDT-TTF-I2)2 dimers, the resulting lattice (without (DDQ)·(CH3CN)) can be approximated by a reduced lattice with one dimer site per unit cell, where the lattice constant in the a direction is reduced by a factor of 1/2. This reduction of the unit cell also introduces an effective half-filling, as the initial space group symmetry is P1, which according to its associated Bieberbach manifold is half-filled for an odd number of electrons [33]. The half-filling together with the effective bipartite lattice introduces (besides parity and time-reversal) a particle-hole symmetry. Considering a four-band model with two orbital and two spin degrees of freedom, we obtain an effective k·p Hamiltonian around the Γ point (see the methods section for details). The Hamiltonian reveals a symmetry-protected four-fold degenerate Dirac node at the center of the Brillouin zone. By lattice symmetry this node does not carry any topological charge, in accordance with the Nielsen-Ninomiya theorem [34,35].
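The dimerization argument above, namely that Ĥ = τ Ψ̂† σ_x Ψ̂ has the even/odd combinations (ĉ1 ± ĉ2)/√2 as eigenstates with energies ±τ, can be checked with a few lines of linear algebra; the value of τ below is purely illustrative.

```python
# Toy check of the dimer Hamiltonian H = tau * sigma_x discussed above:
# its eigenstates are the even/odd combinations (c1 +/- c2)/sqrt(2)
# with energies -tau and +tau.  tau is an illustrative value.
import numpy as np

tau = 0.1  # intra-dimer hopping in eV (illustrative)
H = tau * np.array([[0.0, 1.0],
                    [1.0, 0.0]])

energies, states = np.linalg.eigh(H)
print("energies:", energies)        # -> [-0.1, +0.1]
print("eigenvectors (columns):")
print(states)                       # columns proportional to (1, -1) and (1, 1), normalized
```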
In general, the underlying space group symmetry for (TSeF-TCNQ) (P2 1 /c) does not allow for the high degeneracy of the nodal line observed. The nature of the crossing stems from the quasi one-dimensional nature of the charge transfer salt due to the specific molecular stacking. In Fig. 4 (a) we show the center of mass coordinates of the involved molecules which form weakly interacting one-dimensional chains. Each chain can be approximated to have a dispersion relation of E µσ = s µ 0 + 2s µ 1 cos a 2 · k, with µ being the molecule index distinguishing between TSeF and TCNQ and σ denoting the spin (note that the dispersion relation itself is independent of the spin). As there are two TSeF chains as well as two TCNQ chains present in the crystal, we observe two fourfold degenerate bands (2 chains × 2 spins per molecule) which are allowed to cross in a plane if no hybridization is taken into account (see Fig. 4 (d)). From the tilted stacking of molecules in the different chains (shown in Fig. 4 (b)+(c)) it becomes apparent that the involved hopping of electrons between different chains (interchain coupling) has to be weaker by several order of magnitude than the intrachain coupling (see also fitted parameters in methods section). In the basisΨ = (ĉ TSeF-1σ ,ĉ TCNQ-1σ ,ĉ TSeF-2σ ,ĉ TCNQ-2σ ) T and allowing for a hopping between chains we formulate an effective Hamiltonian of the form where we use ∆ 1 = t 1 cos 1 2 a 1 · k , ∆ 2,3 = t 2,3 cos 1 2 ( a 2 + a 3 ) · k . An example band structure for Fig. 4 (e) which reproduces the ab initio band structure of Fig. 2 (e) effectively. A slightly more advanced effective Hamiltonian is described in the methods section. The radical bis-1,2,3-dithiazolyl exhibits Weyl nodes for the low-temperature high-field ferromagnetic phase. Hence, time-reversal symmetry is broken and the crystal reflects a magnetic space group symmetry, depending on the magnetization direction. Assume we choose the magnetization in x-direction then the corresponding group is given by P 2 1 2 1 2 1 (# 19.27). To verify that the observed nodes are Weyl nodes we calculated the atom and orbital resolved weights to the band structure. We observe that the two bands forming the node belong to different atoms in the molecules within the unit cell. The main contributions stem from either the unsaturated nitrogen and the oxygen atom or the saturated nitrogen and one sulfur atom as shown in Fig. 5. Hence, we conclude that the bands belong to different orbital subspaces allowing for a crossing. In symmetry terms the two orbitals correspond to bands with different eigenvalues to the only unitary symmetry element present, (C 2x , 1/2, 1/2, 0), along the invariant line k x . Hence, a 2-band k · p Hamiltonian at a crossing point has to be invariant under the Pauli matrix τ x , H(k x , k y , k z ) = τ x H(k x , −k y , −k z )τ x . This results in an allowed low energy Hamiltonian H( K ± + k) ≈ ±t x k x τ x +t y k y τ y +t z k z τ z , at two points K ± along the x-axis. Both points reveal a monopole charge of ν ± = sign [±t x t y t z ]. Effective Models and Excitonic Gap To explain the discrepancy between DFT semimetallic phases and experimentally observed semiconductivity in (EDT-TTF-I 2 ) 2 (DDQ)·(CH 3 CN) and (TSeF-TCNQ) we argue that both systems are likely to undergo an excitonic instability. Compared to known inorganic Dirac or Weyl semimetals, the band width of all three materials discussed here is smaller by at least one order of magnitude. 
As the decreased band width induces a small Dirac velocity (small kinetic energy) of the elementary excitations of the system, we verify that a quasiparticle interaction term becomes dominant. We estimate the size of the effect using effective band structure models. For (EDT-TTF-I2)2(DDQ)·(CH3CN) we extended the model given in (1) to a lattice-periodic version (x → sin(x)). For (TSeF-TCNQ) we construct an eight-band model incorporating two spin and four orbital degrees of freedom (two belonging to molecular orbitals from TSeF and two belonging to TCNQ). The allowed dispersion relation of the model is restricted by the symmetry generators: parity, two-fold rotation about the x-axis, and time-reversal symmetry (details given in the methods section). To approximate the size of the instability, we solve a BCS-like excitonic gap equation, similar to Refs. [36-38]. Note that, given the numerous degrees of freedom, multiple excitonic gap symmetries might occur, such as inter- or intranode instabilities with or without breaking of time-reversal or other lattice symmetries [38,39]. We focus on an s-wave gap. Details on the calculation are given in the methods section. We obtained an excitonic gap of ≈ 60 meV for (EDT-TTF-I2)2(DDQ)·(CH3CN) at T → 0 K. This value agrees with the experimentally determined gap of ≈ 105 meV. We argue that, although (EDT-TTF-I2)2(DDQ)·(CH3CN) appears to be a regular semiconductor, it is in fact an excitonic insulator with the electronic structure shown in Fig. 6(a). We repeat the approach for (TSeF-TCNQ) and obtain a slightly higher value of ≈ 150 meV (Fig. 6(b)). In contrast to the Dirac semimetal (EDT-TTF-I2)2(DDQ)·(CH3CN), (TSeF-TCNQ) has two degenerate line nodes which gap out while the excitonic gap is formed. Verifying the experimentally observed metal-insulator transition at 40 K would require a fully temperature-dependent screening, which is beyond our present study.

Machine Learning

The ML model is trained on a dataset of 24,134 ab initio band gaps of non-magnetic organic crystals stored in the organic materials database (OMDB) [18,40]. The dataset is divided into a training, validation and test set of 15,000, 3,000 and 6,134 materials, respectively. We used the continuous-filter convolutional neural network scheme SchNet [19] with a batch size of 32, a cutoff radius of 5.0 Å, 32 features, 3 interaction blocks, a learning rate of 10^-4, and 50 Gaussians to expand atomic distances (see Ref. [40] for the parameter choice). The initial band gap data exhibit a Wigner-Dyson-like shape as shown in Fig. 1(a), with a mean of ≈ 2.9 eV and a standard deviation of ≈ 1.1 eV. Our trained ML model shows a mean absolute error (MAE) of 0.406 eV and an RMSE of 0.602 eV, which is interpreted as an accuracy of ≈ 90%. Note that the underlying data set is rather small (≈ 2 × 10^4 materials) and highly complex (on average 85 atoms per unit cell). The trained ML model is able to reproduce the band gap statistics on a set of 202,117 organic crystals (≈ 2 × 10^5 materials) taken from the COD [20], as shown in Fig. 1(a). These predictions also fit a Wigner-Dyson distribution ∼ x^5.62 e^{-0.45 x²}. The performance of our model on the test set is shown in Fig. 1(b).

ab initio Calculations

We performed ab initio calculations in the framework of density functional theory (DFT) using a projector augmented-wave method as implemented in the Vienna ab initio simulation package VASP [41]. Structures are taken from the COD [20] and transformed into VASP input using pymatgen [30].
During the self-consistent calculation of the electron density and the DOS we used a Γ-centered mesh with a k-mesh density of 800 points perÅ −3 for the quick materials scan of 414 candidate materials and a k-mesh density of 1500 points perÅ −3 for the refined calculations of 9 prospective semimetals. For the latter we performed an optimization of the atomic positions using a conjugate gradient algorithm. All calculations were performed with SOI and the exchange correlation functional was approximated by the strongly constrained and appropriately normed semilocal density functional SCAN [42] including van-der-Waalscorrections using the Tkatchenko-Scheffler method with iterative Hirshfeld partitioning [43,44]. The cut-off energy of the plane wave expansion was chosen to by 600 eV. Effective Hamiltonians We generated effective Hamiltonians for (EDT-TTF-I 2 ) 2 (DDQ)·(CH 3 CN) and TSeF-TCNQ. To describe the four-fold degeneracy at the Γ point observed for (EDT-TTF-I 2 ) 2 (DDQ)·(CH 3 CN), we construct a model with two orbital (τ i ) and two spin (σ i ) degrees of freedom, Here, each molecule in the unit cell contributes one molecular orbital. Neglecting higher and lower energy bands, a four-fold degeneracy is obtained assuming an even-and an odd-parity molecular orbital, and add respective lattice symmetries (space group P1): paritŷ P = τ 3 σ 0 ( k → − k), time-reversal −iσ 2K ( k → − k; K being the complex conjugation), and an emergent particle-hole symmetryĈ = σ 2 . Up to linear order we obtain The best fit to the ab initio calculated band structure resulted in the parameters a 1 = 0.47 eV, a 2 = −0.33 eV, a 3 = 0.11 eV, b 1 = 0.16 eV, b 2 = −0.38 eV, b 3 = 0.15 eV. The nodes for TSeF-TCNQ occur in the interior of the Brillouin zone. Therefore, we construct a latticeperiodic model with 8 bands (4 orbital-, 2 spin degrees of freedom). The orbital degrees of freedom are obtained from respective permutations of the two TSeF and two TCNQ molecules within the primitive unit cell. We represent the generators of the factor group G/T C 2h (G is the space group P2 1 /c, T is the corresponding group of pure lattice translations) as follows,P = P(1, 2, 3, 4) × σ 0 ( k → − k),Ĉ 2x = P(2, 1, 4, 3) × iσ 1 (k x → k x , k y → −k y , k z → −k z ) and add time-reversal symmetry as before. Here P(permutation) denotes a 4×4-dimensional permutation matrix. Note that each symmetry element comes with a corresponding double group partner. The total Hamiltonian is written as H = H orbital × H spin . We generate H orbital in the basis (|Ψ TSeF1 , |Ψ TSeF2 , |Ψ TCNQ1 , |Ψ TCNQ2 ) using Excitonic Instability We model quasiparticle-quasihole excitations of the material in terms of the following interaction process, where incoming (outgoing) solid lines belong to fermionic annihilation (creation) operatorsΨ q (Ψ † q ) and the dashed line to a screened scalar interaction V ( q). The Hamiltonian is given bŷ (6) Here, ν is a corresponding quantum number and a = ± (a = −a) denotes electron and hole states.Ĥ 0 denotes the non-interacting effective Hamiltonian of the system. We derive an excitonic gap equation using a Green function approach similar to Refs [36][37][38] and define G ν,± ( p, τ − τ ) = − TΨ ν, p± (τ )Ψ † ν, p± (τ ) and F ν,± ( p, τ − τ ) = − TΨ ν, p± (τ )Ψ † ν, p∓ (τ ) . We proceed by calculating the equations of motion for G and F using the Heisenberg formalism and decomposing the fourpoint functions in terms of G and F [46]. 
We impose particle-hole symmetry of the electronic band structure, ξ⁺_ν(k) = ξ⁻_ν(k), as well as a real excitonic gap function Δ_ν, and derive the BCS-like gap equation, in agreement with Ref. [36]. Here, E_{ν,p} = √(ξ²_ν(p) + Δ²_ν(p)). We construct the exact Brillouin zone using the algorithm of Finney [47] and solve (7) using a Liouville-Neumann series, where the integration is performed with a quasi-Monte-Carlo integration scheme [48]. We calculate the zero-temperature excitonic gap (tanh(βE_{ν,q}/2) → 1) and approximate the interaction by a screened Coulomb interaction V(q) = -4π/(q² + k²), with k being a screening vector that is generally temperature dependent [36]. The screening vector is approximated by the Thomas-Fermi screening, which for Dirac semimetals is given by k = (2gα/π) k_F [38], with k_F being the Fermi wave vector and g the Dirac cone degeneracy. Organic crystals typically exhibit purification during the crystallization process, expelling dopants. We assume a clean sample with a Fermi-level deviation δµ = 0.01 eV.

CONCLUSION

Typically, nodal states within the electronic structure of organic crystals are inferred from indirect measurements, e.g., via the resistivity [49], optical conductivity [50], and local spin susceptibility [51], as has been done extensively for the pressure-induced semimetal phase of (BEDT-TTF)2I3. A direct observation of the electronic structure of organic materials with angle-resolved photoemission spectroscopy (ARPES) is difficult due to: usually tiny crystal sizes limiting the signal; insulating behavior leading to charging of the samples; and limits in available orientations and problems in preparing well-defined surface terminations. However, ARPES has been performed for organics in selected cases [52]. Here, the reported experimental maximum/median/minimum crystal sizes (in mm) of the materials described by us are 0.24/0.05/0.04 for (EDT-TTF-I2)2(DDQ)·(CH3CN), 0.23/0.05/0.02 for (TSeF-TCNQ), and 0.34/0.04/0.02 for bis-1,2,3-dithiazolyl. While these crystal sizes are tiny, recent progress in the field has decreased the focus area of state-of-the-art ARPES devices tremendously [53], making a measurement of the photoelectron spectrum of the three materials possible in the near future. The materials reported here provide a natural platform to investigate the rare excitonic insulator phase as a consequence of strong interaction effects within symmetry-protected nodal semimetals. While several inorganic nodal semimetals are known, the space of organics remains fairly unexplored in this regard. The realm of 3D organic semimetals is mainly composed of the single example α-(BEDT-TTF)2I3 and modifications, which exhibit tilted Dirac nodes under pressure (≈ 2.3 GPa) [51,54] or chemical strain [55]. Hence, the outcome of our ML and ab initio calculations is promising with respect to the identification of novel examples. We note that the choice of the s-wave excitonic instability due to strong interactions is only the simplest scenario and was chosen to estimate the size of the effect. However, other forms of gap openings and richer phase diagrams are imaginable. The ABABAB π-stacked bis-1,2,3-dithiazolyl radical was reported to undergo a transition into a canted antiferromagnet below a critical temperature of 4.5 K and a spin-flop transition to a field-induced ferromagnetic state at a temperature of 2 K and a magnetic field strength of H ≈ 20 kOe [29].
Our calculations revealed Weyl nodes along the path ΓX in the Brillouin zone for the ferromagnetic phase. To better understand the emergence of the phase and its correspondence to the underlying magnetic ordering we performed similar band structure calculations incorporating full structural optimization for an antiferromagnetic and a non-magnetic ordering (details on the ab initio calculation can be found in the methods section). Even though the nonmagnetic phase exhibits topological nodes along the path ΓY the overall behavior is metallic as there is no vanishing density of states at the Fermi level. The topological protection of the nodes for the nonmagnetic phase is a consequence of the underlying orthorhombic symmetry with space group P2 1 2 1 2 1 where the mechanism is described in Ref. [56]. The antiferromagnetic phase exhibits a clear band gap of the size of ≈ 0.15 eV. The comparison of the electronic structure for all three magnetic phases is shown in Fig. 8. The material (EDT-TTF-I)2(TCNQ) with COD-ID 4506562 exhibits topological nodes within the Brillouin zone close to the Fermi level as shown in Fig. 9. However, the material is not a semimetal as it exhibits a large density of states at the Fermi level. Appendix D: List of predictions We trained a machine learning model based on the continuous-filter convolutional neural network scheme SchNet [19] on 24,134 ab initio calculated band gaps stored within the organic materials database -OMDB [18] (details can be found in the method section). We applied the successfully trained model on 202,117 crystal structures stored within the crystallographic open database -COD [20]. All crystal structures considered belong to organic molecular crystals or metal organic frameworks which were synthesized before. Organic materials tend to be large band gap insulators and only 414 materials were predicted to have a band gap of 0.01 ≤ ∆ ≤ 0.4 eV, where we explicitly tried to exclude organic metals. The list of predicted band gaps for the 414 materials denoted with their COD-ID is given in Table I. TABLE I. COD-IDs, band gap ∆, and number of atoms in the primitive unit cell Nat for the subset of 414 COD materials where the band gap was predicted to be small (0.01 ≤ ∆ ≤ 0.4 eV). COD ID ∆ (eV) N at COD ID ∆ (eV) N at COD ID ∆ (eV) N at COD ID ∆ (eV) N at COD ID ∆ (eV) N at COD ID ∆ (eV) N at
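Returning to the excitonic instability described in the methods section, the sketch below iterates a drastically simplified, one-dimensional version of the BCS-like gap equation for a single Dirac-like band with a statically screened interaction. The velocity, coupling, screening length, and grid are illustrative placeholders, not the parameters or the multi-band kernel used for the materials above.

```python
# Toy self-consistent solution of a BCS-like excitonic gap equation
#   Delta(p) = (1/(2*pi)) * Integral dq  W(p - q) * Delta(q) / (2 * E(q)),
#   E(q) = sqrt(xi(q)^2 + Delta(q)^2),   xi(q) = v*|q|,
#   W(q) = alpha * 4*pi / (q^2 + kappa^2)  (magnitude of a screened Coulomb kernel).
# All parameter values are illustrative placeholders.
import numpy as np

v, alpha, kappa = 0.05, 0.002, 0.1          # velocity (eV), coupling, screening (illustrative)
q = np.linspace(-1.0, 1.0, 801)             # 1D momentum grid (arbitrary units)
dq = q[1] - q[0]
xi = v * np.abs(q)                          # linear, Dirac-like dispersion

# Pre-compute the interaction kernel W(p - q) on the grid.
K = alpha * 4.0 * np.pi / ((q[:, None] - q[None, :]) ** 2 + kappa ** 2)

delta = np.full_like(q, 0.01)               # initial guess for the gap (eV)
for _ in range(500):                        # fixed-point iteration
    E = np.sqrt(xi ** 2 + delta ** 2)
    new = (K @ (delta / (2.0 * E))) * dq / (2.0 * np.pi)
    if np.max(np.abs(new - delta)) < 1e-9:
        delta = new
        break
    delta = new

print(f"converged gap at the node: {1e3 * delta[q.size // 2]:.1f} meV")
```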
6,256.6
2020-10-27T00:00:00.000
[ "Materials Science", "Physics" ]
Defect-states Passivation Strategy in Perovskite Solar Cells In the modern era, energy demand rises dramatically accompanied by the rapid growth of our population, causing urgent energy shortages and environmental issues around the globe. People turned their attention to solar energy for an eco-friendly and economic solution, in which, perovskite solar cells emerged and had caught a great deal of attention in the past decades for their promising and commercial development potential. To fully release their capability for a high-performance device, defect mechanisms which are one of the main factors inhibiting the efficiency and stability, as well as passivation strategies must be thoroughly studied. In this review, the concept and formation mechanism of the defects are summarized, the corresponding defect characterization techniques regarding their working principles and downsides were also compared. Furthermore, substantial passivation strategies were discussed. Although perovskite solar cells still have a long way to go, facing difficulties in a lot of other aspects, we believe that the research we are doing now is of great significance in making perovskite into a real application. INTRODUCTION Perovskite solar cell is a kind of solar cell that employs hybrid organic-inorganic metal halide as core lightharvesting materials. It has occupied a leading position and become one of the hot spots in the photovoltaic research field owing to its economical manufacture costs, simple solution processing fabrication, and rapidly boosting photoelectric conversion efficiency (PCE), which has attracted great attention from the business market since its potential for next photovoltaic evolution. The PCE has skyrocketed from 3.8% [1] in 2009 to 25.5% [2] this year, with the help of strong light-harvesting ability [2], small exciton binding energy [3], fast charge transport properties [4], and prolonged charge carrier lifetime [5]. Perovskite is a special crystal structure with the chemical components simplified in ABX3, generally, in our case, A is small organic molecules or metal cations (Cs + , MA + , FA + ), B refers to divalent metal cations (Pb 2+ , Sn 2+ , Ge 2+ , Cu 2+ ) and X represents halide (I -, Br -, Cl -). The classical perovskite photovoltaic material is CH3NH3PbI3, in which the halogen ions are located in the octahedral top with the lead wrapped in the center of the octahedral cage, and the organic methyl amino groups located at the top angles of the face-centered cubic lattice. From the lessons learned from dye-sensitized solar cells (DSSCs), the first perovskite device structure was mesoporous superstructure, in which a large number of mesoporous TiO2 were adopted to support the perovskite materials, and also played a role in electron transport. [6] Further studies demonstrated the superior charge transport properties of perovskite itself and led to the birth of the planar structure. In a planar junction perovskite solar cell, several hundred-nanometer thick absorber layers were sandwiched between the electron transport layer (ETL) and hole transport layer (HTL) without a mesoporous scaffold. Despite its shorter development period, it provided an efficiency of over 15% and provide simplified device configurations at that time. [7] Inverted perovskite has a device structure known as "p-i-n", in which HTL is at the bottom of intrinsic perovskite layer i with ETL n at the top, in which light was illuminated through the HTL surface. 
In recent years, a novel kind of multidimensional perovskite has become promising candidates to surmount several challenges in perovskite devices, especially, longterm stability. Mostly, it forms a ruddlesden-popper structure with the help of aliphatic or aromatic alkyl ammonium cation [8]. DEFECTS IN PEROVSKITE SOLAR SYSTEM Despite the fast advance in the perovskite performance, the existence of defects leads to compromised PCE due to non-radiative recombination centers and trap induced charge injection barriers, which results in a gap between the current PCE and the theoretically deduced Shockley-Queisser limit. 11 The defects or trap-states have negative impacts on both the electronic and optic properties. In laser and LEDs, deep-level defects encouraged non-radiative recombination and lower the photoluminescence quantum yield, the radiative recombination at shallow defects also broadened the emission spectrum. Nevertheless, defects significantly accelerate the efficiency drop under the outdoor environment and hinder the long-term stability issues. Therefore, studying, and depressing defects in perovskite is a vital topic for perovskite future practical applications. The defects in semiconductors can be classified into several categories. Point defects are one of the most important ones, includes vacancies, interstitials, and antisites, which are caused by missing atoms, extra atoms, and exchanged atoms, respectively. Aside from point defects, there can be line (one-dimensional) defects, where lattice periodicity is discontinuous along a line. Defects can develop into even higher dimensions, such as twodimensional surface defects and grain boundaries, and three-dimensional voids and precipitates. The defects can be described according to two criteria. One is the formation energy of defects, which determines the probability of a defect type to exist in the material. The smaller the formation energy, the easier the defect can be triggered and generated. Another criterion is the position of the defect energy levels compared to the valence band maximum (VBM) and conduction band minimum (CBM). For energy levels close to the VBM or CBM, they are referred to as shallow-level traps while those located in the middle third of the forbidden bandgap are referred to as deep-level traps. Shallow-level traps can release the trapped charge carriers back to the band edges especially at elevated temperatures [12,13] whereas the deep-level traps cannot detrap charges easily. Fig. 2. Various kind of defects: a, perfect lattice; b, vacancy; c, interstitial; d, anti-site substitution; e, Frenkel defect (interstitial and the vacancy created from the same ion); f, Schottky defect (anion and cation vacancies occurring together); g, substitutional impurity; h, interstitial impurity; i, edge dislocation (line defect propagating along the axis perpendicular to the page); j, grain boundary; and k, precipitate. Passivation Strategies in Perovskite Solar Cells For halide perovskites with un-passivated interfaces, large grain boundaries and high defect density would strongly enhance the activity of trap-assisted recombination channel, which is a major loss in halide perovskite-based photovoltaics. [14] Trap assisted recombination at interface defects is even more detrimental than grain boundaries, by solely passivating interface traps, the PCE can be enhanced by 40% [15]. 
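The shallow-versus-deep distinction above is essentially a statement about thermally activated emission. A back-of-the-envelope estimate with a Boltzmann-activated rate (the attempt frequency and trap depths below are hypothetical, order-of-magnitude values) makes the contrast explicit:

```python
# Rough illustration of why shallow traps de-trap easily while deep traps do not:
# Boltzmann-activated emission rate  e = nu0 * exp(-E_a / (kB * T)).
# nu0 (attempt frequency) and the trap depths E_a are hypothetical example values.
import numpy as np

kB = 8.617e-5      # Boltzmann constant, eV/K
nu0 = 1e12         # attempt frequency, s^-1 (hypothetical)
T = 300.0          # temperature, K

for Ea in (0.05, 0.30, 0.50):   # shallow -> deep trap depths, eV
    rate = nu0 * np.exp(-Ea / (kB * T))
    print(f"E_a = {Ea:.2f} eV  ->  emission rate ~ {rate:.2e} s^-1")
```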
When a defect state lies energetically within the semiconductor bandgap, there is a likelihood that an approaching electron or hole will become captured, or trapped. The trapped electron (or hole) is likely to be emitted, or de-trapped back to the conduction (or valence) band by phonon absorption, if the activation energy is sufficiently small. However, if the activation energy is large, then it is more likely that the trapped carrier will annihilate or recombine with an opposite carrier before it can be emitted. Therefore, as a kind of polycrystalline thin film, a perovskite thin film must have a low density of charge carrier traps [16]. Controlling charge carrier trapping is an extremely important issue in the development of high-performance solar cells. Passivation, in physical chemistry and engineering, refers to a material becoming "passive", that is, less affected or corroded by the environment in which it will be used. Passivation involves the application of an outer layer of a shielding material as a micro-coating, created by chemical reaction with the base material [17]. The transition from the "active state" to the "passive state" occurs through the formation of a passivating film [18]. For perovskite solar cells, passivation generally refers to either chemical passivation, which reduces the defect trap states to optimize the charge transfer between various interfaces, or physical passivation, which isolates certain functional layers from the external environment to avoid degradation of the device.

Surface passivation by Lewis acids/bases

Surface passivation of organic-inorganic halide perovskite solar cells by introducing the Lewis bases thiophene and pyridine was proposed, curbing the formation of Pb defects and non-radiative recombination centers, which largely increases the PL quantum yield and PL lifetime as well as the PCE of the device [19,20]. The possible nature of the passivation mechanism is that the Lewis bases might donate electrons to bare Pb atoms originating from halide vacancies. Another Lewis base, ITM, also helps to achieve an efficiency of 20.5% with a fill factor of 81%, since it stabilizes the Pb octahedra in the ambient environment [21]. On the other hand, hole traps can be healed via Lewis acids. Iodopentafluorobenzene (IPFB) is a type of Lewis acid that helps to coordinate halogen atoms [22,23] via supramolecular halogen bonding [24,25]. By immersing annealed halide perovskites in IPFB and drying under N2, the traps of halide perovskites can be effectively passivated, which prevents charge accumulation and recombination at the interface.

Grain boundary passivation

Fullerene derivatives are well-known materials in organic photovoltaic systems, while in perovskite devices they are usually used in inverted p-i-n structured perovskite solar cells. The fullerene layers are deposited on the perovskite layers, eliminating photocurrent hysteresis and improving the device performance [26]. An ultra-thin PCBM layer was coated on the perovskite surface, followed by heat treatment, during which the PCBM diffused into the grain boundaries, where plenty of defects gather. It passivates the PbI3 anti-site defects during the formation of perovskite grains [27]. Self-passivation is another superior property of perovskite. There are three methods to introduce a PbI2 passivation layer at perovskite grain boundaries.
The first one is the self-induced formation of PbI2 from the controlled degradation of pristine perovskite thin films via thermal [28-31] or water vapor treatment [32-34]. The second is the preparation of a non-stoichiometric perovskite precursor solution with an excess of PbI2 (usually 3%-10% molar ratio relative to the perovskite) [35-38]; and the final one is the incomplete reaction of PbI2 through a two-step solution or vapor reaction method [37-39]. Another key component of perovskite is CH3NH3I (MAI), which can also be introduced to achieve self-passivation. This was realized by MAI vapor post-treatment, which was found to be an efficient method to passivate defect sites on perovskite grain surfaces [40,41].

Bulk passivation via alkali metal doping

In perovskite devices, the photocurrent hysteresis phenomenon is an inevitable issue that must be addressed. In recent years, a universal strategy, alkali metal doping, has been proposed. Time-resolved photoluminescence (TRPL) and photoluminescence quantum efficiency (PLQE) measurements show that potassium doping is an efficient route to trap-state passivation. The PCE of the passivated device jumps from 17.3% to 21.5%, accompanied by enhancements of VOC (open-circuit voltage) and JSC (short-circuit current density). Besides, ion migration is also inhibited, which strongly suppresses the hysteresis [49-51].

Passivation of the charge carrier transport pathways

Trap states located at the surface of the charge transport layers are also a crucial aspect. The typical ETL TiO2 can be passivated by an ultrathin layer of TiO2 grown by atomic layer deposition, or by a chemically deposited TiCl4 layer [52]. Furthermore, lithium ions have been adopted to improve the electronic properties of the mesoporous TiO2 layer by reducing electronic trap states. Additionally, self-assembled fullerene derivatives, pyridine, and other semiconductor shell layers that possess high electron mobility can also be introduced to assist the passivation of the TiO2 layer [53-55]. Aside from the functional materials in the operating device, the selective electrodes should not be ignored in performance-enhancement procedures. An ultra-thin Ni surface layer can be applied to the Au electrode, functioning as both a physical passivation barrier and a hole-transfer catalyst [56], which enhances the photocurrent density and substantially improves water stability [57,58].

Methods to evaluate trap-state properties

Given the above discussion of trap states in perovskite solar systems, basic knowledge of trap-state properties is clearly vital for efficiency-improvement strategies. The priority is therefore to characterize the properties of the trap states. Several classical techniques are summarized here, together with their basic principles.

Space-Charge-Limited Current (SCLC)

SCLC is a powerful method to estimate the density of defect states and the charge mobility from measured current-voltage characteristics; the defect density can be determined from the space-charge density together with the permittivity of the semiconductor [59]. Defects can trap and scatter free charge carriers, thus changing the electrical properties of halide perovskites. Under charge injection through ohmic contacts, three distinct regions can be found in the log-log current-voltage curve.
[60] When the voltage is relatively small, I ∝ V^n with n = 1. This first region is called the Ohmic region. The second region is the trap-filled limit (TFL) region: when the applied voltage reaches a certain threshold value V_TFL, the defects become saturated and are not able to trap more free carriers. The resistance thus jumps to a lower value and n increases to above 3. The third region is called the Child region, with n equal to 2, and follows the Mott-Gurney law [61]. The defect density can be derived under the assumption that all defects are filled and injected charges can move freely, n_trap = 2 ε ε0 V_TFL / (q L²), where n_trap is the defect density, ε is the dielectric constant relative to vacuum, ε0 is the vacuum permittivity, q is the electric charge and L is the thickness [62].

Density functional theory (DFT) calculation

The trap concentration determined via SCLC is not accurate because contributions from different charges cannot be differentiated; for instance, ion migration within the crystal can result in an underestimation of the trap concentration. In this situation, theoretical calculations such as DFT are necessary to explore the nature of the traps, because it is not possible to identify the origin of the imperfections experimentally. DFT is a computational modeling method using supercells of a large number of atoms to investigate the electronic structure through functionals of the spatially dependent electron density. The small formation energy of defect states in perovskite devices is estimated and their shallow distribution is elucidated via DFT, which underlies the strong trap-state tolerance and prolonged carrier lifetime of perovskite devices [65].

Thermal admittance spectroscopy (TAS)

Defects can be regarded as analogs of small capacitors, since the trapping-detrapping behavior can be considered a charge-discharge process. By applying an alternating current (AC), the admittance of the target halide perovskite responds to the AC voltage. The relative position of the defect energy level with respect to the Fermi level determines the probability that the defects are occupied or not [66]. Temperature- and frequency-dependent admittance can provide the density of trap states (tDOS) with a trap energy depth profile [67] by the following formula (3), where V_bi is the built-in heterojunction potential barrier, ω is the AC frequency, C is the capacitance and W_D is the depletion width of the space-charge region [68,69]. However, very deep traps or traps with long thermal emission times cannot be detected, since this method depends on the dynamics of the trapping-detrapping process as well as the relative position of the defect energy level and the Fermi energy.

CONCLUSIONS

Perovskite cells have promising potential in the future optoelectronic device market, although they still face instability and limited-PCE downsides. To make them more competitive, we need a dedicated and comprehensive understanding of the defect behavior and mechanisms in perovskite cells using the various theoretical and experimental methods mentioned above, such as DFT and SCLC; from there we can devise appropriate and effective ways for defect passivation, so that rapid progress can continue to be made in the field. Currently we have many different passivation strategies using different materials on different parts of the device: surface passivation using Lewis bases/acids, grain-boundary passivation using fullerene derivatives, bulk passivation through alkali metal doping, and even passivation of the charge carrier transport pathways.
All these are crucial processes to increase the PCE, suppress hysteresis, and improve long-term stability, making perovskite solar cells more resilient, durable, and competitive devices in the future.
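As a quick numerical companion to the SCLC analysis above, the trap density follows directly from the trap-filled-limit voltage via n_trap = 2 ε ε0 V_TFL / (q L²); the film parameters below are hypothetical example values, not measurements.

```python
# Example evaluation of the SCLC trap-density relation n_trap = 2*eps_r*eps0*V_TFL/(q*L^2)
# discussed in the SCLC section.  eps_r, V_TFL and L are hypothetical example values.
eps0 = 8.854e-12    # vacuum permittivity, F/m
q = 1.602e-19       # elementary charge, C

eps_r = 30.0        # relative dielectric constant of the perovskite film (example)
V_TFL = 0.5         # trap-filled-limit voltage, V (example)
L = 400e-9          # film thickness, m (example)

n_trap = 2.0 * eps_r * eps0 * V_TFL / (q * L ** 2)   # in m^-3
print(f"n_trap ~ {n_trap * 1e-6:.2e} cm^-3")
```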
3,654
2021-01-01T00:00:00.000
[ "Materials Science" ]
Difference-aware Knowledge Selection for Knowledge-grounded Conversation Generation

In a multi-turn knowledge-grounded dialog, the difference between the knowledge selected at different turns usually provides potential clues to knowledge selection, which has been largely neglected in previous research. In this paper, we propose a difference-aware knowledge selection method. It first computes the difference between the candidate knowledge sentences provided at the current turn and those chosen in the previous turns. Then, the differential information is fused with or disentangled from the contextual information to facilitate final knowledge selection. Automatic, human observational, and interactive evaluation shows that our method is able to select knowledge more accurately and generate more informative responses, significantly outperforming the state-of-the-art baselines.

Introduction

Knowledge-grounded conversation generation aims at generating informative responses based on both discourse context and external knowledge (Ghazvininejad et al., 2018; Zhou et al., 2018a), where selecting appropriate knowledge is critical to the success of the task. Existing knowledge selection models generally fall into two types. One type is solely based on the context (e.g., Meng et al., 2020), which we call non-sequential selection because knowledge selection at different turns is independent. The other type sequentially selects knowledge additionally conditioned on previously selected knowledge (Kim et al., 2020), which we call sequential selection.

(Figure 1 caption, fragment: "... denotes that the corresponding knowledge has little difference from or is identical to the previously selected one, and selecting it may lead to repetitive responses. The red × denotes that the difference is too large, and selecting it could make the response incoherent with the context.")

As shown in Kim et al. (2020), such a sequential way can better simulate a multi-turn dialog and facilitate knowledge selection in later turns. However, the difference between selected knowledge at different turns has been largely neglected in prior studies, while it usually provides potential clues to knowledge selection. Figure 1 illustrates an example, where the dialog system selects one from candidate knowledge sentences (all relevant to the context) at the 2nd turn. Selecting knowledge that has little difference from or even is identical to the previously selected one (like the 1st knowledge) may lead to generating repetitive responses, while too large a difference (like the 3rd knowledge) would make the response incoherent with the context. As a result, the dialog system should avoid knowledge which differs from the previously selected ones either too little or too much, and instead select an appropriate knowledge sentence (the 2nd one) which can make the conversation flow smoothly and naturally. We thus propose DiffKS, a novel Difference-aware Knowledge Selection method for knowledge-grounded conversation generation. It first computes the difference between the candidate knowledge sentences provided at the current turn and the previously selected knowledge. Then, in the two models we devise, the differential information is fused with or disentangled from the contextual information to facilitate final knowledge selection. Automatic and human evaluation on two widely used benchmarks shows that our method is significantly superior to the state-of-the-art baselines and that it can select knowledge more accurately and generate more informative responses.
Our contributions are summarized as follows: • We propose to explicitly model and utilize the differential information between selected knowledge in multi-turn knowledge-grounded conversation for knowledge selection. We further devise two variants where the differential information is fused with or disentangled from the context information during knowledge selection. • Automatic, human observational, and human interactive evaluations show that our method significantly outperforms strong baselines in terms of knowledge selection and can generate more informative responses.

2 Related Work

Knowledge-grounded Dialog Generation. Recently, a variety of neural models have been proposed to facilitate knowledge-grounded conversation generation (Zhu et al., 2017; Young et al., 2018; Zhou et al., 2018a; Liu et al., 2018). The research topic is also greatly advanced by many corpora (Zhou et al., 2018b; Moghe et al., 2018; Dinan et al., 2019; Gopalakrishnan et al., 2019; Moon et al., 2019; Tuan et al., 2019; Zhou et al., 2020). As surveyed in prior work, existing studies have been mainly devoted to addressing two research problems: (1) knowledge selection: selecting appropriate knowledge given the dialog context and previously selected knowledge (Meng et al., 2020; Kim et al., 2020); and (2) knowledge-aware generation: injecting the required knowledge to generate meaningful and informative responses (Ghazvininejad et al., 2018; Zhou et al., 2018a; Li et al., 2019; Qin et al., 2019; Yavuz et al., 2019; Zhao et al., 2020). Since selecting the appropriate knowledge is a precursor to the success of knowledge-grounded dialog systems, we focus on the knowledge selection problem in this paper.

Non-sequential Knowledge Selection. The non-sequential selection models capture the relationship between the current context and the background knowledge (e.g., Meng et al., 2020). For instance, PostKS estimates a posterior distribution over candidate knowledge sentences, which is based on both the context and the golden response, and only uses the context to estimate a prior distribution as an approximation of the posterior during inference. Besides, Meng et al. (2020), among others, also propose non-sequential selection models. Different from our work and Kim et al. (2020), which select knowledge from candidate knowledge sentences, some of these methods are devised for selecting important text spans or fragments from the background knowledge document that will be used in generation. These works therefore have a different task setting from ours.

Sequential Knowledge Selection. The sequential selection models additionally make use of previously selected knowledge to facilitate knowledge selection (Kim et al., 2020). For instance, Kim et al. (2020) propose a Sequential Latent Knowledge Selection (SLKS) model. It keeps track of the hidden states of the dialog history and the previously selected knowledge sentences. Our method is parallel to SLKS because we also utilize the previously selected knowledge. However, we explicitly compute the difference between knowledge selected at different turns, while SLKS only encodes the already selected knowledge in an implicit way. In addition, a number of recent works propose RL-based models to select a path in a structured knowledge graph (KG) (Xu et al., 2020a,b), which also select knowledge in a sequential way.
While our method is designed to ground the conversation to unstructured knowledge text, we leave as future work the application of our method to such KG-grounded dialog generation tasks (Moon et al., 2019; Zhou et al., 2020).

Task Formulation

In a multi-turn dialogue, given a post and a sequence of knowledge sentences at each turn, our goal is to select appropriate knowledge and generate a proper response to the current context. Formally, the post at the τ-th turn is a sequence of tokens x^τ = (x^τ_1, ..., x^τ_{|x^τ|}), and the response to be generated is y^τ = (y^τ_1, ..., y^τ_{|y^τ|}). The background knowledge k^τ = (k^τ_1, ..., k^τ_{|k^τ|}) contains the knowledge sentences provided at the τ-th turn, where each k^τ_i is a sequence of tokens. Note that under the multi-turn dialogue setting, we use c^τ = [x^{τ-1}; y^{τ-1}; x^τ] as the given context at the τ-th turn, where [·; ·] denotes concatenation. In Sections 3.2 and 3.4 we will omit the superscript τ for simplicity.

Encoders

The context is encoded with a bidirectional GRU (Cho et al., 2014), and its final hidden states (including the backward state ←h_{c,1}) serve as the context representation. Similarly, the knowledge sentences are encoded with another BiGRU, whose final hidden states serve as the representation of k_i. Specifically, we add an empty sentence k_0 that indicates no knowledge being used.

Figure 2: An overview of the model structure.

Difference-aware Knowledge Selection

In order to select proper knowledge, our model is made aware of the difference between the current candidate knowledge sentences and the previously selected knowledge. To make full use of the contextual dependency and relevance between the knowledge sentences, our model first compares the candidate knowledge sentences to explore their correlations, where the comparison is conducted using a BiGRU. Then, the model computes the difference of each knowledge sentence r^τ_i from the knowledge selected in the previous M turns, {h^{τ-m}_k}_{m=1}^{M}. Inspired by Wang et al. (2018), we define the difference as follows, where F is a fully connected layer activated with tanh. Note that at the first turn we set o^1_i to a zero vector because there is no differential information to be obtained. Since, intuitively, the knowledge selected in the previous turn has the largest impact and provides the most clues for the current selection, we study the simplest case M = 1, i.e., o^τ_i = Diff(h^{τ-1}_k, r^τ_i), in the main experiments. Next, we introduce two variants in which the differential information {o^τ_i}_{i=0}^{|k^τ|} is fused with or disentangled from the contextual information during knowledge selection.

Fused Selection

Here v, W_que and W_key are trainable parameters. However, it is difficult to distinguish the respective contributions of contextual and differential information to knowledge selection in this fused variant. We thus devise the disentangled variant as follows, where the roles of the two types of information are separated, which makes it feasible to conduct an ablation study.

Figure 4: Disentangled Selection module. The contextual information and the differential information are disentangled to calculate two separate knowledge selection distributions in two independent selectors.

Figure 4 gives an overview of the Disentangled Selection module. It has two independent selectors. The Contextual Selector simply looks for the knowledge sentence that has high relevance to the context, just like most existing knowledge selection models do.
It only takes advantage of the context h τ c to match each knowledge sentence itself h τ k,i , obtaining a context-aware selection distribution: Disentangled Selection In contrast, the Differential Selector focuses on predicting the next knowledge to be selected conditioned on the previously selected knowledge and differential information, which reveals the process of knowledge transition. Without the access to the contextual information, the Differential Selector views the previously selected knowledge h τ −1 k as query, and the knowledge sentence r τ i with its differential information o τ i as key, to estimate a difference-aware selection distribution: where v, W que and W key are trainable parameters. The final selection distribution is the summation of the distributions of two selectors: Note that the Differential Selector relies on the previously selected knowledge, thus at the first turn, we set β τ Diff,i to 0 for each i. Selecting Knowledge Finally, either adopting the Fused or Disentangled Selection module, the model selects the knowledge sentence with the highest attention score, and uses its representation for further generation 2 : Decoder The decoding state is updated by a GRU: where W D and b D are trainable parameters, and e (y t−1 ) denotes the embedding of the word y t−1 generated in the last time step. Then, the decoder outputs the generation probability over the vocabulary (without normalization): where W G and b G are trainable parameters, and w is the one-hot vector of the word w. Meanwhile, a copy mechanism (Gu et al., 2016) is adopted to output additional copy probability of the words in the selected knowledge sentence k i (without normalization): where H is a fully connected layer activated with tanh. The final probability distribution is computed as follows: (17) where Z is the normalization term. Then we select the word from vocabulary with the highest probability, saying: y t = arg max w P(y t = w). Loss The negative log likelihood loss is adopted: where y τ t * denotes the t-th word in the golden response at the τ -th turn and T is the length of turns in the whole dialogue. We also add supervision on the final knowledge selection distribution: where i τ * denotes the index of the golden selected knowledge sentence at the τ -th turn. The total loss is their summation: where we set λ = 1 in our experiments. WoW (Dinan et al., 2019) contains multi-turn knowledge-grounded conversations, collected by wizard-apprentice mode. Each utterance of the wizard is grounded to a selected knowledge sentence, or indicated by that no knowledge is used. The dialogues are split into 18,430/1,948/965/968 for Train/Dev/Test Seen/Test Unseen respectively, with 4 turns per dialogue and 61 provided knowledge sentences per turn on average. Note that the test data is split into Test Seen (in-domain) and Test Unseen (out-of-domain), where Test Unseen contains topics that are never seen in Train or Dev. Holl-E (Moghe et al., 2018) contains conversations in which one speaker is strictly instructed to give utterances by copying or modifying sentences from the given background document. Similarly, each utterance is annotated regarding the selected knowledge. Following Kim et al. (2020), we tokenized the background document into sentences, and meanwhile ensured that the annotated span is included in a whole sentence. The dialogues are split into 7,211/930/913 for Train/Dev/Test respectively, with 5 turns per dialogue and 60 provided knowledge sentences per turn on average. 
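A minimal sketch of the kind of additive-attention scorer used by the selectors above (score_i = v⊤ tanh(W_que·query + W_key·key_i), followed by a softmax over candidates) is given below; the hidden sizes are illustrative and the module is a simplified stand-in, not the authors' released implementation.

```python
# Simplified additive-attention knowledge scorer in the spirit of the contextual and
# differential selectors described above; dimensions and wiring are illustrative.
import torch
import torch.nn as nn

class AdditiveAttentionSelector(nn.Module):
    def __init__(self, query_dim: int, key_dim: int, attn_dim: int = 400):
        super().__init__()
        self.w_que = nn.Linear(query_dim, attn_dim, bias=False)
        self.w_key = nn.Linear(key_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, query, keys):
        # query: (batch, query_dim); keys: (batch, num_candidates, key_dim)
        scores = self.v(torch.tanh(self.w_que(query).unsqueeze(1) + self.w_key(keys)))
        return torch.softmax(scores.squeeze(-1), dim=-1)     # (batch, num_candidates)

# Usage with random tensors standing in for encoder outputs.
selector = AdditiveAttentionSelector(query_dim=400, key_dim=400)
context = torch.randn(2, 400)         # context representation
candidates = torch.randn(2, 61, 400)  # 61 candidate knowledge sentences per example
probs = selector(context, candidates)
print(probs.shape, probs.sum(dim=-1))  # torch.Size([2, 61]), each row sums to 1
```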
Implementation Details All the models were implemented with PyTorch (Paszke et al., 2017). The sentences were tokenized with NLTK (Bird and Loper, 2004). We set the vocabulary size to 20K for WoW and 16K for Holl-E and used 300-dimensional word embeddings initialized by GloVe (Pennington et al., 2014) or from a standard normal distribution N(0, 1). We applied a dropout rate of 0.5 on word embeddings. The hidden sizes were set to 200 for the encoders (400 in total for the two directions) and to 400 for the decoder. We adopted the ADAM (Kingma and Ba, 2015) optimizer with the initial learning rate set to 0.0005. The batch size was set to 8 dialogues. All the models share the same hyperparameter setting and were trained for 20 epochs on one NVIDIA Titan Xp GPU. The checkpoints of our reported results were selected according to BLEU-4 on the Dev sets. Automatic Evaluation We used several automatic metrics: ACC, the accuracy of knowledge selection on the whole test set, corpus-level BLEU-2/4 (Papineni et al., 2002), and ROUGE-2 (Lin, 2004). As shown in Table 1, our method significantly outperforms all the baselines in all the metrics on the three test sets (except BLEU and ROUGE on WoW Seen compared with SLKS), which indicates its superiority in selecting proper knowledge and generating informative responses. Compared to the baseline models, our models also demonstrate a stronger ability to generalize from in-domain (WoW Seen) to out-of-domain data (WoW Unseen). It is worth noting that on WoW Unseen, our DiffKS Fus obtains an even higher accuracy of knowledge selection (19.7) than the BERT-enhanced SLKS in their original paper (18.3). We also observed that DiffKS Fus performs a bit better on WoW while DiffKS Dis performs a bit better on Holl-E. We conjecture that this is because in Holl-E the golden selected knowledge sentences across different turns usually have high contextual dependency (for example, they may be consecutive sentences in the document), which makes it feasible to predict the next selected knowledge simply conditioned on the differential information. Human Observational Evaluation We conducted human observational evaluation with pair-wise comparison, where our two models were compared with PostKS++ and SLKS. 100 dialogues were sampled from each of WoW Seen/Unseen. For each pair of dialogues generated from two models (suppose with T turns), annotators from Amazon Mechanical Turk were hired to give preferences (win, lose, or tie) for each response pair of all the T turns in terms of different metrics. Each pair-wise comparison of dialogues was judged by 3 annotators. We adopted the following two metrics: Naturalness evaluates the fluency and readability of a response. Appropriateness evaluates the relevance to the context and whether the selected knowledge is used appropriately. Results are shown in Table 2, where the Fleiss' Kappa (Fleiss, 1971) values show almost moderate agreements (0.4 < κ < 0.6). Our models significantly outperform PostKS++ in both metrics, and also generally outperform SLKS in terms of Appropriateness. Again, the advantage of our models on WoW Unseen is more evident than on WoW Seen. Human Interactive Evaluation We further conducted human interactive evaluation where real humans converse with one model about a specific topic. We compared PostKS++ and SLKS with our two models. The workers from Amazon Mechanical Turk were asked to first select one topic from 2-3 provided candidate topics, and then converse with one of the models for 3-5 dialogue turns.
After conversation, they were required to rate the dialog model with a 5-star scale in terms of the fluency and informativeness of the utterances and the coherence of the whole dialog. Following Dinan et al. (2019); Kim et al. (2020), the interactive evaluation was implemented with ParlAI (Miller et al., 2017). For each model, we averaged the scores from 150 collected conversations on each test set of WoW. We also reported the results of human-human dialog from Dinan et al. (2019); Kim et al. (2020), where each worker converses with another human and the latter has access to knowledge sentences just like the models do. Results are shown in Table 3 4 , where DiffKS Fus gains the highest scores and our models both outperform the other two state-of-the-art baselines, indicating that our models are favorably preferred by human annotators. Ablation Test In order to verify the effectiveness of the differential information in knowledge selection, we conducted ablation tests, which were specifically based on the disentangled variant DiffKS Dis . In DiffKS Dis , we removed either the Differential Selector (DiffSel) or the Contextual Selector (CtxSel), and trained the model with only one of the two selectors. Results are shown in Table 4. Without the differential selector, the model performance is remarkably impaired in all the metrics on three test sets, indicating the importance of utilizing differential information. In comparison, removing the contextual selector is less influential (with less performance drop). We conjecture that this may result from the characteristics of datasets. For instance, in WoW, the apprentice (without access to knowledge) usually reacts passively to the wizard (having access to knowledge). Thus the apprentice posts (contextual information) have limited influence in driving the conversation, which is instead affected or controlled by the wizard. In this case, our differential information that can predict the process of knowledge transition has more influence than the contextual information. In addition, same as Kim et al. (2020), the knowledge sentences in Holl-E are obtained by segmenting a long document into single sentences, which implies that there exists the relevance or contextual dependency between knowledge sentences. Consequently, the differential information is still able to provide considerable clues for knowledge selection even without access to the new user post (the context). Furthermore, after removing DiffSel, DiffKS Dis reduces to a vanilla knowledge selection model where the supervision L KS was directly applied on the 'prior' selection distribution. Nevertheless, the performance of the ablated model is sometimes competitive to the baselines (for instance, in terms of ACC, DiffKS Dis w/o DiffSel obtains 22.3/15.5/29.1 vs. 21.9/14.9/28.0 of PostKS++). It may result from the gap between training and inference caused by the prior-posterior framework adopted in PostKS and SLKS, which may be not superior over directly training the prior selection distribution 5 . Difference From More Turns To investigate the impact of increasing the turns of differential information (the M in Equ.4), we additionally experimented with M = 2, 3, and took the arithmetic average for simplicity in Equ.4, saying ∀i, λ i = 1/M . Results are shown in Table 5. We can find that M = 2 generally achieves the best performance compared with M = 1, 3 for both DiffKS Fus and DiffKS Dis (while M = 3 is still better than M = 1). 
This further demonstrates the effectiveness of explicitly modeling the differential information. We also conjecture that the model performance would be further improved by assigning the nearest/farthest difference the largest/smallest weight in Eq. (4), i.e., $\lambda_1 > \lambda_2 > \cdots > \lambda_M$, which is more reasonable than the simplified arithmetic average. Accuracy Over Turns To verify whether the sequential knowledge selection facilitates knowledge selection in later turns, we evaluated the accuracy of knowledge selection at different turns. The statistics are shown in Table 6. Our two models have the highest accuracy from the 2nd to the 5th turn and outperform SLKS and PostKS++ (and SLKS also generally outperforms PostKS++). The results show that our models can select more accurate knowledge consistently over different turns. Case Study Figure 5: Case study. We mark the selected knowledge sentence in parentheses before each response. The knowledge sentences k1-k5 are about the topic Georgia (U.S. state), while k6 is about History of Australia. Blue denotes duplicate responses resulting from repetitive knowledge selection. The red × denotes incoherent responses resulting from selecting a knowledge sentence far different from those of previous turns. We show a case from WoW Seen in Figure 5, which compares the responses generated by PostKS++, SLKS and our two models. At the 2nd turn, PostKS++ generates almost the same response as at the 1st turn due to repetitive knowledge selection. Similar cases occur for SLKS at the 2nd and the 3rd turns. Moreover, PostKS++ selects a quite different knowledge sentence at the 3rd turn from those at previous turns, which is about the topic History of Australia rather than Georgia (U.S. state). As a result, PostKS++ generates a response at the 3rd turn which is not coherent with the previous context. In contrast, our two models select both diverse and appropriate knowledge sentences at all the turns, thereby generating informative responses and making the dialog coherent and natural. Conclusion We present a novel difference-aware knowledge selection method for multi-turn knowledge-grounded conversation generation. Our method first compares the candidate knowledge provided at the current turn with the previously selected knowledge, and then selects the appropriate knowledge to be used in generation. Experimental results show that our method is able to select knowledge more accurately and to generate more informative responses, significantly outperforming the state-of-the-art baselines.
5,182.8
2020-09-20T00:00:00.000
[ "Computer Science" ]
Dosimetric effect of respiratory motion on planned dose in whole-breast volumetric modulated arc therapy using moderate and ultra-hypofractionation The interplay effect of respiratory motion on the planned dose in free-breathing right-sided whole-breast irradiation (WBI) was studied by simulating hypofractionated VMAT treatment courses. Ten patients with phase-triggered 4D-CT images were included in the study. VMAT plans targeting the right breast were created retrospectively with moderately hypofractionated (40.05 Gy in 15 fractions of 2.67 Gy) and ultra-hypofractionated (26 Gy in 5 fractions of 5.2 Gy) schemes. 3D-CRT plans were generated as a reference. All plans were divided into respiratory phase-specific plans and calculated in the corresponding phase images. The fraction-specific dose was formed by deforming and summing the phase-specific doses in the planning image for each fraction. The fraction-specific dose distributions were deformed and superimposed onto the planning image, forming the course-specific respiratory motion perturbed dose distribution. Planned and respiratory motion perturbed doses were compared, and changes due to respiratory motion and the choice of fractionation were evaluated. The respiratory motion perturbed PTV coverage (V95%) decreased by 1.7% and the homogeneity index increased by 0.02 for the VMAT techniques, compared to the planned values. The highest decrease in CTV coverage was 0.7%. The largest dose differences were located in the areas of steep dose gradients parallel to the respiratory motion. The largest difference in DVH parameters between fractionation schemes was 0.4% of the prescribed dose. Clinically relevant changes to the doses of organs at risk were not observed. One patient was excluded from the analysis due to a large respiratory amplitude. Respiratory motion of less than 5 mm in magnitude did not result in clinically significant changes in the planned free-breathing WBI dose. The 5 mm margins were sufficient to account for the respiratory motion in terms of CTV dose homogeneity and coverage for the VMAT techniques. Steep dose gradients near the PTV edges might decrease the CTV coverage. No clinical significance was found due to the choice of fractionation. Introduction Right-sided breast cancer has traditionally been treated under free-breathing (FB) conditions using tangential fields. Hypofractionation has shortened the breast cancer treatment courses from 25 fractions to 15 fractions [1] and 5 fractions [2], henceforth called moderate hypofractionation and ultra-hypofractionation. Ultra-hypofractionation is gradually gaining acceptance [3,4], and will reduce the clinical load and costs in breast cancer treatment [4,5]. However, the new emerging fractionation schemes have increased the plan quality requirements, which are not always achievable with conventional treatment techniques. The volumetric modulated arc therapy (VMAT) technique has been utilized in treating breast cancer, as it has been proven to yield high coverage and improved homogeneity of the target dose [6][7][8]. While a recent study found VMAT dosimetrically feasible for ultra-hypofractionated left-sided early breast cancer treatments [9], possible dosimetric errors caused by respiratory motion on highly modulated fields have raised concern.
While the dose deviation caused by respiratory motion has been shown to average out after five fractions in lung cancer phantom [10], the dose-averaging effect in whole-breast irradiation (WBI) remains a question as the breast targets are usually large and the chest wall region may undergo shape changes with expiration and inspiration. Clinically, it is recommended to use respiratory gating if respiratory motion range exceeds 5 mm in any direction [11]. The effects of respiratory motion have been studied for tangential breast cancer treatment techniques [12][13][14][15][16], but the feasibility of VMAT techniques has not been investigated for WBI under free-breathing conditions. Previously, the effects of breathing motion on WBI dose distribution have been simulated using isocenter shifts and weighted end-expiration and endinspiration dose calculations [12][13][14]. However, incorporating a realistic respiratory cycle is challenging and few studies using a four-dimensional computed tomography (4D-CT) image set and IMRT technique to investigate the effect of breathing motion on breast irradiation have been published [15,16]. Another IMRT study suggested that the areas of homogeneous target dose would be unaffected and biggest deviations would be observed close to the target edges [17]. However, no direct conclusion can be drawn for the VMAT technique, where the dose is delivered from a multitude of angles. This study is the first to evaluate the feasibility of VMAT technique in right-sided WBI under freebreathing conditions by simulating the delivered dose on 4D-CT image sets. In addition, the differences in dose-averaging effects between moderate hypofractionation and ultra-hypofractionation are evaluated. Materials and methods Ten patients originally diagnosed with lung cancer with 4D-CT images were included in this study. The wholebreast target volumes were delineated according to the ESTRO guideline [18]. As acceptance criteria, a real-time position management (RPM, Varian Medical Systems, Palo Alto, CA) dataset and fully imaged breast region with at least 2 cm margins both cranially and caudally were required. The study protocol was approved by Central Finland Health Care District. Ten phase-triggered 4D-CT images per patient were acquired using Siemens mCT (Siemens Healthcare GmbH, Erlangen, Germany) with 2 mm slice thickness. Patients were instructed to breathe calmly and were imaged in a supine position. End-inspiration and end-expiration phase markers were placed automatically in the RPM data and corrected by a radiotherapist if necessary. The treatment planning was conducted in the endinspiration anatomy to simulate a worst-case scenario. Clinical target volumes (CTV) were delineated on the end-inspiration images by an oncologist and expanded by 5 mm to form the planning target volumes (PTV). The PTV and CTV structures were cropped 5 mm inside from the skin to form PTVin and CTVin. In addition, the PTV was expanded 8 mm outside the body to be used in conjunction with a virtual bolus [19]. All target delineations were carried out in Eclipse (version 15.6, Varian Medical Systems) treatment planning system (TPS). The organs-at-risk (OAR) were automatically delineated on the end-inspiration image using MIM Maestro software (MIM Software Inc, Cleveland, OH) based on national atlas for breast cancer [20] and verified by the planning physicist. The delineated OAR structures were lungs, heart, contralateral breast and liver. 
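As a conceptual illustration of the margin operations described above (CTV expanded by 5 mm to form the PTV, and structures cropped 5 mm inside the skin to form PTVin and CTVin), a minimal sketch on a voxel mask is given below. This is only an assumption-laden sketch: the clinical TPS performs these operations internally, and the helper names, the ellipsoidal structuring element, and the toy masks are ours.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def ball(radius_mm, spacing_mm):
    """Ellipsoidal structuring element matching the voxel spacing (z, y, x) in mm."""
    r = [int(np.ceil(radius_mm / s)) for s in spacing_mm]
    zz, yy, xx = np.mgrid[-r[0]:r[0]+1, -r[1]:r[1]+1, -r[2]:r[2]+1]
    return ((zz*spacing_mm[0])**2 + (yy*spacing_mm[1])**2 + (xx*spacing_mm[2])**2) <= radius_mm**2

def expand_and_crop(ctv, body, spacing_mm, margin_mm=5.0, crop_mm=5.0):
    """PTV = CTV + isotropic margin; PTVin/CTVin = structures cropped inside the skin."""
    ptv = binary_dilation(ctv, structure=ball(margin_mm, spacing_mm))
    body_in = binary_erosion(body, structure=ball(crop_mm, spacing_mm))
    return ptv, ptv & body_in, ctv & body_in   # PTV, PTVin, CTVin

# toy masks: 2 mm slices, 1 mm in-plane resolution
spacing = (2.0, 1.0, 1.0)
body = np.ones((40, 120, 120), dtype=bool)
ctv = np.zeros_like(body); ctv[10:30, 40:80, 40:80] = True
ptv, ptv_in, ctv_in = expand_and_crop(ctv, body, spacing)
```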
A representative respiratory cycle was formed for each patient by averaging the RPM respiratory cycles. The representative cycles were sampled to the median prefiltered respiratory cycle length for each patient. In addition, the range of chest wall movement perpendicular to the planned tangential field central axis was determined from the 4D series for each patient. The midpoint slice of CTV, in cranial-caudal direction, was chosen as the measurement location. In addition, the liver motion amplitude was measured from the 4D series by measuring the cranio-caudal displacement of the liver dome. Eclipse (Varian Medical Systems) treatment planning system (TPS) and Monaco TPS (Elekta AB, Stockholm, Sweden) were used in generating the treatment plans. The plans were generated for Varian TrueBeam linear accelerator with Millenium 120 MLC and Elekta Infinity linear accelerator with Agility MLC, respectively. Eclipse used the Photon Optimizer planning algorithm and analytical anisotropic algorithm (AAA) 15.6 for dose calculation while Monaco used the X-ray Voxel Monte Carlo algorithm for dose calculation. Six treatment courses were planned for each patient using three techniques and two fractionations. Two VMAT techniques, Varian Rapid Arc (RA) and Elekta VMAT (E-VMAT), were used and tangential threedimensional conformal radiation therapy (3D-CRT) plans with tangential main fields and 2-3 subfields were also generated for reference. The RA and 3D-CRT plans were generated in Eclipse and E-VMAT plans were generated in Monaco. These plans are referred to as original plans in this article. The treatment planning was carried out using moderate hypofractionation (40.05 Gy in 15 fractions of 2.67 Gy). The ultra-hypofractionated plans (26 Gy in 5 fractions of 5.2 Gy) were formed by adjusting the fractionation of the original 15 fraction plans for each technique. No reoptimizing was performed to avoid confounding effects of different planning objectives and differences in dose distributions. The maximum leaf speeds were 25 mm/s and 65 mm/s for RA and E-VMAT, respectively, and the dose rate was limited to 600 MU/ min for all plans. The prescribed dose was normalized to the mean dose of PTVin. An 11 mm virtual bolus was utilized in RA and E-VMAT treatment planning. The VMAT fields were restricted to tangential directions, to better spare the contralateral normal tissue compared to conventional VMAT [8]. Posterior arcs ranged between 181° and 260°-275° and the anterior arcs between 325°-360° and 60°-70° according to individual patient anatomy. The collimator angles for posterior and anterior arc fields were ± 5°-20° for RA and ± 2° E-VMAT. The planning goal was that 95% of the prescribed dose covered 95% of the PTVin and 98% of the CTVin. The V107% was limited to 1 cc. The mean dose of ipsilateral lung was limited to 20% of the prescribed dose. In addition, the volume of 16 Gy dose was limited to 20% of the ipsilateral lung. Similarly, mean doses to contralateral lung, breast and heart were limited to 1 Gy. In addition, the normal tissue V110% was limited to 1 cc. A workflow was designed to simulate the respiratory motion perturbed dose for each treatment course using the original Eclipse and Monaco plans and the RPM data acquired during patient imaging (Fig. 1). SureCalc Monte Carlo dose calculation algorithm was used to calculate all dose distributions in MIM Maestro software with generic beam models for Varian TrueBeam and Elekta Infinity linear accelerators. 
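Returning to the respiratory data, the representative cycle described at the start of this section (each accepted RPM cycle resampled to the patient's median cycle length and then averaged) can be sketched as follows. The function name, the sampling rate, and the choice of linear interpolation are assumptions of this sketch, not details taken from the study.

```python
import numpy as np

def representative_cycle(cycles, fs=30.0):
    """Average RPM respiratory cycles after resampling them to the median cycle length.

    cycles: list of 1-D amplitude arrays, one per accepted breathing cycle, sampled at fs Hz.
    Returns (representative amplitude curve, median period in seconds).
    """
    median_len = int(np.median([len(c) for c in cycles]))        # median pre-filtered cycle length
    grid = np.linspace(0.0, 1.0, median_len)
    resampled = [np.interp(grid, np.linspace(0.0, 1.0, len(c)), c) for c in cycles]
    return np.mean(resampled, axis=0), median_len / fs

# toy example: noisy sinusoidal cycles with periods around 3-4 s at 30 Hz
rng = np.random.default_rng(0)
cycles = [np.sin(2*np.pi*np.linspace(0, 1, n)) + 0.05*rng.standard_normal(n)
          for n in rng.integers(90, 130, size=20)]
rep, period = representative_cycle(cycles)
print(f"median period: {period:.2f} s")
```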
In this article, the planned dose refers to the dose distribution calculated in MIM. The initial dose distributions calculated in Eclipse or Monaco were not used in the analysis. A custom-made Matlab script (2020b, MathWorks Inc, MA, USA) was used to divide the original plans into respiratory phase-specific subplans, that is a part of the original plan that would be irradiated during a given respiratory phase. In the phase-specific plans, the dose rate was zeroed between control points (CP) not coinciding with the respiratory phase. For VMAT plans, the starting respiratory phases for the first anterior and posterior arcs were randomly determined for each fraction (Fig. 2). The starting phases of the second anterior and posterior fields were subsequent to the last phase of the preceding arc. The gantry angles coinciding with the respiratory phases were solved using the representative respiratory cycle and gantry rotation speed. CPs were added to the plans with a tolerance of ± 0.1°, if the respiratory phase changed between the original CPs. Similarly, the starting respiratory phases of anterior and posterior 3D-CRT fields were determined randomly, and the plans were divided into phase-specific subplans, according to the amount of monitor units (MU) per field. Average pauses between fields, 2.1 s after the open field and 1.1 s between subfields, were adapted into the division algorithm. The phase-specific dose distribution of each phase-specific plan was calculated in the corresponding respiratory phase image. The phase-specific dose was then deformed and superimposed onto the end-inspiration phase planning image. The sum of all deformed phase-specific distributions represented the dose delivered in one treatment fraction. The process of dividing the original plans into phase-specific plans was then repeated for all fractions included in the original plan (total 5 or 15 times). The random starting respiratory phases for the first anterior and posterior arcs or fields were resampled for each fraction. Once all the fraction-specific dose distributions were calculated for a given technique and fractionation, they were summed to form the final respiratory motion perturbed course-specific dose distribution. Thus, a total of 6 course-specific dose distributions were simulated per patient. Finally, the planned doses were calculated according to the original plans in MIM, and differential dose distributions were formed by subtracting the planned doses from the corresponding course-specific doses. Dose volume histograms (DVH) were compared between the planned and respiratory motion perturbed distributions for both fractionations. In addition to the planning objectives, the maximum dose to 1 cc volume (D1cc) was evaluated for all structures. The minimum dose to 1 cc (Min%), conformity index (CI) and homogeneity index (HI) were evaluated for PTVin and CTVin. HI and CI were calculated using formulas (D2%-D98%)/ D prescription and V95% total /V structure , respectively. The statistical difference between the DVH parameters was determined by Wilcoxon signed rank test (p < 0.05). The differential distributions were deformed to an anatomy of one patient to localize the statistically significant differences between planned and respiratory motion perturbed dose. Student's t test was performed on the differential distributions on a voxel-by-voxel basis including adjacent neighboring voxels. The significance level was adjusted using Benjamini-Hochberg method [21]. 
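For concreteness, the DVH-based indices and the statistical comparison described above can be sketched in a few lines. The formulas follow the definitions given in the text (HI = (D2% - D98%)/D_prescription, CI = V95%_total/V_structure, Wilcoxon signed-rank test, Benjamini-Hochberg adjustment); the function names and the numerical values in the usage example are purely illustrative and are not data from the study.

```python
import numpy as np
from scipy.stats import wilcoxon

def homogeneity_index(d2, d98, d_prescription):
    """HI = (D2% - D98%) / D_prescription."""
    return (d2 - d98) / d_prescription

def conformity_index(v95_total_cc, structure_volume_cc):
    """CI = V95%(total) / V(structure)."""
    return v95_total_cc / structure_volume_cc

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of p-values that remain significant after BH adjustment."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, len(p) + 1) / len(p)
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(len(p), dtype=bool)
    mask[order[:k]] = True
    return mask

# paired comparison of one DVH parameter (planned vs. perturbed), one made-up value per patient
planned = np.array([96.5, 97.0, 95.8, 96.2, 96.9, 97.3, 96.0, 95.9, 96.4])
perturbed = np.array([95.1, 95.9, 94.6, 95.0, 95.4, 96.1, 94.8, 94.7, 95.2])
stat, p = wilcoxon(planned, perturbed)   # Wilcoxon signed-rank test, significant if p < 0.05
print(f"p = {p:.4f}")
```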
Furthermore, an average of the differential distributions was formed in the chosen patient anatomy. Results The respiratory motion characteristics across all patients are presented in Figs. 3 and 4. The range of chest wall movement exceeded 5 mm for one patient. This patient was excluded from further analysis and reported separately, as respiratory gating is recommended for respiratory motion larger than 5 mm [11]. The average range of chest wall movement between end-inspiration and end-expiration phases, perpendicular to the 3D-CRT central axis, was 2.0 ± 1.0 mm (range 1.0-4.1 mm) for the included patients.

Fig. 2: The original VMAT plan with four arcs (a) is divided into respiratory phase-specific subplans (b). The starting respiratory phases were randomly determined for the first anterior and posterior arcs (phases 4 and 7, as an example). The image on the right (c) demonstrates these arcs in a single phase-specific subplan that only contains the irradiation coinciding with, for example, the eighth respiratory phase. The MUs delivered in this phase-specific subplan are indicated with bars. The dose rate is zeroed between CPs not coinciding with the eighth respiratory phase.

A linear relationship between liver and chest wall motion amplitudes was observed for nine patients. One patient had a pronounced chest wall motion range compared to the liver motion amplitude (Fig. 4). The median respiratory periods ranged from 2.6 to 4.6 s, with 14 to 44 accepted respiratory cycles across the included patients. The average PTV volume was 1120 cc (range 627-1783 cc). The beam-on times of a single fraction were longer for 5-fraction plans. For example, the average beam-on times of 5 and 15-fraction RA plans were 136 s (range 121-148 s) and 83 s (range 77-93 s), respectively. Respiratory motion induced a slight decrease in the PTVin coverage for RA (approximately 1.2%, p < 0.01) and E-VMAT (approximately 1.5%, p < 0.01) (Table 1). Furthermore, a slight decrease in CTVin coverage was found for E-VMAT (approximately 0.4%, p < 0.01). The PTVin coverage was best retained by the 3D-CRT technique, for which the 5 and 15-fraction dose coverages decreased only by 0.2% (p = 0.30) and 0.3% (p = 0.31), respectively. The PTVin coverage was retained with RA in all included patients with both fractionations, whereas the coverage decreased below the planning goal in one 15-fraction case for 3D-CRT (from 96.0 to 94.8%) and in the 5 and 15-fraction cases for E-VMAT (96.5 to 94.4% and 96.5 to 94.5%) for one patient. CTVin coverage decreased below the planning goal only in the aforementioned 15-fraction 3D-CRT case (98.2 to 97.6%). Typically, the coverage decreased in the upper and lower medial parts of the PTV or in the lateral chest wall region (Fig. 5 and Additional files 1-6: Supplementary Animations 1-6).

Table 1: The dose-volume histogram parameters for the cropped planning and clinical target volumes (PTVin and CTVin). The Planned and Perturbed columns indicate the planned and respiratory motion perturbed parameters, respectively. The units of V95%, D1cc and Min% are presented as percentages of the prescribed dose. Statistical significance between the planned and respiratory motion perturbed dose is indicated by bolding (p < 0.05) and between fractionations by an asterisk (p < 0.05).

The HI of PTVin increased with both VMAT techniques (p < 0.01) while no significant change was observed with 3D-CRT (Table 1).
Furthermore, the HI of CTVin decreased slightly with 3D-CRT and RA (p < 0.05), but on the contrary, an increase was observed for E-VMAT (p < 0.05). The CI of PTVin and CTVin decreased by 0.05-0.07 (p < 0.01) for all techniques. The D1cc of the contralateral breast decreased for all techniques (p < 0.05, Table 2). The D1cc of the ipsilateral lung decreased for all techniques (p < 0.05) except for the 5-fraction 3D-CRT. Small increases in mean doses were observed for contralateral breast, contralateral lung and heart for E-VMAT (p < 0.05). The mean dose for the contralateral breast decreased slightly for 3D-CRT and RA. Large variation in liver maximum dose was observed. Areas of underdose were observed inferior to the PTV, in the anterior and lateral chest wall region and between the PTV and sternum (Fig. 5). Slight overdose was observed superior to the PTV, in the superior part of liver and in the middle lobe of the ipsilateral lung. However, statistically significant areas were found only for E-VMAT (p < 0.025). Statistical significance between the fractionations was found in PTVin (V95% and CI), CTVin (CI) and contralateral lung (mean) for RA and in Body for E-VMAT. However, the differences in DVH parameters between fractionations were small and estimated clinically irrelevant. The PTVin coverage, HI and CI decreased the most for the patient that was excluded for all techniques (Additional file 7: Table S3). However, the CTVin coverage objective was retained as only small decreases were observed. The changes in PTVin and CTVin HI were the largest for RA and E-VMAT compared to other patients. The results regarding the OAR DVH parameters varied across the techniques (Additional file 7: Table S4). However, the largest increase in ipsilateral lung D1cc was observed for all techniques in this patient. Discussion There were only slight differences between the static and respiratory motion perturbed VMAT distributions. Slight decreases in PTV dose coverage and homogeneity were observed for arc techniques, but the CTV dose coverage was retained on average. No significant decreases were found for PTV or CTV coverage when using 3D-CRT technique, which is in agreement with the previous studies utilizing tangential techniques [12][13][14]. The PTV dose coverage was mainly compromised in areas with steep dose fall-off gradients, such as the medial PTV region. The E-VMAT plans had a steep dose gradient in the medial part of PTV due to avoiding the contralateral breast and thus the most notable dose decrease was observed in the medial region of the PTV. The decrease might be avoided by decreasing the dose gradient in this region, although this might result in increased dose to contralateral tissue. The RA planning did not result in a similar steep dose gradient in this region and the dose decrease in the medial parts of CTV was thus smaller in volume. However, dose to the contralateral breast was higher compared to E-VMAT. Statistical significance was not found in the 3D-CRT or RA differential distributions (Fig. 5), mainly due to small patient cohort. The CTVin coverage was also retained for the excluded patient, despite having the largest respiratory amplitude (5.3 mm) and period (6.1 s). However, the decrease in CTVin coverage was greater than on average for E-VMAT. However, the largest decrease in PTV dose coverage was found in this patient regardless of the technique (~ 2% for 3D-CRT, 4.1-4.7% for RA and E-VMAT). 
A respiratory amplitude of over 5 mm has been considered to have a clinically significant impact in a study using the wedge technique [13]. Changes in the HI of the PTV indicated decreased homogeneity for the VMAT plans. The CTVin dose homogeneity was conserved for the 3D-CRT and RA techniques and only a small increase was observed for E-VMAT, even though the formula for HI is susceptible to changes in D2% and D98%. This suggests that respiratory motion increased dose heterogeneity in the areas of the PTV edges, similar to observations for IMRT [17]. The PTV and CTV conformity increased consistently with respiratory motion for all patients. The most significant changes in OAR parameters were observed for maximum doses (D1cc), as the changes in mean dose were marginal. Dose to the ipsilateral lung decreased slightly in the chest wall region and increased slightly in the center of the lower lobe. Furthermore, a large variation in liver maximum dose was observed. This was expected since the liver might move into the fields with expiration. On average, the DVH parameters determined for the 5-fraction plans were equal to those determined for the 15-fraction plans. For one patient, the perturbed 5-fraction 3D-CRT distribution retained the PTVin coverage goal while the corresponding 15-fraction distribution did not (difference of 0.7 percentage points (pp)). A similar effect was observed in the CTVin coverage of the same patient. In this study, ultra-hypofractionation resulted in longer beam-on times (136 s vs. 83 s for RA), and the MLC patterns were identical as a function of gantry angle, because the 5-fraction plans were scaled from the 15-fraction plans. Some limitations exist in this study. Conducting the treatment planning on the end-inspiration image creates a limitation, since a free-breathing CT image is used for planning when respiratory gating is not used. However, variance in patient position was observed between the free-breathing image and the 4D series in this patient cohort. The scope of this study was rather to simulate a worst-case scenario, and thus the end-inspiration image was used for planning. In addition, the included patients were initially treated for lung cancer and retrospectively selected for this study to investigate the effects of respiratory motion on the planned WBI dose distributions. However, as the patients' average chest wall motion range was similar to the breast cancer patients' respiratory motion range observed in previous studies [12,13], the present patient group was considered suitable for the study. Furthermore, non-rigid fusions were used to transform and superimpose the respiratory phase-specific dose distributions to the end-inspiration phase. Inaccuracies in non-rigid fusions may introduce uncertainty into the simulated dose distributions and thus exaggerate the differences between planned and simulated distributions. To evaluate the quality of the non-rigid fusion, all structures delineated on the end-inspiration CT were deformed and transformed to the other respiratory phases, and the quality of the deformed structures was visually reviewed. Finally, it should be noted that other sources of error, such as tissue deformation [22], setup error [23] and variability in respiratory patterns [24,25], were not considered in this study. The aim of this study was to evaluate the dosimetric effect of respiratory motion in free-breathing WBI for VMAT techniques. While the PTV coverage and dose homogeneity declined, they were retained in the CTV in this worst-case scenario approach, when the respiratory motion amplitude of the chest wall is less than 5 mm.
The general conclusion is that the homogeneous CTV dose is retained with moderate and ultra-hypofractionation, even though the dose is delivered from multiple angles. While other sources of error are present in a realistic breast cancer treatment, forming the PTV as a 5 mm expansion of CTV was sufficient in terms of respiratory motion induced error. Conclusion Respiratory motion of less than 5 mm in magnitude did not result in clinically significant changes in the planned free-breathing WBI dose. The 5 mm margins were sufficient to account for the respiratory motion in terms of CTV dose homogeneity and coverage for VMAT. Steep dose gradients near the PTV edges might affect the CTV coverage. The 15 and 5-fraction approaches provided roughly the same dose-averaging results, as the increased fraction dose was combined with slower MLC speeds and longer beam-on times in this study.
5,443.2
2021-10-06T00:00:00.000
[ "Medicine", "Physics" ]
Fluctuations and thermodynamic geometry of the chiral phase transition We study the thermodynamic curvature, $R$, around the chiral phase transition at finite temperature and chemical potential, within the quark-meson model augmented with meson fluctuations. We study the effect of the fluctuations, pions and $\sigma$-meson, on the top of the mean field thermodynamics and how these affect $R$ around the crossover. We find that for small chemical potential the fluctuations enhance the magnitude of $R$, while they do not affect substantially the thermodynamic geometry in the proximity of the critical endpoint. Moreover, in agreement with previous studies we find that $R$ changes sign in the pseudocritical region, suggesting a change of the nature of interactions at the mesoscopic level from statistically repulsive to attractive. Finally, we find that in the critical region around the critical endpoint $|R|$ scales with the correlation volume, $|R| =K\;\xi^3$, with $K = O(1)$, as expected from hyperscaling; far from the critical endpoint the correspondence between $|R|$ and the correlation volume is not as good as the one we have found at large $\mu$, which is not surprising because at small $\mu$ the chiral crossover is quite smooth; nevertheless, we have found that $R$ develops a characteristic peak structure, suggesting that it is still capable to capture the pseudocritical behavior of the condensate. INTRODUCTION The thermodynamic theory of fluctuations allows to define a manifold spanned by intensive thermodynamic variables, {β k } with k = 1, 2, . . . , N , and equip this with the notion of a distance, d 2 = g ij (β 1 , β 2 , . . . , β N )dβ i dβ j where g ij is the metric tensor, that depends in general of the {β k } and measures the probability of a fluctuation between two equilibrium states. The metric tensor can be computed from the derivatives of the thermodynamic potential, therefore the knowledge of the latter is enough to define the metric on the manifold. Thermodynamic stability requires g > 0 where g is the determinant of the metric; the condition g = 0 determines a phase boundary in the {β k } space and g < 0 corresponds to regions of thermodynamic instability. By means of g ij it is possible to define the scalar curvature, R, using the standard definitions of the Riemann geometry; in this context, R is named the thermodynamic curvature, and the theory that studies R is called thermodynamic geometry . One of the merits of R is that it carries the physical dimensions of a volume and because of hyperscaling, around a second order phase transition |R| ∝ ξ d where d denotes the spatial dimension and ξ is the correlation length: as a consequence, R diverges at a second order phase transition, and by means of R it is possible to estimate ξ by virtue of pure thermodynamic functions. In general, the divergence of R at a second order phase transition occurs in correspon-dence of the condition g = 0, therefore looking for phase transitions in the {β k } space it is equivalent to look for the zeros of g or for the divergences of R; there are however other possibiliteis, like the divergence of one of the metric elements or of their derivatives (see eq. (10)). In this study, we analyze the thermodynamic geometry, and in particular the thermodynamic curvature, of the Quark-Meson (QM) model of Quantum Chromodynamics (QCD), augmented with the fluctuations of the σ−meson and pions, see [45][46][47][48][49][50] and references therein. 
Despite the abundant literature about thermodynamic curvature, a systematic study of the effect of fluctuations on R is missing, therefore we aim to fill this gap addressing the questions of how fluctuations influence the thermodynamic geometry. Being this a first study on fluctuations in the context of thermodynamic curvature, we introduce the fluctuations in the simplest way possible, namely using the Cornwall-Jackiw-Toumbulis (CJT) effective action formalism for composite operators [51] and limiting ourselves to the largely used Hartree approximation [49,50] in which momentum dependent self-energy diagrams are neglected. Within these approximations, the effect of the interaction of the fluctuations with the medium is a shift in their mass that can be computed solving self-consistently the Schwinger-Dyson equations for the propagators and for the mean field condensate. Moreover, it is possible to write down in a simple form the contributions of the fluctuations to the thermodynamic potential, therefore to evaluate the effect on the thermodynamic curvature. Possible future improvements are mentioned briefly in the Conclusions. Although this is a study about thermodynamic geome-try, the model we use has been built up for modeling the chiral phase transition of QCD at high temperature and finite chemical potential, therefore it is useful to summarize briefly a few known facts about the QCD phase diagram and how this relates to that of the QM model, to give more context to the subject that we discuss here. At zero baryon chemical potential, from first principles Lattice QCD calculations we learn that QCD matter experiences a smooth crossover from a low temperature confined phase, in which chiral symmetry is spontaneously broken, to a high temperature phase in which chiral symmetry is approximately restored [52][53][54][55][56]. Since the chiral restoration at large temperature is a smooth crossover, it is not possible to define uniquely a critical temperature, rather it is more appropriate to define a pseudo-critical region, namely a range of temperature in which several physical quantities (chiral condensate, pressure, chiral susceptibility and so on) experience substantial changes. This crossover is reproduced by the QM model, and the pseudo-critical temperature predicted by the model is in the same ballpark of the pseudo-critical temperature od QCD, that is T c ≈ 150 MeV ≈ 10 12 K. At large finite baryon chemical potentials the sign problem forbids reliable first principle calculations, therefore effective models like the QM model have been used to study the phase structure of QCD at finite µ and it has been found that the smooth crossover becomes a first order phase transition if µ is large enough: this suggests the existence of a critical endpoint (CEP) in the (T, µ) plane at which the crossover becomes a second order phase transition with divergent susceptibilities, and this point marks the separation between the crossover on the one hand and the first order line on the other hand. Recently, information theory has also been applied to the QCD phase diagram [57]. We anticipate here the main results. The curvature is found to be positive at low temperature, as for an ideal fermion gas; then a change of sign is observed near the chiral crossover, where R develops a local minimum which becomes more pronounced when the chemical potential is increased; finally, R becomes positive again at high temperature and approaches zero from above. 
A change of sign of R has been observed for many substances [18, 20, 22, 23, 25, 27-29, 31] as well as in previous studies on the thermodynamic curvature of the chiral phase transition [43,44], and it has been interpreted in terms of the attractive/repulsive nature of the microscopic interaction. We support this idea here, and we interpret the change of sign of R around the chiral crossover as a rearrangement of the interaction at a mesoscopic level, from statistically repulsive far from the crossover to attractive around the crossover. Moreover, the height of the peak of R increases along the critical line as µ is increased from zero to the corresponding CEP value and diverges at the CEP: this is in agreement with $|R| \propto \xi^3$, since the correlation length remains finite at the crossover but increases as the crossover becomes sharper and eventually diverges at the critical endpoint. We check quantitatively the relation between R and ξ near the CEP by identifying $\xi = 1/M_\sigma$, where $M_\sigma$ is the pole mass of the σ-meson that carries the fluctuations of the σ field. Furthermore, we find that fluctuations enhance |R| at the crossover at small µ, and we interpret this as the fact that the fluctuations make the chirally broken phase more unstable and favor chiral symmetry restoration at finite temperature; near the CEP we do not find substantial effects of the fluctuations on R, and we interpret this as the fact that even without fluctuations, the mean field thermodynamic potential predicts a second order phase transition at the CEP with divergent susceptibilities and a divergent curvature [43,44], and the fluctuations cannot change this picture but can only alter the values of the critical exponents. The plan of the article is as follows. In Section I we briefly review the thermodynamic geometry and in particular the thermodynamic curvature. In Section II we review the QM model. In Section III we discuss R for the QM model. Finally, in Section IV we draw our conclusions. We use the natural units system $\hbar = c = k_B = 1$ throughout this article. I. THERMODYNAMIC GEOMETRY Consider a thermodynamic system in the grand-canonical ensemble whose equilibrium state is characterized by the pair (T, µ), where T is the temperature and µ is the chemical potential conjugated to the particle density. In order to define the thermodynamic geometry it is convenient to shift to new coordinates $X = (X^1, X^2) = (\beta, \gamma)$ with $\beta = 1/T$ and $\gamma = -\mu/T$. It is well known that a thermodynamic system at equilibrium can fluctuate to another equilibrium state characterized by different values of X, and the probability of this fluctuation can be computed within the standard thermodynamic fluctuation theory. In order to formulate this as well as the geometry of thermodynamics, it is possible to define a metric space on the 2-dimensional manifold spanned by (β, γ) by introducing a distance between two points, analogously to what is done in Riemannian geometry. In particular, we define the distance as $d^2 = g_{ij}\,dX^i dX^j$ (Eq. (1)) [3,15], where for a system with grand-canonical partition function Z we have put $g_{ij} = \partial^2\phi/\partial X^i \partial X^j$ (Eq. (2)), with $\phi \equiv \beta P$, the pressure $P = -\Omega$, and Ω representing the thermodynamic potential per unit volume; moreover, $\phi_{,ij}$ denotes the second derivative of φ with respect to $X^i$ and $X^j$. The distance in Eq. (1) can be connected to the theory of thermodynamic fluctuations as follows. The probability of the system fluctuating from $X = (\beta, \gamma)$ to $X + dX$ is proportional to $\sqrt{g}\,\exp(-d^2/2)$, where g is the determinant of the metric tensor defined in Eq. (2).
A large probability of a fluctuation corresponds to a small $d^2$, while a small probability corresponds to a large $d^2$. Therefore, a large thermodynamic distance between two equilibrium states, X and X + dX, means a small probability to fluctuate from X to X + dX; vice versa, a small distance implies a large probability to fluctuate. In this sense, Eq. (1) measures the distance in the (β, γ) plane between two thermodynamic states in equilibrium. Thermodynamic stability requires that $g_{\beta\beta} > 0$ and $g > 0$, while $g = 0$ corresponds to a phase boundary and regions with $g < 0$ are unstable. The stability conditions ensure that $d^2$ is a positive definite quantity. The second derivatives are related to the fluctuation moments of the physical quantities $F_i$ conjugated to the $X^i$, where $\langle\cdots\rangle$ is the standard ensemble average. In our case the conjugate quantities are the internal energy U and the particle number N. Equipped with a metric tensor on the (β, γ) manifold, it is possible to define the Riemann tensor in terms of the Christoffel symbols in the standard way (Eqs. (7)-(8)). The standard contraction procedure then allows one to introduce the Ricci tensor, $R_{ij} = R^k_{\;ikj}$, and the scalar curvature, $R = R^i_{\;i}$, which in this context is called the thermodynamic curvature. For a two-dimensional manifold the expression for R simplifies considerably [15]: it can be written as a ratio whose numerator is the determinant of a matrix built from the second and third derivatives of φ and whose denominator is proportional to $g^2$ (Eq. (10)). Notice that the curvature diverges for g = 0, namely on a phase boundary, unless the determinant in the numerator of Eq. (10) vanishes on the same boundary. It has been postulated that $|R| \propto \xi^3$ in the proximity of a second order phase transition, where ξ is the correlation length of the fluctuations of the order parameter [3]. This relation is natural in the hyperscaling hypothesis because R carries the physical dimension of a volume; being based on scaling, this relation should be valid only in the proximity of a second order phase transition. It is remarkable that many independent theoretical calculations based on different models confirm this hypothesis [3,15,37,58]; therefore, not only does the study of R on the (β, γ) manifold bring information about the phase transitions, but it also allows for an estimate of the correlation volume based only on the thermodynamic potential rather than on computing correlators: this is one of the merits of the thermodynamic geometry. II. THE QUARK-MESON MODEL In this section, we review the quark-meson (QM) model, in which fermions (in our context, quarks) interact with mesons (in our work, the σ-meson and the pions). The model is based on a Lagrangian density with mesonic and fermionic parts, $\mathcal{L} = \mathcal{L}_m + \mathcal{L}_f$. Here Φ is the matrix field built from $\tau_0$, the unit matrix, and $\vec\tau = (\tau_1, \tau_2, \tau_3)$, the Pauli matrices; $\vec\pi = (\pi_1, \pi_2, \pi_3)$ is an isotriplet of pion fields, σ is the isosinglet field and Ψ is a massless isodoublet quark field. A common approximation, done in particular in the context of effective field theories for the quark chiral condensate of QCD, is that of mean field, in which the meson fields are replaced by their uniform, time-independent saddle point values $\sigma = f_\pi$ and $\vec\pi = 0$. In this study we want to go beyond the mean field approximation, including the quantum fluctuations of the meson fields and studying their effect on the thermodynamic geometry (the functional integral over the fermion fields can be done exactly on top of the mean field solution).
Within a Gaussian approximation, the partition function of the model splits into fermionic and mesonic contributions, labelled by the subscripts f and m respectively; in this model, both the quarks and the meson fluctuations propagate on the background of the condensate of the σ field, the value of which is determined consistently by solving the gap equations (see below). The thermodynamic grand potential Ω is obtained accordingly as the sum of the classical potential and the fermionic and mesonic contributions, $\Omega_f$ and $\Omega_m$. Before giving the expression for Ω, we emphasize that both $\Omega_f$ and $\Omega_m$ contain ultraviolet divergent contributions arising from the momentum integration of the single-particle energies, which correspond to the usual zero-point energy of ideal gases of fermions and bosons; these contributions cannot be simply subtracted since they contain a dependence on the condensate that in principle affects the response of the condensate itself to temperature and chemical potential. We now give the expression of Ω. Starting with $\Omega_f$, the standard renormalization procedure gives the expression of Eq. (17): in its second and third lines we recognize the standard relativistic free-gas thermodynamic potential at finite temperature and chemical potential, while the first line contains the zero-temperature, zero-chemical-potential contribution, which is potentially divergent and has been renormalized at the scale $Q_f$. The mesonic contribution, $\Omega_m$, can be obtained via the standard CJT effective action formalism in the Hartree approximation, in which momentum-dependent self-energy corrections are neglected. Differently from [49,50], we do not include the vacuum term of the meson potential, so the pressure of the pions and σ-meson is zero at T = µ = 0: the condensation energy takes contributions only from the classical potential plus the fermion loop, while the mesons appear as excitations of the ground state at finite temperature. This choice is made also for the sake of simplicity, because including a further zero-temperature, zero-chemical-potential renormalized term for the mesons would introduce an additional renormalization scale that would lead to unexpected behaviors of the thermodynamic quantities [50]. Within these approximations we use the expressions of Refs. [49,50], where for $\ell = \sigma, \pi$ the single-particle energy is $E_\ell = \sqrt{k^2 + M_\ell^2}$. Within this model, for given temperature and chemical potential the unknowns are the value of the condensate, namely the expectation value of σ, as well as the in-medium meson masses $M_\sigma$ and $M_\pi$: these are obtained by solving the gap equations. The gap equations depend on the renormalization scale, $Q_f$, as well as on three parameters, m, λ and h. At the tree level, namely when no meson and quark loops are considered, the parameters m, λ and h are fixed to reproduce the physical values $m_\sigma$, $m_\pi$ as well as $\sigma = f_\pi$ at T = 0 and µ = 0, where we use lower-case letters to denote the physical masses at T = µ = 0; without the fermion and meson loops these conditions fix the tree-level values of the parameters, where the subscript "tree" reminds us that these are quantities computed using the tree-level potential. In order to fix the renormalization scale we have to adopt one renormalization condition, in which λ results from the gap equations at T = µ = 0; $m^2$ and h obtained from the gap equations at T = µ = 0 are always equal to their tree values. Finally, Eqs. (28) and (29) determine the renormalization scale $Q_f$ in terms of the physical parameters. III. RESULTS In this section we report and discuss our results. Firstly, we show briefly the effect of fluctuations on the condensate, then we focus on the thermodynamic geometry; a short numerical sketch of how the geometric quantities follow from the thermodynamic potential alone is given below.
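The following minimal sketch illustrates how, once $\phi(\beta,\gamma)$ is known, the metric of Eq. (2), its determinant, and the scalar curvature of Eqs. (7)-(9) can be evaluated numerically. The toy pressure is purely illustrative (it is not the QM-model potential), all derivatives are crude finite differences, and the overall sign convention of R varies in the literature, so only the qualitative behavior should be read off from such a sketch.

```python
import numpy as np

def pressure(T, mu):
    # purely illustrative toy equation of state (NOT the quark-meson model)
    return T**4 + 0.5 * T**2 * mu**2 + 0.1 * mu**4

def phi(X):
    # phi(beta, gamma) = beta * P, with T = 1/beta and mu = -gamma/beta
    beta, gamma = X
    T, mu = 1.0 / beta, -gamma / beta
    return beta * pressure(T, mu)

H = 0.05  # finite-difference step; higher derivatives are sensitive to this choice

def metric(X):
    """g_ij = d^2 phi / dX_i dX_j (Eq. (2)) by central differences."""
    g = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            ei, ej = np.eye(2)[i] * H, np.eye(2)[j] * H
            g[i, j] = (phi(X + ei + ej) - phi(X + ei - ej)
                       - phi(X - ei + ej) + phi(X - ei - ej)) / (4 * H * H)
    return g

def christoffel(X):
    """Gamma^l_ij = 1/2 g^{lm} (d_i g_mj + d_j g_mi - d_m g_ij)."""
    ginv = np.linalg.inv(metric(X))
    dg = np.array([(metric(X + np.eye(2)[k] * H) - metric(X - np.eye(2)[k] * H)) / (2 * H)
                   for k in range(2)])
    Gam = np.zeros((2, 2, 2))
    for l in range(2):
        for i in range(2):
            for j in range(2):
                Gam[l, i, j] = 0.5 * sum(ginv[l, m] * (dg[i][m, j] + dg[j][m, i] - dg[m][i, j])
                                         for m in range(2))
    return Gam

def scalar_curvature(X):
    """Ricci scalar R = g^{ij} R_ij from the standard contractions."""
    g = metric(X)
    ginv = np.linalg.inv(g)
    Gam = christoffel(X)
    dGam = np.array([(christoffel(X + np.eye(2)[k] * H) - christoffel(X - np.eye(2)[k] * H)) / (2 * H)
                     for k in range(2)])
    Ric = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            for k in range(2):
                Ric[i, j] += dGam[k][k, i, j] - dGam[j][k, k, i]
                for m in range(2):
                    Ric[i, j] += Gam[k, k, m] * Gam[m, i, j] - Gam[k, j, m] * Gam[m, k, i]
    return float(np.einsum('ij,ij->', ginv, Ric)), float(np.linalg.det(g))

# example point: T = 0.15, mu = 0.10 in the toy units of the illustrative pressure
X0 = np.array([1.0 / 0.15, -0.10 / 0.15])
R, g_det = scalar_curvature(X0)
print(f"g = {g_det:.3e} (must be > 0 for stability), R = {R:.3e}")
```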
Our purpose is to show the existence of a pseudocritical region in which the condensate substantially decreases with temperature, then study the elements of the thermodynamic metric as well as the scalar curvature around this region. For the parameters we take f π = 93 MeV, m σ = 700 MeV, m π = 138 MeV and finally g = 3.6: the latter is chosen so that the constituent quark mass at T = µ = 0 is M = 335 MeV. The resulting value of the renormalization scale is Q f = 709 MeV. In both cases, a range of temperature where σ decreases exists, that signals the partial restoration of chiral symmetry (chiral symmetry cannot be restored exactly due to the soft explicit breaking in the action). In Fig. 2 we plot the in-medium masses of the σ-meson and pions as a function of temperature, for several values of the quark chemical potential. These have been computed for the model with fluctuations included. We no- tice that for each of the values of µ considered, a range of temperature exists in which the σ-meson mass decreases while the pions mass increases, and the two match at high temperature signaling the approximate restoration of the O(4) symmetry, as well as the decoupling of these particles from the low energy spectrum of the model. Moreover, the lowering of M σ to a minimum is a sign that the fluctuations of the scalar field are enhanced near the chiral crossover. In Fig. 3 we plot the pressure versus the temperature for the models with and without fluctuations, for µ = 0 (black lines) and µ = 300 MeV (red lines). At fixed µ and T the fluctuations increase the pressure as expected; however, we notice that at large values of µ the contribution of the fluctuations becomes less important in comparison with the mean field pressure. B. The thermodynamic curvature In Fig. 4 we plot the scalar curvature, R, versus temperature for three values of the quark chemical potential: the upper panel corresponds to the case without fluctuations, the lower panel to that with fluctuations. We notice that in both cases, R develops a peak structure around the chiral crossover, in agreement with [43,44]. This is expected thanks to the relation between R and the correlation volume around a phase transition: as a matter of fact, at a second order phase transition R diverges due to the divergence of the correlation volume, while at a crossover the correlation length increases but remains finite and susceptibilities are enhanced so R is expected to grow up in the pseudocritical region. Therefore, the thermodynamic curvature can bring information about the correlation volume also near a crossover. In addition to this, we find that at small µ the peaks of R are more pronounced when the fluctuations are included. This is an interesting, new observation about the thermodynamic geometry and is related to the fact that fluctuations make the chiral broken phase more unstable. This can be seen from the determinant of the thermodynamic metric, g, see Fig. 5: at small µ in the critical region the determinant with fluctuations is smaller than the one without fluctuations (g = 0 corresponds to thermodynamic instability and infinite curvature), while increasing µ the determinant in the critical region is not very affected by the presence of the fluctuations. This is in line with the results of the pressure in Fig. 3 in which we show that fluctuations do not give a substantial contribution in the critical region at large µ. When µ is increased, R is enhanced in the critical region both with and without fluctuations. 
This is most likely related to the fact that the critical endpoint with the second order phase transition and the divergent correlation length already appears within the mean field approximation, so the main role of the fluctuations is that to change the critical exponents but not to change the phase structure. The scalar curvature changes sign around the crossover, both with and without fluctuations: this is in agreement with [43,44] and can be interpreted as a rearrangement of the collective interactions in the hot medium around the chiral crossover, from statistically repulsive (due to the fermionic nature of the bulk) to attractive. This piece of information was not accessible to previous model calculations on the QCD phase diagram and represents a merit of the thermodynamic geometry. C. The critical temperature and the endpoint The crossover nature of the transition to the chiral symmetric phase at high temperature leaves an ambiguity on the definition of a critical temperature: in fact, it is possible to adopt several definitions to identify the critical region, in which the order parameter decreases substantially. We compare the predictions of the model using four different definitions. Firstly, we define the pseudocritical temperature, T c (µ), as the temperature corresponding to the maximum of ∂σ/∂β (which coincides with the maximum of ∂σ/∂γ). A second definition is the temperature at which ∂M σ /∂β is maximum (the same of ∂M σ /∂γ). Thirdly, we can define T c as the one at which M σ is minimum (since at this temperature the correlation length of the fluctuations of the order parameter is the largest). Finally, the peculiar structure of R = R(T ) at a given µ allows for the fourth definition, namely the temperature at which R presents its local minimum. In Fig. 6 we show T c versus µ obtained with the four definitions. We notice that the different definitions give consistent results with each other. This supports the idea that we can use the peaks of R to identify the chiral crossover, which in turn suggests that R is sensitive to the crossover from the broken to the unbroken phase even though this is not a real second order phase transition. In the phase diagram shown in Fig. 6 the crossover line terminates at a critical endpoint, CEP, located at (µ CEP , T CEP ) = (350 MeV, 30 MeV). Approaching this point along the critical line, the crossover turns into a second order phase transition with divergent susceptibilities, then the transition becomes first order with jumps of the condensate across the transition line. In Fig. 7 we plot R versus temperature for values of µ close to the critical endpoint: upper and lower panels correspond to the results without and with fluctuations respectively. As expected, approaching the critical endpoint the magnitude of the peak value of R becomes larger, as it should be since the crossover becomes a second order phase transition there and R should diverge at the CEP. D. Thermodynamic curvature and correlation volume It is interesting to compare the thermodynamic curvature around the critical line, with the correlation volume ξ 3 , where ξ is the correlation length. 
This comparison between |R| and the correlation volume ξ³ is interesting since, according to hyperscaling arguments, around a second order phase transition |R| = Kξ³ with K of the order of unity; the restoration of chiral symmetry is a crossover rather than a real phase transition, at least far from the critical endpoint, therefore we can check how the hyperscaling relation works around such a smooth crossover and how it changes approaching the CEP. In Fig. 8 we compare the thermodynamic curvature, computed along the critical line, with the correlation volume, the latter being estimated by taking ξ = 1/M_σ as a measure of the correlation length of the fluctuations of the order parameter. We find that both the correlation volume and the thermodynamic curvature behave qualitatively in the same way near the CEP; moreover, the numerical values of the two quantities are comparable in the critical region. We conclude that our study supports the idea that |R| = Kξ³ in the proximity of the second order phase transition. In the small-μ regime the relation between the curvature and the correlation volume does not need to be satisfied, since in this regime the critical line is a smooth crossover. In fact, we find that for small values of μ the agreement between |R| and ξ³ is not as striking as the one in the proximity of the CEP; nevertheless, we still find that the two quantities behave qualitatively in the same way, namely they stay approximately constant for a broad range of μ and then grow as the CEP is approached.

IV. CONCLUSIONS

We have studied the thermodynamic geometry around the chiral phase transition at finite temperature and chemical potential. The phase transition has been studied within the quark-meson (QM) model augmented with meson fluctuations; within this model the transition at large temperature and small chemical potential is actually a smooth crossover, which turns into a second order phase transition at the critical endpoint and then becomes a first order phase transition at large values of the chemical potential. The main goals have been to analyze the relation between the thermodynamic curvature, R, and the correlation volume for smooth crossovers, and how this changes approaching a second order phase transition at the critical endpoint. Moreover, we have studied the effect of the fluctuations, pions and σ-meson, on top of the mean field thermodynamics, and how these affect the thermodynamic curvature around the crossover. Of particular interest is the σ-meson, since it corresponds to the amplitude fluctuation mode and its mass can be related directly to the correlation length of the fluctuations of the order parameter. Fluctuations have been introduced within the Cornwall-Jackiw-Tomboulis effective potential formalism [51] in the Hartree approximation; we have neglected the zero point energy contributions of the meson fields, both for the sake of simplicity and to avoid the unexpected behavior of thermodynamic quantities when these contributions are included and two renormalization scales are needed [50]. Within this approach, the Schwinger-Dyson equations for the meson propagators become simple equations for the meson masses that can be solved, consistently with the gap equations, to get the condensate and the masses as functions of temperature and chemical potential. This study is a natural continuation of previous works [43,44] in which the same problem has been analyzed within the mean field approximation. We have found that in the region of small values of μ, the fluctuations enhance the magnitude of the curvature.
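A quick way to quantify this comparison is to form the dimensionless ratio K = |R|/ξ³ along the critical line, with ξ = 1/M_σ. The numbers below are illustrative placeholders (not values read off the figures); they only show the bookkeeping and the expected trend of K approaching O(1) near the CEP.

```python
import numpy as np

HBARC = 197.327  # MeV*fm, used only to print xi in fm

# Placeholder values along the critical line T_c(mu); not figure data.
mu      = np.array([  0.0, 100.0, 200.0, 300.0, 340.0])        # MeV
M_sigma = np.array([260.0, 240.0, 180.0,  90.0,  35.0])        # MeV at T_c(mu)
abs_R   = np.array([2.0e-7, 2.5e-7, 4.0e-7, 2.0e-6, 2.5e-5])   # MeV^-3 at T_c(mu)

xi = 1.0 / M_sigma              # correlation length in MeV^-1
K = abs_R / xi**3               # dimensionless prefactor in |R| = K * xi^3

for m, Ms, k in zip(mu, M_sigma, K):
    print(f"mu = {m:5.1f} MeV   xi = {HBARC / Ms:5.2f} fm   K = |R| * M_sigma^3 = {k:4.2f}")
```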
We understand this enhancement of the curvature by the fluctuations at small μ in terms of the stability of the phase with broken chiral symmetry, which can be analyzed through the determinant of the metric, g: in fact, the stability condition reads g > 0, while g = 0 corresponds to a phase boundary where a phase transition happens and R diverges, so the smaller g is, the closer the system is to a phase transition and the larger R becomes. We have found that the determinant with fluctuations, around the crossover, is smaller than g without fluctuations in the same range of T and μ, meaning that fluctuations make the chirally broken phase less stable. This result is expected, since fluctuations of the order parameter, represented by the σ-meson, tend to wash out the σ-condensate. On the other hand, at larger values of μ and in the proximity of the critical endpoint, the fluctuations do not bring significant changes to the mean field solution around the critical line, and R is less sensitive to the fluctuations. This is also easy to understand, because the mean field thermodynamics already predicts the existence of the critical endpoint with a divergent curvature [43,44], so the role of the fluctuations is simply to change the mean field critical exponents. We have found that R changes sign in the pseudocritical region: this suggests that around the chiral crossover the interaction changes, at the mesoscopic level, from being statistically repulsive to attractive. This change in the nature of the interaction is not accessible to methods based on standard thermodynamics, and this prediction represents one of the merits of the thermodynamic geometry. We have verified that in the critical region around the critical endpoint |R| scales with the correlation volume, |R| = Kξ³ with K = O(1), in agreement with hyperscaling arguments: thus |R| brings information on the correlation volume. In the proximity of the crossover at small μ the correspondence between |R| and the correlation volume is not as good as the one we have found at large μ, which is not surprising because at small μ the chiral crossover is quite smooth; nevertheless, we have found that R develops a characteristic peak structure, suggesting that it is still capable of capturing the pseudocritical behavior of the condensate. This study is the natural continuation of the work started in [43,44], and it offers possibilities for further investigations. From the field-theory point of view, two natural questions are whether it is worthwhile to relax the Hartree approximation to get a more realistic description of the quantum fluctuations, and whether the renormalization of the zero point energy of the mesons has to be done on the same footing as that of the quarks. Investigating these topics might lead to a more reliable application of the theory of quantum fluctuations to the thermodynamic geometry. Moreover, fluctuations can also be treated with the Functional Renormalization Group (FRG) approach [82,83] under suitable assumptions on the full effective potential: it would certainly be of interest to study the effects of the fluctuations on the thermodynamic geometry using the FRG. We plan to report on these topics in the near future.
7,232.4
2020-10-07T00:00:00.000
[ "Physics" ]
Quantum supremacy in driven quantum many-body systems

A crucial milestone in the field of quantum simulation and computation is to demonstrate that a quantum device can compute certain tasks that are impossible to reproduce on a classical computer with any reasonable resources. Such a demonstration is referred to as quantum supremacy. One of the most important questions is to identify setups that exhibit quantum supremacy and can be implemented with current quantum technology. The two standard candidates are boson sampling and random quantum circuits. Here, we show that quantum supremacy can be obtained in generic periodically-driven quantum many-body systems. Our analysis is based on the eigenstate thermalization hypothesis and strongly-held conjectures in complexity theory. To illustrate our work, we give examples of simple disordered Ising chains driven by global magnetic fields and Bose-Hubbard chains with modulated hoppings. Our proposal opens the way for a large class of quantum platforms to demonstrate and benchmark quantum supremacy.

Introduction-Quantum computational supremacy is the ability of quantum devices to efficiently perform certain tasks that cannot be efficiently done on a classical computer [1,2]. Early proposals for realizing quantum supremacy include boson sampling [3][4][5] and random quantum circuits [6][7][8]. In both cases, the computational hardness stems from the inability of a classical computer to efficiently sample the output probabilities of a complex quantum evolution. Experimental efforts towards achieving quantum supremacy include optical networks for boson sampling [9][10][11][12][13] and superconducting circuits for random circuits [14]. Signatures of quantum supremacy have been observed recently with 53 superconducting qubits [15]. Analog quantum simulators are controllable quantum platforms specifically built to implement complex quantum many-body models [16][17][18][19]. In these experiments, complex quantum dynamics have been implemented which cannot be reproduced with existing classical numerics and which have shed light on important questions in quantum many-body physics [20]. However, rigorous proofs of quantum supremacy involving complexity theory have yet to be given for those systems, with the few exceptions of the 2D quantum Ising [21,22] and the 2D cluster-state models [23]. In this work, we provide evidence that when generic isolated periodically-driven quantum many-body systems thermalize, in the sense that any observable can be obtained from the microcanonical ensemble, sampling from their output distribution cannot be efficiently performed on a classical computer. These constitute a large class of quantum simulators that are currently available [14,[24][25][26][27][28][29]. Our analysis is based on the absence of collapse of the polynomial hierarchy and two plausible assumptions: the worst-to-average-case hardness of the sampling task and the experimental realisability of random unitaries as predicted by the Floquet Eigenstate Thermalization Hypothesis (ETH). We support our findings by examining specific examples: a disordered quantum Ising chain driven by a global magnetic field and a one-dimensional Bose-Hubbard (BH) model with modulated hoppings. These models have been widely implemented experimentally [14,[24][25][26][27][28][29], making our work of broad interest to the experimental community.
General framework-Let us consider a generic periodically-driven quantum many-body system whose Hamiltonian is described byĤ(t) =Ĥ 0 +f (t)V . HereĤ 0 is the undriven Hamiltonian,V is the driving Hamiltonian such that Ĥ 0 ,V = 0, and f (t) is periodic with period T . We require that the time-averaged Hamiltonian H ave = 1 T T 0Ĥ (t)dt describes an interacting many-body system [30]. Let Z = {|z = ⊗ L i |z i } be a complete basis of manybody Fock states, where z i = {0, 1, 2, .., D i − 1} denotes the basis state of a local quantum system of dimension D i and where i ∈ [1, L]. In what follows, we assume without loss of generality that D i = D for all i, resulting in an Hilbert space of dimension N = D L . The state after M driving periods is |ψ M =Û M F |z 0 , whereÛ F = T exp −i T 0Ĥ (t)dt ≡ exp −iĤ F T andT is the time-ordering operator. We assume that the initial state |z 0 is a product state. The effective time-independent Floquet HamiltonianĤ F fully describes the dynamics probed at stroboscopic times t = nT . The probability of measuring the Fock state |z is then p M (z) = | z|ψ M | 2 with where the sum is performed over M − 1 complete sets of basis states. More precisely, the set of basis states {|z m } is associated with the quantum evolution after m driving cycles with z 0 (z M = z) being the initial (readout) configuration. The expression in Eq. (1) can be viewed as the Feynman's path integral where each trajectory is defined by a set of configurations {z 0 , z 1 , ..., z M }. The ETH states that generic isolated many-body quantum systems thermalize by their own dynamics after a long enough time, regardless of their initial state. In that case, any generic observable is expected to evolve toward the canonical ensemble with a finite temperature [31]. For driven quantum many-body systems, it has been shown that not only thermalization still occurs, but that for low-frequency driving, the associated temperature becomes infinite [32]. In this limit, the Floquet operatorÛ F shares the statistical properties of the Circular Orthogonal Ensemble (COE). This is an ensemble of matrices whose elements are independent normal complex random variables subjected to the orthogonality and the unitary constraints. This emergent randomness is the particular ingredient responsible for the hardness in calculating the output probability of Eq. (1), as there are exponentially many random Feynman trajectories that are equally important. We emphasize that the external periodic drive is crucial to reach the required level of randomness [33,34]. A more detailed analysis ofĤ F shows that the presence of low-frequency driving allows to generate effective infiniterange multi-body interactions [32,35]. Therefore lifting the constraints imposed by the limited local few-body interactions generally encountered in physical systems [36]. Quantum supremacy-To understand the computational task, let us first define some essential terms used in the complexity theory, namely approximating, sampling, multiplicative error and additive error. Let us imagine an analog quantum device built to mimic the quantum dynamics that would lead to p M (z) = | z|ψ M | 2 . In practice, such device will encode an output probability q(z) that differs from p M (z) due to noise, decoherence and imperfect controls. Both probabilities are said to be multiplicatively close if where α ≥ 0. The task of approximating p M (z) up to multiplicative error is to calculate q(z) that satisfies the above equation for a given z. 
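For a small system the objects just introduced, the Floquet operator Û_F and the output probabilities p_M(z), can be constructed by brute force. The sketch below is a hedged illustration for a short driven, disordered Ising chain with the modulation f(t) = (1 − cos ωt)/2 used later in the examples; the system size, parameter values and number of Trotter slices are illustrative choices rather than those used for the figures.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
L = 6                        # number of spins, Hilbert-space dimension 2^L
J, W, F = 1.0, 0.5, 1.0      # interaction, disorder strength, drive amplitude
omega = 2.0                  # driving frequency; period T = 2*pi/omega
n_slices = 400               # time slices per period for the time-ordered product

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on_site(op, site):
    """Embed a single-site operator at `site` in the L-spin Hilbert space."""
    mats = [np.eye(2, dtype=complex)] * L
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

mu = rng.uniform(0.0, W, size=L)                 # local disorder mu_l
H0 = sum(mu[l] * op_on_site(sz, l) for l in range(L))
H0 += J * sum(op_on_site(sz, l) @ op_on_site(sz, l + 1) for l in range(L - 1))
V = F * sum(op_on_site(sx, l) for l in range(L))  # global transverse-field drive

# Floquet operator as a product of short-time propagators with
# H(t) = H0 + f(t) V and f(t) = (1 - cos(omega t))/2.
T_period = 2.0 * np.pi / omega
dt = T_period / n_slices
U_F = np.eye(2**L, dtype=complex)
for k in range(n_slices):
    t_mid = (k + 0.5) * dt
    f_t = 0.5 * (1.0 - np.cos(omega * t_mid))
    U_F = expm(-1j * (H0 + f_t * V) * dt) @ U_F

# Output distribution p_M(z) after M driving cycles from a product initial state.
M = 25
psi0 = np.zeros(2**L, dtype=complex)
psi0[0] = 1.0                                    # Fock state |00...0>
psiM = np.linalg.matrix_power(U_F, M) @ psi0
p = np.abs(psiM) ** 2
print(f"sum_z p_M(z) = {p.sum():.6f}")
print(f"N^2 <p^2>    = {(2**L)**2 * np.mean(p**2):.3f}  (about 2 if Porter-Thomas distributed)")
```

An experiment, of course, never computes p_M(z) this way; the multiplicative criterion just defined is the stricter of the precision notions against which a device's output q(z) is compared below.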
However, such degree of precision is difficult to achieve experimentally as the allowed error is proportional to p M (z) which can be much smaller than unity. A more feasible task is to approximate p M (z) up to additive error, defined as with β > 0. Note that the additive error involves summing over all possible output strings z ∈ Z, while the multiplicative condition applies to each z individually. The task of approximating p M (z) even with additive error is still unrealistic as it requires a number of measurements that grows exponentially with the size of the system. What a quantum device can do is to sample strings from q(z). Hence, we define the task of sampling from p M (z) up to additive error as generating strings from q(z) while q(z) is additively close to p M (z). This task is our central focus to show quantum supremacy. We emphasize that it is different from "certifying quantum supremacy" [37] which consists of certifying if Eq. (3) holds. To show that the above sampling task cannot be done efficiently by a classical computer, we follow the standard argument which proceeds as follows. Let us suppose that there is a classical machine C able to sample from p M (z) up to additive error and that the distribution of p M (z) anti-concentrates, i.e. Here, the distribution is obtained from a set of unitary matrices {Û F } that are realizable experimentally. The Stockmeyer theorem states that, with the help of a NP oracle, that machine C can also approximate p M (z) up to multiplicative error for some outcomes z [39]. We emphasize that the sampling task is converted to the approximation task in this step. If the latter is #P-hard, then the existence of that machine C would imply the collapse of the polynomial hierarchy to the third level, which is strongly believed to be unlikely in computer science. Hence, assuming that the polynomial hierarchy does not collapse to the third level, we reach the conclusion that a classical machine C does not exist. The two fundamental conditions of the proof, that is the #P-hardness of approximating p M (z) up to multiplicative error and the anti-concentration of p M (z), can be more formally based on the two following theorems. In theorem 1, we introduced the key notion of worstcase hardness of the entire set of COE matrices {Û COE }. This corresponds to the scenario where at least one instancep M (z), i.e. a single unitaryÛ ∈ {Û COE } and a single configuration z ∈ Z, is hard to approximate with multiplicative error. However, that one instance may be impractical to produce experimentally as the full set of COE matrices {Û COE } (p M (z)) might not coincide with the experimentally accessible set {Û F } (p M (z)). That is due to the fact that even though Floquet ETH strongly suggests thatÛ M F is an instance uniformly drawn from the {Û M COE }, not allÛ M COE matrices will be realizable by {Û M F }. More desirable is the average-case hardness where most instances are hard. Consequently, to ensure that the hard instance in Y can be found within {Û F } and thus prove quantum supremacy for realizable driven analog quantum systems, we further assume the following two conjectures. Conjecture 1 (Average-case hardness) For any 1/2e fraction of Y, approximatingp M (z) up to multiplicative error with α = 1/4 + o(1) is as hard as the hardest instance. Here o(·) is the little-o notation. Informally, conjecture 1 assumes the worst-to-average case reduction in Y which is common in most quantum supremacy proposals [40]. 
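Before turning to how the hardness conjectures connect to experiments, the two error notions defined earlier in this passage can be made concrete with a small numerical check. In the hedged sketch below, `p` plays the role of the ideal output, `q` that of a noisy device, and the thresholds α = 1/4 and β = 1/(8e) follow the values quoted in the argument; the distributions themselves are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2**10
p = rng.exponential(1.0 / N, size=N); p /= p.sum()            # Porter-Thomas-like ideal output
q = p + rng.normal(0.0, 0.03 / N, size=N)                      # noisy device output
q = np.clip(q, 0.0, None); q /= q.sum()

def multiplicatively_close(p, q, alpha):
    """|p(z) - q(z)| <= alpha * p(z) for every outcome z."""
    return bool(np.all(np.abs(p - q) <= alpha * p))

def additively_close(p, q, beta):
    """sum_z |p(z) - q(z)| <= beta."""
    return bool(np.sum(np.abs(p - q)) <= beta)

# For this noise level the additive criterion typically passes while the
# multiplicative one fails, because some p(z) are exponentially small.
print("multiplicative (alpha = 1/4)  :", multiplicatively_close(p, q, 0.25))
print("additive       (beta = 1/(8e)):", additively_close(p, q, 1.0 / (8.0 * np.e)))
```

With these notions fixed, the remaining question is whether the hard instances are actually realizable experimentally.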
Conjecture 2 connects the mathematically constructed COE to experimentally accessible driven analog quantum systems by stating that the ensemble {Û M F } is statistically equivalent to a set of instances uniformly drawn from {Û M COE }. This conjecture is supported by the observation that isolated systems evolving under genericÛ M F thermalize to infinite temperature resulting in fully random final quantum states [32][33][34] in experimentally relevant timescales [24,35]. Compared to existing quantum supremacy proposals, the reliance of the main theorem (see below) on conjecture 2 is not standard and may be seen as undesirable. But from our perspective, this conjecture makes a connection between computational complexity and the experimentally tested Floquet ETH that is applicable to a broad class of generic periodically-driven quantum systems as implemented in a variety of analog quantum simulators. Proving or disproving conjecture 2, either directly or indirectly by refutation of the main theorem while conjecture 1 holds true, is by itself of fundamental interest in physics. The fraction used in conjecture 1 is chosen so that the approximate Haar random measure ensures that some hard instances in Y can be realized with {Û M F }. In combination of theorems 1 and 2, the two conjectures finally allow us to state the main theorem of this work. Main Theorem Assuming conjectures 1 and 2, the ability to classically sample from p M (z) up to an additive error β = 1/8e for all unitary matrices in {Û F } implies the collapse of the polynomial hierarchy to the third level. In what follows, we address in detail the proofs of theorems 1 and 2 while the detailed application of the standard Stockmeyer argument to prove the main theorem is provided in Appendix A #P hardness of simulating COE quantum dynamics-To prove theorem 1, we first notice that the COE is an ensemble of all orthogonal unitary matrices. This includes the well-known instantaneous quantum polynomial (IQP) circuitsÛ IQP =ĤẐĤ, whereĤ consists of Hadamard gates andẐ is an arbitrary (possibly nonlocal) diagonal gate on the computational basis, both acting on all qubits [6]. The IQP circuits constitute one of the early proposals of quantum supremacy. Multiplicative approximation of their output probabilities are known to be #P-hard in the worst case [41,Theorem 1.4]. SinceÛ M IQP =ĤẐ MĤ still adopt the general form of the IQP circuits, we conclude that there exists at least one instance in Y that is #P-hard for multiplicative approximation. To see how the hardness could emerge for a typical instance in Y (conjecture 1), one can in principle map the path integral in Eq. (1) to the partition function of a classical Ising model with random complex fields. The latter is widely conjectured to be #P-hard on average for multiplicative approximation [21,42]. In this context, the key is to note that a COE unitary evolution can be written asÛ COE =Û T CUEÛ CUE , whereÛ CUE is a random matrix drawn from the Circular Unitary Ensemble (CUE), i.e. the ensemble of Haar-random matrices [43]. Furthermore,Û CUE can be decomposed into a set of universal quantum gates which can be mapped onto a complex Ising model. This mapping procedure has already been described in ref. [7] to support the conjecture of the worst-to-average case in the context of random quantum circuits. A detailed and intuitive description of this protocol is presented in Appendix B. Anti-concentration of COE dynamics.-To prove the second and necessary ingredient of the proof, i.e. 
theorem 2, we write where d (z) = z|E E |z 0 , φ M, = M E T mod 2π, |E is an eigenstate ofĤ F with eigenenergy E . For COE operators, d (z) are real [43] and their distribution, denoted as Pr(d), is given by the Bessel function of the second kind (see Fig. 1(a) and Appendix C for a detailed derivation). Consequently, the values of d (z) for different and z do not concentrate on a particular value. Now let us consider the statistics of the phases {φ M, }. We define the level spacing as For a single driving cycle M = 1, the phases {φ 1, } for COE are known to exhibit phase repulsion, i.e. the phases are correlated [32]. The COE distribution Pr COE (r ) is depicted in Fig. 1(b), where Pr COE (0) = 0 explicitly indicates the phase repulsion. For multiple driving cycles M 2π/E T , the correlations are erased due to energy folding, i.e. the effect of the modulo 2π. This results in the Poisson (POI) distribution of the level spacing, Pr POI (r ) = 2/(1 + r 2 ), with the peak at r = 0, see Fig. 1 The Bessel function distribution of d (z) and the POI distribution of φ M, ensure that the output distribution Pr which suggests that the system explores uniformly (approximately Haar-random) the Hilbert space [14,21]. This satisfies the anti-concentration condition since [7]. To see the emergence of the Porter-Thomas distribution, we write Due to the Poisson distribution in the long time limit, the phases {φ M, } can be thought of as independent variables randomly and uniformly distributed in the range [0, 2π). Using the product distribution formula and the central limit theorem, one can show that the distributions of a z and b z are normal distributions with zero mean and variance 1/2N . Sincep M (z) = a 2 z + b 2 z , the Porter-Thomas distribution ofp M (z) can be derived using the fact that the square sum of two Gaussian variables follows the χ-squared distribution with second degree of freedom [44]. A detailed derivation is presented in Appendix C. Example of driven many-body systems.-We give two specific examples of driven systems that display statistical properties consistent with the COE and hence partially support conjecture 2. For both cases, the modulation is f (t) = 1 2 (1 − cos(ωt)), where ω = 2π/T and initial states are randomized product states. (i) 1D Ising chain: We consider an Ising chain described by the HamiltonianĤ ISING W is the disorder strength,Ẑ l is the Pauli spin operator acting on site l, and J is the interaction strength. The drive is a global magnetic fieldV ISING = F L−1 l=0X l , where F is the driving amplitude. Similar models have been implemented in various quantum platforms, including trapped ions [27] and superconducting circuits [28]. (ii) 1D Bose-Hubbard model: We consider the BH model described by the HamiltonianĤ is a bosonic annihilation (creation) operator at site l, U is the onsite interaction, and µ l is the local disorder as defined above. The drive modulates the hopping amplitudeŝ ). Similar models have been implemented in superconducting circuits [14] and cold atoms [24,26,29]. The distribution of d (z) from both models are depicted in Fig. 1(a), showing an agreement with the Bessel function as predicted by COE. The level statistics at M = 1 and M = 25 are depicted in Fig. 1(b), showing an agreement with the COE and the POI distribution, respectively. The driving frequency and the disorder strength are tuned to ensure the observation of the thermalized phase and prevent many-body localization [32,45]. Fig. 
2 shows the l 1 -norm distance between Pr(p) and the Porter-Thomas distribution at different m for the Ising and the BH models. It can be seen that, in all cases, the system reaches the Porter-Thomas distribution after multiple driving cycles. The l 1 -norm distance in the longtime limit is decaying towards zero as the size of the system increases. Therefore, the anti-concentration condition is satisfied. In absence of the drive, a similar analysis can be performed for the infinite-time unitary evolution corresponding to generic instances of the undriven thermalized phase in both models. In this case, d (z) does not follow the Bessel function of the second kind and the output distribution never reaches the Porter-Thomas distribution (see Appendix D for numerical simulation of the undriven Ising and Bose-Hubbard models). This is consequence of the energy conservation and the structure imposed by the local interactions, highlighting the key role played by the drive. Conclusions and outlook-Analog quantum simulators realizing quantum many-body systems have generated quantum dynamics beyond the reach of existing classical numerical methods for some time. However, such dynamics has not been theoretically proven to be hard to compute by a classical computer. We have shown here that in the particular case of driven many-body systems, when they thermalize, sampling from their output distribution cannot be efficiently performed on a classical computer. Using complexity theory arguments, we provide strong analytical evidence of the computational hardness stemming from the COE statistics, and provide numerical results showing that COE dynamics can be obtained from driven quantum Ising and BH models for realistic parameters. Our results greatly widen the possibilities to realise quantum supremacy with existing experimental platforms and provide the theoretical foundations needed to demonstrate quantum supremacy in analog quantum simulators. In the future, it would be interesting to extend our results to a broader class of quantum many-body systems such as those with gauge fields, frustrated spin systems, and undriven systems. For example, in Ref. [20], cold atoms in optical lattices have been used to compute the undriven quantum many-body localization transition in two dimensions, which has so far eluded state-of-theart classical numerical techniques [46]. In this section, we provide a detailed proof of the main theorem of the main text, which reads: Main Theorem Assuming conjecture 1 and 2, the ability to classically sample from p M (z) up to an additive error β = 1/(8e) for all unitary matrices in {Û F } implies the collapse of the polynomial hierarchy to the third level. The proof relies on the theorems 1 and 2 and conjectures 1 and 2 presented in the main text. (1), where o(·) is little-o notation [49], is as hard as the hardest instance. Let us begin by considering a classical probabilistic computer with an NP oracle, also called a BPP NP machine. This is a theoretical object that can solve problems in the third level of the polynomial hierarchy. The Stockmeyer theorem states that a BPP NP machine with an access to a classical sampler C, as defined in the main text, can efficiently output an approximationq(z) of q(z) such that |q(z) −q(z)| ≤ q(z) poly(L) . We emphasise that the BPP NP machine grants us the ability to perform the approximating task, in contrast to the machine C that can only sample strings from a given distribution. 
To see how the BPP NP machine can output a multiplicative approximation of p M (z) for most of z ∈ Z, let us consider . The first and the third lines are obtained using the triangular inequality. To get multiplicative approximation of p M (z) usingq(z), we need the term |p M (z) − q(z)| to be small. Given the additive error defined in Eq. (3) in the main text, this is indeed the case for a large portion of {z} ∈ Z. Since the left hand side of Eq. (3) in the main text involves summing over an exponentially large number of terms but the total error is bounded by a constant β, most of the terms in the sum must be exponentially small. This statement can be made precise using Markov's inequality. Fact 1 (Markov's inequality) If X is a non-negative random variable and a > 0, then the probability that X is at least a is where E(X) is the expectation value of X. Here, the distribution and the expectation value are computed over z ∈ Z. Note that E z (|p M (z) − q(z)|) ≤ β/N is given by the additive error defined in Eq. (3) in the main text. By setting a = β/N ζ for some small ζ > 0, we get By substituting |p M (z) − q(z)| from Eq. (A2), we get Theorem 2 in the main text (the anti-concentration condition) together with conjecture 2 imply that {p M (z)} follows the Porter-Thomas distribution, specially that 1/N < p M (z) for at least 1/e fraction of the unitary matrices in {Û F }. Hence, we can rewrite Eq. (A7) as Here, the distribution is over all z ∈ Z and all unitary matrices in {Û F }. To understand the right hand side of the equation, let P ∩ Q be the intersection between the set P of probabilities that anticoncentrate and the set Q of probabilities that satisfy the Markov's inequality. Since Pr(P ∩ Q) = Pr(P ) + Pr(Q) − Pr(P ∪ Q) ≥ Pr(P ) + Pr(Q) − 1, Pr(P ) = 1/e and Pr(Q) = 1 − ζ , it follows that Pr(P ∩ Q) is no less than 1/e + 1 − ζ − 1 = 1/e − ζ. Following [7,42], we further set β = 1/(8e) and ζ = 1/(2e), so that giving an approximation up to multiplicative error 1/4 + o(1) for at least 1/(2e) instances of the set of experimentally realizable unitary matrices {Û F }. If according to the conjecture 1 and conjecture 2 in the main text, multiplicatively estimating 1/(2e) fraction of the output probabilities from {Û F } is #P-hard, then the Polynomial Hierarchy collapses. This concludes the proof of the main theorem in the main text. Appendix B: Mapping of approximating output distribution of COE dynamics onto estimating partition function of complex Ising models In this section, we provide evidence to support the conjecture 1 in the main text, showing how hardness instances could appear on average. To do this, we map the task of approximating an output distributions of COE dynamics onto calculating the partition function of a classical Ising model which is widely believed to be #P-hard on average for multiplicative approximation [21,42]. The section is divided into two parts. In the first part, we explain the overall concept and physical intuition of this procedure. In the second part, mathematical details are provided. Physical perspective of the mapping procedure The mapping protocol consists of two intermediate procedures. First, we map the COE unitary evolution on universal random quantum circuits and, second, we derive a complex Ising model from those circuits following Ref. [7]. Let us begin by expressing an unitary evolution of COE asÛ COE =Û T CUEÛ CUE whereÛ CUE is a random unitary drawn from the Circular Unitary Ensemble (CUE) i.e. Haar ensemble [43]. 
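The decomposition of a COE matrix as Û_COE = Û_CUE^T Û_CUE is easy to check numerically. The hedged sketch below draws a Haar-random unitary with SciPy, forms the corresponding COE matrix, and verifies the two properties relied on above (unitarity and complex symmetry), together with the level repulsion of its eigenphases; the matrix size and the quoted reference value of the mean spacing ratio are illustrative.

```python
import numpy as np
from scipy.stats import unitary_group

N = 256
U_cue = unitary_group.rvs(N, random_state=2)    # Haar-random unitary (CUE)
U_coe = U_cue.T @ U_cue                          # COE matrix: unitary and symmetric

print("unitary  :", np.allclose(U_coe.conj().T @ U_coe, np.eye(N)))
print("symmetric:", np.allclose(U_coe, U_coe.T))

# Eigenphase spacing ratios: COE spectra show level repulsion, with a mean
# ratio roughly 0.53 (as for GOE), versus roughly 0.39 for Poisson statistics.
phases = np.sort(np.angle(np.linalg.eigvals(U_coe)))
s = np.diff(phases)
r = np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1])
print("mean spacing ratio <r> =", round(float(r.mean()), 3))
```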
We then further decomposeÛ CUE into a set of universal quantum gates [7]. Following Ref. [7], we choose random quantum circuits consisting of n + 1 layers of gates and log 2 N qubits, as shown in Fig. 3(a). The first layer consists of Hadamard gates applied to all qubits. The following layers consist of randomly chosen single-qubit gates from the set { √ X, √ Y , T } and two-qubit controlled-Z (CZ) gates. Here, √ X ( √ Y ) represents a π/2 rotation around the X (Y ) axis of the Bloch sphere andT is a non-Clifford gate representing a diagonal matrix {1, e iπ/4 }. Such circuits have been shown to be approximately t-design [50] for an arbitrary large t when n → ∞, which implies the CUE evolution [51]. The operatorÛ T CUE can be implemented by reversing the order of the gates inÛ CUE and replacing √ Y with √ Y T . We emphasize that decomposing the COE evolution into the random circuits is only done theoretically with an aim to show the average case hardness. In the real experiments, this COE dynamics is realized by the driven many-body systems. The mathematical procedure for the mapping from random quantum circuits to classical complex Ising models is discussed in details in the next part. Specifically, p M (z) from the circuit (Û T CUEÛ CUE ) M , as depicted in Fig. 3(a), can be calculated from the partition function, Here, A(s) is the degeneracy number associated with a classical spin configuration s in the lattice S, s i = ±1, h i represents a on-site field on site i and J ij represents the coupling between the classical spins on site i and j. Since the output probability can also be interpreted as the path integral in Eq. (B1) in the main text, the intuition behind the mapping is that the sum over all possible paths is translated into the sum over all possible classical spin configurations, where the phase accumulated in each path is given by the energy of the complex Ising lattice S. To gain intuitive understanding of this standard mapping, we provide a diagrammatic approach to visualize the lattice S and extract the field parameters {h i }, {J ij }. To begin with, we use the random circuit in Fig. 3(b) as a demonstration. The mathematical descriptions behind each steps are discussed in the next part. • STEP I -For each qubit, draw a circle between every consecutive non-diagonal gates, see Fig. 3(c). Each circle or 'node' represents one classical spin. • STEP II -For each qubit, draw a horizontal line between every consecutive nodes i,j, see Fig. 3(d). These lines or 'edges' represent interaction J ij between two neighboring spins in the same row. In addition, draw a line between every two nodes that are connected by CZ gates. These lines represent the interaction J ij between spins in different rows. • STEP III -Labeling each nodes and edges with the corresponding gates, see Fig. 3(e). • STEP IV -Use the lookup table in Fig. 3(f) to specify h i and J ij introduced by each gate. For example, the √ Y gate that acts between nodes i and j adds −1 to J ij , −1 to h i and +1 to h j . We use the convention that the leftmost index represents the leftmost node. Also, the two T-gates that are enclosed by the node i will add 0.5 + 0.5 = +1 to the local field h i . • STEP V -Finally, spins at the leftmost side of the lattice are fixed at +1, corresponding to the initial state |0 . Similarly, spins at the rightmost side of the lattice are fixed according to the readout state |z . 
Following the above recipe, we provide the exact form of the parameters in the Ising model for the COE dynamics in the next part, showing that the field parameters {h i } and {J ij } are quasi-random numbers with no apparent structure. Specifically, neither the phase π i h i s i /4 nor the phase π i,j J ij s i s j /4 is restricted to the values 0, π/2, π, 3π/2 (mod 2π) for each spin configurations s. Without such stringent restrictions, approximating the partition function up to multiplicative error is known to be #P-hard in the worst case [41,Theorem 1.9]. This motivates a widely used conjecture in quantum supremacy proposals that such task is also hard on average [21,42]. We emphasize here the major differences between random quantum circuits as proposed in Ref. [7] and our systems. Firstly, our systems are analog with no physical quantum gates involved. The decomposition to quantum gates is only done mathematically. Secondly, our system has discrete time-reversal symmetry, while such symmetry is absent in random quantum circuits. Consequently, the COE in our system is achieved from the Floquet operatorÛ F , while the CUE in random quantum circuits are achieved from the entire unitary evolution. In addition,Û M F in our system does not have the t-design property due to the COE [52, pp.117-119]. However, as shown above, the hardness arguments for the random quantum circuits can be naturally applied to our case. Mathematical details of the mapping procedure In this section, we prove Eq. (B1) by providing justifications of the diagrammatic recipes to map the the evolution U CUE on a Ising spin model with complex fields. Again, the quantum gates of interest consist of both diagonal gates {T, CZ} and non-diagonal gates For simplicity, we start with one-and two-qubit examples before generalizing to the COE dynamics. The mathematical procedure here is adapted from Ref. [7]. a. One-qubit example Let us consider a one-qubit circuit and N + 1 gates randomly chosen from the set gate is fixed to be a Hadamard gate. The output probability is p(z) = | z|Û |0 | 2 , whereÛ = N n=0Û (n) is the total unitary matrix,Û (n) is the n th gate and z ∈ {0, 1} is the readout bit. Below, we outline the mathematical steps underlying the diagrammatic approach followed by detailed explanations for each step: In the second line, we insert an identityÎ n = zn∈{0,1} |z n z n | betweenÛ (n+1) andÛ (n) for every n ∈ {0, .., N − 1}. As a result, this line can be interpreted as the Feynman's path integral where each individual path or 'world-line' is characterized by a sequence of basis variables z = (z −1 , z 0 , ..., z N ). The initial and the end points for every path are |z −1 = |0 and |z N = |z , respectively. In the third line, we decompose z n |Û (n) |z n−1 into the amplitude A(z n , z n−1 ) and phase Φ(z n , z n−1 ). In the fourth line, we introduce A(z) = N n=0 A(z n , z n−1 ). The equation now takes the form of the partition of a classical Ising model with complex energies. Here, z can be interpreted as a classical spin configuration, A(z) as the degeneracy number and i π 4 Φ(z n , z n−1 ) as a complex energy associated with spin-spin interaction. Further simplifications are possible by noticing that, the diagonal gates in the circuits allow the reduction of the number of classical spins. Specifically, if a T gate is applied to |z n−1 , it follows that z n = z n−1 . Hence, the variables z n−1 and z n can be represented by a single classical spin state. 
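As a sanity check of the path-sum rewriting used above (before carrying on with the grouping of variables into classical spins), the following hedged sketch compares ⟨z|Û|0⟩ computed by multiplying the gate matrices with the explicit Feynman sum over all intermediate basis configurations, for a random single-qubit circuit of the type described here: the first gate is a Hadamard and the rest are drawn from {√X, √Y, T}. The circuit depth and seed are arbitrary.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1.0, np.exp(1j * np.pi / 4)])
SX = 0.5 * np.array([[1 + 1j, 1 - 1j], [1 - 1j, 1 + 1j]])   # sqrt(X)
SY = 0.5 * np.array([[1 + 1j, -1 - 1j], [1 + 1j, 1 + 1j]])  # sqrt(Y)

pool = [SX, SY, T]
gates = [H] + [pool[k] for k in rng.integers(0, 3, size=6)]  # random example circuit
z_out = 1                                                    # readout bit

# (i) direct matrix product
U = np.eye(2, dtype=complex)
for g in gates:
    U = g @ U
amp_direct = U[z_out, 0]

# (ii) Feynman path sum over intermediate configurations z_0, ..., z_{N-1}
N = len(gates)
amp_paths = 0.0 + 0.0j
for zs in product((0, 1), repeat=N - 1):
    path = (0,) + zs + (z_out,)          # z_{-1} = 0 (initial), z_N = readout
    term = 1.0 + 0.0j
    for n, g in enumerate(gates):
        term *= g[path[n + 1], path[n]]  # <path[n+1]| g_n |path[n]>
    amp_paths += term

print("direct  :", amp_direct)
print("path sum:", amp_paths)
print("match   :", np.allclose(amp_direct, amp_paths))
```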
The two variables z n−1 , z n become independent only when a non-diagonal gate is applied. Therefore, we can group all variables {z n } between two non-diagonal gates as one classical spin. This procedure leads to the directives presented as the the STEP I of the procedure in the previous section. Formally, for N spin + 1 non-diagonal gates in the circuit (including the first Hadamard gate) z can be characterized by a classical spin configuration s = (s −1 , s 0 , ..., s k , ..., s Nspin ) where s k = 1 − 2z k ∈ {±1} is a spin representing the basis variable immediately after the k th non-diagonal gate, i.e. Lastly, we need to specify A(s) and Φ(s k , s k−1 ) in term of the local fields h k−1 , h k , the interaction J k−1,k , and spin configurations s k−1 , s k . This is done by first considering the gates in their matrix form, i.e. Notice that all non-diagonal gates contribute to the same amplitude A(s k , s k−1 ) = 1/ √ 2, leading to A(s) = 2 −(Nspin+1)/2 . Hence, we can extract the contribution of each gate to Φ(s k , s k−1 ) as The under-script indicates which gate is contributing to the phase. The corresponding h i , h j and J ij are depicted in the lookup table in Fig. 3(f), where i = k − 1 and j = k. The global phase that does not depend on s is ignored as it does not contribute to p(z). b. Two-qubit example Now we consider a two-qubit random circuits to demonstrate the action of the CZ gates. We introduce a new index l ∈ {1, 2} to label each qubit, which is placed on a given horizontal line (row). Since the CZ gate is diagonal, its presence does not alter the number of spins in each row. However, the gate introduces interaction between spins in different rows. This can be seen from its explicit form, i.e. where s 1,k (s 2,k ) is the state of the k th (k th ) spin at the first (second) row. It follows that The corresponding h i , h j , and J ij are depicted in Fig. 3(f) where i = (1, k) and j = (2, k ). We have now derived all necessary ingredients to map a random quantum circuit to a classical Ising model. c. Full COE dynamics Since the COE dynamics can be expressed in terms of a quasi-random quantum circuit, we can straightforwardly apply the above procedure to find the corresponding Ising model. The complexity here solely arises from the number of indices required to specify the positions of all the gates in the circuit. To deal with this, we introduce the following indices -an index l ∈ {1, ..., L} to indicate which qubit / row. -an index µ ∈ {A, B} to indicate which part of the period. A and B refer to theÛ CUE part and theÛ T CUE part, respectively -an index k ∈ {0, 1, ..., N spin (l)} to indicate the spin position for a given m and µ. Here, N spin (l) is the total number of spins at the l th row. Note that due to the symmetric structure ofÛ CUE andÛ T CUE , we run the index k backward for the transpose part, i.e. k = 0 refers to the last layer. -an index ν l,k so that ν l,k = 1 if the k th non-diagonal gate acting on the qubit l is √ X otherwise ν l,k = 0. With these indices, the partition function of the circuit, as shown in Fig. 3(a), can be written as Here G is the total number of non-diagonal gates in the circuit. ζ In this section, we provide additional mathematical details involved in the proof of theorem 2. More precisely, we show that the distribution of the output probability of COE dynamics, Pr(p), follows the Porter-Thomas distribution Pr PT (p) = N e −N p . 
First, let us consider the output probability p M (z) = | z|ψ M | 2 with To prove lemma 1, we first write d (z) = c z, c 0, , where c z, = z|E and c 0, = 0|E . For the COE dynamics, the coefficients c z, and c 0, are real numbers whose distribution is [43] As discussed in the main text, the phase φ M, becomes random as M 2π/E T . The random sign (±1) from c z, can therefore be absorbed into the phase without changing its statistics. The distribution of d (z) can be obtained using the product distribution formula where K 0 is the modified Bessel function of the second kind. To prove lemma 2, we first note that the distribution of cos φ m, and sin φ m, with φ M, being uniformly distributed in the range [0, 2π) are Pr(cos φ) = 1 Pr(sin φ) = 1 We then calculate the distribution of κ ≡ d (z) cos φ M, using the product distribution formula, i.e. The mean and the variance of κ can be calculated as Since a z is a sum of independent and identically distributed random variables, i.e. a z = N −1 =1 κ , we can apply the central limit theorem for large N . Hence, the distribution of a z is normal with the mean zero and variance Var(a) = N · Var(κ) = 1/2N . The same applies for the distribution of b z . Theorem 2 can be proven using the fact that the sum of the square of Gaussian variables follows the χ-squared distribution with second degree of freedom Pr χ 2 ,k=2 (p) ∼ exp{−p/2σ 2 } [44]. By specifying the variance obtained in Lemma 2 and normalization, the distribution of p M (z) = a 2 z + b 2 z over ∀z ∈ {0, 1} L is the Porter-Thomas distribution. Since the Porter Thomas distribution anti-concentrates i.e. Pr PT p > 1 N = ∞ N p=1 d(N p)e −N p = 1/e , we complete the proof of the theorem 2. Appendix D: Undriven thermalized many-body systems In this section, we analyze the long-time unitary evolution for undriven systems in the thermalized phase. The results presented here highlight the key role played by the drive in generating the randomness required for the above quantum supremacy proof. In particular, we show that for typical undriven physical systems with local constraints (e.g. finite-range interactions) and conserved energy, the output distribution never coincides with the PT distribution. We emphasize that this is a consequence of the inability of random matrix theory to accurately describe the full spectral range of undriven thermalized many-body systems. Indeed, it has been shown that for undriven many-body systems which thermalizes (to a finite temperature), the statistics of the Hamiltonian resembles the statistics of the Gaussian orthogonal ensemble (GOE) [31]. However, it is implicit that an accurate match only applies over a small energy window (usually far from the edges of the spectrum). If one zooms in this small energy window, the Hamiltonian looks random, but if one consider the full spectrum, the local structure of the Hamiltonian appears and the random matrix theory fails at capturing it. To see this, we numerically simulate the undriven Ising Hamiltonian,Ĥ 0 = L−1 l=0 µ lẐl + J L−2 l=0Ẑ lẐl+1 + F 2 L−1 l=0X l , where µ l ∈ {0, W } is a local disorder, W is the disorder strength, F is the static global magnetic field along x and J is the interaction strength. This Hamiltonian is in fact the average Hamiltonian of the driven Ising Hamiltonian used in the main text. In comparison, we also simulate the quantum evolution under an ensemble {Ĥ COE } of synthetic Hamiltonians that are uniformly drawn from the GOE (i.e. without any local constraints). 
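The chain of arguments in Appendix C above (real COE-like coefficients, dephased phases, central limit theorem, χ² with two degrees of freedom) can also be checked directly by Monte Carlo. The hedged sketch below samples synthetic coefficients and phases and compares a few moments of N·p with the Porter-Thomas expectations; the Gaussian ansatz for the coefficients is an assumption standing in for the true COE eigenvector statistics.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 512             # Hilbert-space dimension
n_samples = 20000   # number of synthetic (z, realization) pairs

p_vals = np.empty(n_samples)
for i in range(n_samples):
    c_z = rng.normal(0.0, 1.0 / np.sqrt(N), size=N)     # <z|E_l>, COE-like
    c_0 = rng.normal(0.0, 1.0 / np.sqrt(N), size=N)     # <E_l|z_0>
    phi = rng.uniform(0.0, 2.0 * np.pi, size=N)         # dephased Floquet phases
    d = c_z * c_0
    a, b = np.sum(d * np.cos(phi)), np.sum(d * np.sin(phi))
    p_vals[i] = a * a + b * b                            # p = a^2 + b^2

x = N * p_vals
print("mean of N*p   :", round(float(x.mean()), 3),  " (Porter-Thomas: 1)")
print("Pr(N*p > 1)   :", round(float(np.mean(x > 1.0)), 3), " (Porter-Thomas: 1/e ~ 0.368)")
print("second moment :", round(float(np.mean(x**2)), 3), " (Porter-Thomas: 2)")
```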
Fig. 4(a) shows the level-spacing statistics of {Ĥ_0} (obtained over 500 disorder realizations), {Ĥ_COE} (obtained over 500 random instances) and their corresponding long-time unitary operators Û = lim_{t→∞} e^{−itĤ}. We see that the level statistics of the physical Hamiltonian (and of its long-time evolution) are indistinguishable from the GOE. However, the discrepancy between the physical and synthetic (GOE) realizations becomes apparent when looking at the eigenstate statistics, as shown in Fig. 4(b). While the distribution of d_ℓ(z) [see Eq. (5) of the main text] from the GOE is in good agreement with the Bessel function of the second kind, the physical system fails to meet the theoretical prediction. This is in contrast to the driven case presented in the main text. More importantly in the context of this work, a key difference between the physical Hamiltonian and the random matrix theory prediction can be seen by comparing the distributions of the output states after some time evolution. In Fig. 4(c), we show that the Porter-Thomas distribution is never achieved with the physical systems, while it is for the synthetic realizations as well as for the driven case studied in the main text. These results underline the gap between physical Hamiltonians and true random matrices and, more importantly, they highlight the key role of the drive in bridging that gap.
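The eigenstate statistics compared in Fig. 4(b) can be reproduced qualitatively for the synthetic (GOE) side with a few lines of NumPy: for a GOE matrix the overlap products d_ℓ(z) = ⟨z|E_ℓ⟩⟨E_ℓ|z_0⟩ are, to a good approximation, products of independent Gaussians of variance 1/N and therefore follow the modified-Bessel form quoted in Appendix C. The hedged sketch below generates such a matrix and makes a rough comparison of the empirical histogram with (N/π)K_0(N|d|); the matrix size and seed are arbitrary, and a local physical Hamiltonian would generally deviate from this prediction, as discussed above.

```python
import numpy as np
from scipy.special import k0

rng = np.random.default_rng(5)
N = 1024
A = rng.normal(size=(N, N))
H_goe = (A + A.T) / np.sqrt(2 * N)              # synthetic GOE Hamiltonian

_, vecs = np.linalg.eigh(H_goe)                 # vecs[z, l] = <z|E_l>
z0 = 1                                          # fixed reference basis state |z_0>
d = (vecs * vecs[z0, :]).ravel()                # d_l(z) = <z|E_l><E_l|z_0> for all z, l

# Rough comparison of the empirical density with (N/pi) K_0(N |d|).
bins = np.linspace(-4.0 / N, 4.0 / N, 41)
hist, edges = np.histogram(d, bins=bins, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
theory = (N / np.pi) * k0(N * np.abs(centers))
for c, h, t in zip(centers[::8], hist[::8], theory[::8]):
    print(f"d = {c:+.4f}   empirical = {h:8.1f}   Bessel prediction = {t:8.1f}")
```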
9,738.4
2020-02-27T00:00:00.000
[ "Physics" ]
Down-Regulation of Nicotinamide N-methyltransferase Induces Apoptosis in Human Breast Cancer Cells via the Mitochondria-Mediated Pathway Nicotinamide N-methyltransferase (NNMT) has been found involved in cell proliferation of several malignancies. However, the functional role of NNMT in breast cancer has not been elucidated. In the present study, we showed that NNMT was selectively expressed in some breast cancer cell lines, down-regulation of NNMT expression in Bcap-37 and MDA-MB-231 cell lines by NNMT shRNA significantly inhibited cell growth in vitro, decreased tumorigenicity in mice and induced apoptosis. The silencing reciprocal effect of NNMT was confirmed by over-expressing NNMT in the MCF-7 and SK-BR-3 breast cancer cell lines which lack constitutive expression of NNMT. In addition, down-regulation of NNMT expression resulted in reducing expression of Bcl-2 and Bcl-xL, up-regulation of Bax, Puma, cleaved caspase-9, cleaved caspase-3 and cleaved PARP, increasing reactive oxygen species production and release of cytochrome c from mitochondria, and decreasing the phosphorylation of Akt and ERK1/2. These data suggest that down-regulation of NNMT induces apoptosis via the mitochondria-mediated pathway in breast cancer cells. Introduction Breast cancer is one of the most common causes of cancerrelated death in women, which accounts for one in four cancerrelated deaths in the United States [1]. In China, according to the most updated but limited cancer registries, breast cancer is the fifth leading cause of cancer-related death for females [2]. There is a decline in breast cancer mortality since 1995 [1][2][3], however, breast cancer is far from being cured because of delayed detection, the progressive growth, late detection of metastases and resistant to conventional therapies. Therefore, there is an urgent need to identify new biomarkers, which are warranted to provide more information on the tumor biology, chemotherapeutic effects, allowing a better prognostic and possibly predictive stratification of patients. Recent researches have reported that the growth of tumor cells may be inhibited via the mitochondrial apoptotic pathway in breast cancer [4,5]. Nicotinamide N-methyltransferase (NNMT, EC 2.1.1.1), a cytoplasmic enzyme belonging to Phase IIMetabolizing Enzymes, which catalyzes the methylation of nicotinamide and other pyridines to form pyridinium ions using S-adenosyl-L-methionine as methyl donor [6]. NNMT also plays a vital role in nicotinamide metabolism and the biotransformation of many drug and other xenobiotic compounds [7]. NNMT exhibits a high expression in the liver and follows a bimodal frequency distribution which might result in differences among individuals in the metabolism and therapeutic effect of drugs [8]. Proteomics analysis and DNA microarray analysis showed that NNMT was expressed at markedly higher levels in several kinds of cancers. It was identified as a novel serum tumor marker for colorectal cancer (CRC) in 2005 [9]. In addition to CRC, abnormal expression of NNMT was also reported in other tumors such as papillary thyroid cancer [10], glioblastoma [11], gastric cancer [12,13], renal carcinoma [14][15][16], oral squamous cell carcinoma [17,18], lung cancer [19], pancreatic cancer [20,21] and ovarian clear cell carcinoma [22]. Our previous studies have also shown that NNMT is overexpressed in a large proportion of renal cell cancers and that high expression of NNMT is significantly associated with unfavorable prognosis [23]. 
The most recent studies showed that knockdown of NNMT was able to inhibit the proliferation of KB cancer cells [24], renal carcinoma cells [25] and oral cancer cells [26], and NNMT expression was involved in maintaining cell proliferation by increasing the activity of Complex I (NADH:ubiquinone oxidoreductase) in SH-SY5Y neuroblastoma cells [27]. However, the mechanism of NNMT in cell proliferation is largely unknown and the functional role of NNMT in breast cancer has not been reported. In the present study, we investigated the expression of NNMT in human breast cancer cell lines and found that shRNA targeted against NNMT significantly decreased cell growth, inhibited tumorigenicity in mice and induced apoptosis via the mitochondria-mediated pathway. Ethics statement All experiments in the present study were conducted in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health. The animal experiments were previously approved by the Animal Care and Use Committee at Sir RunRun Shaw hospital of Zhejiang University (Permit Number: 20120222-31). The number of animals used was minimized, and all necessary precautions were taken to mitigate pain or suffering. Western blot analysis Cell extracts were prepared with RIPA lysis buffer (Beyotime biotechnology, Shanghai, China). Protein concentrations were measured by a BCA Protein Assay Kit (Beyotime biotechnology, Shanghai, China) using bovine serum albumin (BSA) as the standard. A total of 50 mg of protein samples from each cell line was subjected to 10% sodiumdodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to Immobilon P Transfer Membrane (Millipore, Bedford, MA, USA). After regular blocking and washing, the membranes were incubated with primary antibodies overnight at 4uC, followed by incubating with HRP-conjugated secondary antibodies for 1 h at room temperature. Signals were visualized using enhanced chemiluminescence detection reagents (Millipore, Billerica, MA, USA) and imaged using an Image Quant LAS-4000 (Fujifilm, Tokyo, Japan). All the experiments were independent and conducted at least three times. The protein quantification of the Western blot results were normalized to GAPDH and then compared to the control group, which was normalized as 1. Lentiviral shRNA-NNMT infection Lentiviral vectors against NNMT were synthesized by Gene-Chem Co. Ltd (Shanghai, China). Table 2 showed the sequences of NNMT shRNA 1#, NNMT shRNA 2#, shRNA NC and the frame of lentiviral vectors. Bcap-37 and MDA-MB-231 cells were infected with lentivirus containing shRNA (NNMT shRNA 1#, NNMT shRNA 2#, shRNA NC; MOI = 100 for Bcap-37, MOI = 10 for MDA-MB-231) after seeded (2610 5 cells/well) in six-well plates for 24 h. Ten hours after co-culturing with lentivirus, the supernatant was replaced with fresh medium. 48 h after infection, the transduced cells were sorted for GFP-positive cell populations by BD FACS Aria II System (BD Biosciences, San Jose, CA, USA) and then subjected to functional assays. The efficiency of gene silencing was detected by real-time RT-PCR and Western blot analysis as described above. Cells infected with shRNA NC were used as negative control. siRNA transfection When detecting the ROS production, we chose specific siRNAs to silence NNMT expression in order to avoid the fluorescence of GFP in lentiviral vector interfering with the ROS measurement. 
2610 5 cells (Bcap-37 and MDA-MB-231) were plated in 6-well plates in 2 ml antibiotic-free DMEM medium supplemented with FBS and 8 ml the NNMT specific siRNAs (10 mM) (sc-61213, Santa Cruz Biotechnology, CA, USA) were transfected into cultured cells at a final concentration of 80 nM using Lipofectamine 2000 transfection reagent (Invitrogen Life Technologies, Carlsbad, CA, USA) according to the manufacturer's instructions. Control siRNA contained a scrambled sequence that would not lead to the specific degradation of any known cellular mRNA. MTT assay Cell growth was assessed by the colorimetric MTT assay. All the cells were prepared at a concentration of 2610 4 cells/ml. Aliquots (200 mL) were dispensed into 96-well flat-bottom plates. The cells were allowed to attach for five hours at 37uC and 5% CO 2 , and cell growth was evaluated for up to 120 h. Subsequently, 20 ml of the 5 g/L MTT solution (Sigma, St. Louis, MO, USA) was added to each well. After incubation for 4 h at 37uC, the supernatant was removed carefully, and 150 ml of dimethyl sulfoxide (DMSO, Sigma, St. Louis, MO, USA) was added to each well. Ten minutes after incubation at 37uC, the absorbance value of each well was read at 490 nm using an ELISA plate reader instrument (Model 680, BIO RAD, Osaka, Japan). All experiments were repeated at least three times. The absorbance values at each time point were compared to that of control group at 0 h, which was normalized as 100%. Plate colony formation assay The ability of cells to form macroscopic colonies was determined by a plate colony formation assay. Cells in the Soft agar colony formation assay Soft agar clonogenic assays were performed at least three times to assess anchorage-independent growth. Cells (1610 4 /well in sixwell plates) were detached and plated in DMEM medium containing 0.3% low melting point (LMP) agar with a 0.5% LMP agar layer underlay. The cells were cultured at 37uC in 5% Xenograft experiments Male specific pathogen-free BALB/c Nude mice (6 weeks old, 18-20 g body weight) were handled under pathogen-free sterile conditions, maintained on a 12 hour light:dark cycle (lights on at 7:00am) with continuous access to sterile food and water. For tumorigenicity assays, 2610 6 cells each of the Bcap-37/NC, Bcap-37/NNMT shRNA 1# and Bcap-37/NNMT shRNA 2# cells were subcutaneously injected into the upper portion of the right hind limb of 6 BALB/c Nude mice for each group. Tumor size were not significant both in MCF-7 cells and SK-BR-3 cells. (B) and (D) shows the protein quantification of the western blot results, respectively. The mRNA and protein levels were normalized to GAPDH level and are shown relative to the control groups (normalized as 1). Values in (B, D) are expressed as means 6 SD of four independent experiments. *P,0.05 vs. control group. doi:10.1371/journal.pone.0089202.g007 respectively. All the experiment were repeated at least three times. The results of the cells treated with NNMT siRNA were compared to that of the ones treated with control siRNA, which were normalized as 1. Cytochrome c release measurement The mitochondria fraction and cytosolic fraction were isolated using cytochrome c releasing apoptosis assay kit (BioVision, Inc., Mountain View, CA, USA) according to the manufacturer's instructions. The detection of Cyt c was performed by Western blot analysis as described above. All the experiments were independent and conducted at least three times. 
Protein quantification of the Western blot results was normalized to GAPDH and then compared to the NC group, which was normalized as 1. Statistical analysis All statistical analyses were carried out using the SPSS 19.0 statistical software package. Data are presented as mean ± SD. The two-tailed independent-samples Student's t-test was used to analyze differences among groups. Statistical significance was defined as *p<0.05 and **p<0.01. An increased apoptosis percentage was also found in MCF-7/ADR cells after NNMT knockdown (refer to Figure S1). However, MCF-7/ADR is not a natural cell line and grows poorly without doxorubicin. Although some other studies indicated that MDA-MB-468 breast cancer cells express NNMT [28], we did not confirm this in our study. Therefore, the MCF-7/ADR and MDA-MB-468 cell lines were not used for further experiments. To study the effects of down-regulation and overexpression of NNMT on cellular processes, the Bcap-37 and MDA-MB-231 cell lines (down-regulation) and the MCF-7 and SK-BR-3 cell lines (overexpression) were selected for further study. Down-regulation of NNMT expression inhibited cell growth in vitro and in vivo The efficacy of NNMT gene down-regulation was assessed by real-time quantitative RT-PCR and Western blot (Figure 2). Compared with shRNA NC, the mRNA and protein levels of NNMT were significantly reduced after NNMT shRNA 1# and NNMT shRNA 2# lentiviruses were infected into the Bcap-37 and MDA-MB-231 cell lines (p<0.01). There was no statistically significant difference between cells infected with shRNA NC and wild-type cells for either Bcap-37 or MDA-MB-231. To evaluate the effect of NNMT-specific shRNAs on cell growth in vitro, the colorimetric MTT assay was conducted. As shown in Figures 3A and 3B, efficient down-regulation of NNMT resulted in markedly reduced breast cancer cell growth in both the Bcap-37 and MDA-MB-231 cell lines. A significant reduction in cell growth could be detected from 72 h onward when compared to the control cells (p<0.01). These results indicated that NNMT-specific shRNAs attenuated the growth of breast cancer cells. Plate colony formation and soft agar colony formation assays were conducted to validate the changes in cell proliferation observed with the MTT assay. The plating efficiency of Bcap-37/NNMT shRNA 1# and Bcap-37/NNMT shRNA 2# cells was lower than that of the Bcap-37/NC group (Figure 4A). A similar result was found in the MDA-MB-231 cell models (Figure 4B). The results of the soft agar colony formation assay indicated that efficient down-regulation of NNMT attenuates the capacity for colony formation (Figure 4C and 4D) (p<0.01). (Figure 5A). A similar result was found for tumor weight. The mean weight of tumors in the Bcap-37/NNMT shRNA 1# (0.36±0.07 g, n = 6) and Bcap-37/NNMT shRNA 2# (0.42±0.05 g, n = 6) groups was significantly lower than that in the Bcap-37/NC group (0.65±0.13 g, n = 6) (p<0.01, Figure 5B). These results indicated that shRNA-mediated NNMT knockdown suppresses tumorigenicity in Bcap-37 cells. Taken together, down-regulation of NNMT expression inhibited cell growth in vitro and in vivo. Down-regulation of NNMT expression increased apoptosis and the Bax/Bcl-2 ratio To determine the role of NNMT in cancer cell survival, MDA-MB-231 and Bcap-37 cells were treated with NNMT shRNAs, and flow cytometry was subsequently used to quantify apoptosis.
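As a concrete illustration of the statistical comparison described above, the following is a minimal sketch (not the authors' code) of a two-tailed independent-samples Student's t-test applied to tumor weights; the numerical values are hypothetical and chosen only to mimic the reported format (mean ± SD, n = 6 per group).

```python
# Minimal sketch of the two-tailed independent-samples Student's t-test used
# for group comparisons. Tumor weights (g) below are hypothetical values for
# the NC and NNMT shRNA 1# groups (n = 6 each).
from scipy import stats
import numpy as np

nc      = np.array([0.55, 0.72, 0.60, 0.81, 0.58, 0.64])   # hypothetical NC weights
shrna_1 = np.array([0.31, 0.42, 0.35, 0.44, 0.28, 0.36])   # hypothetical shRNA 1# weights

t_stat, p_value = stats.ttest_ind(nc, shrna_1)              # two-tailed by default
print(f"mean ± SD (NC):       {nc.mean():.2f} ± {nc.std(ddof=1):.2f}")
print(f"mean ± SD (shRNA 1#): {shrna_1.mean():.2f} ± {shrna_1.std(ddof=1):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")                # p < 0.01 would be flagged '**'
```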
The extent of apoptosis was expressed as the sum of the total percentages of the annexin-positive populations, which represent early apoptosis (Annexin V-PE positive/7-AAD negative) and late apoptosis (Annexin V-PE positive/7-AAD positive). As shown in Figures 6A and 6B, down-regulation of NNMT increased apoptosis in both cell lines infected with NNMT shRNA 1# and shRNA 2# compared to negative control cells. The Bcl-2 family of proteins, whose members share homology in at least one of the four common Bcl-2 homology (BH) domains, is closely related to apoptosis [29]. Thus, we analyzed the changes in the Bcl-2 family and found that the expression of Bax, Bcl-2, Bcl-xL and Puma changed significantly in NNMT down-regulated cells. As shown in Figure 7, the expression of the pro-apoptotic proteins Bax and Puma was significantly up-regulated in both cell lines infected with NNMT shRNA 1# and NNMT shRNA 2# (p<0.01). On the contrary, the expression of Bcl-2 and Bcl-xL, which are anti-apoptotic proteins, was significantly down-regulated (p<0.01). As a result, down-regulation of NNMT increased both the mRNA and protein ratios of Bax/Bcl-2 compared to the negative control (p<0.01). This result indicated that apoptosis might be induced by down-regulation of NNMT via regulation of the Bax/Bcl-2 ratio in human breast cancer cells. As shown in Figure 9, overexpression of NNMT resulted in increased breast cancer cell growth in both the MCF-7 and SK-BR-3 cell lines in the MTT assay. A significantly higher proliferation rate could be detected from 72 h onward when compared to the control cells (p<0.05). Consistent with the results of the MTT assay, the colonies formed by NNMT-overexpressing cells were more numerous than those formed by the control group (p<0.05, Figure 10). To confirm the role of NNMT in cancer cell survival, the effect of NNMT overexpression on apoptosis was also quantified by flow cytometry. The data showed that overexpression of NNMT attenuates apoptosis in both NNMT-overexpressing cell lines compared to negative control cells (p<0.05, Figure 11). All these results showed that overexpression of NNMT could increase cell growth and tumorigenicity and inhibit apoptosis, consistent with the function of NNMT indicated by its down-regulation. Down-regulation of NNMT expression increased ROS production ROS production is associated with apoptosis [30], and increasing intracellular ROS levels is highly related to apoptosis induction [31]. The mitochondria-mediated apoptotic pathway of cell death is especially susceptible to ROS. To assess ROS production in NNMT knockdown breast cancer cells, NNMT-specific siRNAs, instead of shRNAs, were used to avoid interference from the GFP fluorescence of the lentiviral vector. The efficacy of NNMT gene down-regulation by siRNA was confirmed by real-time quantitative RT-PCR and Western blot (p<0.01, Figure 12A and 12B). As shown in Figure 12C and 12D, down-regulation of NNMT significantly increased ROS production in both the Bcap-37 and MDA-MB-231 cell lines transfected with NNMT siRNAs (p<0.01). Down-regulation of NNMT expression activated the mitochondria-mediated apoptotic pathway Down-regulation of NNMT increased the mRNA and protein ratios of Bax/Bcl-2 and the production of intracellular ROS, which suggested that down-regulation of NNMT may act through the mitochondria-mediated pathway.
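For clarity, the following is a minimal sketch of the flow-cytometry apoptosis readout used in the results above: the extent of apoptosis is the sum of the early (Annexin V-PE+/7-AAD−) and late (Annexin V-PE+/7-AAD+) apoptotic fractions. The event counts are hypothetical.

```python
# Minimal sketch of the apoptosis readout: apoptotic fraction is the sum of the
# early- and late-apoptotic quadrants of the Annexin V-PE/7-AAD plot.
# All event counts below are hypothetical.
events = {
    "annexin-/7AAD-": 8500,   # viable cells
    "annexin+/7AAD-": 900,    # early apoptosis
    "annexin+/7AAD+": 450,    # late apoptosis
    "annexin-/7AAD+": 150,    # necrotic cells / debris
}
total = sum(events.values())
apoptosis_pct = 100.0 * (events["annexin+/7AAD-"] + events["annexin+/7AAD+"]) / total
print(f"apoptotic fraction: {apoptosis_pct:.1f}%")   # 13.5% for these counts
```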
To determine whether down-regulation of NNMT induces apoptosis via the mitochondria-mediated pathway, we detected the release of Cyt c and the activation of related caspases, such as caspase-9 and caspase-3, which are key events in the mitochondria-mediated apoptotic pathway. As shown in Figure 13, Cyt c accumulated in the cytosolic compartment of NNMT down-regulated Bcap-37 and MDA-MB-231 cells, while the amount of mitochondrial Cyt c was markedly decreased. In addition, caspase-9, caspase-3 and PARP were decreased in NNMT knockdown Bcap-37 and MDA-MB-231 cells, while their cleaved forms were significantly increased. Taken together, these results indicated that down-regulation of NNMT in breast cancer cells results in activation of the mitochondria-mediated apoptotic pathway. [Figure 13 caption fragment: Cyt c in the cytosolic compartment was significantly increased, while mitochondrial Cyt c was significantly decreased, in both cell lines infected with NNMT shRNA 1# and shRNA 2# compared to the negative control; caspase-9, caspase-3 and PARP were significantly decreased while their cleaved forms were significantly increased; protein levels were normalized to GAPDH and shown relative to the NC (normalized as 1); values in (B, D) are means ± SD of six independent experiments; **P<0.01 vs. NC.] Down-regulation of NNMT expression inactivated Akt and ERK1/2 The PI3K/Akt and MAPK pathways are well-known signaling cascades that participate in the regulation of cell progression and survival via protein phosphorylation [32,33]. We examined key signaling components of the PI3K/Akt and MAPK pathways and found that the phosphorylation of Akt and ERK1/2 was decreased in NNMT shRNA-treated cells. As shown in Figures 14A-14D, down-regulation of NNMT decreased the expression levels of p-Akt and p-ERK1/2 in Bcap-37 and MDA-MB-231 cells and also decreased the ratios of p-Akt/Akt and p-ERK1/2/ERK1/2. This was further confirmed with IGF-1 (Sigma, St. Louis, MO, USA), a potent activator of PI3K/Akt: 100 ng/ml IGF-1 partially decreased apoptosis in NNMT shRNA-treated cells (Figure 14E and 14F), suggesting that the Akt pathway is involved in the apoptosis induced by down-regulation of NNMT. Discussion NNMT is predominantly expressed in the liver and catalyzes the N-methylation of nicotinamide, pyridines, and other structural analogues involved in the biotransformation and detoxification of many drugs and xenobiotic compounds. It plays a pivotal role in cellular events by regulating the nicotinamide balance underlying energy production, longevity, and cellular resistance to stress or injury [6,34-36]. To the best of our knowledge, there has been no direct study of the biological role of NNMT in breast cancer to date. The only published association between NNMT and breast cancer is that NNMT was found to be over-expressed in the adriamycin-resistant breast cancer cell line MCF-7/ADR compared with its parent cell line MCF-7 [37]. We confirmed high expression of NNMT in some breast cancer cell lines; however, the role of NNMT in breast cancer is largely unknown. In the present study, we investigated the biological function of NNMT in breast cancer cell lines (Bcap-37 and MDA-MB-231). An shRNA lentiviral vector against NNMT was designed to inhibit endogenous NNMT expression in both cell lines.
Accompanying the down-regulation of NNMT expression, a significant inhibition of the growth of Bcap-37 and MDA-MB-231 cells was found. The results of nude mouse tumorigenesis experiments with Bcap-37 cells also showed that down-regulation of NNMT expression inhibited cancer cell tumorigenicity in vivo. The reciprocal effect of NNMT silencing was confirmed by over-expressing NNMT in the MCF-7 and SK-BR-3 breast cancer cell lines, which lack constitutive expression of NNMT. Our data are consistent with results from bladder cancer cells [38], KB cancer cells [24], oral carcinoma cells [26] and renal cancer cells [25], and strongly suggest that NNMT plays an important role in cancer cell growth in vitro and in vivo. Defective apoptotic machinery often confers a survival advantage on cancer cells [29], and apoptosis attenuation is important in the progression to states of high-grade malignancy and resistance to therapy in tumors [39,40]. Thus, we analyzed the effect of down-regulation of NNMT on apoptosis. There was a higher percentage of apoptotic cells among Bcap-37 and MDA-MB-231 cells infected with NNMT shRNA. Cleaved caspase-3 and cleaved PARP, which are reliable markers of apoptosis, were also shown to be increased by down-regulation of NNMT. On the contrary, overexpression of NNMT in the MCF-7 and SK-BR-3 breast cancer cell lines attenuated apoptosis compared to negative control cells. Together, these results demonstrate that down-regulation of NNMT induces apoptosis in Bcap-37 and MDA-MB-231 cells, and suggest that NNMT may play a vital role in breast cancer development via apoptosis. Clarifying the underlying molecular mechanisms of the apoptosis promoted by down-regulation of NNMT in breast cancer cells would further define the role of NNMT in cancer cells. The Bcl-2 family of proteins, the main regulators of apoptosis, was examined to explain the mechanism of the apoptosis induced by down-regulation of NNMT. In the present study, we observed that the expression of Bax and Puma was up-regulated, while the expression of Bcl-2 and Bcl-xL was significantly down-regulated, in NNMT shRNA-infected breast cancer cells, resulting in an increase in the Bax/Bcl-2 ratio. Among the Bcl-2 family members, the anti-apoptotic Bcl-2 and Bcl-xL have been reported to protect cells by interacting with mitochondrial proteins such as the adenine nucleotide translocase (ANT) or the voltage-dependent anion channel (VDAC), thus preventing them from forming mitochondrial pores, protecting membrane integrity, and inhibiting the release of apoptogenic factors such as Cyt c [41]. On the contrary, Bax can homodimerize or heterodimerize with other pro-apoptotic members such as Bak or truncated Bid, disrupting the integrity of the outer mitochondrial membrane (OMM) by forming mitochondrial pores and increasing its permeability, which can then lead to the release of apoptogenic factors such as Cyt c [42]. Puma, a Bcl-2 family member that acts by neutralizing anti-apoptotic proteins, can heterodimerize with Bcl-2 and Bcl-xL and sequester them, thereby blocking their anti-apoptotic action at the mitochondria [29]. Interestingly, we also found that down-regulation of NNMT increased ROS production in human breast cancer cell lines. It has been reported that increased ROS production can damage mitochondrial membranes, leading to the opening of the mitochondrial permeability transition pore (MPTP) and the release of Cyt c [43,44].
Taking these results together, we inferred that down-regulation of NNMT in human breast cancer may cause mitochondrial dysfunction and the release of Cyt c from the mitochondria. The Bax/Bcl-2 ratio partially reflects the response of cells to proximal death and survival signals, as reported by Oltvai et al. [45]. Cyt c plays a crucial role in the execution of the mitochondria-mediated intrinsic apoptotic pathway because, after release into the cytoplasm, it can form an apoptosome with apoptosis-activating factor 1 (Apaf-1) and caspase-9 and activate the executioner caspases-3 and -7, which finally causes cell apoptosis through nuclear fragmentation [46-49]. To confirm whether down-regulation of NNMT induces apoptosis via the mitochondria-mediated pathway, we analyzed the release of Cyt c and the activation of related caspases, such as caspase-9 and caspase-3, which are key events in the mitochondria-mediated apoptotic pathway. As expected, we showed that Cyt c was released from the mitochondrial fraction into the cytosolic fraction, and that cleaved caspase-9, caspase-3 and PARP were significantly increased in NNMT shRNA-infected cells. These results indicate that down-regulation of NNMT in breast cancer cells induces apoptosis via the mitochondria-mediated pathway by increasing the Bax/Bcl-2 ratio and ROS production, leading to release of Cyt c from the mitochondrial fraction into the cytosol and activation of the executioner caspases-3 and -7. In our study, we also found that the phosphorylation of Akt and ERK1/2 was decreased in NNMT shRNA-treated cells. Akt can inhibit apoptosis through multiple mechanisms, and preventing Akt activation can induce apoptosis [50,51]. The finding that IGF-1 decreased apoptosis in NNMT shRNA-treated cells indicates that the apoptosis induced by down-regulation of NNMT can be attributed, at least partially, to Akt inactivation. This also suggests that the Akt pathway is involved in the effect of NNMT on cancer cells. Our results on the effect of NNMT down-regulation on the Akt pathway are in line with the most recent reports [52,53]. Win et al. recently reported that NNMT overexpression was significantly and positively associated with phosphorylation of Akt and indicated a worse prognosis in patients with nasopharyngeal carcinoma [53]. Another report showed that NNMT expression regulates neurone morphology in vitro via the sequential activation of the ephrin-B2 (EFNB2) and Akt cellular signaling pathways [52]. Diverse cellular functions, ranging from cell survival to cell death, are regulated by activation of the ERK pathway [54]. The exact mechanism by which NNMT affects the phosphorylation of ERK and Akt in breast cancer is not yet known. Ulanovskaya et al. (Nat Chem Biol) recently reported that the methylation events regulated by NNMT can alter histone-dependent gene expression, and also extend beyond histones to include tumor suppressor proteins such as PP2A. We suppose that ROS production and the state of PP2A methylation and demethylation regulated by NNMT may contribute to the phosphorylation of ERK and Akt in breast cancer; however, this needs more detailed experiments to confirm. In summary, we found that down-regulation of NNMT expression significantly inhibited cell growth, decreased tumorigenicity in mice and induced apoptosis via the mitochondria-mediated pathway. Although the definite mechanism of its role needs to be studied further, NNMT may become a promising candidate for breast cancer therapy.
Figure S1 Down-regulation of NNMT expression inhibited cell growth and induced apoptosis in MCF-7/ADR cells. (A) Western blotting was used to analyze NNMT expression in MCF-7/ADR, MCF-7/ADR/NC, MCF-7/ADR/NNMT shRNA 1# and MCF-7/ADR/NNMT shRNA 2# cells 48 h after infection. GAPDH was used as an internal control. (B) Cell growth was analyzed using the MTT assay. Remarkably low proliferation rates were observed in MCF-7/ADR/NNMT shRNA 1# and MCF-7/ADR/NNMT shRNA 2# cells compared to MCF-7/ADR/NC cells from 72 h after seeding the cells in plates. The absorbance values at each time point were compared to that of the control group at 0 h, which was normalized as 100%. Values are expressed as means ± SD of four independent experiments. (C) Apoptosis was detected by flow cytometric analysis using the Annexin V-PE/7-AAD Apoptosis Detection Kit 48 h after seeding. The extent of apoptosis is expressed as the sum of the total percentages of the annexin-positive populations. The percentage of apoptotic cells was increased in both cell lines infected with NNMT shRNA 1# and shRNA 2# compared to negative control cells. Values are expressed as means ± SD of four independent experiments. **P<0.01 vs. NC. (TIF)
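As a small illustration of the growth-curve normalization described in the caption above (and in the MTT assay methods), the following sketch expresses 490 nm absorbance values relative to the control reading at 0 h, which is set to 100%. All numbers, group names, and time points are invented for illustration only.

```python
# Minimal sketch of the MTT growth-curve normalization: absorbance at each time
# point is expressed as a percentage of the control group at 0 h (= 100%).
# The 490 nm absorbance values below are hypothetical.
time_h = [0, 24, 48, 72, 96, 120]
a490 = {
    "NC":            [0.21, 0.35, 0.58, 0.95, 1.40, 1.85],
    "NNMT shRNA 1#": [0.21, 0.33, 0.50, 0.70, 0.90, 1.05],
}
baseline = a490["NC"][0]                               # control group at 0 h -> 100%
growth_pct = {g: [100.0 * a / baseline for a in vals] for g, vals in a490.items()}
for group, curve in growth_pct.items():
    print(group, [f"{p:.0f}%" for p in curve])
```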
6,088.6
2014-02-18T00:00:00.000
[ "Biology", "Chemistry" ]
Maximizing optical production of metastable xenon The wide range of applications using metastable noble gas atoms has led to a number of different approaches for producing large metastable state densities. Here we investigate a recently proposed hybrid approach that combines RF discharge techniques with optical pumping from an auxiliary state in xenon. We study the effect of xenon pressure on establishing initial population in both the auxiliary state and metastable state via the RF discharge, and the role of the optical pumping beam power in transferring population between the states. We find experimental conditions that maximize the effects, and provide a robust platform for producing relatively large long-term metastable state densities. Introduction Metastable noble gas atoms are used in diverse areas ranging from fundamental physics experiments [1][2][3][4], to applications in plasma display panels [5][6][7], ultralow-power nonlinear optical devices [8][9][10], and rare gas lasers [11][12][13][14][15][16]. In many of these applications, the production of high densities of metastable states is desirable, and techniques for efficiently promoting a substantial fraction of the ground state population to the metastable state are of paramount importance. Although relatively high metastable state densities (∼10 13 cm −3 ) can be achieved for short time scales using pulsed electrical discharge techniques [5][6][7], the realization of comparable long-term steady-state densities remains a significant challenge. Standard approaches include steady state RF excitation (see, for example, [17][18][19]), optical pumping techniques [20][21][22], and a recently proposed hybrid method which combines both [23,24]. In this paper, we investigate the experimental conditions needed to optimize this hybrid approach to maximize the production of metastable state densities. Our study specifically investigates the production of metastable xenon (Xe*), but the general results are expected to apply to other noble gas species as well. The energy level diagrams in Figure 1 provide a simplified comparison of the various methods used to promote the ground state, |g , of a noble gas atom to the metastable state of interest, |m . Here, |m lies roughly ∼10 eV above |g , and optical transitions between |g and |m are forbidden by electric dipole selection rules. Noble gas atoms also have a nearby auxiliary state, |a , which does have a direct optical transition from |g at a resonance wavelength in the deep UV ( ∼100 nm). Finally, we consider an upper-level state, |u , which has NIR optical transitions (∼800 nm ) between both |a and |m . Panel (a) of Figure 1 represents a traditional electrical discharge technique [17][18][19]. Here an RF (or DC) field is applied to a gas of atoms, resulting in various collisional mechanisms that transfer population (both directly and indirectly) from |g to |m . In contrast, panel (b) shows an "all optical" approach in which resonant VUV light (typically from an external lamp) excites |g → |a [20][21][22]. Population is then optically pumped from |a → |u using a narrowband laser, and spontaneous emission then causes a transition from |u → |m . Panel (c) shows an alternative "all-optical" approach that relies on a direct two-photon transition from |g → |u [25,26]. Panel (d) of Figure 1 shows the hybrid approach of interest here [23]. The key idea is that the same electrical discharge that creates population in |m (cf. panel (a)) also creates population in the auxiliary state |a . 
This electrically-produced auxiliary population can then be optically pumped from |a⟩ → |u⟩ → |m⟩ to supplement the existing electrically-produced |m⟩ population. At the same time, spontaneous emission from the dipole-allowed |a⟩ → |g⟩ transition provides a source of resonant VUV photons that can be subsequently absorbed by nearby ground-state atoms. [Figure 1 caption fragment: (b) optical pumping technique [20-22], (c) alternative "all-optical" technique [25,26], and (d) the hybrid technique of interest here [23]; the hybrid technique in panel (d) combines the electrical discharge of panel (a) with the optical pumping of panel (b).] Because the overall number of atoms excited by the RF discharge in a typical experimental setup is much larger than the number of atoms in the cross-section of the NIR optical pumping beam, this effect gives a significant boost to the population that can be optically pumped from |a⟩ → |m⟩. The combination of all of these effects results in the strong optical enhancement of the electrically-produced population of |m⟩ that was first observed in [23]. In this paper we investigate the experimental conditions that maximize this optically-enhanced electrical production of metastable states. We use a strong RF discharge and spectroscopic techniques to study and compare the initial populations of both the metastable state |m⟩ and the auxiliary state |a⟩, and find that they are each maximized under the same experimental conditions. In addition, we experimentally observe that the metastable state population can be enhanced by an amount greater than the steady-state population of the auxiliary state, due to the drastically different effective lifetimes of these states. The end result is a convenient and robust method to experimentally optimize the overall steady-state metastable state population using the hybrid technique illustrated in Figure 1(d). Metastable xenon Our study uses a low pressure (∼100 mTorr) gas of xenon atoms. An overview of the relevant energy levels is shown in Figure 2. Here, the metastable state |m⟩ (6s[3/2]₂ state) has a natural lifetime of ∼43 s [27], which is reduced to the order of ms by collisions under our typical experimental operating conditions [28-30]. The auxiliary state |a⟩ (6s[3/2]₁ state) has a lifetime of ∼4 ns, which is significantly shorter than the lifetime of |m⟩. In xenon, the UV optical transition between |a⟩ and the ground state |g⟩ (5p⁶ state) is at 147 nm, and the NIR transition between |a⟩ and the upper state |u⟩ (6p[3/2]₁ state) is at 916 nm. This 916 nm transition is driven by a narrowband pump laser in our experiments. Once populated, |u⟩ can decay to the desired metastable state |m⟩ via spontaneous emission at 841 nm with a probability of 80%, or back to the auxiliary state |a⟩ with a probability of 20%. The lifetime of the upper state |u⟩ is ∼38 ns [31]. In addition to the four key atomic states of interest, we also utilize an extra upper state |u′⟩ (6p[3/2]₂ state) to perform spectroscopic measurements that characterize the metastable state (Xe*) density as a function of various experimental parameters. As will be described below, the baseline steady-state Xe* density under optimized RF discharge conditions in our experiment is ∼10¹¹ cm⁻³. Under the same conditions, the baseline density of atoms in the auxiliary state |a⟩ is ∼10¹⁰ cm⁻³. [Figure 2 caption: relevant energy-level diagram and associated parameters for xenon. The upper state |u⟩ has a lifetime of τ∼38 ns. The metastable state |m⟩ has an intrinsic lifetime of τ∼43 s, while the auxiliary state |a⟩ has a lifetime of τ∼4 ns. In our experiments, the baseline densities (due to the RF discharge alone) of the metastable state and auxiliary state are ρ_m ∼10¹¹ cm⁻³ and ρ_a ∼10¹⁰ cm⁻³, respectively. The goal of the experiments is to optically pump as much of the |a⟩ population to |m⟩ as possible. The metastable state density ρ_m is measured by performing spectroscopy on an 823 nm transition between |m⟩ and a second upper-level state |u′⟩.] The goal of optically enhanced production of Xe* is simply to transfer as much of this |a⟩ population as possible into |m⟩ [23]. Intuitively, the requirements are (1) starting with a large population in |a⟩, and (2) driving the process with a strong optical pumping beam at 916 nm. In the following sections, we summarize a series of experimental measurements that characterize the optimization of this process. The key parameters that influence the process are the overall pressure of the neutral xenon gas and the strength of the 916 nm optical pumping beam. Figure 3 provides an overview of the pump-probe experimental setup. The pump and probe lasers are independent narrowband tunable diode lasers (Toptica DL Pro; linewidths ∼300 kHz) centered at 916 nm and 823 nm, respectively. For the spectroscopic measurements of interest, the lasers are scanned over ranges of ∼10 GHz, and the laser frequencies are measured using a calibrated wavelength meter with 100 MHz resolution (HighFinesse WSU-30). The pump and probe lasers are combined into a single spatial mode using a single-mode fiber and then launched into the xenon vacuum chamber as a common free-space beam with a diameter of ∼2 mm. A power meter that can be inserted into the free-space beam before the vacuum chamber is used to measure the laser powers. To avoid atomic saturation effects, the experiments are done with 823 nm probe beam powers of only ∼7.5 μW [32]. As will be described below, the 916 nm pump beam power is a key experimental parameter that is varied in the range of 0-10 mW. Experimental setup The vacuum system consists of a standard 4.5" stainless steel ConFlat cube with glass viewports that let the pump and probe beams pass through the chamber. The chamber is typically pumped down to a pressure of ∼10⁻⁶ Torr. [Figure 3 caption: overview of the pump-probe setup used to characterize the experimental conditions that maximize the optical production of Xe*. The pump and probe lasers (at 916 nm and 823 nm, respectively) are combined into a common spatial mode using a single-mode fiber, and then launched as a free-space beam through a vacuum chamber containing a gas of xenon atoms. A xenon glow discharge plasma is produced using an RF coil mounted on the outside of a glass window on the vacuum chamber. The dashed-box inset represents a "zoom-in" on a cross-section of the experiment: the RF field excites atoms throughout the entire chamber, while the 2 mm diameter pumping beam (and probe beam) only interact with a small subset of the atoms. The two key experimental parameters are the neutral xenon pressure and the 916 nm pumping beam power.] A manometer gauge fitted on the chamber is used to monitor the neutral xenon pressure. The neutral xenon pressure is a key experimental parameter that is varied over the range of ∼10-600 mTorr in the experiments. A large 4.5" glass viewport is also used on one side of the cube to allow RF excitation of the xenon gas. An RF coil is mounted on the viewport and driven with a standard tank circuit (consisting of the coil and capacitors).
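To make the branching argument above concrete, the following is a minimal sketch (not from the paper) of the probability that an atom starting in |a⟩ eventually ends up in |m⟩, given the 80%/20% branching of |u⟩ and an assumed re-excitation probability p_r for atoms that decay back to |a⟩ before they return to the ground state. The helper name and the values of p_r are hypothetical, pump-power-dependent assumptions, not quantities given in the text.

```python
# A minimal sketch of the branching argument: each 916 nm excitation of |a>
# decays to |m> with probability b = 0.8 and back to |a> with probability 0.2.
# If an atom returned to |a> is re-excited (rather than decaying to |g> at
# 147 nm) with probability p_r, the overall |a> -> |m> transfer probability
# is a geometric series:  P = b / (1 - (1 - b) * p_r).
def transfer_probability(b=0.8, p_r=0.5):
    return b / (1.0 - (1.0 - b) * p_r)

for p_r in (0.0, 0.5, 0.9):   # p_r is an assumed, pump-power-dependent parameter
    print(f"p_r = {p_r:.1f}  ->  P(|a> -> |m>) = {transfer_probability(p_r=p_r):.2f}")
```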
This system has an RF resonance frequency of 130 MHz, and is driven by a low power signal generator that is amplified to produce a glow discharge plasma in the chamber. For our full range of neutral xenon pressures, an amplified RF power of ∼20 W was found to maximize the initial Xe* density [10] (ie. the baseline |m population; via the process shown illustrated in Figure 1(a)). The inset to Figure 3 illustrates a cross-section of the gas of atoms under the influence of both the RF excitation field, and the optical pumping beam at 916 nm. This highlights the fact that while a large ∼10 cm diameter cross section undergoes RF excitation, only a small subset of the atoms interact with the optical pumping beam (∼2 mm diameter). These surrounding atoms provide a source of spontaneously emitted 147 nm photons that boost the baseline population of |a in Figure 2. A photodiode (PD) preceded by various filters is used to measure the transmitted laser powers during the spectroscopic measurements. For the baseline spectroscopic measurements of the |a to |u transition at 916 nm, or the |m to |u transition at 823 nm, only a single laser is used. For the full pump-probe experiments using both lasers, a narrow bandpass filter (centered at 823 nm) is used to block the transmitted 916 nm pump beam, while passing the 823 nm probe beam during the measurements. Experimental results With the absence of the RF field, nearly all of the atoms in the gas are in the ground state and the initial populations of |a and |m are roughly zero. The RF discharge contribution to the population of |m can then be determined by applying the RF field, and performing transmission-spectroscopy measurements on the |m → |u transition at 823 nm. An example result at a neutral xenon pressure of 15 mTorr is shown in Figure 4(a). Here, the 6 "dips" in transmission are due to absorption from the various isotopes of Xe [19]. We obtain an estimate of the Xe* density to be ρ m ∼2 x 10 11 cm −3 by fitting these transmission lineshapes using the models of reference [33], while taking into account the natural abundances of the various xenon isotopes and branching ratios of the hyperfine transitions [34]. This value of ρ m ∼2 x 10 11 cm −3 represents the baseline steady-state Xe* density (ie. |m population) that can subsequently be enhanced by optical pumping population from the auxiliary state |a . We estimate the baseline population of |a by performing analogous transmission-spectroscopy measurements on the |a → |u transition at 916 nm. Figure 4(b) shows a typical result using the same RF discharge conditions and xenon pressure of 15 mTorr. Here, the large central "dip" and smaller side dips are once again due to the various isotopes of Xe and the hyperfine splittings of the relevant states [35]. Fitting the lineshapes in Figure 4(b) provides an estimated baseline |a density of ρ a ∼4 x 10 10 cm −3 . Note that this population of |a results from both the RF discharge, as well as the absorption of 147 nm photons spontaneously emitted from surrounding atoms in the large-scale gas. Next, we repeat these independent measurements for a range of neutral xenon pressures ranging from 10 to 600 mTorr. The results are shown in Figure 5, where each data point represents a baseline density value extracted from fitting a transmission spectrum similar to those in Figure 4. The key result here is the nearly identical behavior of the |m and |a populations as function of the xenon pressure in our system. 
As the pressure is increased there are simply more atoms to promote from the ground state and the populations increase, while at higher pressures the role of collisions plays a more dominant role and limits the populations. Note that the trade-off between these effects results in the same optimal pressure of 15 mTorr for both the |m and |a populations in our system. At pressures higher than ∼600 mTorr (or lower than 10 mTorr), we were unable to produce a stable discharge in our system. The key result here is the similar dependence on pressure; optimizing the system for the largest baseline ρ m value also provides the largest baseline ρ a value. From a practical point of view, the significance of the nearly identical behavior in Figure 5 is that the same experimental conditions that give the largest baseline Xe* density also provide the largest auxiliary state density. Because the optical enhancement of Xe* production intuitively requires a large baseline auxiliary state population, this result suggests that the conditions needed for maximizing the optically-enhanced production of Xe* can be achieved by simply maximizing the starting baseline density of Xe*. In Figure 6 we investigate the role of the 916 nm optical pumping beam power, P pump , for transferring population from |a to |m . We perform a series of pump-probe experiments for a variety of neutral xenon pressures. Here, 823 nm (probe) transmission spectra similar to those in Figure 4(a) are measured, but now with the 916 nm optical pumping beam turned on (and with ω pump fixed at zero detuning). Figure 6(a) shows an example of results obtained at a relatively high neutral xenon pressure of 300 mTorr. As P pump is increased, more population is transferred from |a to |m , resulting in stronger absorption of the 823 nm probe beam (ie. increasingly deeper "dips" in the transmission spectra). In order to quantify this population transfer, we fit each of the spectra in Figure 6(a) to obtain estimates of the Xe* density ρ m as a function of P pump . We then repeat these pump-probe measurements for a several different neutral xenon pressures ranging from 15 to 600 mTorr, with the results summarized in Figure 6(b). In all cases, a dramatic increase of ρ m is seen as a function of increasing P pump . In many ways, the results shown in Figure 6(b) represent the main result of the study. We see that the goal of maximizing the metastable state density ρ m is achieved by using the neutral xenon pressure that maximizes the initial baseline values of ρ m (and thus ρ a ), and utilizing as much pump beam power as possible. A saturation-like effect is also seen for all cases of neutral xenon pressure [23,24]. The overall utility of this hybrid approach is seen most clearly for non-optimized neutral xenon pressures which result in very small initial baseline densities ρ m (for example, the 600 mTorr case in Figure 6(b)). Here, the application of pump beam powers of only a few mW can provide a relative increase in ρ m by roughly an order of magnitude [23]. However, one significant result which emerges from the data in Figure 6(b) is that these order-of-magnitude increases do not hold for more optimized cases of larger initial baseline densities (for example, the 15 mTorr case in Figure 6(b)). Rather, the data in Figure 6(b) show that the net increase in ρ m as function of P pump is roughly constant over a large range of neutral xenon pressures. 
Figure 7 examines this effect in more detail by plotting the net increase in metastable state density, ∆ρ m , as a function of neutral xenon pressure, for several different values of P pump . The data shows that over the range of xenon pressures from 15 mTorr to ∼300 mTorr, ∆ρ m is essentially flat, with a value determined solely by P pump . It is interesting to note that the value of ∆ρ m (for all cases in Figure 7) actually exceeds the initial steady-state baseline auxiliary state density ρ a . For example, using our maximum available pump power of P pump = 9.5 mW (upper trace in Figure 7), the change in Xe* density is roughly ∆ρ m ∼8.5 x 10 10 cm −3 over the range of xenon pressures from 15 mTorr to ∼300 mTorr. Over that same pressure range, the baseline auxiliary state density ρ a varies from a maximum of only 3.9 x 10 10 cm −3 down to 0.1 x 10 10 cm −3 (cf. Figure 5). This apparent excess in ∆ρ m is due to the fact that the metastable state effective lifetime (∼ms) is several orders of magnitude larger than the auxiliary state |a and upper state |u lifetimes (∼ns), resulting in a desirable continuous-wave laser-pumped steady-state "build up" of population in the metastable state [36]. In the small drop-off in ∆ρ m values for pressures higher than ∼300 mTorr is most-likely due to insufficient initial ρ a values in our set-up at higher pressures (cf. Figure 5). Summary and conclusions We have investigated the experimental conditions needed to maximize the optical production of metastable xenon using the hybrid method of Figure 1(d) [23,24]. Here, conventional RF-excitation is combined with optical pumping from an auxiliary state to provide a dramatic enhancement of the metastable state density. We find that the same RF discharge conditions that optimize the initial metastable state density also optimize the auxiliary state density. While these initial baseline densities (due to RF excitation alone) vary significantly with the neutral xenon pressure, we find that the subsequent net increase in metastable state density (due to optical pumping from the auxiliary state) is relatively constant over a large pressure range, and is determined solely by the pumping beam power. In addition, the net increase in the steady-state metastable state density can exceed the baseline auxiliary state density due to the vastly different lifetimes of these states. These results suggest that the hybrid technique of Figure 1(d) can provide a robust method for producing relatively large long-term metastable state densities. It is important to note, however, that the long-term densities achieved here (∼10 11 cm −3 ) are still lower than those obtained using short pulsed electrical discharge techniques (∼10 13 cm −3 ) [5][6][7]. Nonetheless, the techniques described here impact applications requiring high densities for longer continuous time scales [16], and studying the transient dynamics of this hybrid approach in the pulsed regime may also be of fundamental interest.
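The lifetime argument made above (a metastable effective lifetime of order ms versus an auxiliary-state lifetime of order ns) can be illustrated with a one-line steady-state rate balance. The sketch below is an illustration under stated assumptions, not the paper's model: the transfer fraction f_pump is a hypothetical parameter, chosen only so that the output is of the same order as the measured values quoted in the text.

```python
# A minimal steady-state sketch of why the optically pumped increase in the
# metastable density can exceed the steady-state auxiliary-state density:
# population is transferred out of |a> at a pump-limited rate R and accumulates
# in |m>, which only decays on the much longer collision-limited timescale
# tau_m, so d(rho_m)/dt = R - rho_m/tau_m gives rho_m = R * tau_m in steady state.
rho_a  = 4e10    # baseline auxiliary-state density, cm^-3 (from the text)
tau_a  = 4e-9    # auxiliary-state lifetime, s (from the text)
tau_m  = 1e-3    # effective metastable lifetime, s (order of ms, from the text)
f_pump = 1e-5    # assumed fraction of |a> atoms transferred to |m> per lifetime tau_a

R = f_pump * rho_a / tau_a      # transfer rate into |m>, cm^-3 s^-1 (illustrative)
delta_rho_m = R * tau_m         # steady-state build-up of the metastable density
print(f"steady-state increase in rho_m ~ {delta_rho_m:.1e} cm^-3 "
      f"({delta_rho_m / rho_a:.1f} x rho_a)")
```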
4,807.6
2020-06-05T00:00:00.000
[ "Physics" ]
The STONE curve: A ROC-derived model performance assessment tool A new model validation and performance assessment tool is introduced, the sliding threshold of observation for numeric evaluation (STONE) curve. It is based on the relative operating characteristic (ROC) curve technique, but instead of sorting all observations in a categorical classification, the STONE tool uses the continuous nature of the observations. Rather than defining events in the observations and then sliding the threshold only in the classifier (model) data set, the threshold is changed simultaneously for both the observational and model values, with the same threshold value for both data and model. This is only possible if the observations are continuous and the model output is in the same units and scale as the observations, that is, the model is trying to exactly reproduce the data. The STONE curve has several similarities with the ROC curve, plotting probability of detection against probability of false detection, ranging from the (1,1) corner for low thresholds to the (0,0) corner for high thresholds, and values above the zero-intercept unity-slope line indicating better than random predictive ability. The main difference is that the STONE curve can be nonmonotonic, doubling back in both the x and y directions. These ripples reveal asymmetries in the data-model value pairs. This new technique is applied to modeling output of a common geomagnetic activity index as well as energetic electron fluxes in the Earth's inner magnetosphere. It is not limited to space physics applications but can be used for any scientific or engineering field where numerical models are used to reproduce observations. Introduction Numerical models are a fundamental feature of research in the natural sciences. Models are often used to explain strange and interesting features in an archival data set in order to assess the physical processes responsible for that observational signature. They are also used for prediction, using some estimate of future initial and boundary conditions to determine the state of the system, or even a particular observational quantity, ahead of time. These are typical uses of models in every discipline of Earth and space sciences. There exists a large collection of metrics to assess the goodness of fit for these models to a particular data set. These metrics, for the most part, can be sorted into several major groupings, two of which are fit performance metrics and event detection metrics (e.g., Wilks, 2011;Joliffe and Stephenson et al., 2012;Liemohn, McCollough, et al., 2018). The former group, also called continuous metrics, is based on differencing each data-model value pair and includes many wellknown assessment equations such as root mean square error, correlation coefficient, mean error, and prediction efficiency (e.g., Hogan and Mason, 2012;Morley et al., 2018). The second group, also called categorical metrics, is based on categorizing the observations into events and nonevents and then assessing a model's ability to reproduce this classification. This is done through a contingency table (also commonly called a confusion matrix) in which each data-model pair gets two designations: determining if the observation is in the event state or not and similarly if the model value is in the event state or not. The similarity or difference of the data and model values is irrelevant, only the event/non-event designation matters. 
This second group includes other well-known assessment equations such as the probability of detection, false alarm rate, frequency bias, and Heidke skill score (see, e.g., Muller et al., 1944;Wilks, 2011). A feature of the event detection metrics is that the model does not have to cover the same range or even have the same units as the observations. The model could be anything that might predict the event state of the observations. Furthermore, the observations do not have to be a continuous-valued real number set, but could be pre-categorized into events and non-events (or a multi-level classification). The model could be a continuous-valued real number set or a discretevalued categorized set. When the data or model happens to be a continuous-valued real number set, then a threshold value for event identification is chosen, a threshold value that could be different between the observational events and the modeled events. An event detection metric that is often used for weather prediction (e.g., Mason, 1982), psychology (e.g., Swets, 1972), medical clinical trials (e.g., Ekelund, 2011), and machine learning (e.g., Fawcett et al., 2006) is the relative (or, originally, receiver) operating characteristic (ROC) curve (see review by Carter et al., 2016). This is an assessment tool that can be applied when the model values are continuous-valued real numbers, using not just one event identification threshold but many. The method is to sweep the event definition threshold for the model values from low to high, calculating two specific metrics, the probability of detection (POD) and the probability of false detection (POFD), and plotting these two arrays against each other. The threshold that yields the location on the ROC curve closest to the upper left corner (high POD and low POFD) can be considered a possible "best setting" for event prediction by this model. This is not the only location for an optimum pick of a final threshold along an ROC curve. Often the final choice will depend on the application and problem specific details. For example, recent developments have discussed the use of skill scores for different solar and space physics applications (e.g., Bobra & Couvidat 2015) and their location on ROC diagrams (e.g., Manzato, 2007;Azari et al., 2018). A further detailed discussion on skill scores and their relation to ROC diagrams can be found within Manzato (2005). An integral quantity sometimes used from the ROC curve is the area under the curve (AUC), which is an overall measure of goodness of fit for the model to the observational events across all of the possible model value event identification thresholds. The ROC curve has recently been used quite often in the Earth and space sciences to assess model performance at detecting events in an observational data set. It is used regularly in the atmospheric sciences, such as for regional ozone ensemble forecasting (e.g., Delle Monache et al., 2006), deciphering the microphysical properties of clouds (e.g., Gabriel et al., 2009), and forecasting summer monsoons over India (e.g., Borah et al., 2013). Earth scientists also employ the ROC curve for a diverse set of modeling activities, including the distribution of rock glaciers (e.g., Brenning et al., 2007), assessing triggering mechanisms of earthquake aftershocks (e.g., Meade et al., 2017), and snow slab instability physics (e.g., Reuter & Schweizer, 2018). 
This also includes land-air interactions, such as mapping of expected ash cloud locations after eruptions (e.g., Stefanescu et al., 2014), modeling rainfall-induced landslides (e.g., Anagnostopoulos et al., 2015), and statistically forecasting extreme corn losses in the eastern United States (Mathieu & Aires, 2018). The fields of space and planetary science have also started to employ this technique, such as for oblique ionogram retieval algorithm assessment (Ippolito et al., 2016), identifying energetic particle flux injections at Saturn (e.g., Azari et al., 2018), magnetic activity prediction (e.g., Liemohn, McCollough, et al., 2018), and identifying solar flare precursors (e.g., Chen et al., 2019). In short, the ROC curve has become an essential tool, among many that can and should be applied, for model assessment across many natural science disciplines. The ROC curve, however, only assesses the model's ability to predict a single observational event identification threshold. While this is desirable if the data were pre-classified as events or non-events, this imposes a simplification of the data set when the observations are also continuous-valued real numbers. That is, the ROC curve does not test the model's ability to predict events across the full range of the data. A family of ROC curves can be produced using different data-value event identification thresholds (and sweeping the model-value event identification threshold to produce each ROC curve), which is acceptable if the model is only being used to maximize the prediction of events. If the model, however, is trying to reproduce the exact values of the observations, then it is useful to conduct an assessment for which the data and model have the same threshold setting. The ROC curve, unfortunately, cannot easily test the model's ability to reproduce the observed events at the same threshold setting, sweeping through all possible event identification thresholds. There exists a need for a new metric. Like the ROC curve, this new metric should test a model's ability to predict observed events across the full range of possible model-value event identification settings, but rather than using a single observational event categorization, it should sweep through the same range of event identification thresholds as used for the model. Such a metric is proposed below, called the sliding threshold of observation numeric evaluation, or STONE, curve. This is based on the ROC curve but includes the desirable features described above. The work then presents an application of the STONE curve to two space physics data sets, the prediction of a geomagnetic activity index and energetic electron fluxes in near-Earth space. Similarities and differences between the ROC and STONE curves are discussed, as well as the interpretive meaning of features in the STONE curve. Method of Calculation The calculation of a STONE curve is rather similar to that of a ROC curve, with one major exception -both thresholds slide together, incrementing the two event identification thresholds simultaneously so that the same threshold value is used for both the data and the model at each setting from low to high across the range. Because this tool is for continuousvalued observations and model results, for which an "event" is an arbitrary designation, there does not have to be a pre-defined event threshold in the observations. In fact, it is desired that the model match the observations for all levels of "event" definition. 
Therefore, in the STONE tool, the two thresholds move together. This is illustrated in Figure 1, which shows an arbitrary data set plotted against a model output that is trying to reproduce these values. In (a), only the blue curve shifts while the red curve is fixed at some level. In (b), both the red and blue thresholds shift together. As these lines shift, data points are converted from one quadrant to another. The purple dashed curve is the zero-intercept unity-slope line, for reference. Figure 1a shows the calculation scenario for the ROC curve, with the event identification threshold for the observations set to a fixed value and the threshold for the model results sweeping from low to high values. Annotations label the four quadrants of the chart, as defined by these two thresholds. As the model threshold changes, the points in the chart change quadrant. Specifically, two shifts occur: points in the "hits" quadrant (variable a) move to the "misses" quadrant (c), and points in the "false alarms" quadrant (b) move to the "correct negatives" quadrant (d). The ROC curve is defined from two metrics in the "discrimination" category (Murphy & Winkler, 1987) of data-model comparison techniques. Discrimination metrics are assessments that only use a portion of the data values within a specified range (and the corresponding model values). For event detection metrics, the usual practice is to use the event state of the observations to define the subsets of the data. In particular, the ROC curve uses POD and POFD, which have the following formulas:

POD = a / (a + c)    (1)

POFD = b / (b + d)    (2)

where a, b, c, and d are the point counts from the quadrants in the scatter plot. It is seen that these two formulas are mutually exclusive: POD only uses the hits and misses quadrants, while POFD only uses the false alarms and correct negatives quadrants. Because the data threshold remains fixed for the ROC curve, each point contributes either to POD or to POFD, regardless of the model threshold designation. For a very low model threshold setting, all of the points are in either the hits or false alarms quadrants, which sets both POD and POFD to one. As the model threshold is increased, points are converted from hits to misses and from false alarms to correct negatives, which monotonically decreases POD and POFD. For a very high model threshold, all of the points will be misses or correct negatives, and both POD and POFD will be zero. Figure 1b shows the calculation scenario for the STONE curve. In this situation, both event identification thresholds move simultaneously. The four quadrants are still defined as with the ROC curve, but with both thresholds changing, the shift of points from one quadrant to another is not so simple. For a very low threshold setting, nearly all points will be hits and perhaps a few will be false alarms. Thus, like the ROC curve, the STONE curve also begins in the (1,1) corner of POFD-POD space (assuming a "low" starting threshold value). Also similarly, for a very large threshold setting, nearly all points will be correct negatives and perhaps a few will be misses, with the STONE curve ending in the (0,0) corner of POFD-POD space. Another similarity is that false alarms are converted into correct negatives as the threshold setting increases. The big difference between the ROC and STONE curve calculations, however, is that as the event identification threshold increases, a hit can shift to any of the other three quadrants.
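The threshold sweeps described above can be summarized in a short sketch. This is a minimal illustration, not the authors' code: the helper names pod_pofd, roc_curve, and stone_curve are hypothetical, the data are synthetic, and events here are defined as values at or above the threshold (for a quantity like Dst, where events are the more negative values, the inequalities or the sign of the data would be flipped).

```python
# Minimal sketch of the POD/POFD contingency-table calculation and of the two
# threshold sweeps: the ROC curve fixes the observation threshold and sweeps
# only the model threshold, while the STONE curve sweeps both together.
import numpy as np

def pod_pofd(obs, model, obs_thresh, model_thresh):
    hits         = np.sum((obs >= obs_thresh) & (model >= model_thresh))  # a
    false_alarms = np.sum((obs <  obs_thresh) & (model >= model_thresh))  # b
    misses       = np.sum((obs >= obs_thresh) & (model <  model_thresh))  # c
    correct_negs = np.sum((obs <  obs_thresh) & (model <  model_thresh))  # d
    pod  = hits / (hits + misses) if (hits + misses) else np.nan           # eq. (1)
    pofd = false_alarms / (false_alarms + correct_negs) if (false_alarms + correct_negs) else np.nan  # eq. (2)
    return pod, pofd

def roc_curve(obs, model, obs_thresh, thresholds):
    # Observation threshold fixed; only the model threshold is swept.
    return np.array([pod_pofd(obs, model, obs_thresh, t) for t in thresholds])

def stone_curve(obs, model, thresholds):
    # Both thresholds are swept together with the same value.
    return np.array([pod_pofd(obs, model, t, t) for t in thresholds])

# Synthetic example: a noisy "model" of a continuous observable.
rng = np.random.default_rng(0)
obs = rng.normal(size=2000)
model = obs + 0.5 * rng.normal(size=2000)
thresholds = np.linspace(-2, 2, 41)
roc   = roc_curve(obs, model, obs_thresh=1.0, thresholds=thresholds)   # (POD, POFD) pairs
stone = stone_curve(obs, model, thresholds=thresholds)
```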
If a point is far above the data threshold but close to the model threshold, then the threshold increase will cause it to shift from being a hit to a miss. If it is close to the data threshold but far away from the model threshold, then it will shift from being a hit to being a false alarm. If it is close to both thresholds, then there is a chance it will cross both lines during the incremental shift and jump from the hits region to the correct negatives zone. Only the first of these three moves (hits to misses) occurs with the ROC curve calculation. In addition, misses shift to become correct negatives as the observational threshold is incremented to higher values, another move that is not part of the ROC curve calculation. The behavior of the POD and POFD values as a function of threshold is therefore not intuitively known, and the STONE curve does not have to be monotonic between its (1,1) and (0,0) endpoints. Application of the STONE tool With this definition for the STONE curve, it can be used on a few example data-model comparisons to illustrate the similarities and differences with the ROC curve. Here, two comparisons will be shown. The first is for a model prediction of a geomagnetic activity index, originally presented by , and the second is for energetic electrons in near-Earth space, originally presented by Ganushkina et al. (2019). The first of these compared the output from experimental real-time simulations of the Space Weather Modeling Framework (SWMF) against the disturbance storm-time index, Dst (Rostoker et al., 1972). The SWMF is a collection of space physics numerical models simulating the Sun-Earth space environment, and has also been applied to many other planetary environments (e.g., Jia et al., 2012; Ma et al., 2013; Dong et al., 2014; Liemohn et al., 2017). This geospace environment simulation has a very similar setup to that of Pulkkinen et al. (2013). While the individual points are analyzed as unique contributions, they are binned to produce the colored curves on the plot, demarcating contours of 50 points within a 5-by-5 nT grid. Note that, because Dst is near zero for quiet times and shifts to negative values during storm times, events are defined as values below (i.e., more negative than) a chosen threshold. As defined by Gonzalez et al. (1994), a typical designation for the Dst index measuring a storm situation is -30 nT or below for a weak storm and -50 nT or below for a moderate storm, so these two settings are used as the observational event thresholds for the ROC curves. These two thresholds are indicated in Figure 2a as horizontal dashed lines. [Figure 2 caption fragment: horizontal dashed lines at -30 and -50 nT, with events defined as the points below these lines; a purple dashed zero-intercept unity-slope line is also drawn for reference. (b) STONE (red) and ROC curves (blue for the -50 nT, orange for the -30 nT observed event threshold) calculated from the scatter plot; symbols are shown along all three curves at every 5 nT threshold increment; the diagonal dotted line with zero intercept and unity slope is shown for reference.] Predicting a geomagnetic activity index The ROC and STONE curves are calculated as follows and shown in Figure 2b. To create a ROC curve, the model threshold setting is initially set to +10 nT and then swept in 1 nT increments to -120 nT. The data threshold for events is held fixed, at -50 nT for the blue curve and -30 nT for the orange curve. To create the STONE curve (red line), this same model threshold variation is followed, but the data threshold is also swept from +10 to -120 nT.
Symbols are shown along each of the plots every 5 nT of threshold increment. Some features of Figure 2b should be noted. It is seen that the ROC curves monotonically increase from (0,0) to (1,1). The ROC curve with a -50 nT event threshold is well above the zero-intercept, unity-slope line (the diagonal purple dotted line on Figure 2b), indicating that the model is reasonably good at reproducing moderate and stronger storm events recorded by the real-time Dst index. The closest approach to the upper left corner occurs at a threshold of -37 nT for the -50 nT threshold ROC curve and -17 nT for the -30 nT ROC curve, which indicates that the model somewhat underpredicts the strength of such storms. The STONE curve lies both above and below these two ROC curves, depending on the threshold. The STONE curve is coincident with each ROC curve at the locations where the ROC curve model threshold setting is equal to the observational threshold setting (-30 nT for the orange curve, -50 nT for the blue curve). They cross elsewhere, too, such as in the low-threshold (i.e., a threshold of near and above zero) region in the upper right region of the plot. It is seen that the STONE curve is not monotonic but includes a local maximum and local minimum at the "high threshold" settings (minimum at -28 nT threshold and maximum at -52 nT threshold). The nonmonotonicity is because POD increases at these threshold values. An increase in POD is achieved by more points leaving the misses quadrant than leaving from the hits quadrant. This is better understood by considering the distribution of points beyond a few threshold choices. Figure 3 shows histograms of the points above a particular data or model threshold setting. In particular, three threshold settings are displayed --30 nT, -40 nT, and -50 nTshowing the points at "higher" (more negative) Dst values in both the data and model (left and right columns, respectively). For Figure 3a, the counts are for all points below some horizontal line of an event identification threshold setting of the observations. For Figure 3b, the counts are for all points to the left of some event identification threshold setting for the model values. The calculated skew for these distributions is listed in each panel. In Figure 3a, it is evident, both qualitatively from the histograms and quantitatively from the skew values, that the distribution of model output values is significantly changing across these three observational threshold settings. For the more negative threshold, there are far fewer model values between zero and -50 nT. That is, across these threshold settings, many of the points in the misses quadrant were converted into correct negatives. In Figure 3b, the three distributions have essentially the same shape, with a large negative skew. These distributions do not undergo the same systematic alteration in their shape the way that the distributions in Figure 3a did. Putting these two features together, it means that more misses were removed than hits, and so POD increased as the STONE threshold was swept to more negative values between -30 nT and -50 nT. This resulted in a nonmonotonic wiggle in the STONE curve at these thresholds. Predicting energetic electrons in near-Earth space Ganushkina et al. 
Predicting energetic electrons in near-Earth space

Ganushkina et al. (2019) compared real-time output from the inner magnetosphere particle transport and acceleration model (IMPTAM) with measurements from the magnetospheric electron detector (MAGED) on the Geostationary Operational Environmental Satellites (GOES) in geostationary orbit at 6.62 Earth radii geocentric distance over the American sector (Rowland & Weigel, 2012; Sillanpaa et al., 2017), specifically with data from GOES-13, -14, and -15. IMPTAM, initially developed by Ganushkina et al. (2001) and used regularly for investigating the physics of plasma sheet electron transport (e.g., Ganushkina et al., 2013, 2014), has been running in a real-time operational mode since February 2013, first in Europe and then at a mirror site at the University of Michigan. Ganushkina et al. (2015) made an initial comparison of these model output values against a few months of GOES data, while Ganushkina et al. (2019) provided a far more robust validation analysis of the model, covering over 18 months (September 20, 2013 through March 31, 2015). It is this second interval that will be used again for this study.

Figure 4. Scatter plots comparing GOES and IMPTAM 40 keV electron differential number fluxes (log base 10 of electrons cm^-2 s^-1 sr^-1 keV^-1) for (a) all MLTs and (b) the 03-09 MLT range, with color contours every 50 points per bin (10 bins per decade in both data and model), horizontal dashed lines at the ROC thresholds of 3x10^4 and 2x10^5, and a purple dashed zero-intercept, unity-slope reference line. The lower panels show STONE curves (red) and ROC curves (blue for 2x10^5, orange for 3x10^4) for (c) the full MLT comparison and (d) the 03-09 MLT range, with symbols at every factor-of-2 increase in threshold value and a diagonal dotted zero-intercept, unity-slope reference line.

Figures 4c and 4d show the ROC and STONE curves for these two data-model comparisons, the full set with values at all MLTs and the subset from 03 to 09 MLT, respectively. In both Figures 4c and 4d, the STONE curve again has a nonmonotonic shape at high threshold settings (above 4x10^5). Like the similar case for the Dst STONE curve in Figure 2b, this shows that, for these thresholds, more points are being removed from the misses quadrant than from the hits quadrant. Figure 4d has another unusual feature in the STONE curve, seen as a nonmonotonicity in the x-axis values. This comes from the POFD values increasing with increasing threshold (rather than decreasing, as they always do with a ROC curve). This occurs for thresholds between 1x10^4 and 4x10^4, just as the STONE curve crosses the orange ROC curve. Considering equation (2) above, the correct negatives in the denominator always increase with increasing threshold, as points convert to this quadrant from any of the other three quadrants. For POFD to increase, the false alarms had to increase faster than the correct negative point count. This is seen in Figure 4b, where the points have a horizontal peak (highlighted by the flat, elongated color contours). Many points are being converted from the hits quadrant into the false alarms quadrant and, for these threshold settings, this conversion to false alarms outpaces the conversion of points into the correct negatives quadrant. This results in a ripple in the STONE curve at these thresholds. In Figures 4c and 4d, the STONE curve is quite close to the two ROC curves, which are themselves very similar to each other.
This can be understood from the "flatness" of the cloud of points in the scatter plots in Figures 4a and 4b. The points are not well aligned with the zero-intercept, unity-slope line, revealing less than perfect agreement between the observations and model output. However, in terms of physics-based real-time modeling of near-Earth magnetospheric electron fluxes, this is actually quite good, arguably the best that is currently available. This means that all ROC curves will be close to each other, as any observational event identification threshold will have a relatively similar transfer of points between the quadrants. However, because the model is trying to exactly reproduce the observed flux values, the STONE curve can be calculated, and this new curve includes several nonmonotonicities. The wiggles and ripples in the STONE curve reveal thresholds where the distribution of points, in either the vertical or horizontal direction, is asymmetric, bi-modal, or otherwise non-Gaussian. The ROC curves cannot reveal this kind of information about the distribution of points in the scatter plot the way that the STONE curve can.

Discussion

The STONE curve introduced above is a new tool for assessing the ability of a model with a continuous-valued output to exactly match a continuous-valued data set. As illustrative example usages, it was applied to two recently published data-model comparisons, a prediction of the disturbance storm-time index Dst and a prediction of energetic electron fluxes in near-Earth space. The STONE curve is quite similar to the ROC curve. It is based on the same contingency table calculations of POD and POFD, plotting these two values against each other for a range of event threshold settings. Like the ROC curve, it starts at (1,1) for low threshold settings and moves to (0,0) for high threshold settings. Also like the ROC curve, being above the zero-intercept, unity-slope line indicates a prediction that is better than random chance. Curves are better when they are closer to the upper left corner in POFD-POD space, and a common choice for the best optimization point along a ROC or STONE curve is the one closest to this corner, as this point reveals the best model threshold setting for optimizing discrimination performance. That is, both curves reveal a possible best model threshold setting for event prediction, the ROC curve revealing the best setting for a specified observational event identification threshold and the STONE curve revealing the best setting against an identically defined observational event. Of course, this is "best" only if discrimination is what should be optimized for the particular application. A different threshold setting might be most favorable if other considerations outweigh discrimination, such as minimizing false alarms or maximizing a particular skill score. Another similarity is that the integral area under the ROC curve, AUC, is equally applicable to the STONE curve. AUC, a synthesis of the entire threshold-setting range into a single number, indicates the quality of the chosen model at predicting the events identified in the observational data (see the detailed explanation of AUC in Fawcett (2006) or Ekelund (2011)). Being an integrated quantity, AUC is a complementary metric to the "best model threshold setting for event prediction" mentioned in the preceding paragraph, because AUC uses information from all model threshold settings, even those with POFD-POD coordinates far from the "best setting" upper-left corner of the graph.
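As a concrete illustration of computing AUC for either curve, the following sketch integrates POD with respect to POFD using the trapezoid rule; this is an assumed implementation choice (the paper does not prescribe one), and for a nonmonotonic STONE curve the result is the net signed area under the traversed path.

import numpy as np

def area_under_curve(pofd, pod):
    """Trapezoid-rule area under a ROC or STONE curve.

    The points are taken in the order produced by the threshold sweep, so any
    backtracking of a nonmonotonic STONE curve contributes as signed area
    rather than being re-sorted away."""
    pofd = np.asarray(pofd, dtype=float)
    pod = np.asarray(pod, dtype=float)
    keep = ~(np.isnan(pofd) | np.isnan(pod))   # drop undefined endpoints
    # Integrate POD dPOFD along the curve; take the absolute value so that a
    # curve swept from (1,1) toward (0,0) still yields a positive area.
    return abs(np.trapz(pod[keep], pofd[keep]))

# Example: auc = area_under_curve(pofd, pod), using the arrays from the STONE sketch above.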
Comparing AUCs for several STONE curves (i.e., using different models against the same data set) will provide a quantitative assessment of which of the models has the best system-level predictive capability against that data set. It could be that the model with the highest AUC is not the model with a point along its STONE curve closest to (0,1) in POFD-POD space. Such a case reveals that the first model, with the higher AUC, has the best model physics for reproducing the data set as a whole, but that the second model is actually best at predicting events at a particular threshold setting. Because it is calculated the same way, AUC can be used to compare STONE curves just as it is for ROC curves. A key difference between the STONE and ROC curves is that the STONE curve can have nonmonotonicities. These features, which can be wiggles with respect to either POD or POFD, reveal features of the model prediction of events that are not easily extracted from a ROC curve. This makes the STONE curve somewhat like a fit performance metric, even though it is an event-detection metric that disregards the difference between the data-model pairs. The nonmonotonicities in the STONE curve reveal information about the distribution of points in the data-model comparison. Specifically, they show the existence of an asymmetry, perhaps a non-Gaussian point spread such as a skewed or bimodal distribution, for the pairs above that threshold setting. Combined with a histogram or even fit-performance data-model comparison formulas for this subset of either the data or model values, the nature of this distribution can be explored. Why not just start out by calculating fit performance metrics on these subsets? The answer is that the subset of interest would not have been known; the STONE curve reveals the thresholds where the distribution is changing or non-Gaussian. That is, it can be used to optimize the fit performance analysis by identifying the subset of the data or model that should be considered in more detail. Also, the STONE curve includes information not just within a subset of the data (discrimination) or a subset of the model (reliability), but about the entire data-model comparison set, because POD and POFD use all data-model pairs in the point counting in the quadrants. For one of the specific examples in the manuscript: continuous metrics will tell the user very little about the SWMF's ability to predict magnetic storms of -50 nT or less. A ROC curve is far more suited to this, and a STONE curve goes one step farther, revealing the ability of the model to predict Dst levels below any threshold (which could otherwise only be accomplished by a large family of ROC curves). No continuous metric does this type of assessment. If the detection of events is desired, then the STONE curve is an advantageous assessment tool in addition to standard continuous fit performance metrics. A useful follow-on study would be a detailed analysis relating the features of the STONE curve to the underlying distribution of points in the data-model scatter plot. That is, by assuming known two-dimensional distributions of points with several different shapes and parameter settings, the connection between the distribution and the resulting features in the STONE curve can be isolated. Such an in-depth assessment of the STONE curve is beyond this initial description and illustrative usage of the metric and is left as a future project.
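One way to act on the "which subset should be examined more closely" question is to flag the threshold ranges where the curve backtracks; the helper below is a hypothetical sketch (the function name and tolerance are assumptions) that returns the thresholds at which POD or POFD increases even though the sweep is moving toward rarer events.

import numpy as np

def nonmonotonic_thresholds(thresholds, pofd, pod, tol=1e-12):
    """Return the threshold values where a STONE curve backtracks, i.e., where
    POD or POFD increases from one sweep step to the next even though the
    sweep is moving toward rarer events."""
    pofd = np.asarray(pofd, dtype=float)
    pod = np.asarray(pod, dtype=float)
    d_pofd = np.diff(pofd)
    d_pod = np.diff(pod)
    backtrack = (d_pofd > tol) | (d_pod > tol)
    return np.asarray(thresholds)[1:][backtrack]

# Example: wiggle_thresholds = nonmonotonic_thresholds(thresholds, pofd, pod)
# These thresholds mark the subsets worth inspecting with histograms or skew values.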
A key feature of the STONE curve is that it reveals the threshold (or range of thresholds) for which the model does best at reproducing similarly defined events in the data. A single ROC curve cannot do this because it uses a fixed threshold for identifying events in the observations. When the data are continuous-valued and the model is seeking to reproduce these exact values, it is useful to examine the event detection capability of the model at the same threshold settings for data and model. A single ROC curve does not do this, except at one threshold setting. The STONE curve, therefore, is a better assessment tool for models that are trying to predict the exact values of a data set. The ROC curve is still a highly useful tool for event prediction, and this study does not seek to replace it with the STONE curve. Indeed, the ROC curve is optimal for categorical data sets where the observations have been pre-classified as events and non-events. In this case, the STONE curve cannot be used because the data and model are on different scales, the former being a binary yes-no designation and the latter being either a real number range or its own categorical designation. The ROC curve can handle this difference in units while the STONE curve cannot. The two example data-model comparisons to which the STONE curve was applied are both from space physics. The first was an evaluation of a physics-based model of geospace, running in real time, against the real-time version of the Dst index, a measure of geospace activity (see its comparison with other similar indices in Katus & Liemohn (2013)). Many models exist for the prediction of Dst (see the review by Liemohn, McCollough, et al., 2018), with some models doing exceptionally well at reproducing the observed time series. While the model chosen for this comparison is arguably the best physics-based model for reproducing Dst (see, for comparison, the solar cycle storm-interval Dst comparison of Liemohn & Jakowski (2008)), it is not the best model available for predicting this index. In fact, many empirical models are substantially better at capturing the storm intervals of Dst. The second example was a comparison of a physics-based model of energetic electron fluxes in the near-Earth magnetosphere, running in real time, with real-time observations from a geosynchronous spacecraft. Magnetospheric charged particle fluxes are notoriously difficult to reproduce with physics-based modeling approaches (see, e.g., Morley et al., 2018), and even empirical models reduce the problem by removing the fast temporal dynamics, averaging over a day (e.g., Li, 2004) or an hour (e.g., Boynton et al., 2019). That is, these two examples represent state-of-the-art physics-based approaches to space weather nowcasting, but they are not the best predictions of these two quantities across the field. It is worth stating here that many other metrics exist for evaluating a scatter plot of data-model values like that shown in Figure 1. No one metric equation or technique does everything; each was designed to assess only a specific aspect of the relationship. That is, neither the ROC curve nor the STONE curve should be used as the sole assessment tool for a model against a particular data set. In practice, many metrics, from both the continuous fit-performance grouping and the categorical event-detection grouping, should be applied to examine the quality of the model from a number of perspectives.
It should be mentioned that this is not the first application of sliding both the observational and model event identification thresholds. As one example, in their presentation and initial usage of the extreme dependency score (EDS), Stephenson et al. (2008) simultaneously moved both thresholds. Events become rarer with increasing threshold, and that study examined the relationship of EDS as a function of this rarity, moving both thresholds together, as is done here for the STONE curve. A final note is that this is not the first usage of the STONE curve. Both  and Liemohn, McCollough, et al. (2018) used STONE curves in plots labeled as ROC curves. It is clear that these panels are mislabeled because nonmonotonicities are seen in those lines.

Conclusions

A new data-model comparison assessment tool has been introduced, described, used, and interpreted: the sliding threshold of observations for numeric evaluation (STONE) curve. Based on the relative operating characteristic curve, the STONE curve is created by plotting POD against POFD for a wide range of threshold settings. The main difference from the ROC curve is that the STONE curve requires the data to be continuous-valued real numbers and the model to be attempting to reproduce these exact values. The threshold is moved not only for the model, as is done for the ROC curve, but also for the observational event identification, which is moved simultaneously with the model threshold setting. The STONE curve has many features in common with the ROC curve, with one large exception: it can have nonmonotonicities in both the POD and POFD values. For the ROC curve, points shift within the quadrants defining POD or within the quadrants defining POFD, but not between these two mutually exclusive regions. The ROC curve is, therefore, always monotonic, sweeping from (1,1) to (0,0) in POFD-POD space. For the STONE curve, the motion of the observational threshold moves points from the POD regions to the POFD regions, allowing for these nonmonotonic features in the STONE curve. These wiggles and ripples, however, reveal information about the underlying distribution of points in the data-model scatter plot. Specifically, if the distribution is shifted, asymmetric, or bi-modal, the STONE curve will have a nonmonotonicity. Further investigation of the distribution, through a histogram, skew calculation, or other metric assessment, can reveal the true nature of the data-model comparison at that threshold setting. It is hoped that the STONE curve becomes a useful data-model comparison tool. It has been used with two space weather applications in this study, but these are purely illustrative examples. A dozen studies using ROC curves across the Earth and space sciences were given in the Introduction above. Some of these studies were based on observations that were pre-classified as events or non-events, and so the ROC curve is the proper tool for assessing the model's ability to predict those events. Some of these studies, however, and others like them, are based on models trying to exactly predict the observed data values, in which case the STONE curve might be a useful assessment tool. For any continuous-valued model trying to reproduce the exact numbers of a continuous-valued data set, the STONE curve can be calculated, perhaps, as shown for the two examples here, revealing additional information about the data-model comparison beyond what can be obtained from the ROC curve alone.
The STONE curve is a general-purpose metric for use whenever a model is trying to exactly reproduce a continuous-valued data set. It can be used with archival observations as well as for the assessment of real-time nowcasting across the full breadth of science and engineering disciplines.
8,729
2020-01-30T00:00:00.000
[ "Physics" ]
Orthopyroxene rim growth during reaction of (Co, Ni, Mn, Zn)-doped forsterite and quartz: Experimental constraints on element distribution and grain boundary diffusion

Mantle metasomatism is an important process in subduction zones in which fluids from the dehydrating oceanic slab interact with the overlying upper mantle, resulting in a chemical alteration of the mantle. Consequently, this fluid-rock interaction may influence the mantle rock's physical properties such as its deformation behavior. In order to study element redistribution during mantle metasomatism in the laboratory, we used the simplified model reaction olivine + quartz = orthopyroxene, where olivine acts as a representative for the upper mantle and quartz as a proxy for the metasomatizing agent. We conducted piston-cylinder experiments at 1.5 GPa and 950 to 1400 °C, lasting between 48 and 288 h, on samples containing a mixture of quartz and one set of synthesized forsterite samples doped with either Co, Ni, Mn, or Zn. Additionally, we tested the influence of either nominally anhydrous or hydrous experimental conditions on the chemical distribution of the respective dopant element by using either crushable alumina or natural CaF2 as pressure medium. Results of the chemical analyses of the recovered samples show dopant-specific partitioning between doped forsterite and orthopyroxene independent of the confining pressure medium, except for the runs in which Ni-doped forsterite samples were used. The observed Ni- and Co-enrichment in forsterite samples may be used to identify mantle rocks that underwent mantle metasomatism in nature.

Introduction

The formation of pyroxenite veins in peridotites can be explained by the interaction between peridotites and an external silicon-enriched fluid or melt, e.g., in subduction zones where dehydration-derived fluids from the descending slab interact with the overlying olivine-rich mantle wedge (Bodinier et al. 1989; Borghini et al. 2013, 2016, 2020; Cvetković et al. 2007; Hidas et al. 2021; Wulff-Pedersen et al. 1999). The interaction between peridotites and fluids or melts is called mantle metasomatism. In order to better understand natural processes such as mantle metasomatism, which mostly occur in chemically complex systems, model reactions are needed that are on the one hand simple enough to assess the underlying chemical and physical mechanisms and on the other hand still comparable to natural systems. The model reaction of choice to investigate mantle metasomatism is orthopyroxene (Opx) rim growth between olivine (Ol), as a proxy for Earth's mantle, and quartz (Qz), representative of the Si-rich metasomatizing agent (e.g., Abart et al. 2004; Fisler et al. 1997; Gardés et al. 2011; Milke et al. 2001; Yund 1997, and references therein). Since Ol and Qz are not in chemical equilibrium, they will react to produce an orthopyroxene rim separating both reactants. The Opx-forming reaction between Ol and Qz is Ol + Qz → 2 Opx. At sufficiently high temperatures that allow the diffusive redistribution of chemical components, an Opx rim phase with a thickness of a few micrometers forms within common laboratory time scales of several hours to days. Previous studies experimentally demonstrated that enstatite grows between forsterite (Fo) and Qz in equal amounts in both directions from the initial Fo-Qz interface, indicating that rim growth is mostly controlled by MgO diffusion through the Opx layer (Abart et al. 2004; Milke et al. 2001, 2007).
Of course, processes such as element diffusion may be significantly faster in a fluid-rock system than in this simplified solid-solid reaction. Using natural olivines with Fo90.1, a previous experimental study demonstrated a particular chemical zoning pattern of Fe that evolved during Opx rim growth (Milke et al. 2011). Considering equilibrium partitioning at the Ol|Opx interface, En91.4 will replace a Fo90.1 olivine (Seckendorff and Neill 1993). To ensure overall mass balance, the excess X_Fe will be incorporated into the Opx that replaces Qz, resulting in En88.0 next to the Opx|Qz interface. Thus, after some time, Fe liberated at the Ol|Opx interface diffuses from the Fe-poorer region next to Ol towards a Fe-enriched area next to Qz. This apparent paradox can be resolved by considering the polycrystalline structure of the Opx rim. In the studied temperature range, grain boundary diffusion coefficients are about 3-5 orders of magnitude larger than those for volume diffusion (Joesten 1991). Therefore, obeying equilibrium partitioning at the Ol-replacement front and overall mass balance, Fe/Mg counter-diffusion through the rim phase takes place along grain boundaries that act as fast diffusion pathways. Because of quartz's inability to exchange cations, Fe is not buffered at the Opx|Qz interface and any excess Fe will be incorporated into the Opx replacing Qz (Milke et al. 2011). Since natural olivines can contain various other metal cations, such as Ni, in minor or trace amounts, Milke et al. (2011) also observed a second chemical zoning pattern, of Ni, that is different from the Fe-zoning pattern. The authors demonstrated an enrichment front of Ni in the relict olivine grains ahead of the Ol-replacement front, but no zoning pattern within the Opx rim was detectable because the Ni concentration was below the microprobe's detection limit. Using a 1-dimensional diffusion model to predict the chemical distribution of Ni during Opx rim growth, the authors state that Ni-enrichment in Ol ahead of the Ol-replacement front would only be possible assuming that Ni back-diffusion into Ol is faster than the ratio of Ni volume diffusion in Ol to Ni diffusion along Opx grain boundaries. As a result, the Opx replacing a Ni-enriched Ol would successively incorporate more Ni in order to obey equilibrium partitioning at this interface. From these results the authors conclude that equilibrium partitioning of Fe, Mg, and Ni only plays a role in a narrow zone right at the Ol|Opx interface, and that the chemical distribution of these elements throughout most of the rim phase is instead controlled by kinetic fractionation. Chemical data of olivines from cratonic areas show Ni concentrations in olivine of up to 3500 ppm, which is almost ten times higher than regular Ni concentrations in San Carlos olivines (Kelemen et al. 1998). The authors explain these elevated Ni concentrations as the result of metasomatism in the upper mantle. Strikingly, the experimental results of Milke et al. (2011) on Opx-rim growth between natural San Carlos olivine and Qz show the enrichment of Ni in the remnant olivine grains, which is well in accordance with the findings and interpretation of Kelemen et al. (1998) based on chemical data from natural olivines. The present study aims at experimentally investigating the distribution behavior of other metal cations such as Co, Mn, and Zn during Opx rim growth, as well as testing the proposed Ni-distribution pattern of Milke et al. (2011), by using synthetic forsterite doped with either Co, Ni, Mn, or Zn.
Although less relevant for mantle metasomatism due to slab dehydration, but important for processes such as grain boundary diffusion, we also studied whether differences in water fugacity influence the cations' distribution behavior.

Synthesis of doped forsterite and sample characterization

Doped forsterite samples (d-Fo) were synthesized using a flux crystal growth method (Bloise et al. 2009). Four different sets of doped forsterite, namely Co-, Ni-, Mn-, and Zn-doped forsterite samples, were produced for the experiments. Hand-selected d-Fo crystals were chemically homogenized prior to the experiments by putting each set in a Pt cup closed with a Pt lid and placing it in a furnace at around 1200 °C and ambient pressure for three days. Because volume diffusion coefficients for the selected dopant elements range from 10^-14.57 m^2/s to 10^-15.70 m^2/s in the temperature range of 1200 to 1300 °C (Petry et al. 2004; Spandler and O'Neill 2010), existing chemical gradients in the 100 to 500 μm large crystals will be either erased or reduced. The oxidizing conditions during this homogenization process were not controlled. Chemical analyses of the starting material and of the recovered samples after the piston-cylinder experiments were performed using two different electron probe micro-analyzers (EPMA): a JEOL JXA-8200 Superprobe and a JEOL Hyperprobe JXA-8500F. The acceleration voltages for the chemical analyses were set to 15 and 8 kV, respectively. Further details on the EPMA chemical analyses are given in Table 1. For the line measurements we used a step size of 1 μm for analyses with the JEOL JXA-8200 Superprobe and a step size of 0.5 μm for analyses conducted with the JEOL Hyperprobe JXA-8500F.

Experimental methods

The samples used for the piston-cylinder experiments consisted of crystals from one set of doped forsterite samples embedded in a quartz matrix in a weight ratio of 1:20. The grain size of the synthesized forsterite crystals selected for the experiments ranged from 100 to 500 μm, and the quartz grains had a uniform size of approx. 50 μm. It has been shown that Fe-Mg diffusion in olivine varies as a function of crystal orientation (Chakraborty 1997); however, investigation of the anisotropy of diffusion for each dopant element in forsterite was beyond the scope of the present study. Further, this study aims to qualitatively show potential differences in the chemical distribution features of the dopant elements, which are not affected by anisotropy. For the sake of simplicity, we therefore assumed isotropic lattice diffusion.
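As a rough order-of-magnitude check on the homogenization anneal described above, the characteristic diffusion distance sqrt(D*t) can be computed from the cited range of volume diffusion coefficients; the sketch below is illustrative only, using the endpoint values quoted in the text and a three-day anneal.

import math

def diffusion_length_um(log10_D_m2_per_s, days):
    """Characteristic diffusion distance sqrt(D*t), returned in micrometers."""
    D = 10.0 ** log10_D_m2_per_s          # volume diffusion coefficient, m^2/s
    t = days * 24.0 * 3600.0              # annealing duration in seconds
    return math.sqrt(D * t) * 1.0e6       # convert m to micrometers

# Endpoints of the diffusion-coefficient range quoted above, for a 3-day anneal.
for log10_D in (-14.57, -15.70):
    x = diffusion_length_um(log10_D, days=3.0)
    print(f"log10(D) = {log10_D}: sqrt(D*t) ~ {x:.0f} micrometers")
# The resulting lengths of roughly 7-26 micrometers, compared with 100-500 micrometer
# crystals, are consistent with gradients being erased in small grains and at least
# reduced in the larger ones.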
To qualitatively test the influence of water on the distribution behavior of the dopant elements during Opx rim growth, we used two different pressure media. It has been experimentally demonstrated that samples with higher water concentrations relative to the surrounding pressure medium lose water due to hydrogen diffusion from the sample towards the pressure medium (Truckenbrodt and Johannes 1999). In an analogous manner, the water content in the sample increases if there is a chemical gradient in water concentration from the pressure medium towards the sample. Therefore, in order to create hydrous experimental conditions during the piston-cylinder experiments, natural CaF2 was used as pressure medium (Fig. 1). Natural CaF2 always contains some amount of water, which will diffuse from the hydrous pressure medium into the sample during the experiment. Effectively dry conditions were established by using crushable alumina, which acts hygroscopically, as the inner pressure medium (Fig. 1; Gardés et al. 2011). For the runs in which we used CaF2 as pressure medium, the sample material was loaded into platinum capsules, which were then welded shut. The final sample size was approx. 1 cm in length and a few mm in width. For the runs in which we used crushable alumina, we mechanically sealed the capsules, which had an outer diameter of 3 mm, an inner diameter of approx. 2.5 mm, and a height of 2 mm. Regardless of the pressure medium used, neither the powders nor the prepared samples were dried prior to the experiment. For the runs in which we used hydrous CaF2 as pressure medium the sample name contains an 'h', and for the tests performed using crushable alumina the sample names contain an 'a'. Although we did not measure the water concentration initially present in the samples and were not able to monitor the evolution of the water concentration in the sample with time, we will refer to the samples that were surrounded by the hydrous CaF2 as "hydrous" samples and to those surrounded by crushable alumina as "anhydrous" samples. Every experiment was performed under constant pressure and temperature conditions. First, the sample was pressurized to 1.5 GPa. After leaving it under pressure for around 1 h, heating began, to either 950 °C for the hydrous runs or to varying temperatures (1100 to 1400 °C) for the nominally anhydrous runs (Table 2). The runs conducted under hydrous experimental conditions lasted 48 h, whereas the anhydrous runs were stopped after different durations due to unstable experimental conditions at elevated temperatures (Table 2). Based on the laboratory calibration using the quartz-coesite transition, a pressure correction of -15% was applied to account for frictional loss, and the pressure accuracy is estimated to be ± 0.05 GPa (Gardés et al. 2011). The temperature was measured using an S-type thermocouple, and the associated temperature error is assumed to be ± 20 °C (Gardés et al. 2011). In addition to the hydrous runs on Co- and Ni-doped forsterite samples that lasted 48 h, we conducted two experiments that were stopped after 12.5 and 168 h, respectively, in order to highlight the influence of the presence of Co and Ni on Opx-rim growth rates. After quenching and decompression, the recovered samples were embedded in epoxy, cut, polished, and carbon coated for the microstructural and chemical analyses. The Opx-rim widths were measured using ImageJ (Abràmoff et al. 2004). The number of measurements for each sample and the arithmetic mean of the measured rim widths are listed in Table 2. To investigate the chemical variations across the rims and within the relict doped forsterite, we conducted line measurements with a step size of either 0.5 μm or 1 μm and/or element distribution maps.

Starting material

Results of chemical analyses of the d-Fo crystals prior to annealing are given in Table 3 and their corresponding mineral formulae are listed in Table 4. The highest dopant concentration was obtained for Co-doped olivine, with ~20 wt% CoO. Ni-doped olivine contained ~10 wt% NiO, and Zn- and Mn-doped olivine contained only ~1.5 and ~1.2 wt% ZnO and MnO, respectively.
Although natural olivines do not show such high concentrations of either dopant cation, we decided to synthesize doped forsterite samples with high dopant concentrations (> 400 ppm), allowing for the detection of any chemical gradients within the Opx rim and the relict doped forsterite crystals.

Chemical zoning

The microstructures of the recovered samples reveal rims that formed between doped forsterite and the quartz matrix (Fig. 2). The rim phase has been identified as Opx using EPMA chemical analysis. In back-scattered electron (BSE) mode, the Opx rims show varying brightness contrasts implying variations in chemical composition (Fig. 2). Line measurements across the Opx rim that developed around Co-doped forsterite reveal that the darker inner part of the Opx rim adjacent to Co-Fo is depleted in Co relative to the brighter outer part next to Qz (red data points in Fig. 3a). Consequently, to ensure Opx stoichiometry, the Mg concentration shows the exact opposite trend (black data points in Fig. 3a). The same Co- and Mg-concentration patterns are demonstrated in the Co- and Mg-distribution maps of the nominally anhydrous sample Co-a-257 (Fig. 3b). Further, both data sets, the line measurement (0.5 μm step size) and the element distribution maps, exhibit Co-enrichment in Co-Fo ahead of the Co-Fo-replacement front (Fig. 3). Another observation is the "bumpy" character of the Co concentration across the Opx rim, highlighted by the line measurement. Peaks in Co concentration appear to correlate with the brighter Opx-grain rims, including the Opx-grain boundaries, and the valleys with the darker grain interiors (Fig. 3). The presence of numerous spheres, appearing bright in BSE mode, in some samples hampers the investigation of existing chemical zoning patterns (Fig. 4a). Because most of the spheres are < 2 μm, unambiguous quantitative chemical analyses were not possible. Element distribution maps of Ni-h-48 show that these spheres are rich in Ni and contain S (Fig. 4b). Element distribution maps reveal that S contamination affected every set of doped forsterite samples, and it is likely that the S contamination is the result of the annealing in Pt cups prior to the experiments. Microstructural analysis of the Ni-doped forsterite sample that lasted 12.5 h shows no precipitates within the rim but some at the Opx|Qz interface (Fig. 4a). The Opx rims in the sample on Ni-doped forsterite that was quenched after 168 h reveal rim parts exhibiting very few spheres, mostly located at the Ni-Fo|Opx interface, whereas other parts are highly decorated (Fig. 4a). Strikingly, the rim widths of these different regions are not significantly different. The BSE image of the sample Ni-h-48 reveals no apparent gradient in Ni and Mg concentration across the bulk rim but rather on the grain-size scale (Fig. 4a). Ni-h-48 exhibits Opx grains with brighter grain interiors relative to their surroundings, contrary to Co-h-48 (Figs. 4a and 3a). In sample Ni-a-120, which experienced nominally anhydrous experimental conditions, Ni-rich spheres are absent (Figs. 2d and 5a). A Ni-distribution map and a line measurement across the remnant Ni-Fo and the adjacent Opx rim with a step size of 1 μm demonstrate a Ni-enrichment front in Ni-Fo similar to the results of Milke et al. (2011), and a gradient in Ni concentration across the Opx rim with a decrease in Ni from the Ni-Fo|Opx interface towards the opposite Opx|Qz interface (Fig. 5). As for Ni-h-48, the Ni map presented in Fig. 5a demonstrates Ni-richer Opx interiors relative to Ni-poorer Opx rims.
A line measurement with a step size of 0.5 μm shows that the Opx rim that formed around Mn-doped forsterite under hydrous conditions does not exhibit a striking gradient in Mn concentration (Fig. 6a). The outer part of the Opx rim next to the Opx|Qz interface appears to be slightly enriched in Mn. The same pattern is shown in the Mn-distribution map of the nominally anhydrous counterpart Mn-a-18 (Fig. 6c). A line measurement with a step size of 0.5 μm across the Opx rim that formed around Zn-doped forsterite in Zn-h-48 shows a clear enrichment in Zn in the Opx part next to the Opx|Qz interface (Fig. 6b, d). The same is true for Zn-a-288, which experienced nominally anhydrous experimental conditions. From the line measurement and BSE images of the Opx rims around Zn-doped forsterite, it appears that Opx-grain rims are enriched in Zn relative to Opx interiors (Figs. 2g, h and 6b). Both dopant elements, Mn and Zn, show the same distribution behavior, with a Mn- or Zn-enriched Opx part next to the Opx|Qz interface relative to the Opx part adjacent to the Mn-Fo or Zn-Fo replacement front.

Rim and interface structure

The rim widths X of all samples are listed in Table 2. After 48 h at 950 °C and 1.5 GPa, the rims that developed around Co-doped forsterite are, at (14.4 ± 0.7) μm, the widest, and the Opx rims that formed around Mn-doped forsterite are, at (9.2 ± 0.6) μm, the thinnest. Because temperature and time were different for each sample, a direct comparison between the rim widths of the nominally anhydrous runs is hindered (Table 2). It is challenging to determine the Opx grain diameters using the BSE images, because in this setting and at this magnification we cannot unequivocally distinguish between grain rims and grain boundaries. Further, individual grains are difficult to recognize in the Opx rims of the nominally anhydrous samples (Fig. 2). Therefore, we refrain from providing quantitative data on grain size, but compare the differences in appearance and the ratio between grain size and rim width between the hydrous and nominally anhydrous samples. Using the differences in brightness contrast in BSE mode to estimate Opx grain diameters, it seems that for the hydrous samples the ratio between Opx grain size and overall rim thickness is much smaller than the ratio between average Opx grain size and rim width of their anhydrous counterparts (Fig. 2). In Opx rims that formed around Co-, Ni-, and Zn-doped forsterite under nominally anhydrous conditions, individual Opx grains are often as large as the rim itself (Fig. 2b, d, f, h). Rims that grew in the Mn-a-18 sample show slightly smaller Opx grains relative to the other rims grown under nominally anhydrous conditions (Fig. 2b, d, f, h). Both interfaces, the d-Fo|Opx and the Opx|Qz interface, appear much smoother in samples that experienced nominally anhydrous experimental conditions than in their hydrous counterparts (Fig. 2). In the hydrous samples, the Opx|Qz interfaces are more uneven than the opposite d-Fo|Opx interfaces (Fig. 2a, c, e, g).

Rim growth rates

The rate constant k is the slope of the best-fit line through the data points in a squared rim width (X^2) vs. time (t) plot (Fig. 7). Within the error limits, there is no difference in rate constant k, and thus in Opx rim growth rate, between the experiments on natural olivines from Yund (1997) conducted at 1.4 GPa and 1000 °C and our data on Co-doped forsterite samples (Fig. 7).
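The rate constant determination just described amounts to a linear regression of squared rim width against time; a minimal sketch is given below, with the rim-width and time values being hypothetical placeholders rather than the measured data of this study.

import numpy as np

def rim_growth_rate_constant(time_h, rim_width_um):
    """Parabolic rate constant k (um^2/h) from a linear fit of X^2 versus t,
    as used for the squared rim width vs. time plot described above."""
    t = np.asarray(time_h, dtype=float)
    x_squared = np.asarray(rim_width_um, dtype=float) ** 2
    k, intercept = np.polyfit(t, x_squared, 1)   # slope of the best-fit line is k
    return k, intercept

# Hypothetical example values (not the measured data of this study):
time_h = [12.5, 48.0, 168.0]
rim_width_um = [4.5, 8.8, 16.4]
k, _ = rim_growth_rate_constant(time_h, rim_width_um)
print(f"rate constant k ~ {k:.2f} um^2/h")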
After the short run duration of 12.5 h, the squared rim width obtained from our experiment on Ni-doped forsterite does not deviate significantly from the data on Co-doped forsterite and from the data of Yund (1997) at 1.4 GPa and 1000 °C, but it deviates significantly after 48 and 168 h. With a rim growth rate of around (1.6 ± 0.2) μm^2/h, Opx rims grow much more slowly around Ni-doped forsterite than the rims that formed around Co-doped forsterite ((6.7 ± 0.4) μm^2/h) and natural San Carlos olivines ((7.2 ± 0.4) μm^2/h) under similar experimental conditions, i.e., 1.4 GPa and 1000 °C. In fact, the rim growth rate of (1.4 ± 0.2) μm^2/h for Opx rim growth around Ni-doped forsterite matches much better the data of Yund (1997) obtained from experiments on San Carlos olivines performed at 0.7 GPa and 950 °C ((1.4 ± 0.4) μm^2/h; Fig. 7).

Chemical distribution patterns

Manganese and zinc display very similar distribution patterns, with a Mn- or Zn-enriched part of the outer Opx rim next to Qz (Fig. 6), resembling the fractionation behavior previously described for Fe (Milke et al. 2011). Thus, analogous to the distribution behavior of Fe, equilibrium partitioning between (Mn, Zn)-Fo and (Mn, Zn)-Opx results in the release of excess Mn or Zn. Any excess Mn or Zn diffuses along Opx grain boundaries towards the opposite Opx|Qz interface, where the dopants are not buffered and will be incorporated in the Opx replacing Qz. Chemical distribution patterns similar to those that form in Opx around Mn- and Zn-doped forsterite, as well as around natural Fe-bearing olivines (Milke et al. 2011), were previously reported in rim growth experiments where a titanite rim formed around Nb-bearing rutile grains in a wollastonite matrix under hydrous conditions (Lucassen et al. 2012). It is expected that this distribution behavior and the underlying mechanisms during reaction rim growth are a general phenomenon in exchange reactions where the reactants chemically communicate via the grain-boundary network of the product rim. The rim that developed around Ni-doped forsterite in the Ni-h-48 sample does not show differences in chemical composition between the Opx part that replaced Ni-Fo and the Opx that consumed Qz (Fig. 2c, d), which is probably due to the precipitation of Ni-rich spheres within the Opx rim. Their occurrence indicates either that Ni saturation in Opx was surpassed under hydrous experimental conditions or that the presence of minor S, due to contamination of the Ni-Fo surface, stabilizes Ni-rich spheres. A further investigation of the occurrence and distribution of the S-rich spheres was beyond the scope of the present study. Ni distribution patterns in the Ni-a-120 sample demonstrate clear Ni-enrichment fronts in the relict Ni-Fo grains ahead of the Ni-Fo-replacement front. This enrichment front indicates that the released Ni was favorably incorporated into Ni-Fo, as already observed and described by Milke et al. (2011). Because we used a starting material with a much higher initial Ni concentration than Milke et al. (2011), we were able to measure the chemical zoning within the Opx rim, showing a Ni-richer Opx part next to the Ol|Opx interface relative to the opposite Opx|Qz interface (Fig. 5). Thus, the observed chemical distribution pattern, with a Ni-enrichment front in Ni-Fo and a decreasing Ni concentration from the Ni-Fo-replacement front towards the opposite interface, favors the scenario modeled by Milke et al.
(2011), assuming a partition coefficient of around 4.1 (Podvin 1988) and a ratio of volume to grain boundary diffusion in Opx of approx. 1 (Fig. 5 in Milke et al. 2011). Strikingly, Co as dopant element shows a distribution behavior different from those of Mn, Zn, Fe, and Ni. Co exhibits a combination of the Mn-Zn-Fe and the Ni distribution behaviors. It appears that some of the excess Co released during Co-Fo replacement by Co-Opx has been incorporated into Co-Fo to produce an enrichment front in Co-Fo ahead of the replacement front, as observed for Ni. Like Mn, Zn, and Fe, the remaining Co diffuses along Opx grain boundaries towards the opposite interface, where it is not buffered. Strikingly, differences in water fugacity do not influence the chemical zoning patterns, except for Ni, which shows the precipitation of Ni-rich spheres under hydrous conditions. Thus, differences in element distribution within the rim phase and the relict d-Fo must be dopant specific.

Figure 5. Ni-distribution map (same general colour coding as in Fig. 3) and line scan (1 μm step size) of sample Ni-a-120. Both data sets demonstrate Ni-enrichment in Ni-Fo ahead of the replacement front and a Ni concentration in the Opx rim that decreases from the Ni-Fo|Opx interface towards the opposite Opx|Qz interface; grain boundaries appear depleted in Ni relative to the grain interiors. Symbol sizes exceed errors.

In order to explain these different chemical zoning patterns, we need to look more closely at the dopant-specific partitioning behavior between doped forsterite and Opx.

Element partitioning at the Fo|Opx interface

In an exchange reaction, the preference of an element to be incorporated in either phase A or phase B is expressed by the partition or distribution coefficient K_D. The large partition coefficient of ~4.1 for Ni between olivine and orthopyroxene (K_D,Ni^(Ol/Opx)) indicates that Ni preferentially concentrates in olivine (Gregoire et al. 2000), matching well with our observation that, once released at the Ni-Fo-replacement front, Ni will be incorporated into Ni-Fo (Fig. 5). The partition coefficients for Zn and Fe, 1.5 and 1.2, are very close to each other (Dupuy et al. 1987; Seckendorff and Neill 1993), and both dopant elements show a very similar distribution behavior during Opx rim growth (Fig. 6b). In both cases, the excess Fe or Zn diffuses along Opx grain boundaries towards the opposite interface, forming an Opx enriched in Zn or Fe relative to the equilibrium Opx replacing Ol or Zn-Fo. Mn has, at ~1.1, a lower K_D,Mn^(Ol/Opx) than Fe and Zn (Nishizawa and Matsui 1972); thus, the Mn gradient within the Opx rim is less prominent than for Zn and Fe (Fig. 6). Strikingly, Co has a K_D,Co^(Ol/Opx) of around 2.6 (Gregoire et al. 2000) and thus lies in between the values for Ni and for Fe or Zn partitioning between Ol and Opx. This intermediate partition coefficient for Co between Ol and Opx could thus explain the mixed distribution behavior we observe in our experiments on Co-doped forsterite.

Figure 6. Results of chemical analyses and element-distribution maps (same general colour coding as in Fig. 3) of the samples in which Mn- and Zn-doped olivines were used. (a) The outer part of the Opx rim appears slightly enriched in Mn relative to the inner part next to Mn-Fo. (c) The same pattern, in this case much clearer, developed in the Opx rim under nominally anhydrous conditions. (b, d) Zn-h-48 and Zn-a-288 exhibit Zn-enrichment in the outer Opx part next to Qz. Symbol sizes exceed errors.
Due to the higher preference of Co to be incorporated in Ol relative to Opx, part of the Co released at the Co-Fo-replacement front will be incorporated into Co-Fo, and the other part diffuses towards the Opx|Qz interface. Further, it appears that Co is mostly located in the Opx grain rims and the adjacent Opx grain boundaries. This implies that Co preferentially concentrates in the Opx grain boundaries and starts to diffuse into the Opx grain interiors to equilibrate the gradient in Co concentration between Opx grain interiors and Opx grain rims (Fig. 3). Milke et al. (2011) stated that the overall chemical distribution of Fe, Ni, and Mg is mostly controlled by kinetic fractionation throughout the Opx rim rather than by equilibrium partitioning at the Ol|Opx interface. Although this is true for Fe, Mg, and Zn, the influence of kinetic fractionation on element distribution decreases with an increasing partition coefficient between Ol and Opx. Due to the large K_D,Ni^(Ol/Opx), the enrichment of Ni in the relict Ni-Fo causes the Opx replacing Ni-Fo to successively incorporate more Ni, and the evolving chemical zoning pattern is thus solely controlled by equilibrium partitioning.

Rim structure

The observed larger Opx grain sizes in Co-a-257, Ni-a-120, and Zn-a-288 relative to Mn-a-18 are most probably the result of grain coarsening with time (Yund 1997). The experiment on Mn-doped forsterite lasted only 18 h, compared to the other samples that remained at their respective maximum temperature for 120, 257, and 288 h. It has been experimentally shown that the development of an uneven interface, especially the Opx|Qz interface, correlates well with the water fugacity during the experiment (Milke et al. 2009), with nominally dry experimental conditions leading to smooth interfaces (Fisler et al. 1997; Gardés et al. 2011), whereas H2O concentrations as low as 20 μg/g already lead to an uneven Opx|Qz interface (Milke et al. 2013). Since the Opx|Qz interfaces in our nominally anhydrous samples are very smooth relative to the Opx|Qz interfaces of the samples that experienced hydrous conditions (Fig. 2), we assume that the water fugacity was significantly lower in the nominally anhydrous runs than in the runs in which hydrous CaF2 was used as confining medium.

Rim growth rates

In silicate reactions, water plays an important role, as it acts as a catalyst and transport medium supporting reaction progress. It has been experimentally demonstrated that the presence of water has a strong effect on grain boundary diffusivity, governing the rate at which rims grow (Gardés et al. 2011; Götze et al. 2010; Joachim et al. 2012; Milke et al. 2009; Yund 1997; and references therein). The critical water concentration on which the kinetic property, e.g., the diffusion coefficient or rate constant, depends can be as low as 20 μg/g H2O (Milke et al. 2013). When displayed in an Arrhenius plot, the compilation of rim growth data demonstrates two distinct trends (Fig. 8). The sole difference between the two trends is the water concentration during the experiment, and thus the trends are denoted as 'wet' when the experiments were performed under hydrous conditions or 'dry' when initial water contents were kept at a minimum (Gardés et al. 2012).

Figure 7. (b) Below run durations of 48 h, rim growth rates are, within error limits, very similar regardless of the dopant element, but they deviate strongly for run durations > 12.5 h for Ni-doped Fo (cf. Yund 1997).
Despite some scatter, rim growth data from previous studies do not plot in between these two trends or regimes, indicating that the transition from the 'dry' to the 'wet' regime must occur over a narrow range in water concentration, rendering their boundary sharp (Dohmen and Milke 2010). As expected, the rate constants corresponding to the hydrous samples that stayed at 950 °C for 48 h plot well within the 'wet' regime in the Arrhenius plot (filled diamonds in Fig. 8). Strikingly, the data obtained from the nominally anhydrous samples fall uniformly within neither the 'dry' nor the 'wet' regime (hollow diamonds in Fig. 8). The rate constants of Mn-a-18 and Zn-a-288 plot either within or close to the 'wet' regime, the rate constant corresponding to Co-a-257 lies in between both regimes, and the data corresponding to the Ni-a sample clearly plot within the 'dry' regime (Fig. 8). The observed scatter of the data corresponding to samples that experienced nominally anhydrous experimental conditions probably reflects changes in experimental conditions, from hydrous at first to nominally anhydrous due to H2O loss during the experiment. Truckenbrodt and Johannes (1999) and Patino Douce and Beard (1994) have experimentally shown that, above 1000 °C, samples lose a significant amount of H2O, up to 80% after 6 d in the piston-cylinder apparatus. Since every sample that experienced nominally anhydrous conditions remained well above 1000 °C for several days (5-12 d), we expect that most of the H2O initially present due to adhesion from air humidity was lost during the runs, except probably for Mn-a-18, which stayed at 1100 °C for only 18 h. The fact that samples lose most of their H2O during piston-cylinder runs conducted above 1000 °C agrees with our microstructural observations on the Opx interface structures, which show smoother Opx|Qz interfaces in the samples run with crushable alumina relative to the rim structures in samples in which CaF2 was used as confining medium (Fig. 2). Another argument for a successive loss of water during the piston-cylinder experiments is that the samples run at the lowest temperature of 1100 °C (Mn-a-18 and Zn-a-288) plot much closer to the 'wet' regime, because outward diffusion of water from the sample interior towards the confining medium depends strongly on temperature, with a positive correlation between temperature and diffusion rate. Therefore, water loss was less extensive in samples Mn-a-18 and Zn-a-288 relative to Co-a-257, performed at 1200 °C and held for 10.7 d, and especially Ni-a-120, which experienced 1400 °C for 5 d. The observed deviation of the rate constants of the nominally anhydrous samples therefore reflects the rates at which the samples dried out due to the loss of water to the confining medium.
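For readers who want to place their own rate constants on a plot like Figure 8, the sketch below converts a rate constant and run temperature into log10(k) versus 10^4/T coordinates; the sample values are hypothetical placeholders, the conversion of k to m^2/s is an assumption about the plot's axis units, and no published 'wet'/'dry' regime boundaries are encoded.

import math

def arrhenius_coordinates(k_um2_per_h, temperature_c):
    """Convert a rate constant (um^2/h) and run temperature (degrees C) into the
    (10^4/T, log10 k) coordinates of a log(k) vs. 10^4/T Arrhenius plot, with k
    expressed in m^2/s for the logarithm (an assumed unit choice)."""
    k_m2_per_s = k_um2_per_h * 1.0e-12 / 3600.0    # um^2/h -> m^2/s
    inverse_t = 1.0e4 / (temperature_c + 273.15)   # 10^4 / T(K)
    return inverse_t, math.log10(k_m2_per_s)

# Hypothetical example: a hydrous run at 950 degrees C with k = 7 um^2/h.
x, y = arrhenius_coordinates(7.0, 950.0)
print(f"10^4/T = {x:.2f} K^-1, log10(k / (m^2 s^-1)) = {y:.2f}")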
Fig. 8 Compilation of previous Opx rim growth data and our experimental results displayed in a log(k) vs. 10^4/T Arrhenius plot. Rim growth rates of the hydrous samples plot well within the 'wet' regime, whereas the data corresponding to the nominally anhydrous samples scatter between the 'dry' and the 'wet' regime. The deviation of the rim growth rates obtained from the nominally anhydrous data is marked by the dashed ellipse. Modified after Dohmen and Milke (2010); literature data from Fisler et al. (1997), Yund (1997), Milke et al. (2001, 2007, 2009), and Gardés et al. (2011), together with the samples Co-h-48, Co-a-257, Ni-h-48, Ni-a-120, Mn-h-48, and Mn-a-18 from this study.

Conclusion and implications

The most prominent model reaction for metasomatism in Earth's upper mantle is orthopyroxene rim growth between olivine and quartz. Natural olivine crystals can contain various amounts of minor and trace elements. During the interaction between the olivine-rich mantle and a SiO2-rich fluid or melt, these elements will distribute between the solid and the fluid phase. Consequently, the mantle olivine as well as the product of this fluid-rock interaction, orthopyroxene, may show particular chemical signatures. Our experiments on synthetic forsterite doped with the selected metal cations Co, Ni, Mn, and Zn reveal that the chemical zoning patterns are dopant specific and depend on the respective partitioning behavior. Water fugacity does not affect the partitioning behavior and the resulting chemical zoning pattern in the rim phase. Because Co and Ni are favorably incorporated into the doped Fo during Opx rim growth, enrichment in Co and/or Ni in natural olivines may be used to highlight metasomatism in Earth's upper mantle. Although less relevant for mantle metasomatism in nature, which generally takes place in water-present settings and with Ni concentrations in olivine in the ppm range, we observe that in the presence of Ni, Opx rim growth rates are slower relative to Opx rim growth around natural San Carlos olivines. However, more experiments are needed to test the influence of the presence of Ni on Opx grain boundary diffusion under hydrous and especially under nominally anhydrous conditions.
8,125.6
2022-02-25T00:00:00.000
[ "Geology" ]
SINGLE PHOTON IONISATION MASS SPECTROMETRY USING LASER-GENERATED VACUUM ULTRAVIOLET PHOTONS

This paper provides an overview of the method of single photon ionisation mass spectrometry. A review of the theory of frequency upconversion using third-order 4-wave sum mixing in isotropic media and experimental results of third harmonic generation (THG) using a frequency-tripled 355 nm Nd:YAG pump source are presented. Vacuum ultraviolet (VUV) photons of wavelength 118 nm are detected in an acetone ionisation chamber. The emphasis of this paper is on the practical aspects of generating and detecting the VUV photons and using them for single photon ionisation (SPI) in the ion source of a mass spectrometer. Optimum gas pressures for THG in Xe and Xe/Ar mixtures are established. For a pump beam of well-defined mode structure the optimum gas pressures are in excellent agreement with theory. The major loss mechanism is attributed to re-absorption of VUV by the tripling gas. SPI mass spectra of hexane and the biomolecule valyl-valine are presented, illustrating the power of the technique.

INTRODUCTION

Single photon ionisation mass spectrometry is a technique which promises to provide a highly sensitive, non-selective analytical method for a wide range of molecular systems. There are an increasing number of reports of the application of single photon (VUV) ionisation mass spectrometry, which indicate that the technique provides a greater molecular ion intensity and less fragmentation than both electron impact and many multiphoton ionisation schemes [1,2,3]. The generation of coherent VUV light is described in a variety of accounts of frequency up-conversion techniques scattered throughout the physics literature. Reviews on this subject have been given by Delone et al. [4] and by L'Huillier et al. [5]. This paper seeks to bring together the relevant theory and a systematic account of experiments to generate a practical source of VUV for integration into a time-of-flight mass spectrometer for ionisation of sputtered organic species.

Theory

Single photon ionisation (SPI) removes the nonlinearity present in multiphoton ionisation (MPI) schemes. The exploitation of nonlinear optics, however, remains the most convenient method for conversion of existing primary laser lines to wavelengths in the VUV, which are suitable for SPI of a wide range of molecular species.
Nonlinear optical effects are so called because the response of the media giving rise to them depends on the second- and higher-order powers of the optical fields acting on them. They occur due to energy transfer arising from wave interactions between propagating waves via coupling coefficients known as optical susceptibilities. An incident electric field E_i induces a polarisation P in a medium such that

P = χ^(1) E_i + χ^(2) E_i^2 + χ^(3) E_i^3 + ...   (1)

where χ^(n) is the nth-order susceptibility of the medium. If the magnitude of the incident field is sufficiently large, the induced polarisations will generate observable optical fields at harmonic frequencies of the incident field. Harmonic generation, whereby N photons of frequency ω are converted to a photon of frequency Nω by a nonlinear medium M, is described by

N γ(ω) + M → γ'(Nω) + M   (2)

In centrosymmetric media the lowest-order term of eq. (1) contributing to frequency mixing is χ^(3), and the dominant harmonic generation process is normally third order in the number of incident photons. The power I(3ω) generated at the third harmonic frequency is given by [6]

I(3ω) ∝ N^2 |χ^(3)(ω)|^2 I(ω)^3 F(L, b, Δk)   (3)

F is a geometrical factor which reflects the net phase relationship between the microscopic individual third-harmonic wavelets generated in different regions of the nonlinear medium of length L and number density N.

Phase Matching

If harmonic generation is to be efficient, F must be controlled such that complete destructive interference does not occur over the wave interaction length. In the case of incident plane waves of wave vector k(ω)e_1, the fundamental and third harmonic wavelets are collinear and their phase velocities are matched. In this case phase matching is achieved when the wave vector mismatch Δk = k(3ω) - 3k(ω) is zero, as shown in Figure 1(a), where the magnitude of the wave vector k(ω) is 2πn(ω)/λ0, n(ω) is the refractive index of the tripling medium at frequency ω, and λ0 is the vacuum wavelength of the pump beam. This condition is met by ensuring that the refractive indices of the tripling medium at frequencies ω and 3ω are equal. To obtain sufficient photon densities it is often necessary to focus the pump beam. The degree of focusing is reflected in the confocal parameter b = k w0^2, where w0 is the minimum beam waist at the focus.

Figure 1. Wave-vector matching under (a) plane-wave and (b) focused geometries [7].

If the fundamental beam is focused, then a phase slip of π will occur between the pump beam and the third harmonic beam at the focus location. In addition, the three unit vectors e_i will have slightly different directions (Fig. 1(b)). Conservation of wave vector momentum requires that the condition k(3ω)e_4 = k(ω)e_1 + k(ω)e_2 + k(ω)e_3 is met, and conservation of energy requires that ω_3 = 3ω_0. As a consequence, k(3ω) < 3k(ω) and phase matching is achieved when Δk is negative. Satisfying the condition Δk < 0 while conserving energy requires that n(3ω) < n(ω), which is termed negative or anomalous dispersion. The requirement of negative dispersion limits the choice of media in which a given frequency may be tripled. In general, localised regions of negative dispersion are to be found on the high-frequency side of a real one-photon absorption. In a single-component gaseous medium the phase mismatch is proportional to the number density of the nonlinear medium and is optimised by varying the pressure of the gas. In this case the phase matching factor F is dependent on N.
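To connect the plane-wave phase-matching condition to numbers, the short sketch below evaluates Δk = (6π/λ0)[n(3ω) - n(ω)] for a 355 nm pump; the refractive-index values are made-up placeholders rather than measured Xe dispersion data, so the output only illustrates the sign convention.

import math

def wave_vector_mismatch(n_fundamental, n_third_harmonic, pump_wavelength_m):
    """Plane-wave phase mismatch dk = k(3w) - 3k(w) = (6*pi/lambda0) * (n(3w) - n(w)).

    A negative value (anomalous dispersion, n(3w) < n(w)) is required for
    phase matching with a focused Gaussian pump beam."""
    return 6.0 * math.pi * (n_third_harmonic - n_fundamental) / pump_wavelength_m

# Purely illustrative refractive indices (not measured Xe values) at 355 nm and 118 nm.
dk = wave_vector_mismatch(n_fundamental=1.000700,
                          n_third_harmonic=1.000695,
                          pump_wavelength_m=355e-9)
print(f"dk = {dk:.1f} m^-1 ({'negatively' if dk < 0 else 'positively'} disperse)")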
(3) it can be seen that the maximum third harmonic power is obtained by maximising N^2 F. For simplicity a second power coefficient G is defined as G = (bΔk)^2 F. In a single component medium the third harmonic power is maximised when G is maximised. If the number density of the negatively disperse component is kept constant, phase matching can be optimised by adding a second, positively disperse medium or by tuning through the dispersion curve. In a medium consisting of more than one component, only one of which necessarily contributes to the nonlinearity, each will contribute to bΔk, which is independent of the total number density but dependent on the ratio of components. The number density of the nonlinear medium N and the phase matching factor for the mixture F are independent in this case, and the conversion efficiency maximises where F maximises. Effects of Spatial Modes A fundamental laser mode TEMpl will generate a third harmonic beam consisting of a superposition of spatial modes with angular mode number 3l and radial mode number in the range 0 to 3p (except 3p-1, which is zero for all bΔk) [8]. The phase matching coefficient F represents an integration of the amplitudes F(p) of the different radial modes in the third harmonic beam (where p = 0 to p1+p2+p3),

F = Σ_{p=0}^{p1+p2+p3} |F(p)|^2   (4)

A TEM00 fundamental (p1+p2+p3 = 0, l = 0) gives a phase matching coefficient F(0) = πbΔk exp(0.5 bΔk). Applying eq. (4) gives the full expression for F in a nonlinear medium of length L in the limit of tight focusing (b << L),

F(L, b, Δk) = (πbΔk)^2 exp(bΔk) for Δk < 0, and 0 for Δk >= 0   (5)

For a TEM00 pump beam the third harmonic power displays a single-peaked dependence on bΔk (Fig. 2). FIGURE 2 Power coefficients as a function of bΔk for the TEM00 mode in a single component medium (broken line) and multi-component medium (solid line). In a single component medium the THG conversion efficiency is optimised (maximum G) at bΔk = -4. In a multi-component medium the tripling efficiency is optimised (maximum F) at bΔk = -2. Fundamental beams with non-zero radial modes (p > 0) generate harmonics with a mixture of radial modes, the power coefficients F and G having a multi-peaked dependence on bΔk. The TEM10 mode generates harmonic fields displaying the mode structures TEM00, TEM10 and TEM30. The mode coefficients F(p) can be expressed in terms of a = 0.5 bΔk [8]. The dependence of F and G on bΔk is shown in Figure 3. Compared to a TEM00 pump field, the maximum value of F is smaller and that of G larger with a TEM10 fundamental mode. Both maxima occur at higher values of |bΔk|, at 11.5 and 13.0 respectively. Experimental The primary laser source used in this work was a solid state Nd:YAG laser (Spectra-Physics DCR-11). Using an HG-2 harmonic generating unit, the third harmonic of the Nd:YAG (355 nm) was obtained by generating the second harmonic in a KD*P doubling crystal and mixing it with the residual fundamental in a second KD*P crystal. The third harmonic was then used as a pump beam to generate 118 nm, the ninth harmonic of the Nd:YAG laser. The maximum pulse energy at 355 nm was 50 mJ at 10 Hz in a 5 ns pulse. Xenon gas, which is negatively disperse in the region 117.2-119.2 nm due to the 5p-5d transitions [9], was used as the tripling medium. Phase-matching was investigated with argon, which is positively disperse in this region.
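Returning briefly to the theory above, the quoted TEM00 optima (maximum F at bΔk = -2 and maximum G at bΔk = -4) follow directly from eq. (5); the short Python sketch below reproduces them numerically. The grid bounds are arbitrary and the sketch is purely illustrative.

import numpy as np

# Tight-focusing phase-matching factor for a TEM00 pump (eq. (5)):
# F(b*Dk) = (pi*b*Dk)^2 * exp(b*Dk) for b*Dk < 0, and 0 otherwise.
# In a single-component gas Dk is proportional to N, so the harmonic power
# N^2 * F is proportional to G = (b*Dk)^2 * F.
x = np.linspace(-20.0, -1e-3, 200001)          # x = b*Dk (negative branch only)
F = (np.pi * x) ** 2 * np.exp(x)
G = x ** 2 * F

print("F is maximised at b*Dk =", round(x[np.argmax(F)], 2))   # expect -2
print("G is maximised at b*Dk =", round(x[np.argmax(G)], 2))   # expect -4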
Studying the VUV generation process by monitoring the SPI signal detected by the mass spectrometer introduces added complications. In order to obtain a reliable measure of the VUV flux, the ion extraction efficiency of the ToF must be optimised for each measurement. Due to the many parameters involved in determining the extraction efficiency, the amount of time required for the optimisation process makes this approach impracticable if the tripling process is to be studied in depth. For this reason it was felt necessary to design and construct an apparatus to generate and detect VUV photons on the "bench-top", without requiring the mass spectrometer. This apparatus is shown in Figure 4. The tripling cell consists of a six-way stainless steel cross piece, 10 cm in length. Extension pieces allow the length of the cell to be increased to 70 cm. The 355 nm laser light is focused in the centre of the cell through a fused silica window with a plano-convex fused silica lens. Xenon (99.996%) and argon (99.999%) are admitted to the cell through a fine leak valve. The pressure inside the cell is measured with a capacitance manometer of 1-1000 torr range (MKS Baratron 221A) and a Penning gauge (Edwards CP25K). The detection cell consists of a 13 cm long PVC tube containing two 10 cm x 1.5 cm parallel-plate stainless steel electrodes 2 cm apart. One electrode is held at earth potential and the other is connected to a digital picoammeter (Keithley model 485) biased at +30 V DC. The cell is operated with a 10 torr fill of acetone. The ionisation potential of acetone is 9.69 eV (127.9 nm), which is just below the energy of the VUV photons generated. Assuming that multiphoton ionisation of acetone in the diverging UV beam and photoelectron production at the chamber wall are negligible, the current of photoelectrons attracted to the anode will give a direct measure of the VUV flux. These assumptions are supported by the observation that background photocurrents were < 1 pA in the absence of a tripling gas. The two cells are separated by a 2.5 cm diameter, 0.5 cm thick magnesium fluoride window, and each can be independently pumped by a 170 L s^-1 turbomolecular pump (Balzers TPU 170) backed with a rotary pump (Edwards M8). VUV is readily absorbed by nitrogen, so it is very important to minimise the residual air content of the tripling apparatus. The tripling cell was evacuated to a pressure < 10^-5 torr prior to admitting the tripling gas. When using a two component mixture, a magnetic stirring bar can be used to aid mixing and to help minimise composition gradients within the cell. Results and Discussion A series of experiments was performed to optimise the sum-frequency process for generating 118 nm photons; some of these are described below. Effect of Confocal Parameter b on Conversion Efficiency In order to make most use of the available pump power it is necessary to establish the optimum focusing conditions. The effect on the conversion efficiency of changing the confocal parameter b was investigated. This was done by changing the focal length (f355) of the lens which focuses the 355 nm beam into the tripling cell. An f355 = 15 cm lens and an f355 = 50 cm lens were used for this experiment. From eq. (6) the confocal parameters for the f355 = 15 cm and f355 = 50 cm lenses were calculated as 0.2 cm and 2.3 cm respectively. The calculated minimum beam waist radii were 11 μm and 36 μm respectively.
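The quoted confocal parameters and waist radii are mutually consistent with the b = k w0^2 definition quoted in the Theory section. Since eq. (6) itself is not legible in this copy, the sketch below simply takes the two quoted waist radii as given and recovers the quoted confocal parameters; it is illustrative only.

import math

def confocal_parameter_cm(waist_um, wavelength_nm=355.0):
    """Confocal parameter b = k*w0^2 = 2*pi*w0^2/lambda of a focused Gaussian beam.

    This uses the b = k*w0^2 definition quoted in the Theory section; the exact
    form of eq. (6), which relates w0 to the lens focal length, is not legible
    in this copy, so the waist radii are taken as given.
    """
    w0_cm = waist_um * 1e-4
    lam_cm = wavelength_nm * 1e-7
    return 2.0 * math.pi * w0_cm ** 2 / lam_cm

for f_cm, waist_um in [(15, 11.0), (50, 36.0)]:
    print(f"f355 = {f_cm} cm lens: w0 = {waist_um} um -> b = "
          f"{confocal_parameter_cm(waist_um):.2f} cm")
# Reproduces the quoted values of roughly 0.2 cm and 2.3 cm.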
The phase matching curves for THG in pure Xe using the f355 = 15 cm and f355 = 50 cm lenses are shown in Figures 5(a) and 5(b). The use of the longer focal length lens improves the maximum conversion efficiency by a factor of two for equal UV pulse energies, even though the pump power density at the focus is an order of magnitude less compared to the shorter focal length lens. Using a longer focal length lens for THG in gas cells means that i) the focal volume, where nonlinear interactions occur with the highest intensity, expands in size and ii) the beam divergence is reduced. This means that the number of scattering centres that can contribute to THG increases and higher pulse energies can be used without inducing breakdown in the tripling medium. However, the distance between the focus and the end window must be increased to avoid exceeding damage thresholds, and therefore re-absorption of VUV is expected to prove more of a problem. The length of the cell and the location of the lens were chosen so that the 355 nm beam focused at approximately the centre of the cell, midway between the lens and the entrance of the detector. This minimised the risk of burning the tripling cell windows, providing the greatest scope for using higher pump powers. Theory predicts that the power coefficient G is maximum at a value of bΔk = -13 for tripling a TEM10 mode in a single component medium (see Fig. 3). The theoretical value (P_Xe,opt) of the optimum Xe pressure then follows from eq. (7). From eq. (7), P_Xe,opt is 32.9 torr at b = 0.2 cm and 2.9 torr at b = 2.3 cm. As the focal length of the lens increases, the angle between the three unit vectors e_i becomes smaller and the magnitude of the wave vector mismatch required to maintain phase matching decreases. Effect of Pump Power on VUV Intensity Boyle et al. report the onset of dielectric breakdown at a xenon pressure of 26 torr using 355 nm TEM10 pulses of > 10^12 W cm^-2 [12]. This may explain why the conversion efficiency is observed to peak at about 25 torr xenon in Figure 5(a) rather than at the theoretical value of 32.9 torr using this focal length lens. With the f355 = 50 cm lens the experimental value of P_Xe,opt is in excellent agreement with theory (Fig. 5(b)), suggesting that laser-induced gas breakdown is not a limiting factor using this experimental geometry. A power study using the f355 = 50 cm lens found the VUV intensity to be linearly dependent on the cube of the UV pulse energy up to the limit of our available pump power density (1 x 10^11 W cm^-2). This suggests that the observed loss in conversion efficiency above 3 torr Xe using the b = 2.3 cm lens is not a saturation or breakdown effect. Laser-induced gas breakdown has an extremely detrimental effect on conversion efficiency, since the ionisation of the tripling medium not only reduces the population of ground state atoms, which constitute the nonlinear medium, but the production of photoelectrons also introduces an additional large positive dispersion and destroys the phase matching of the mixture. The higher the gas pressure, the lower the energy threshold will be for breakdown. Zych et al. report breakdown in 5 torr of Xe at ~10^12 W cm^-2 at the focus of a Gaussian 355 nm beam [13]. The mode structure of the DCR-11 laser (TEM10) means breakdown thresholds at a given pressure are larger than for TEM00 modes. This is because the intensity profile of the TEM10 beam is more disperse and the peak intensity which occurs at the centre of a Gaussian beam is absent.
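The two quoted values of P_Xe,opt can be checked against the single-component relation Δk_opt = (bΔk)_opt / b. The per-torr dispersion constant of xenon used below is not taken from the paper or from [11]; it is back-calculated from the quoted numbers and should be read as an assumption.

# Rough check of the quoted optimum Xe pressures. For a single-component gas
# Dk = C_Xe * P_Xe, so the optimum pressure is P_opt = (b*Dk)_opt / (b * C_Xe).
# The per-torr C_Xe below is back-calculated from the quoted figures and is an
# assumption, not a literature value.
C_XE_PER_TORR = -1.96        # cm^-1 per torr (assumed, see lead-in)
BDK_OPT_TEM10 = -13.0        # optimum b*Dk for a TEM10 pump (Fig. 3)

for b in (0.2, 2.3):         # confocal parameters of the two lenses, in cm
    p_opt = BDK_OPT_TEM10 / (b * C_XE_PER_TORR)
    print(f"b = {b} cm -> P_Xe,opt ~ {p_opt:.1f} torr")
# Gives roughly 33 torr and 2.9 torr, close to the values quoted from eq. (7).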
Phase Matching Figure 6 shows the THG phase matching curve for pure xenon using an f355 = 50 cm lens and 30 mJ pulse^-1 at 355 nm, and the corresponding curve for 10 torr of Xe mixed with Ar. In the single component medium the conversion efficiency maximises at about 2.7 torr Xe, which is consistent with Figure 5(b). In the two component mixture the optimum Ar:Xe ratio is about 8.5. The peak photocurrent in the two component mixture is a factor of 6.2 larger than that in pure xenon, illustrating the increased VUV intensity obtained by increasing the number density of the scattering centres. The peak photocurrents of 130 pA and 800 pA clearly deviate from the theoretical N^2 improvement expected on increasing the Xe pressure by a factor of 3.7. In fact the photocurrent only increases by about half as much as expected. This could be due to the increased re-absorption of VUV at higher pressures. With an f355 = 50 cm lens the breakdown threshold of ~10^12 W cm^-2 in 26 torr Xe is not reached until pulse energies exceed 200 mJ, so gas breakdown can be ruled out as a loss mechanism in this case. Figure 7 compares the phase matching of 5 and 10 torr of Xe with Ar. The optimum VUV intensity is approximately doubled on doubling the Xe pressure and restoring the optimum phase matching by adding Ar. The optimum values of P_Xe and P_Ar are in excellent agreement with the theory. This supports the assumption that gas breakdown is not occurring in 10 torr Xe. FIGURE 7 Phase matching of 5 torr and 10 torr Xe with Ar (30 mJ pulse^-1 at 355 nm, b = 2.3 cm); the abscissa is the pressure ratio (Ar:Xe). The power coefficient F has its largest maximum at bΔk = -11.5. The corresponding phase matched mixture of P_Xe and P_Ar is given by

P_Xe C_Xe + P_Ar C_Ar = (bΔk)_opt / b   (8)

where C_Xe and C_Ar are the dispersion constants of xenon and argon [11], with C_Ar = 5.5 x 10^-18 cm^2 per atom. Substituting in a value of P_Xe = 10 torr gives a value of P_Ar = 81 torr for the phase matched mixture, which corresponds to a ratio (Ar:Xe) of 8.1:1. The theoretical ratio for a xenon pressure of 5 torr is 5.5 according to eq. (8). However, at high pressures, P_Xe/P_Ar approaches -C_Ar/C_Xe, which corresponds to a ratio (Ar:Xe) of 10.9:1. As in Figure 6, the improvement in the conversion efficiency is about half that expected from an N^2 relationship. Theory predicts that the VUV intensity should quadruple on doubling the number density of the xenon, provided the phase matching is not disturbed (eq. (3)). The discrepancy between experiment and theory is again attributed to re-absorption of VUV. It is not known whether the majority of re-absorption is caused by Xe atoms or some other species. Likely candidates include Xe2 dimers, which may be present at significant levels at rare gas pressures in the several hundred torr range. Besides re-absorbing VUV, the production of Xe2 dimers reduces the density of the nonlinear medium (Xe atoms) and provides an additional source of positive dispersion which will affect the overall phase matching. Heating the tripling cell would reduce the dimer population but would also shift the absorption spectrum, and so the results of such an experiment would be inconclusive. Another possible candidate is residual nitrogen, although this is expected to be present in very low quantities due to the care taken in the construction and operation of the cell. Summary The experiments described above do not constitute an exhaustive optimisation of the conversion process of 355 nm to 118 nm light in xenon and argon. They do however identify important parameters in this process and illustrate the means by which a more detailed study may be carried out.
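As a rough consistency check on eq. (8) and the mixture ratios quoted above, the sketch below repeats the phase-matching arithmetic. The per-torr dispersion constants are assumptions: C_Ar is converted from the given per-atom value at an assumed temperature of 295 K, and C_Xe is back-calculated from the quoted limiting ratio Ar:Xe = 10.9:1 (the per-atom C_Xe cited from [11] is not legible here). This is illustrative arithmetic, not the authors' calculation.

import math

K_B = 1.3807e-23                   # J/K
N_PER_TORR = 133.322 / (K_B * 295.0) * 1e-6   # atoms per cm^3 per torr (~3.3e16), assumed T = 295 K

C_AR_ATOM = 5.5e-18                # cm^2 per atom (given in the text)
C_XE_ATOM = -10.9 * C_AR_ATOM      # cm^2 per atom, inferred from -C_Xe/C_Ar = 10.9

c_ar = C_AR_ATOM * N_PER_TORR      # cm^-1 per torr
c_xe = C_XE_ATOM * N_PER_TORR      # cm^-1 per torr

b = 2.3                            # cm, confocal parameter of the f355 = 50 cm lens
dk_opt = -11.5 / b                 # optimum wave-vector mismatch, cm^-1

for p_xe in (5.0, 10.0):
    p_ar = (dk_opt - p_xe * c_xe) / c_ar
    print(f"P_Xe = {p_xe:4.1f} torr -> P_Ar = {p_ar:5.1f} torr "
          f"(Ar:Xe = {p_ar / p_xe:.1f}:1)")
# Gives roughly 81 torr (ratio ~8.1) for 10 torr Xe and ~27 torr (ratio ~5.3)
# for 5 torr Xe, close to the ratios quoted in the text.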
The maximum VUV flux generated to date in our laboratory is 10^10 photons pulse^-1. This was obtained with 10^11 W cm^-2 at 355 nm, using an f = 50 cm lens and 10 torr Xe phase matched with Ar. The transmission of 118 nm through the MgF2 window is estimated at 60% and the collection efficiency of the detector is assumed to be 10%, on the basis of calibrations performed on similar designs [14]. APPLICATION TO SINGLE PHOTON IONISATION MASS SPECTROMETRY Experimental For SPI mass spectrometry the tripling cell described above is attached to a Kratos Prism series Time-of-Flight mass spectrometer. The ToFMS has been described previously by Scrivener et al. [15]. The MgF2 exit window of the cell provides the vacuum interface to the mass spectrometer. A LiF plano-convex lens mounted in front of the ion source region serves to focus the laser radiation parallel to the surface of a sample stub, roughly a millimetre above it, in the axis of the mass spectrometer. The focal length of the LiF lens is 17 cm at 355 nm and 10 cm for the 118 nm beam. The combined "back focal length" can be calculated from eq. (10),

bfl_λ = f2_λ (d - f1_λ) / (d - (f1_λ + f2_λ))   (10)

where bfl_λ is the combined focal length of the two-lens system at wavelength λ (measured from the second lens element), f1_λ and f2_λ are the focal lengths of the first and second lenses at that wavelength, and d is the lens separation. Using an f355 = 50 cm lens to focus the UV beam into the tripling cell and a lens separation d = 82 cm, the VUV cross sectional area (A118) in the ion axis of the mass spectrometer was calculated to be of the order of 10^-3 cm^2. The residual UV beam cross sectional area (A355) was calculated as 3 x 10^-2 cm^2; this value was confirmed by obtaining burn profiles on thermal paper. The 118 nm energy density was estimated to be of the order of 0.1 μJ cm^-2 at the ion source region, based on measurements of photocurrents detected in the acetone ionisation chamber. The sample potential was held at 2.7 kV, and the pass energy of the spectrometer optimised for 2.5 keV ions, formed roughly a millimetre above the sample surface. Results and Discussion Alignment of VUV Beam Due to the low efficiency of the frequency conversion process, the flux of residual UV photons exiting the tripling cell far exceeds that of VUV photons. In order to minimise the MPI background signal and maximise the SPI efficiency it is necessary to carefully align the laser beam in the region of the ion source. To monitor the alignment procedure a mixture of hexanes was chosen as the analyte. This sample not only displays a different fragmentation pattern for SPI and MPI, but exhibits a large photon absorption cross section at 118 nm and a high vapour pressure at room temperature, so it is easily introduced to the system in the gas phase. Pumping out the tripling cell results in an almost complete loss of molecular ion signal but has no effect on the intensity of the fragment ions, which therefore constitute an MPI background due to the residual UV (0.6 J cm^-2). Figure 8(b) shows the same sample with the photon beam exiting the tripling cell directed through the edge of the LiF lens. The non-central alignment through the lens causes the UV and VUV paths to be spatially dispersed due to the different refractive indices of the lens material at 355 nm and 118 nm. It can be seen in Figure 8(b) that the parent/fragment ion ratio has increased significantly with respect to Figure 8(a) due to the movement of the residual UV beam out of the ion source region. A similar effect can be achieved by passing the beams at non-normal incidence through a plane window.
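The quoted photon yield can be cross-checked against the measured photocurrents. The conversion below uses the 10 Hz repetition rate, the 10% collection efficiency and the 60% MgF2 transmission quoted above, and assumes that every collected photoelectron corresponds to one absorbed VUV photon; it is an order-of-magnitude estimate only, not the authors' calibration.

E_CHARGE = 1.602e-19     # C

def photons_per_pulse(photocurrent_A, rep_rate_Hz=10.0,
                      collection_eff=0.10, window_transmission=0.60):
    """Convert the acetone-cell photocurrent into VUV photons generated per pulse.

    Assumes one collected photoelectron per absorbed VUV photon, the 10%
    collection efficiency and the 60% MgF2 transmission quoted in the text.
    """
    electrons_per_pulse = photocurrent_A / (E_CHARGE * rep_rate_Hz)
    return electrons_per_pulse / collection_eff / window_transmission

# 800 pA was the peak photocurrent measured for the phase-matched Xe/Ar mixture.
print(f"{photons_per_pulse(800e-12):.1e} photons per pulse")   # of order 10^10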
SPI Mass Spectrometry of Biomolecules To evaluate the application of the 118 nm source to the ionisation of very fragile biomolecules, a series of DL-dipeptides was analysed. These data are briefly introduced here and will be reported and discussed in greater detail elsewhere. Samples were deposited from methanolic solution onto clean copper stubs and thermally desorbed at approximately 150 °C in the region of the mass spectrometer ion source. A pressure of about 10^-8 torr was maintained in the analysis chamber during the experiment. A non-centrally aligned laser beam was employed for maximum MPI suppression. Figure 9 shows the SPI mass spectrum of the dipeptide valyl-valine obtained with 3000 laser pulses. The spectrum is clearly dominated by the valine immonium ion (Val - COOH) at m/z 72. This fragmentation is typical of all amino acids under a wide range of photoionisation conditions. The molecular ion at m/z 216 is clearly seen, albeit at < 4% of the intensity of the base peak. Parent ion intensities at < 10% of the base peak have been reported in SPI studies on other di- and tri-peptide systems [3]. The structurally significant M-NH2COOH fragment at m/z 155 is also detected. The observed fragmentation is extremely unlikely to be the result of the absorption of more than one 118 nm (10.5 eV) photon, due to the very low peak flux density of VUV photons (~10^19 photons cm^-2 s^-1). The absorption of two VUV photons would deposit 21 eV of energy in the molecule, which is sufficient to ionise carbon and would be expected to result in the formation of a high proportion of C1, C2 and C3 species. The immonium ion is the smallest fragment detected in the val-val SPI mass spectrum. The subsequent absorption of 355 nm photons from the wings of the residual UV beam cannot be ruled out as a mechanism contributing to fragmentation of ions formed by SPI. The current optical arrangement may not allow complete removal of the UV beam from the ion source region. A suitable filter could be used to reduce the UV flux exiting the tripling cell, but this would inevitably reduce the VUV flux also. For comparison, Figure 10 shows the MPI mass spectrum of val-val obtained by refocusing the 355 nm beam to produce an energy density of 670 J cm^-2 (2 x 10^29 photons cm^-2 s^-1 peak flux density). Val-Val has no aromatic side chain to act as a chromophoric group for photon absorption in the near UV, so multiple photon absorption must proceed via a coherent, nonresonant mechanism. This process requires a high photon flux density and often leads to additional photon absorption within the molecular ion manifold and a high degree of subsequent fragmentation. This behaviour can be seen in the val-val MPI mass spectrum, which contains ion signals from C1 and C2 species and in which no fragments larger than the immonium ion were detected under any circumstances. FIGURE 10 MPI mass spectrum of Val-Val using 355 nm photons (670 J cm^-2).
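The peak photon flux densities quoted above follow from the energy density, the 5 ns pulse length given in the Experimental section and the photon energy; the sketch below reproduces the figure for the refocused 355 nm beam.

H = 6.626e-34      # J s
C = 2.998e8        # m/s

def peak_flux_density(energy_density_j_cm2, wavelength_nm, pulse_s=5e-9):
    """Peak photon flux density (photons cm^-2 s^-1), assuming a square pulse."""
    photon_energy = H * C / (wavelength_nm * 1e-9)          # J per photon
    return energy_density_j_cm2 / photon_energy / pulse_s

# Refocused 355 nm beam used for the MPI comparison spectrum:
print(f"{peak_flux_density(670.0, 355.0):.1e} photons cm^-2 s^-1")   # ~2.4e29

# Two 118 nm (10.5 eV) photons would deposit 21 eV, as stated in the text:
print(2 * 10.5, "eV")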
CONCLUSION We have performed a series of experiments to determine the optimum conditions for frequency tripling a 355 nm laser beam from a Nd:YAG laser in a Xe gas cell. A measure of the absolute number of 118 nm photons generated is obtained using a home-made acetone ionisation chamber. Power densities up to 1 x 10^12 W cm^-2 at 355 nm in the region of the nonlinear interaction provide no evidence for laser-induced gas breakdown in < 10 torr Xe. The optimum Xe pressure for third harmonic generation has been established under two different focusing conditions and agrees well with a theoretical treatment assuming a TEM10 pump beam mode structure. An f = 50 cm focal length lens produces at least twice as much VUV at pump pulse energies of 30 mJ relative to an f = 15 cm lens. Experiments with phase-matched mixtures of Xe and Ar indicate that the dependence of the conversion efficiency on the number density of the nonlinear medium is less than predicted by theory, suggesting that re-absorption of VUV is a significant loss mechanism. Using the laser-generated VUV beam, SPI experiments have been performed in a Time-of-Flight mass spectrometer. The importance of spatially dispersing the VUV and residual UV beams is illustrated using a hexanes sample. The ability of the SPI technique to efficiently produce molecular ions and structurally significant fragments from fragile, non-chromophore-containing analytes has been shown using the dipeptide val-val. FIGURE 1 Wave vector diagrams for THG showing conservation of momentum under (a) plane-wave and (b) focused geometries [7]. FIGURE 3 Power coefficients as a function of bΔk for the TEM10 mode in a single component medium (solid line) and multi-component medium (broken line). FIGURE 4 Schematic diagram of the tripling cell for THG and the acetone ionisation cell for VUV detection. Figure 8(a) shows the SPI mass spectrum of a hexanes mixture obtained with the laser beams aligned centrally through the second lens element. In addition to the molecular ion (C6H14+) at m/z 86, fragment ion signals are detected for C4, C3 and C2 species. FIGURE 8 SPI mass spectra of hexane (5 x 10^-6 mbar static pressure) using (a) centrally aligned laser beams and (b) non-centrally aligned beams.
6,545
1997-01-01T00:00:00.000
[ "Physics" ]
Analysis of Conditional Randomisation and Permutation schemes with application to conditional independence testing We study properties of two resampling scenarios: Conditional Randomisation and Conditional Permutation schemes, which are relevant for testing conditional independence of discrete random variables $X$ and $Y$ given a random variable $Z$. Namely, we investigate asymptotic behaviour of estimates of a vector of probabilities in such settings, establish their asymptotic normality and ordering between asymptotic covariance matrices. The results are used to derive asymptotic distributions of the empirical Conditional Mutual Information in those set-ups. Somewhat unexpectedly, the distributions coincide for the two scenarios, despite differences in the asymptotic distributions of the estimates of probabilities. We also prove validity of permutation p-values for the Conditional Permutation scheme. The above results justify consideration of conditional independence tests based on resampled p-values and on the asymptotic chi-square distribution with an adjusted number of degrees of freedom. We show in numerical experiments that when the ratio of the sample size to the number of possible values of the triple exceeds 0.5, the test based on the asymptotic distribution with the adjustment made on a limited number of permutations is a viable alternative to the exact test for both the Conditional Permutation and the Conditional Randomisation scenarios. Moreover, there is no significant difference between the performance of exact tests for Conditional Permutation and Randomisation schemes, the latter requiring knowledge of conditional distribution of $X$ given $Z$, and the same conclusion is true for both adaptive tests. Introduction Checking for conditional independence is a crucial ingredient of many Machine Learning algorithms, such as those designed to learn structure of graphical models or select active predictors for the response in a regression task, see e.g.[1][2][3].In a greedy approach to the variable selection for the response, one needs to verify whether predictor X is conditionally independent of the response, say, Y , given Z (denoted by X ⊥ ⊥ Y |Z), where Z is a vector of predictors already chosen as active ones and X is any of the remaining candidates.When conditional independence holds, then X is deemed irrelevant; when the test fails, the candidate that 'most strongly' contradicts it, is chosen. 
Verification of conditional independence of discrete-valued random variables uses a specially designed test statistic, say, T , such as Pearson χ 2 chi-square statistic or Conditional Mutual information CM I.The value of the statistic, calculated for the data considered, is compared with a benchmark distribution.Usually, as a benchmark distribution one either uses the asymptotic distribution of T under conditional independence or its distribution (or approximation thereof) obtained for resampled samples which conform to conditional independence.More often than not, the asymptotic test is too liberal, especially for small sample sizes, what leads to acceptance of too many false positive predictors.That is why resampling methods are of interest in this context (for other approaches see e.g.[4][5][6] and references therein).The resampling is commonly performed by either permuting values of X on each strata of Z, see e.g.[7], or by replacing original values of X by values generated according to conditional distribution P X|Z if the distribution is known (we will refer to the former as Conditional Permutation and to the latter as Conditional Randomisation, [4]).Although the validity of resampling approach in the latter case can be established fairly easily (see ibidem), it was previously unknown for the conditional permutation approach as well as for the asymptotic approach in both settings.Based on the proved asymptotic results, we propose a modified asymptotic test that uses a χ 2 distribution with an adjusted number of degrees of freedom as the benchmark distribution.The major contributions of the paper are thus as follows: we (i) establish validity of the resampling method for conditional permutation approach; (ii) derive the asymptotic distributions of the estimated vector of probabilities and of the estimator of CM I under both resampling scenarios; (iii) compare asymptotic and resampled pvalues approach in numerical experiments.In numerical experiments, we show that for the models considered and a ratio of the sample size to the size of the support of (X, Y, Z) larger than 0.5, the test based on the asymptotic distribution with adjustments based on a limited number of permutations performs equally well or better than the exact test for both the Conditional Permutation and the Conditional Randomisation scenarios.Moreover, there is no significant difference in the performance of the exact tests for Conditional Permutation and Conditional Randomisation scheme, the latter requiring knowledge of the conditional distribution of X given Z.The same is true for both adaptive tests. As the null hypothesis of conditional independence is composite, an important question arises: how to control the type I error by choosing adequate conditionally independent probability structures.In the paper, we adopt a novel approach to address this issue, which involves investigating those null distributions that are Kullback-Leibler projections of probability distributions for which power is investigated. An important by-product of the investigation in (i) is that we establish asymptotic normality of the normalized and centered vector having a multivariate hyper-geometric or generalized hyper-geometric distribution for the conditional permutation scheme. 
Preliminaries We consider a discrete-valued triple (X, Y, Z), where X ∈ X, Y ∈ Y, Z ∈ Z, and all variables are possibly multivariate. Assume that P(X = x, Y = y, Z = z) = p(x, y, z) > 0 holds for any (x, y, z) ∈ X × Y × Z. Moreover, we let p(x, y|z) = P(X = x, Y = y|Z = z) and p(z) = P(Z = z), and define p(x|z) and p(y|z) analogously. We will denote by I, J, K the respective sizes of the supports of X, Y and Z: |X| = I, |Y| = J, |Z| = K. As our aim is to check conditional independence, we will use Conditional Mutual Information (CMI) as a measure of conditional dependence (we refer to [8] for basic information-theoretic concepts such as entropy and mutual information). Conditional Mutual Information is a non-negative number defined as

I(Y; X|Z) = Σ_z p(z) Σ_{x,y} p(x, y|z) log [ p(x, y|z) / (p(x|z) p(y|z)) ]   (1)

We stress that the conditional mutual information is the mutual information (MI) of Y and X given Z = z, defined as the mutual information between P_{YX|Z=z} and the product of P_{Y|Z=z} and P_{X|Z=z}, averaged over the values of Z. As MI is the Kullback-Leibler divergence between the joint and the product distribution, it follows from the properties of the Kullback-Leibler divergence that I(Y; X|Z) = 0 ⇐⇒ X and Y are conditionally independent given Z. This is a powerful property, not satisfied by other measures of dependence, such as the partial correlation coefficient in the case of continuous random variables. The conditional independence of X and Y given Z will be denoted by X ⊥ ⊥ Y|Z and referred to as CI. We note that since I(Y; X|Z) is defined as a probabilistic average of I(Y; X|Z = z) over Z = z, it follows that I(Y; X|Z) = 0 ⇐⇒ I(Y; X|Z = z) = 0 for any z in the support of Z. This is due to (1), as each term p(z) I(Y; X|Z = z) in the average is non-negative. Let (X_i, Y_i, Z_i), i = 1, ..., n, be an independent sample of copies of (X, Y, Z) and consider the unconstrained maximum likelihood estimator of the probability mass function (p.m.f.) (p(x, y, z))_{x,y,z} based on this sample, which is simply the vector of fractions (p̂(x, y, z))_{x,y,z} = (n(x, y, z)/n)_{x,y,z}, where n(x, y, z) = Σ_{i=1}^n I{X_i = x, Y_i = y, Z_i = z}. In the following, we will examine several resampling schemes that involve generating new data such that they satisfy the CI hypothesis for the fixed original sample. Extending the observed data to an infinite sequence, we will denote by P* the conditional probability related to the resampling schemes considered, given the sequence (X_i, Y_i, Z_i), i = 1, 2, ... Resampling scenarios We first discuss the Conditional Permutation scheme, which can be applied to conditional independence testing. We then establish the validity of the p-values based on this scheme, and the form of the asymptotic distribution of the sample proportions, which is used later to derive the asymptotic distribution of the empirical CMI. Conditional Permutation (CP) scenario We assume that the sample (X, Y, Z) = (X_i, Y_i, Z_i)_{i=1}^n is given and we consider the CI hypothesis H_0: X ⊥ ⊥ Y|Z. The Conditional Permutation (CP) scheme, used e.g.
in [7], is a generalisation of a usual permutation scenario applied to test unconditional independence of X and Y .It consists in the following: for every value z k of Z appearing in the sample, we consider the strata corresponding to this value, namely CP sample is obtained from the original sample by replacing , where π k is a randomly and uniformly chosen permutation of P k and π k are independent (see Algorithm 1).Thus on every strata Z = z, we randomly permute values of corresponding X independently of values of Y .It is, in fact, sufficient to permute only the values of X to ensure conditional independence, which follows from the fact that for any discrete random variable (X, Y ) we have that X is independent of σ(Y ), where σ is a randomly and uniformly chosen permutation of the values of Y such that σ ⊥ ⊥ (X, Y ).The pseudo-code of the algorithm is given below.We consider the family of all permutations Π of all permutations π of {1, . . ., n} which preserve each of P k i.e. π is composed of π k 's, i.e. such that their restriction to every P k is a permutation of P k .The number of such permutations is Validity of p-values for CP scenario We first prove the result which establishes validity of resampled p-values for any statistic for the Conditional Permutation scheme.Let ).The pertaining p-value based on CP resampling is defined as Thus, up to ones added to the numerator and the denominator, the resampling p-value is defined as the fraction of T * b not smaller than T (ones are added to avoid null p-values).Although p-values based on CP scheme have been used in practice (see e.g.[7]) to the best of our knowledge, their validity has not been established previously, to the best of our knowledge. Theorem 1 (Validity of p-values for CP scheme) If the null hypothesis H 0 : X ⊥ ⊥ Y |Z holds, then where T = T (Xn, Yn, Zn) and The result implies that if the testing procedure rejects H 0 when the resampling p-value does not exceed α its level of significance is also controlled at α.The proof is based on exchangeability of T, T * 1 , . . ., T * B and is given in the Appendix. Asymptotic distribution of sample proportions for Conditional Permutation method We define p * to be an empirical p.m.f.based on sample (X * , Y, Z): p * (x, y, z) = , where π ∈ Π is randomly and uniformly chosen from Π. Similarly to n(x, y, z) we let n(y, z) = n i=1 I{Y i = y, Z i = z} and n(x, z) is defined analogously.We first prove Theorem 2 (i) Joint distribution of the vector (np * (x, y, z))x,y,z given i=1 is as follows: where (k(x, y, z))x,y,z is a sequence taking values in nonnegative integers such that x k(x, y, z) = n(y, z) and i=1 is given by the following weak convergence √ n p * (x, y, z) − p(x|z)p(y|z)p(z) x,y,z for almost all (X i , Y i , Z i ) ∞ i=1 , where Σ x ,y ,z x,y,z , element of Σ corresponding to row index x, y, z and column index x , y , z , is defined by We stress that (2) is a deterministic equality describing the distribution of np * : for k(x, y, z) x,y,z such that x k(x, y, z) = n(y, z) and y k(x, y, z) = n(x, z) (where n(x, z) and n(y, z) are based on the original sample) corresponding value of p.m.f. is given by the left-hand side, otherwise it is 0. Proof (i) The proof is a simple generalisation of the result of J. Halton [9] who established the form of the conditional distribution of a bivariate contingency table given its marginals and we omit it.(ii) In view of (2) subvectors , thus in order to prove (3) it is sufficient to prove analogous result when the stratum Z = z, i.e. 
for the unconditional permutation scenario.Note that since we consider conditional result given (X i , Y i , Z i ) ∞ i=1 ,the strata sample sizes n(z i ) are deterministic and such that n(z i )/n → P (Z = z i ) for almost every such sequence.The needed result is stated below. Theorem 3 Assume that n ij , i = 1, . . ., I, j = 1, . . ., J are elements of I × J contingency table based on iid sample of n observations pertaining to a discrete distribution (p ij ) satisfying p ij = p i. p .j .Then we have provided p ij > 0 for all i, j that where Σ = (Σ k,l i,j ) and Σ k,l i,j = p i. (δ ik − p .k )p .j(δ jl − p l. ). Remark 1 Let (X * i , Y i ) n i=1 be a sample obtained from (X i , Y i ) n i=1 by a random (unconditional) permutation of values of X i and p * (x, y) be an empirical p.m.f.corresponding to (X * i , Y i ) n i=1 .Then obviously (n ij /n) and (p * (x, y)) follow the same distribution and ( 5) is equivalent to Moreover, the elements of Σ can be written as (compare (4)) Remark 2 Matrix Σ introduced above has the rank (I − 1) × (J − 1) and can be written using the tensor products as (diag(α) − α ⊗ α) ⊗ (diag(β) − β ⊗ β), where α = (p i. ) i and β = (p .j ) j . The proof of Theorem 3 follows from a weak convergence result for tablevalued hypergeometric distributions and is important in its own right. Let R denote the range of indices (i, j): R = {1, . . ., I} × {1, . . ., J}.For Suppose that the law of Wr = (W (r) ij ) (i,j)∈R is given by Then, where Σ = (Σ k,l i,j ) and The proof of Lemma 4 is relegated to the Appendix.Theorem 3 is a special case of Lemma 4 with a r = (n i. ) i , b r = (n j. ) j , r = n on a probability space (Ω, F, P n ), where ) is a regular conditional probability. Conditional Randomisation scenario We now consider the Conditional Randomisation (CR) scheme, popularised in [4].This scheme assumes that the conditional distribution P X|Z is known, and the resampled sample is , where X * i is independently generated according to the conditional distribution P X|Z=zi and independently of (X, Y). The assumption that P X|Z is known is frequently considered (see e.g.[4] or [10]) and is realistic in the situations when a large database containing observations of unlabelled data (X, Z) is available, upon which an accurate approximation of P X|Z is based.Theorem 4 in [10] justifies the robustness of the type I error for the corresponding testing procedure.We note that the conclusion of Theorem 1 is also valid for CR scenario (cf.[4], Lemma 4.1).Let p * (x, y, z) where The proof which is based on multivariate Berry-Esseen theorem is moved to the Appendix.is a nonnegative definite matrix (see Lemma 6 in the Appendix).The inequalities between the covariance matrices can be strict.In view of this, it is somewhat surprising that the asymptotic distributions of CM I based on p * in all resampling scenarios coincide.This is investigated in the next Section. Asymptotic distribution of CM I for considered resampling schemes We consider CM I as a functional of probability vector (p(x, y, z)) x,y,z defined as (compare (1)) We prove that despite differences in asymptotic behaviour of n 1/2 (p * − p) for both resampling schemes considered, the asymptotic distributions of based on them coincide.Moreover, the common limit coincides with asymptotic distribution of CM I, namely χ 2 distribution with (|X |−1) × (|Y|−1) × |Z| degrees of freedom.Thus in this case the general bootstrap principle holds as the asymptotic distributions of CM I and CM I * are the same. 
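A minimal sketch of one Conditional Randomisation resample, assuming (as the CR scheme requires) that the conditional law P(X|Z) is supplied explicitly; the probabilities in the toy example are made up.

import numpy as np

def conditional_randomisation(z, p_x_given_z, rng=None):
    """One CR resample: draw X*_i from the known conditional law P(X | Z = z_i),
    independently across i and independently of the original X and Y."""
    rng = np.random.default_rng() if rng is None else rng
    return np.array([rng.choice(len(p_x_given_z[zi]), p=p_x_given_z[zi]) for zi in z])

# Toy example with binary X and Z; the conditional probabilities are illustrative only.
p_x_given_z = {0: np.array([0.7, 0.3]), 1: np.array([0.2, 0.8])}
z = np.array([0, 0, 1, 1, 1])
print(conditional_randomisation(z, p_x_given_z, np.random.default_rng(1)))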
Theorem 6 For almost all sequences (X i , Y i , Z i ), i = 1, . . .and conditionally on a.e., where p * is based on CP or CR scheme. Proof We will prove the result for the Conditional Permutation scheme and indicate the differences in the proof in the case of CR scheme at the end.The approach is based on delta method as in the case of CM I (see e.g.[6]).The gradient and Hessian of CM I(p) considered as a function of p are equal to, respectively, and where (H CM I (p)) x ,y ,z x,y,z denotes element of Hessian with row column index x, y, z and column index x , y , z .In order to check it, it is necessary to note that e.g. the term p(x , y ) = z p(x , y , z ) contains the summand p(x, y, z) if x = x and y = y , and thus ∂p(x ,y ) ∂p(x,y,z) = I(x = x , y = y ).The proof follows now from expanding CM I(p * ) around pci := p(x|z)p(y|z)p(z): where ξ = (ξx,y,z)x,y,z and ξx,y,z is a point in-between p * (x, y, z) and pci (x, y, z).We note that CM I(p ci ) = 0 as pci is a distribution satisfying CI and, moreover, the gradient of conditional mutual information D CM I at pci is also 0 as Thus two first terms on RHS of (11) are 0.Moreover, using continuity of H CM I (•) following from p(x, y, z) > 0 for all (x, y, z) and ( 3) it is easy to see that where Z = (Zx,y,z)x,y,z ∼ N (0, I) and λx,y,z are eigenvalues of a matrix M = H CM I (p ci )Σ.To finish the proof it is enough to check that M is idempotent, thus all its eigenvalues are 0 or 1, and verify that the trace of M equals (|X |−1)×(|Y|−1)×|Z|.This is proved in Lemma 3 in the Appendix.The proof for CR scheme is analogous and differs only in that in the final part of the proof matrix M is replaced by matrix M = H CM I Σ where Σ is defined in Theorem 5.However, its shown in Lemma 3 in the Appendix that M = M thus the conclusion of the Theorem holds also for CR scheme. Remark 4 We note that two additional resampling scenarios can be defined.The first one, which we call bootstrap.X, is a variant of CR scenario in which, instead of sampling on the strata Z = z i from the distribution P X|Z=zi the pseudo-observations are sampled from the empirical distribution of P (x|z i ).In order to introduce the second proposal, Conditional Independence Bootstrap (CIB), consider first empirical distribution pci = p(x|z)p(y|z)p(z).We note that probability mass function (p ci (x, y, z))x,y,z is the maximum likelihood estimator of p.m.f.(p(x, y, z))x,y,z when conditional independence of X and Y given Z holds.Then (X * i , Y i , Z i ) n i=1 is defined as iid sample given (X, Y, Z) drawn from pci .Note that there is a substantial difference between this and previous scenarios as in contrast to them X and Z observations are also sampled.For the both scenarios convergence established in Theorem 6 holds Fig. 1: Considered models (see [11]).However, we conjecture that validity of p-values does not hold for these schemes.As we did not establish substantial advantages of using either bootstrap.X or CIB over neither CP or CR scheme we have not pursued discussing them here in detail. 
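Before turning to the numerical experiments, the following Python sketch shows the exact CP test as described above: the plug-in CMI statistic, random permutations of X within every stratum of Z, and the resampled p-value (1 + #{T*_b >= T})/(B + 1). It is a simplified illustration, not the implementation used for the experiments in this paper.

import numpy as np

def empirical_cmi(x, y, z):
    """Plug-in estimate of I(X;Y|Z) from integer-coded samples x, y, z."""
    n = len(x)
    cmi = 0.0
    for zv in np.unique(z):
        m = z == zv
        nz, xz, yz = m.sum(), x[m], y[m]
        for xv in np.unique(xz):
            for yv in np.unique(yz):
                nxyz = np.sum((xz == xv) & (yz == yv))
                if nxyz:
                    cmi += (nxyz / n) * np.log(
                        nxyz * nz / (np.sum(xz == xv) * np.sum(yz == yv)))
    return cmi

def cp_p_value(x, y, z, B=50, rng=None):
    """Exact-test p-value of H0: X independent of Y given Z under the CP scheme.

    X is permuted uniformly at random within every stratum Z = z, independently
    across strata; the p-value is (1 + #{T*_b >= T}) / (B + 1)."""
    rng = np.random.default_rng() if rng is None else rng
    t_obs = empirical_cmi(x, y, z)
    count = 0
    for _ in range(B):
        x_star = x.copy()
        for zv in np.unique(z):
            idx = np.flatnonzero(z == zv)
            x_star[idx] = x[rng.permutation(idx)]
        count += empirical_cmi(x_star, y, z) >= t_obs
    return (1 + count) / (B + 1)

# Under conditional independence the resampled p-value is valid (Theorem 1):
rng = np.random.default_rng(0)
z = rng.integers(0, 2, 200); x = rng.integers(0, 2, 200); y = rng.integers(0, 2, 200)
print(cp_p_value(x, y, z, B=50, rng=rng))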
Numerical experiments In the experiments, we will consider the following modification of a classical asymptotic test based on χ 2 distribution as the reference distribution.Namely, since it is established in Theorem 6 that 2n × CM I * is approximately χ 2 distributed for both scenarios considered, we use the limited number of resampled samples to approximate the mean of the distribution of 2n × CM I * and use the obtained value as an estimate of the number of degrees of freedom of χ 2 distribution.The adjustment corresponds to the equality of the mean and the number of degrees of freedom in the case of χ 2 distribution.Thus, we still consider χ 2 distribution as the reference distribution for CI testing; however, we adjust its number of degrees of freedom.The idea appeared already in [7].Here, the approach is supported by Theorem 6 and the behaviour of the resulting test is compared with the other tests considered in the paper. We will thus investigate three tests in both resampling schemes CR and CP.The test which will be called exact is based on Theorem 1 in the case of CP scenario and the analogous result for CR scenario in [4].The test df estimation uses χ 2 distribution with the degrees of freedom estimated in data-dependent way as just described.As a benchmark test we use the asymptotic test which uses the asymptotic χ 2 distribution established in Theorem 6 as a reference distribution.Choice of number of resampled samples B. As in the case of df estimation test the reference distribution involves only the estimator of the mean and not the estimators of upper quantiles of high order, we use a moderate number of resampled samples B = 50 for this purpose.In order to have equal computational cost for all tests, B = 50 is also used in the case of exact test.Note that applying moderate B renders application of such tests in greedy feature selection (when such tests have to be performed many times) feasible. The models considered are standard models to study various types of conditional dependence of X and Y given vector Z: e.g. in model 'Y to XZ' , Y conveys information to both X and Z whereas in model 'X and Y to Z' both X and Y convey information to Z. Model XOR is a standard model to investigate interactions of order 3. Below we will describe the considered models in detail by giving the formula for joint distribution of (X, Y, Z 1 , Z 2 , . . ., Z s ).Conditional independence case (the null hypothesis) will be investigated by projecting considered models on the family of conditionally independent distributions. • Model 'Y to XZ' (the first panel of Figure 1).Joint probability in the model is factorised as follows p(x, y, z 1 , z 2 , . . ., z s ) = p(y)p(x, z 1 , z 2 , . . ., z s |y), thus it is sufficient to define p.m.f. of Y and conditional p.m.f. of (X, Z 1 , . . ., Z s ) given Y .First, Y is a Bernoulli random variable with probability of success equal to 0.5 and conditional distribution of ( X, Z1 , . . ., Zs ) given Y = y follows a multivariate normal distribution N s+1 (yγ s , σ 2 I s+1 ), where γ s = (1, γ, . . ., γ s ), and γ ∈ [0, 1] and σ > 0 are parameters in that model.In order to obtain discrete variables from continuous ( X, Z1 , . . ., Zs ) we define the conditional distribution of (X, Z 1 , . . 
., Z s ) given Y = y by assuming their conditional independence given Y and The variables X and Z i all have Bern(0.5)distribution and conditional distribution of Y follows • Model 'XY to Z' (the third panel in Figure 1) The joint probability factorises as follows p(x, y, z 1 , z 2 , . . ., z s ) = p(x)p(y) X and Y are independent and both follow Bernoulli distribution Bern(0.5). The distribution of Z i depends on the arithmetic mean of X and Y and the variables Z 1 , . . ., Z s are conditionally independent given (X, Y ).They follow Bernoulli distribution ) for i ∈ {1, 2, ..., s}, where α ≥ 0 controls the strength of dependence.For α = 0, the variables Z i do not depend on (X, Y ). • Model XOR The distribution of Y is defined as follows: where 0.5 < β < 1 and = 2 denotes addition modulo 2. We also introduce variables Z 3 , Z 4 , . . ., Z s independent of (X, Y, Z 1 , Z 2 ) .All variables X, Z 1 , Z 2 , . . ., Z s are independent and binary with the probability of success equal to 0.5. We run simulations for fixed model parameters (Model 'Y to XZ': γ = 0.5, σ = 0.5, Model 'XZ to Y': σ = 0.07, model 'XY to Z': α = 3, model XOR: β = 0.8.In all the models the same number of conditioning variables s = 4 was considered.The parameters are chosen in such a way that in all four models values of conditional mutual information CM I(X, Y |Z) are similar and contained in the interval [0.16, 0.24] (see Figure 2 for λ = 0 which corresponds to the chosen p.m.f.p(x, y, z)).We define a family of distributions parameterised by parameter λ ∈ [0, 1] in the following way: where p denotes the joint distribution pertaining to the model with the chosen parameters and p ci (x, y, z) = p(x|z)p(y|z)p(z) is the Kullback-Leibler projection of p onto the family P ci of p.m.fs satisfying conditional independence X ⊥ ⊥ Y |Z (see Lemma 4 in Appendix).Probability mass function p ci (x, y, z) can be explicitly calculated for the given p(x, y, z).Note that λ is a parameter which controls the strength of shrinkage of p towards p ci .We also underline that the Kullback-Leibler projection of p λ onto P ci is also equal to p ci (see Lemma 5 in the Appendix).Figure 2 shows how conditional mutual information of X and Y given (Z 1 , Z 2 , . . ., Z s ) changes with respect to λ.For λ = 1, p λ = p ci , thus X and Y are conditionally independent and CM I(X, Y |Z) = 0. The simulations, besides standard analysis of attained levels of significance and power, are focused on the following issues.Firstly, we analyse levels of significance of CM I-based tests for small sample sizes.It is known that for small sample sizes problems with control of significance levels arise, as the probability of obtaining the samples which result in empty cells (i.e.some values of (x, y, z 1 , . . 
., z s ) are not represented in the sample) is high.This issue obviously can not be solved by increasing the number of resampled samples as it is due the original sample itself.However, we would like to check whether using χ 2 distribution with estimated number of degrees of freedom as a benchmark distribution provides a solution to this problem.Moreover, the power of such tests in comparison with exact tests is of interest.Secondly, it is of importance to verify whether the knowledge of the conditional distribution of X given Z which is needed for CR scheme, actually translates into better performance of the resulting test over the performance of the same test in CP scenario.The conditional independence hypothesis is a composite hypothesis, thus an important question is how to choose representative null examples on which control of significance level should be checked.Here we adapt a natural, and to our knowledge, novel approach which consists in considering as the nulls the projections p ci of p.m.fs p for which power is investigated. In Figure 3 histograms of p ci (x, y, z) for the considered models are shown.Although all 2 s+2 = 64 probabilities p(x, y, z) are larger than 0 in all the models, some probabilities may be very close to 0 (as it happens in 'XZ to Y' model).For model XOR all triples are equally likely and thus for all (x, y, z) p ci (x, y, z) = 1/2 6 = 0.015625.If there are many values of p ci (x, y, z) that are close to 0, the probability of obtaining a sample without some triples (x, y, z) for which p ci (x, y, z) > 0 is high.In particular, this happens in 'XZ to Y' model.In the following the performance of the procedures is studied with respect to the parameter frac = n/2 s+2 instead of sample size n.As the number of unique values of triples (x, y, z) equals 2 s+2 , thus frac is the average number of observations per cell in the uniform case and roughly corresponds to this index for a general binary discrete distribution.In Table 1 we provide the values of sample sizes corresponding to changing frac as well as the value of np min for s = 4, where p min is the minimal value of either probability mass function p(x, y, z) or p ci (x, y, z).As np min is the expected value of observations for the least likely triple it indicates that occurrence of empty cells is typical for frac as large as 20.In Figure 4 the estimated fraction of rejections for the tests based on resampling in case when the null hypothesis is true (λ = 1) is shown when the assumed level of significance equals 0.05.The attained levels of significance for asymptotic test are given separately in Figure 5. Overall, for all the procedures based on resampling the attained level of significance is approximately equal to the assumed one.The df estimation methods both for CP and CR do not exceed assumed significance level for the considered range of frac ∈ [0.5, 5]. Figure 4 indicates that distribution of CM I is adequately represented by χ 2 distribution with estimated number of degrees of freedom.This will be further analysed below (see discussion of Figures 5 and 6).In Figure 5 4 .For all the models except 'XZ to Y' for small number of observations per cell we underestimate the mean of 2n CM I by using the asymptotic number of degrees of freedom and in these cases the significance level is exceeded.This effect is apparent even for frac equal to 5. 
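The "df estimation" test described in this section can be sketched as follows: the number of degrees of freedom of the reference χ2 distribution is estimated by the mean of 2n·CMI* over B conditionally permuted samples, and the p-value is then read from that χ2 distribution at the observed 2n·CMI. The statistic function is passed in (for example, the empirical CMI sketched earlier); this is an illustration, not the authors' code.

import numpy as np
from scipy.stats import chi2

def df_estimation_p_value(x, y, z, stat_fn, B=50, rng=None):
    """Adjusted asymptotic test: chi-square reference with an estimated df.

    stat_fn(x, y, z) should return the plug-in CMI estimate. The estimated
    number of degrees of freedom is the mean of 2n*CMI* over B conditionally
    permuted samples, matching the mean of a chi-square distribution."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    stat = 2 * n * stat_fn(x, y, z)
    resampled = []
    for _ in range(B):
        x_star = x.copy()
        for zv in np.unique(z):
            idx = np.flatnonzero(z == zv)
            x_star[idx] = x[rng.permutation(idx)]
        resampled.append(2 * n * stat_fn(x_star, y, z))
    df_hat = float(np.mean(resampled))
    return chi2.sf(stat, df_hat), df_hat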
On the other hand in the model 'XZ to Y' the situation is opposite and in this case the test rarely rejects the null hypothesis.This is due to the overestimation of the mean of 2n CM I by asymptotic number of degrees of freedom in the case when many empty cells occur.Note that the estimation of the mean based on resampled samples is much more accurate (in Figure 5 obtained results is marked in blue).We also note that the condition np min ≥ 5 is frequently cited as the condition under which test based on asymptotic χ 2 distribution can be applied.Note, however, that in the considered examples and for frac ≥ 20, asymptotic test controls fairly well level of significance, whereas np min can be of order 10 −11 (Table 1).Moreover, for frac=20 and λ = 0.5 the power of asymptotic test is 1. In Figure 6 we compare the distributions of CM I with those of resampling distributions of CM I * and χ 2 distribution with the estimated number of degrees of freedom by means of QQ plots.For each of 500 original samples 50 resampled samples are generated by the Conditional Permutation method and quantiles of resampling distributions of CM I * are calculated, resulting in 500 quantiles, medians of which which are shown in the plot.Medians of quantiles for χ 2 distribution with an estimated number of degrees of freedom are obtained in the similar manner.Quantiles of the asymptotic distribution are also shown.Besides the fact that the distribution of CM I is better approximated by the distribution of CM I * , what confirms the known property of bootstrap in the case of CM I estimation (compare Section 2.6.1 in [12]), it also follows from the figure that the distribution of CM I is even better approximated by χ 2 distribution with estimated number of degrees of freedom.Figure 7 shows the results for the power of testing procedures for λ = 0.25, 0.5, 0.75 with respect to frac.Since asymptotic test does not control significance level for these models for λ = 1, the pertaining power is omitted from the figure.As for increasing λ, p.m.f. of p λ approaches the null hypothesis described by p ci the power becomes smaller in rows.As frac gets smaller, the power of the tests also decreases and this is due to the increased probability of obtaining empty cells (x, y, z 1 , . . ., z s ) in the sample, and because of that such observations are also absent in the resampled samples for Conditional Permutation scheme.CR is more robust in this respect as such occurs only when not all values of (z 1 , . . ., z s ) are represented in the sample.This results in better performance of the tests for CR scheme than for CP scheme (each estimated mean is based on B = 50 resampled samples and the simulation is repeated 500 times; the average of the obtained means and mean±SE is shown in blue.The number of degrees of freedom for asymptotic χ 2 distribution is a solid horizontal line. 
) for small values of frac (see also Figure 8). (The four panels of Figure 7 correspond to the models 'Y to XZ', 'XZ to Y', 'XY to Z' and 'XOR'.) It follows that the procedures based on the χ2 distribution with the estimated number of degrees of freedom are more powerful than the exact tests, regardless of the resampling scenario used. Although the advantage is small, it occurs in all cases considered. The plot also indicates that the exact tests in both scenarios act similarly and are inferior to the tests based on the asymptotic distribution with estimated dfs, which also exhibit similar behaviour. We compare the powers in the CP and CR scenarios in Figure 8, in which the ratios of the respective powers for the exact tests and the df estimation tests are depicted by orange and green lines, respectively. Values below 1 mean that CR has greater power. The differences occur only for small frac values. Both the df estimation and the exact tests have larger power in the CR scenario than in the CP scenario for frac ∈ [0.5, 2]. The power of both methods is similar for frac ≥ 2; thus it follows that the CP scenario might be used instead of CR, as it is as efficient as CR. Our conclusions can be summarised as follows: • The significance level is controlled by the df estimation and exact tests, both for the CP and CR scenarios. It happens that the asymptotic test does not control the significance level even for frac larger than 10. Interestingly, although the asymptotic test is usually significantly too liberal for small frac, it also happens that it is very conservative (Figure 4, model 'XZ to Y'); Supplementary information. The Appendix contains all proofs of the results in the paper which have not been presented in the main body of the article. As we have proven the exchangeability of the sample and the resampled samples given Zn, the test statistics based on them are also exchangeable given Zn. By averaging over Zn the property also holds unconditionally. In order to prove Lemma 4 we start with the following simple lemma, which is crucial for our argument. Proof Assume that t_i is a continuity point of F_i. Then for i = 1, ..., d, By Lebesgue's dominated convergence theorem, the latter term converges to 0 as r → ∞. Thus, by induction, the cumulative distribution function of (W where Σ is a (d − 1)-rank matrix with elements The univariate case is proved in [13, Th. 2.1]. We could not find an appropriate reference for the general case. However, we refrain from giving a formal proof of the multivariate case, as it follows from the univariate case in an analogous way as Lemma 4 follows from Lemma 8, and we present a full argument below. We now prove Lemma 4. Proof First, observe that (6) can be rewritten as ∼ Hyp_J(a^(r)_1, b_r), where Hyp_J is defined in Lemma 8. Since |b_r| = n_r, by Lemma 8, we have We have ij follows the hypergeometric distribution with parameters n_r, a_i, b^(r)_j; by the law of large numbers, we have Observing that m^(i) We apply Lemma 8 conditionally on (W_k)_{k<i}, to obtain for i = 2, ..., I, Z where Z_i ∼ N(0, Σ_i) with By Lemma 7, we have where Z_1, . .
., Z I are independent.By direct calculation, it is easy to see that Thus, where Σ = (Σ k,l i,j ).Σ k,l i,j denotes covariance of jth coordinate of ith consecutive subvector of the length J of Q with kth coordinate of the lth subvector.Thus Since no row is distinguished, in order to establish (7) it is enough to consider i = 1 and k ∈ {1, 2}.We have We prove now Theorem 5.The proof follows [14] and it is based on the multivariate Berry-Esseen theorem ( [15]). We define Σx ,y ,z x,y,z = n(Cov * (p * (x, y, z))x,y,z ) x ,y ,z x,y,z and Q . As p(x, y, z) > 0 for all (x, y, z), the matrix Σ−M is invertible, cf.e.g.[16].One element of the vector p * is omitted to ensure that the covariance matrix is invertible.As we have x,y,z p * (x, y, z) = 1, the full dimension matrix Σ is singular.Then we apply multivariate Berry-Esseen theorem ( [15]) and d = M − 1.We notice that as ptci → p ci and Σ−M → Σ −M a.s., where Σ −M denotes the matrix Σ without the last row and the last column, and for all j = 1, 2, . . ., M − 1 We prove now the lemma which is used in the proof of Theorem 6.We compute now ( M 2 ) x ,y ,z x,y,z . The first term in the first bracket is multiplied by the consecutive terms in the second bracket, then the second term in the first bracket and so on: We prove now two lemmas which justify choice of null distributions in the numerical experiments. Lemma 10 Probability mass function p ci (x, y, z) = p(x|z)p(y|z)p(z) minimises D KL (p||q) over q ∈ P ci defined as P ci = {q(x, y, z) : q(x, y, z) = q(x|z)q(y|z)q(z)}.is positive semi-definite.Now we define elements of matrix R(z) = (r x ,y x,y (z)) x ,y x,y as r x ,y x,y (z) = r x x (z)p(y, z)p(y , z) and we show that R(z) ≥ 0. Namely, for any non-zero vector a = (a(x, y))x,y it holds a R(z)a = where the last inequality follows as R(z) ≥ 0. However, (R) x ,y ,z Proof x,y,z = r x ,y ,z x,y,z = r x ,y x,y I(z = z )/p(z), thus for any non-zero vector a = (a(x, y, z))x,y,z we have that a Ra = x,y,z x ,y ,z ax,y,zr x ,y ,z x,y,z a x ,y ,z = x,y,z x ,y ,z ax,y,zr x ,y x,y (z)I(z = z )/p(z)a x ,y ,z = z   x,y x ,y ax,y,zr x ,y x,y (z)a x ,y ,z   /p(z) ≥ 0. is used for CI testing.We choose B independent permutations in Π, construct B corresponding resampled samples by CP scenario (X * n,b , Y n,b , Z n,b ) for b = 1, 2, . . ., B and calculate the values of statistic T * b = T (X * n,b , Y n,b , Z n,b y ,z x,y,z = I(z = z )p(z) p(x|z)p(y|z)p(x |z)p(y |z) − I(x = x )p(x|z)p(y|z)p(y |z) −I(y = y )p(x|z)p(x |z)p(y|z) + I(x = x , y = y )p(x|z)p(y|z) . Lemma 4 J Let ar = (a ) be two vectors with coordinates being natural numbers such that nr := |ar|= |br|. Fig. 3 : Fig.3: Histograms of values of probabilities p ci for the four considered models.The vertical dotted line shows the value of probability p ci when all triples (x, y, z) are equally probable. frac in the top row the attained values of significance levels for the asymptotic test are shown.That test significantly exceeds the assumed level α = 0.05.The reason for that is shown in the bottom panel of Figure 5.The red dots represent the mean of 2n CM I based on n = 10 5 samples for each value of frac and the solid line indicates the number of degrees of freedom of the asymptotic distribution of 2n CM I, which for s = 4 equals (|X |−1)(|Y|−1)|Z|= 2 Fig. 4 : Fig. 4: Attained significance level of the tests based on resampled samples for the considered model p ci corresponding to λ = 1, B = 50 with respect to frac. Fig. 5 : Fig. 
5: Top panels: levels of significance for the asymptotic test. Bottom panels: comparison of the estimated and assumed number of degrees of freedom in the testing procedures: mean of 2n CM I based on 10 5 samples generated according to p ci , mean of 2n CM I * Fig. 6: Q-Q plots of the distribution of CM I versus the asymptotic distribution (gray), the exact resampling distribution (yellow) based on permutations, and the χ 2 distribution with an estimated number of degrees of freedom (green) under conditional independence for p ci . For the two last distributions, medians of 500 quantiles for resampling distributions, each based on 50 resampled samples, are shown. The straight black line corresponds to y = x. Fig. 7: Power of the tests based on resampled samples for the considered model for λ = 0.25, 0.5, 0.75 and B = 50 with respect to frac. Fig. 8: Comparison of resampling scenarios. Fraction of rejections for CP divided by fraction of rejections for CR for both exact and df-estimation tests for λ = 0.5 and B = 50. Lemma 7 Assume that as r → ∞, P (W p ci = p(x|z)p(y|z)p(z) and we define ptci (tci stands for true conditional independence) in the following way: ptci (x, y, z) = p(x|z) · n(y, z)/n(z) · n(z)/n =: p(x|z)p(y|z)p(z); thus, since p * follows the multinomial distribution with an observation (x, y, z) having a probability equal to ptci (x, y, z), conditionally on the original sample we have that E * p * (x, y, z) = p(x|z)p(y|z)p(z) (5). Indeed, D KL (p||q) − D KL (p||p ci ) We note that for any z the matrix R(z) defined as (R(z)) x x = r x x (z) = I(x = x )p(x|z) − p(x|z)p(x |z) = x,y x ,y ax,y r x ,y x,y (z) a x ,y = x,y x ,y ax,y r x x (z)p(y, z)p(y , z) a x ,y Conditional mutual information of random variables X and Y given Z = (Z 1 , Z 2 , Z 3 , Z 4 ), the joint distribution of which equals p λ = λp ci + (1 − λ)p, where p and p ci are characterized by the chosen models and parameters (see text). Table 1: Values of np min , where p min = min (x,y,z) p ci (x, y, z) or p min = min (x,y,z) p(x, y, z), with respect to n. frac values correspond to s = 4. • The power of the estimated-df test is consistently larger than that of the exact test, both for the CR and CP scenarios. The advantage is usually more significant closer to the null hypothesis (larger λ); • There is no significant difference in power between the df-estimation tests in the CR and CP scenarios apart from the region frac ∈ [0.5, 2]. The same holds for both exact tests excluding frac ∈ [0.5, 1.5]. Moreover, the df-estimation test for the CP scenario has larger power than the CR exact test.
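The conditional permutation (CP) resampling described in the text — drawing B permutations that shuffle X within each stratum of Z and recomputing the test statistic on every resampled sample — can be sketched as follows for discrete data. This is an illustrative Python sketch, not the authors' implementation: the plug-in CMI estimator, the variable names, and the Monte Carlo p-value convention are assumptions.

import numpy as np

def cmi_plugin(x, y, z):
    # Plug-in estimate of the conditional mutual information I(X; Y | Z) for discrete samples.
    cmi = 0.0
    for zv in np.unique(z):
        mask = z == zv
        pz = mask.mean()
        xs, ys = x[mask], y[mask]
        for xv in np.unique(xs):
            for yv in np.unique(ys):
                pxy_z = np.mean((xs == xv) & (ys == yv))
                if pxy_z == 0.0:
                    continue
                px_z = np.mean(xs == xv)
                py_z = np.mean(ys == yv)
                cmi += pz * pxy_z * np.log(pxy_z / (px_z * py_z))
    return cmi

def cp_permutation_test(x, y, z, B=50, seed=0):
    # CP scenario: permute X within each stratum of Z, keeping (Y, Z) fixed.
    rng = np.random.default_rng(seed)
    n = len(x)
    t_obs = 2.0 * n * cmi_plugin(x, y, z)
    t_star = np.empty(B)
    for b in range(B):
        x_perm = x.copy()
        for zv in np.unique(z):
            idx = np.flatnonzero(z == zv)
            x_perm[idx] = x[rng.permutation(idx)]
        t_star[b] = 2.0 * n * cmi_plugin(x_perm, y, z)
    # Monte Carlo ("exact") p-value that includes the observed statistic.
    return (1 + np.sum(t_star >= t_obs)) / (B + 1)

The alternative df-estimation procedure discussed above would instead compare the observed 2n·CMI with a χ2 distribution whose degrees of freedom are estimated from the mean of the resampled statistics.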
10,645.6
2022-10-04T00:00:00.000
[ "Mathematics" ]
A NEW FRAMEWORK FOR GEOSPATIAL SITE SELECTION USING ARTIFICIAL NEURAL NETWORKS AS DECISION RULES : A CASE STUDY ON LANDFILL SITES This paper briefly introduced the theory and framework of geospatial site selection (GSS) and discussed the application and framework of artificial neural networks (ANNs). The related literature on the use of ANNs as decision rules in GSS is scarce from 2000 till 2015. As this study found, ANNs are not only adaptable to dynamic changes but also capable of improving the objectivity of acquisition in GSS, reducing time consumption, and providing high validation. ANNs make for a powerful tool for solving geospatial decisionmaking problems by enabling geospatial decision makers to implement their constraints and imprecise concepts. This tool offers a way to represent and handle uncertainty. Specifically, ANNs are decision rules implemented to enhance conventional GSS frameworks. The main assumption in implementing ANNs in GSS is that the current characteristics of existing sites are indicative of the degree of suitability of new locations with similar characteristics. GSS requires several input criteria that embody specific requirements and the desired site characteristics, which could contribute to geospatial sites. In this study, the proposed framework consists of four stages for implementing ANNs in GSS. A multilayer feed-forward network with a backpropagation algorithm was used to train the networks from prior sites to assess, generalize, and evaluate the outputs on the basis of the inputs for the new sites. Two metrics, namely, confusion matrix and receiver operating characteristic tests, were utilized to achieve high accuracy and validation. Results proved that ANNs provide reasonable and efficient results as an accurate and inexpensive quantitative technique for GSS. 
INTRODUCTION Geospatial site selection (GSS) has attracted increasing attention from experts because of the hazards and problems related to unsuccessful site selection. GSS is a framework that assists decision makers in evaluating available maps for the selection of sites according to their suitability for a specific spatial target, such as landfills, schools, transportation stations, hazard zones, and new urban areas (Malczewski, 2004). The current framework used for determining suitable sites for any geospatial target is time-consuming (Guiqin et al., 2009) and involves multifaceted procedures because of the incorporation of different geospatial data from several disciplines (Gorsevski et al., 2012). In addition, the requirements of current administrative systems and the need to reduce environmental, economic, social, and health costs must simultaneously be addressed (Nazari et al., 2012). Furthermore, the available areas that can be used for evaluation are limited due to the "not in my back yard" phenomenon, particularly because locating hazardous sites such as landfills near residential areas is undesirable (Vasiljević et al., 2012). The application of GSS must thus consider all related criteria and variables (Ghobadi et al., 2013). GSS is a comprehensive representation of the collective stages of the overall framework for the selection of geospatial sites. In the present study, 75 models among the many existing models are reviewed (Ghobadi et al., 2013, Uyan, 2014, Guiqin et al., 2009, Saeed et al., 2012). The collective stages of these models are identified and subsequently summarized. Regardless of the singular variances among the different frameworks, the authors identify the five general stages of existing frameworks (Figure 1): (1) criteria input (collection of spatial data, data derivation, and geo-processing), (2) reclassification (normalization of constraint maps and factors as exclusionary criteria), (3) selection of weights by evaluating them according to their attributes, (4) objective balancing, and (5) overlaying all inputs via decision rule algorithms. Aggregation is implemented by multiplying the criteria with their weights and summing up the results for each alternative (pixel in raster data format) to identify the suitability value or index for all areas.
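The fifth-stage aggregation just described — multiplying each normalized criterion raster by its weight, summing per pixel, and masking by the constraint maps — can be illustrated with a short sketch. The layer names, weights, and array shapes below are hypothetical, not taken from any of the reviewed models.

import numpy as np

def weighted_linear_combination(factors, weights, constraints=None):
    # factors: list of 2-D arrays already rescaled to [0, 1]
    # weights: list of weights, typically summing to 1
    # constraints: optional list of binary (0/1) exclusion rasters
    suitability = np.zeros_like(factors[0], dtype=float)
    for layer, w in zip(factors, weights):
        suitability += w * layer
    if constraints is not None:
        for c in constraints:      # a single zero in any constraint map excludes the pixel
            suitability *= c
    return suitability

# Hypothetical example with two factor maps and one constraint map
slope = np.random.rand(100, 100)                 # already normalized
road_distance = np.random.rand(100, 100)
protected_area = (np.random.rand(100, 100) > 0.1).astype(float)
index = weighted_linear_combination([slope, road_distance], [0.6, 0.4], [protected_area])

Constraint maps act multiplicatively so that a single exclusionary criterion forces the suitability of a pixel to zero, which is the usual convention in weighted linear combination.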
In the literature, geographic information systems (GIS) have been employed in all of the stages involved in determining the target locations. In the first stage, maps and criteria are prepared via GIS. In the second stage, the candidate criteria or maps are reclassified or normalized to consolidate the dimensions of the values between maps. In this stage, the suitability of the normalization method and its accuracy in comparison with that of the standardization method are ambiguous, possibly because of the limited GIS modeling software or spatial modules. In the third stage, maps should be ranked or weighted. Diverse quantitative and qualitative characteristics, such as ecosystem quality, infrastructure conditions, public acceptance, aesthetic quality, financial cost, and time consumption, affect the weighting operation. For this stage, weight selection techniques such as Delphi, conflicting bifuzzy preference relation, analytic hierarchy process (AHP), fuzzy logic, and expert knowledge (decision makers, experts, interest groups, questionnaires, and stakeholders) are widely used. However, the majority of previous methods are qualitative because they rely on human interference, which is prone to errors and constraints. In particular, humans make quantitative estimates ineffectively, although they can efficiently establish qualitative estimates. Moreover, individuals are susceptible to biased tendencies, such as in cases in which decision makers are required to perform statistical estimation or weighting. Given their dependence on human behavior, qualitative methods require high levels of knowledge and understanding of the nature of spatial data and the areas under assessment. In addition, these methods may lead to irrelevant generalizations, wastage of effort and time, and high costs because of the need to consult experts, which could be inefficient. In the fourth stage, the criteria and objectives must be balanced; the balancing operation aims to provide multiple scenarios in the output, thus giving the decision makers more than one option. In addition, it provides a flexible evaluation process in case of fast changes in requirements. In the fifth stage, the criteria are aggregated with a decision rule algorithm, which performs a vital function in the overall GSS framework. Decision rules enable decision makers to evaluate the available alternatives (pixels) for selection on the basis of their suitability (Malczewski, 2004). In this stage, all the data are gathered to produce a suitability scale. Several studies have implemented various decision rules for GSS. Carrying out decision rules for GSS has been limited to a few popular methods, such as Boolean logic, weighted linear combination (WLC), simple additive weighting, and ordered weighted averaging, all of which represent the concept of multicriteria decision analysis. Recently, several studies have utilized GIS and the WLC technique as a hands-on mechanism for GSS (Ghobadi et al., 2013, Uyan, 2014, Guiqin et al., 2009, Saeed et al., 2012, Ahamad et al., 2011). Demesouka et al. (2013) proposed and developed a spatial decision support system to identify sites, which is proportionate to location appropriateness. Similarly, numerous studies have developed different methods for GSS (Ahamad et al., 2011). However, existing decision rules have certain limitations that lead to uncertainty and inaccurate results. These limitations include vagueness in testing the accuracy and validity of models.
Figure 1. General stages of the GSS framework (X = criterion, f = factor maps, c = constraint map, o = object). Previous decision rules (stage 5) and weight selection (stage 3) are valuable in the GSS framework (Uyan, 2014). However, the considerable weaknesses and restrictions of the existing approach could reduce the accuracy of results. These weaknesses can be solved efficiently and accurately with artificial neural networks (ANNs) (Quan and Lee, 2012, Jiang and Nan, 2006). ANN models first select the weights as the initial step and then implement, aggregate, and compare the results with target data according to certain metrics, such as the coefficient of determination (R2), to determine their accuracy. In this way, ANN models quantitatively deal with a large number of unknown inputs to determine the significance of each criterion. ANNs perform a substantial role because they can easily and flexibly determine weights quantitatively, especially under challenging conditions that involve uncertain and unavailable information on decision progression, ambiguous human recognition and feelings, and inefficient frameworks and qualitative methods. In general, an ANN is a non-linear method established by McCulloch and Pitts (1943). It is extensively used in classification and pattern recognition applications (García-Rodríguez and Malpica, 2010). Thus, the utilization of ANNs has caught the attention of various scholars, particularly because of their functions and capability of dealing with complicated decision-making problems. The goal of ANNs is to establish a network or software through the training and weighting operation. Thus, a network tool may be able to forecast outputs from input data (Lee et al., 2012). ANNs can be incorporated into the GSS process without the need to understand the nature of complex factors (Kia et al., 2012). Therefore, the broadened understanding of GSS models has given rise to the need to improve the tools for information use. Furthermore, ANNs can deal with unfamiliar data, multivariate dimension reduction, non-linear relationships, and complicated interrelationships among criteria (García et al., 2008). ANNs are decision-making rules that can achieve accurate valuation in the classification of oversized samples. An ANN assigns weights to different conditioning factors with negligible human interference. The weights indicate the relative importance of the factors (Li et al., 2012). ANNs are influential tools that are applicable in classification, prediction, and pattern recognition applications (Kia et al., 2012). ANNs have been applied in several areas, such as in the prediction and classification of variables, because of their advanced computing performance (Tayyebi et al., 2011). Recently, ANNs have been employed to solve compound spatial problems in non-GSS settings, such as the prediction of dissolved organic carbon, landslide susceptibility mapping (Quan and Lee, 2012, Pavel et al., 2008, Vahidnia et al., 2010, Conforti et al., 2014), mineral mapping, and flood simulation (Kia et al., 2012, Rigol Sanchez et al., 2003). Additional applications include simulation of urban growth, karst water flow forecasting (Wu et al., 2008), groundwater prediction (Newcomer et al., 2013), and risk assessment of earth fractures (Wu et al., 2004). Therefore, ANNs can potentially be a new approach to improving the GSS framework, particularly its third and fifth stages (Li et al., 2012, Paraskevas et al., 2014). However, very few studies have thoroughly investigated ANNs for GSS, particularly during the period from 2000 until
2015. The present study mainly aims to propose ANNs as new decision rules within a new framework to overcome human-interference issues and the limitations of previous frameworks, as well as to improve the accuracy and validity of GSS models by upgrading the GSS framework, especially its third and fifth stages. THEORETICAL REVIEW OF ARTIFICIAL NEURAL NETWORKS (ANNs) ANNs are mathematical models that mimic human behavior by emulating the operations and connectivity of organic neurons (Song et al., 2010). According to Li et al. (2012), an ANN is a "computerized instrument capable to obtain, represent and calculate maps from multidimensional space of data through specific dataset representing spatial data." Several types of ANN architecture have been utilized in previous studies, the most popular one being the multilayer feed-forward network. Under such architecture, information is simply moved in a forward direction. Backpropagation is the most common algorithm employed to train the network. The implementation of ANNs to derive solutions to non-geospatial problems involves different stages. From the literature, 107 articles covering several non-geospatial applications were reviewed, and the most common stages among them were extracted, summarized, and illustrated in Figure 2 (Conforti et al., 2014, Paraskevas et al., 2014). Regardless of the singular variances among the different frameworks, we explore the general frameworks (Figure 2), which can be divided into the following three stages: (1) data processing, (2) ANN modeling, and (3) evaluation or simulation. Data processing in this case includes specifying the input data, collecting spatial data, geoprocessing, extracting data, establishing the data set for ANNs, implementing standardization or normalization, implementing randomization, and dividing the data set into three sets for training, testing, and validation. ANN modeling includes the design of the neural network architecture and parameters, training of the network using the training algorithm, and testing the network to ensure that the required RMS accuracy is met. In case of failure, the analyzer must change the ANN size (number of nodes in the first layer) or reselect the training sets. The final step in ANN modeling is the identification of the best neural network architecture for simulation or evaluation. In the last stage of the framework, the entire dataset is evaluated or simulated. This approach can be modified, upgraded, and developed to improve the suitability of ANN application in GSS.
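As a rough illustration of the three-stage framework just outlined (data processing, ANN modeling, evaluation), the following Python sketch trains a multilayer feed-forward network with backpropagation on a table of site/non-site samples. The feature matrix, labels, hidden-layer size, and split ratios are placeholder assumptions rather than settings from the reviewed studies.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Stage 1: data processing -- assume a table of normalized criteria (features)
# and a binary target (1 = existing site, 0 = random non-site location).
X = np.random.rand(2000, 32)             # placeholder for 32 criteria
y = np.random.randint(0, 2, size=2000)   # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Stage 2: ANN modeling -- a multilayer feed-forward network trained by backpropagation,
# with part of the training data held out for validation-based early stopping.
net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500,
                    early_stopping=True, validation_fraction=0.15, random_state=0)
net.fit(X_train, y_train)

# Stage 3: evaluation / simulation on held-out data.
print("test accuracy:", accuracy_score(y_test, net.predict(X_test)))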
A NEW ANN FRAMEWORK FOR GSS The ANN framework is aimed at constructing a module that is capable of evaluating and generalizing yields from inputs; such objective has yet to be realized (Lee et al., 2012).To construct and implement ANNs in GSS, we propose four comprehensive stages that are developed from the previous GSS and ANN framework.The proposed framework is focused on constructing a network using quantitative methods while avoiding human interference to reduce financial cost by canceling the expertbased weight selection stage.In this way, high accuracy is achieved in the calibration and validation tests.The proposed framework is present in Figure 4, which illustrates the flow of the new framework, including the four stages, namely, (1) criteria input, (2) data processing, (3) ANN modeling, and (4) evaluation and spatial visualization.The criteria input stage involves input data identification, spatial data collection, geoprocessing, and spatial data derivation.All the tasks at this stage can be performed in the ArcGIS software using different spatial tools for either input or output (target data).The input data comprise various criteria, such as slope, elevation, land use, geology, and soil.The output or target data include the desired existing sites, which could be a point or a polygon.The input and output data must be shaped and converted to raster format.In constructing the raster layer especially for the target data and input criteria, the analyzer must ensure that the cell size of the raster layer clearly represents the small landmarks or ground features, such as schools, landfills, hazard sites, and new urban areas.To achieve an overlaid target, all the data layers must be assigned with the appropriate coordinate systems and with the same cell size and degree (number of columns and rows must be identical between the layers).The TIFF format is also acceptable because of its compatibility with MATLAB.The data processing stage involves the establishment of the data set for the ANN, which is initiated by extracting raster data through a sample point.The sample point needs to be constructed in ArcGIS to represent the input and output data, as well as the target and non-target locations.For the target location, several sample points must be constructed to facilitate the next training stage.Finally, the non-target location can be determined randomly from the areas outside the target location with the same number of desired sample points.The primary table (input data and target data or binary data) is then extracted using the model builder function of ArcGIS and the Extract Multi Values to Points tool to construct the final data set.Figure 3 illustrates the data extraction process for establishing the ANN data set. Figure 3. 
Data extraction process The primary table can be imported in MATLAB as a vector, including the input and target data, which require initial manipulation and implementation of the normalization process to unify the diminution scale among the input data while avoiding the over saturation of partial neurons (Jiang and Nan, 2006) and decreasing the difference in factorial magnitude (Song et al., 2010).Randomization must also be conducted to normalize the data distribution and feature selection via different statistical methods.In the ANN modeling stage, the ANN toolbox in MATLAB used for the various tasks, which include dividing the data set into three, namely, the training, testing, and validation sets.ANN modeling also involves the design of the neural network architecture and its parameters (number of nodes or hidden layer size), followed by the training of the network using training algorithms (trainlm, trainbr, trainbfg, traingdm, traingd, etc.).Generally, trainlm is the quickest training function; it is the default function for training inputs in feed-forward networks, but it tends to be inefficient in substantial networks (with a large number of weights), given long computation time and large memory required in spatial networks.In addition, the trainlm function performs better in nonlinear issues than in classification and recognition issues.Meanwhile, the trainrp and trainscg functions are excellent choices for training pattern recognition networks that involve large data sets.The memory requirements of the two functions are generally minimal, and they are considerably faster than standard gradient descent algorithms.After the network training, the network is tested to determine whether it meets the target MSE accuracy.If the goal is not met, the analyzer needs to change the ANN size (number of nodes) or the training algorithm or reselect the training sets.Thereafter, the best neural network architecture for the final simulation is identified.The final stage involves the evaluation or simulation of the entire data set through the new network, the results of which ranges from 0 to 1 in one column.These results must be reshaped to generate the final map.The matrix is then saved as ASCII with consideration of the coordinate systems, number of rows and columns, and cell size.The suitability map is finally presented in the GIS software. CASE STUDY: LANDFILL SITES This section is a practical implementation of section 3, to prove that ANN can be a new approach to improving the GSS framework. Study area Perak is a state located northwest of Peninsular Malaysia (lat 4°42'23.589''N, long 100°57'52.264"E (Figure 5).The total area of this state is nearly 21,035 km 2 , as shown in Figure 5.The Perak state area makes up 6.37% of the total area of Malaysia.In 2010, the total population in Perak was 2,258,428.Perak's climate is sunny and warm.The annual rainfall reaches 3,218 mm, and the relative humidity regularly exceeds 82.3% with a constant temperature ranging from 23 °C to 33 °C.Nearly half of the state elevations are flat while the other half are sloped, with elevations ranging from 1 to 3,978m. Stage 1 (criteria input and data collection) To develop the landfill GSS model, the relevant factors of landfill sites must be determined. 
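The last stage of the framework described above — scoring every raster cell with the trained network, reshaping the resulting vector of 0–1 suitability values back into a grid, and exporting it for GIS visualization — could look roughly like the following sketch. The chunked evaluation, the predict_proba interface (matching the scikit-learn-style network sketched earlier), and the plain-text export without a full ESRI ASCII header are simplifying assumptions.

import numpy as np

def evaluate_suitability(net, raster_stack, chunk_size=1_000_000):
    # raster_stack: array of shape (n_layers, rows, cols) holding the normalized criteria.
    # net: a fitted classifier exposing predict_proba, e.g. the MLP sketched earlier.
    n_layers, rows, cols = raster_stack.shape
    flat = raster_stack.reshape(n_layers, -1).T          # one row of features per cell
    scores = np.empty(flat.shape[0])
    for start in range(0, flat.shape[0], chunk_size):    # score the grid in sections to limit memory
        stop = min(start + chunk_size, flat.shape[0])
        scores[start:stop] = net.predict_proba(flat[start:stop])[:, 1]
    suitability = scores.reshape(rows, cols)             # suitability index in [0, 1]
    np.savetxt("suitability.asc", suitability)           # georeferencing/header handled separately in GIS
    return suitability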
Stage 2 (ANN data set processing) ANN data set processing is the first step in generating ANNs.Data processing involves three steps.First, data were extracted from the geospatial raster data through a sample point, as required in ArcGIS.Each landfill was represented by a sample point per 30 m in the ground to ensure that the cell size of the raster layer clearly represented the landfill area.A total of 4,082 points representing the landfill locations (1) were observed in the landfill area.In addition, 4,082 points representing nonlandfill locations (0) were determined randomly using Hawth's tools.In mining the data set, the thematic layer (input data and target data) was added to the ArcGIS model builder function.Afterward, the data set was extracted through the Multi Values to Points toolbox.The primary table of the data set was imported in MATLAB as a vector, including the input and target data.Second, the calculated normalized factors or non-dimensional parameters were scaled as continuous values ranging from 0 to 1 to ensure that all criteria are equally attended during the training stage and before the weighting process.The second stage was implemented through the following formula (Eq.10): where Third, all the data sets were randomized for random data distribution, which reduces the effect of regular data distribution and avoids bias in the training stage.All the data sets were randomized via a randomization function (randperm) (Allenmark et al., 2015).The ROC curve is another metric to determine evaluation accuracy via the area under receiver operating characteristic (AUC or AUROC) curve (García-Rodríguez andMalpica, 2010, Conforti et al., 2014). Figure 7. Combination of ROC curves for validation of training and testing data sets The AUC metric of discrimination indicates the capability of the ANN to categorize the samples properly and determine whether a site is suitable for landfill use.To determine the fit decision threshold, we illustrated via the ROC curve the performance of the ANN classifier, which defines model accuracy.This threshold metric of segregation between both classes takes values between 0.5 (no separation) and 1 (perfect separation).Hence, the upper left corner in the ROC figure depicts the perfect curve, which indicates the superior accuracy of the metric test and equivalent AUC (1); the area at the 0.5 point denotes inaccuracy.The following scales were employed to determine classifier accuracy: excellent = 0.9-1, good = 0.8-0.9,fair = 0.7-0.8,poor = 0.6-0.7,and fail 0.5-0.6 (Mehdi et al., 2011).Figure 7 shows that the resulting AUC of the ANN for the training data set is 0.979 while that for the testing data set is 0.975.These values reflect the high classification capability of ANNs. 
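The three data-processing steps of Stage 2 described at the start of this subsection — extracting a feature table at the landfill and non-landfill sample points, rescaling every factor to the 0–1 range, and randomizing the row order — might be sketched as follows. Since the formula referred to as Eq. 10 is not reproduced above, the min–max scaling used here is an assumption, and the array names are illustrative.

import numpy as np

def build_training_table(raster_stack, site_rowcols, nonsite_rowcols):
    # raster_stack: (n_layers, rows, cols); row/col index pairs mark the sample points.
    def sample(points):
        r = np.array([p[0] for p in points])
        c = np.array([p[1] for p in points])
        return raster_stack[:, r, c].T                   # (n_points, n_layers)
    X = np.vstack([sample(site_rowcols), sample(nonsite_rowcols)])
    y = np.concatenate([np.ones(len(site_rowcols)), np.zeros(len(nonsite_rowcols))])
    return X, y

def min_max_normalize(X):
    # Presumed form of Eq. 10: rescale each factor column to [0, 1].
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / np.where(x_max > x_min, x_max - x_min, 1.0)

def shuffle_rows(X, y, seed=0):
    # Analogue of MATLAB's randperm: a random permutation of the sample order.
    order = np.random.default_rng(seed).permutation(len(y))
    return X[order], y[order]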
Stage 4 (evaluation and spatial visualization) Immediately after the training and testing of the ANN model, the model evaluated each sample unit of the entire data set to generate the map of landfill suitability. Thus, the entire input data on Perak were clipped via the Mask tool (study area border) and converted into TIFF in ArcGIS with a unified cell size of 30 m and an identical degree (number of columns and rows). Thereafter, the entire TIFF image was imported in MATLAB as matrices. Given the vector requirement of MATLAB for ANNs, the entire input data set was normalized, and, given the enormous amount of data for evaluation (42,996,291), the data set was divided into six sections, with each section having 15,000,000 pieces of data. The entire set of sections was fed into the ANN model and evaluated. The resulting suitability value was represented via an index ranging between 0 and 1 and then saved as a vector with six sections. The sections were combined in one vector column and then reshaped into a matrix to establish the suitability map. The matrix was saved as an ASCII file with consideration of the number of rows, columns, coordinate systems, and cell size. The file was then converted to a GIS raster data file for spatial visualization. The results are shown in a suitability map in Figure 8. The suitability index of Perak for landfill sites was divided into 10 groups with varying levels of suitability (best suitability = 1, less suitability = 0). CONCLUSION The goal of this paper is to improve past GSS by presenting a new, efficient ANN-based framework for GSS issues using GIS tools. The framework is presented, first, to be quantitative, to overcome human-interference issues and the limitations of previous frameworks, and to enhance accuracy and validation; second, to reduce the financial cost of previous models. A new framework was developed to produce suitability maps at the regional scale using ANNs. A multilayer feed-forward neural network architecture with a backpropagation learning algorithm was employed. A list of 32 factors was utilized as the input data set, which was integrated in GIS. Several geo-processing and manipulation tasks were performed to prepare the final data set for ANN modeling. ANNs are advantageous because of their excellent data manipulation techniques, high tolerance to faults and failures as well as to imprecise and fuzzy information, non-linearity, high parallelism, generalization, robustness, tolerance to noisy information, and their capability of dealing with data sets quantitatively and efficiently. However, ANNs are a black-box technique, particularly because of the limited understanding of how ANNs learn specific issues and apply rules under different circumstances. The developed network was validated via a confusion matrix and ROC curves. Specifically, the existing target data and the evaluation outputs for the training data set and testing data set were compared. The results of the confusion matrix test showed overall accuracies of 93.3% for the training data set and 92.2% for the testing data set. The ROC curves showed an AUC of 0.979 for the training data set, and the AUC for the testing data set was 0.975. The completion of the final landfill suitability map indicated that the ANN technique is successful in identifying new and suitable landfill sites. Finally, the results showed excellent reliability, particularly in terms of the use of the ANN technique to produce landfill suitability maps. The result of the new framework revealed the applicability and
efficiency of ANN modeling for GSS, especially its high modeling accuracy and capability of eliminating human interference with minimal cost. Figure 2. General stages of the ANN framework. Figure 4. A new framework for implementing ANN in GSS. Figure 5. Study area (Perak state). These factors were determined according to the literature, including the Japan International Cooperation Agency Guideline 2005, the United Nations Environment Program, and the National Strategic Plan 2005. Thereafter, 32 input factors were identified to represent a variety of thematic layers (e.g., humidity, soil, geology, caves, dams, faults, aspect, slope, evapotranspiration, elevation, NDVI, land use, population, rivers, precipitation, production center, national roads, highways, local roads, schools, theatres, railways, museums, playgrounds, hospitals, natural parks, residential areas, airports, marine borders, coastline, local boundaries, and national boundaries). The data collection was separated into the input and target data. First, the input data were collected from different sources, such as the Malaysian Centre for Geospatial Data Infrastructure (MaCGDI) and the NASA website. Existing resources, such as those on slope and aspects, were also used to retrieve the input data. The data were georeferenced according to the Malayan Rectified Skew Orthomorphic Projection. Second, the target data were collected and established from diverse sources, such as the MaCGDI. All the data gathered for the existing sites were treated, and the maps of the landfill sites (open and closed sanitation) were built thereafter. The maps were digitized as a polygon and then unified at the border in ArcGIS to develop the binary map (0 and 1, where 0 = non-landfill, 1 = landfill). The developed target map represented the target set for the ANN data set. Stage 3 (ANN modeling) First, the data set was divided into two parts: 70% for training and 30% for validation. The training data set was further divided as follows via the patternnet function: 70% for training, 15% for testing, and 15% for validation. Second, the neural network architecture and its parameters were designed. The network was created in MATLAB via the (nnstart) GUI patternnet function (neural network pattern recognition toolbox) (Garcia-Breijo et al., 2011). The patternnet network is a specialized version of the multilayer feed-forward neural network architecture. Third, the network was trained using the trainscg function, which is the default function in the patternnet toolbox. The trainscg function adjusts the bias and weight values according to the scaled conjugate gradient backpropagation algorithm. The training algorithm starts by initializing the weights, then sums the weighted inputs, passes the result to the transfer function to obtain the scaled value, and compares the computed result with the actual output via the MSE; if the accuracy does not satisfy the goal, another iteration is performed, and so on until the best accuracy is reached. This process is accomplished wholly quantitatively, which in turn means avoiding human interference and reducing financial cost by canceling the expert-based weight selection stage. Figure 6. Validation of training and testing data sets using the confusion matrix. Fourth, we sought to acquire optimal accuracy by performing diverse experiments. The best network according to the evaluation was constructed by modifying
6,126.4
2015-10-19T00:00:00.000
[ "Environmental Science", "Computer Science", "Engineering" ]
Automated Diagnosis of Cervical Intraepithelial Neoplasia in Histology Images via Deep Learning Artificial intelligence has enabled the automated diagnosis of several cancer types. We aimed to develop and validate deep learning models that automatically classify cervical intraepithelial neoplasia (CIN) based on histological images. Microscopic images of CIN3, CIN2, CIN1, and non-neoplasm were obtained. The performances of two pre-trained convolutional neural network (CNN) models adopting DenseNet-161 and EfficientNet-B7 architectures were evaluated and compared with those of pathologists. The dataset comprised 1106 images from 588 patients; images of 10% of patients were included in the test dataset. The mean accuracies for the four-class classification were 88.5% (95% confidence interval [CI], 86.3–90.6%) by DenseNet-161 and 89.5% (95% CI, 83.3–95.7%) by EfficientNet-B7, which were similar to human performance (93.2% and 89.7%). The mean per-class area under the receiver operating characteristic curve values by EfficientNet-B7 were 0.996, 0.990, 0.971, and 0.956 in the non-neoplasm, CIN3, CIN1, and CIN2 groups, respectively. The class activation map detected the diagnostic area for CIN lesions. In the three-class classification of CIN2 and CIN3 as one group, the mean accuracies of DenseNet-161 and EfficientNet-B7 increased to 91.4% (95% CI, 88.8–94.0%) and 92.6% (95% CI, 90.4–94.9%), respectively. CNN-based deep learning is a promising tool for diagnosing CIN lesions on digital histological images. Introduction In 2018, cervical cancer ranked as the fourth most frequently diagnosed cancer and the fourth leading cause of cancer-related death in women worldwide [1]. Despite the decreasing incidence in developed countries due to active screening and vaccination for human papilloma virus (HPV), its prevalence and mortality are increasing in sub-Saharan Africa, southeastern Asia, eastern Europe, and South America. Histologically, the most common type is squamous cell carcinoma, and HPV is the virtually necessary (but not sufficient) cause of cervical cancer [1]. For early detection, screening methods, such as the HPV test, cervical cytology, and colposcopy, are recommended. However, the gold standard for diagnosing cervical lesions is the microscopic evaluation of histopathology by a qualified pathologist [2]. Premalignant lesions of the cervix, cervical intraepithelial lesions (CINs), are proliferations of squamous cells driven by HPV infection, showing maturation abnormalities and/or viral cytopathic changes that do not extend beyond the basement membrane [3]. CINs are graded as CIN1, CIN2, and CIN3, according to the extent of abnormal proliferation in the atypical basal/parabasal-like cells and mitotic activity [3]; in CIN1, atypical proliferation and mitosis occur up to the lower third of the epithelium along with koilocytotic atypia with clearly retained features of maturation. CIN2 shows basal/parabasal morphology and mitotic activity extending into the lower two-thirds of the epithelium, but with maturation in the uppermost cell layers. CIN3 demonstrates full-thickness basal/parabasal-type atypia and mitotic activity without maturation in the top-most epithelial layers.
Recently, due to the improved reproducibility and enhanced biological relevance, a two-tier terminology of low-grade squamous intraepithelial lesion (LSIL), which includes CIN1, and high-grade squamous intraepithelial lesion (HSIL), which may be subdivided into CIN2 and CIN3 is preferred in premalignant lesions of the cervix [2]. However, the CIN classification still has clinical importance. In the natural clinical course, LSIL has a low potential for progression and a high potential for regression, which it has been conservatively managed [4][5][6]. In contrast, HSIL was actively treated for cure due to a higher potential for progression and a lower potential for regression and the treatment was standardized irrespective of CIN2 and CIN3 [5]. In recent studies, the higher regression rates of CIN2 unlike CIN3 have led to the adoption of alternative conservative management strategies in women who wish to preserve fertility [6]. Consequentially, the updated guidelines by the American Society of Colposcopy and Cervical Pathology (ASCCP) in 2019 strongly recommended to qualify a histologic HSIL result by CIN2 or CIN3 for epidemiologic and clinical management purposes [6]. However, pathologists often encounter difficulties in accurately diagnosing and grading CIN [7]. The effects of inflammation, repair, pregnancy, and atrophy, as well as the inherent difficulty in distinguishing lesions with a morphologic spectrum, complicate it and may lead to substantial inter-observer and intra-observer variability [7][8][9]. The time pressure, workload, and limited experience of the pathologist may be other hindrances. With the increase in cervix specimens due to population growth, increased prevalence of cancers, and longer life spans, these obstacles will likely worsen in the future. In addition, due to the limited well-trained pathology workforce, the quality of pathology services is uneven nationwide and worldwide [10]. The use of automatic histology image classification can alleviate the scarcity in professional resources and heavy workloads. With the advancement of artificial intelligence (AI), machine learning techniques can be used as a major ancillary tool for diagnosing tumors in various organs based on the histological images. However, most recent studies have applied techniques for the detection and classification of invasive cancers [11][12][13][14][15][16] rather than intraepithelial or premalignant lesions. With regard to cervical lesions, some studies have been devoted to the creation of computer-assisted reading systems for assessing cervical cytology specimens [17,18] and only a limited number of studies have focused on examining CINs [19][20][21][22][23][24][25]. In this study, we aimed to develop and assess an optimal convolutional neural network (CNN) model for classification of CINs. Data Collection Female patients who were scheduled for colposcopic biopsy or conization due to suspicion of CIN at Kangnam Sacred Heart Hospital between 2015 and 2017 were retrospectively enrolled. This study was approved by the Institutional Review Board (IRB) of Kangnam Sacred Heart Hospital (IRB no. HKS 2018-03-013) and performed in accordance with the Declaration of Helsinki. One experienced pathologist (J-W.K.) 
reviewed the histological slides of tissue sections of the involved patients that were stained with hematoxylin and eosin (H&E) and p16 antibody (Roche E6H4 TM, catalog #725-4713) and obtained digital microscopic photographs of representative lesions at an objective magnification of 20× using a microscope (Olympus BX51; Melville, NY, USA) equipped with a digital camera (Olympus DP2). Photographic images were acquired in JPEG format with a resolution of 2560 × 1920 or 1280 × 960 pixels. Unsuitable blurred or defocused images were excluded from this study. Three experienced pathologists (J-W.K., M.H., and G-Y. K.) independently re-reviewed the digitalized H&E images along with the corresponding p16 immunohistochemical images. Blinded to the results of the other pathologists, and according to the 2019 World Health Organization (WHO) classification [3] and the 2012 Lower Anogenital Squamous Terminology (LAST) standardization project [2], the three pathologists classified the images into four classes: CIN3, CIN2, CIN1, and non-neoplasm. On H&E images, CIN1 was defined as a proliferation of basal/parabasal-like cells and mitosis (not atypical) restricted to the lower third of the epithelium along with koilocytotic atypia within the middle and surface cells. CIN2 was characterized by atypical basaloid cells and mitotic activity extending into the upper half to upper two-thirds of the epithelium, but with retained koilocytotic changes or maturation on the surface. CIN3 was defined as full-thickness basal/parabasal-type atypia and mitotic activity without maturation in the top-most epithelial layers. To aid in the distinction of CIN2/CIN3 from mimickers of precancer and CIN1, p16 immunohistochemistry (IHC) was adjunctively used [2,3]. Out of 1305 H&E images, 199 (15.2%) with any discrepancy were excluded and only images categorized in the same class by all three pathologists were included in this study. Ultimately, 1106 microscopic images from 588 patients were included: 266, CIN3; 231, CIN2; 266, CIN1; and 343, non-neoplasms (Table 1).
Table 1. Numbers of images and patients (images / patients) in the whole, training, and test datasets.
                Whole dataset    Training set    Test set
Overall         1106 / 588       989 / 542       117 / 68
CIN3            266 / 183        236 / 165       30 / 19
CIN2            231 / 108        210 / 97        21 / 11
CIN1            266 / 143        234 / 129       32 / 14
Non-neoplasm    343 / 250        309 / 225       34 / 25
N, numbers; CIN, cervical intraepithelial neoplasia. Dataset Construction From the whole dataset, the test dataset was randomly split three times with a ratio of 10% to evaluate the performance of the trained CNN models. Random train/test set splitting was performed for each class using the patient ID as the key to avoid the simultaneous involvement of the same class image of one patient in both the training and test datasets. For each of the three splits, the training set consisted of 90% of the whole dataset and was divided into the training dataset proper and the tuning (or validation) dataset, with a ratio of 80%:10%. Therefore, each CNN model was trained three times independently using three different split folders. Dataset Preprocessing All images were resized to 640 × 480 pixels, reducing the resolution of the whole images, and were normalized for each RGB color channel based on the mean and standard deviation values of the images in the ImageNet dataset. Data augmentation and histogram equalization were not performed as these methods did not improve the model performance in our pilot studies. Deep Learning Model Training Two CNN architectures were adopted: DenseNet-161 and EfficientNet-B7. The details of the CNN models are described in previous studies [26,27].
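A minimal sketch of the patient-level splitting and preprocessing described above is given below; it simplifies the per-class splitting to a single patient-level split, and the record structure and random seed are assumptions. The resize target and the ImageNet channel statistics follow the description in the text.

import random
from torchvision import transforms

def split_by_patient(records, test_ratio=0.1, seed=0):
    # records: list of (patient_id, image_path, label); all images of a patient
    # end up on the same side of the split to avoid leakage.
    patients = sorted({pid for pid, _, _ in records})
    random.Random(seed).shuffle(patients)
    test_ids = set(patients[:int(len(patients) * test_ratio)])
    train = [r for r in records if r[0] not in test_ids]
    test = [r for r in records if r[0] in test_ids]
    return train, test

# Resize to 640 x 480 and normalize each RGB channel with ImageNet statistics.
preprocess = transforms.Compose([
    transforms.Resize((480, 640)),           # (height, width)
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])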
Briefly, DenseNet-161 is characterized by a dense block that uses the feature maps of the previous layers as the input of the current layer [26]. EfficientNet is characterized by an MBconv block that balances the width and depth of the CNN via reinforcement learning [27]. These models were pre-trained using the ImageNet Large Scale Visual Recognition Challenge dataset and fine-tuned using the training dataset of this study. In the first experiment, the CNN models were trained to perform four-class classification that classified images into CIN3, CIN2, CIN1, and non-neoplastic lesions. Then, the CIN2 and CIN3 groups of the whole dataset were merged into one group, representing HSIL. In the second experiment, the training/test set splitting was re-performed; the CNN models were trained to perform three-class classification and classified the images into CIN2-3 and CIN1, which represent LSILs, and non-neoplasms. The model was trained using the PyTorch platform with categorical cross-entropy as the loss function. The Adam optimizer was adopted with a β1 value of 0.9 and a β2 value of 0.999. The learning rate was 1 × 10 −4 , and the batch sizes were 15 and 5 for DenseNet-161 and EfficientNet-B7, respectively. The number of training epochs was set to 100, and the model with the minimum validation loss was chosen. The hardware platform was equipped with NVIDIA GeForce GTX 1080ti 6-way graphics processing units, dual Xeon central processing units, 128 GB RAM, and a customized water-cooling system. Saliency maps were produced to identify the regions of interest with a gradientweighted class activation mapping (Grad-CAM) [28]. Grad-CAM method can highlight a class-specific local features in the image using gradient information. Overall, there are three steps to generate a class-specific saliency map. First, it computes the gradient of the logit for predicted class with respect to the last CNN layer which not only learns high-level abstract features but also retains spatial information. Then, the gradients are global-average-pooled to estimate the importance of feature maps in the last layer. Lastly, along with the importance weights, feature maps are averaged. In order to highlight only features that actually increase the value of class logit, ReLU function is applied to the averaged feature map [28]. In this study, an implementation of Grad-CAM for PyTorchbased models was used (available at: https://github.com/jacobgil/pytorch-grad-cam; accessed on 10 June 2021). Human Performance Evaluation For the first test dataset, two other experienced human pathologists, who were blinded to the true labels, independently classified the images, and the performances were evaluated. Human performances were compared with those of the CNN models. Main Outcome Measures and Statistical Methods The primary outcome was the model performance for four-class classification, while the secondary outcome was model performance for three-class classification. The performance of the CNN model was evaluated using three different test datasets, and the performance was estimated using means and 95% confidence intervals (CIs). The performance was evaluated using the diagnostic accuracy and the area under the receiver operating characteristic (ROC) curve (AUC). For each class, per-class sensitivity, specificity, positive predictive value, and negative predictive value were also evaluated. Continuous or categorical variables are expressed as means or percentages with 95% CIs. Statistical significance was set at p < 0.05. 
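The three Grad-CAM steps described above can be hand-rolled in a few lines of PyTorch; this is an illustrative sketch rather than the pytorch-grad-cam package used in the study. The feature_layer argument is assumed to be the last convolutional block of the chosen backbone (for example, model.features of a torchvision DenseNet), and the model is assumed to be in eval mode.

import torch
import torch.nn.functional as F

def grad_cam(model, feature_layer, image, class_idx):
    # image: tensor of shape (3, H, W); returns a saliency map normalized to [0, 1].
    activations, gradients = {}, {}
    fwd = feature_layer.register_forward_hook(lambda m, i, o: activations.setdefault("a", o))
    bwd = feature_layer.register_full_backward_hook(lambda m, gi, go: gradients.setdefault("g", go[0]))
    try:
        logits = model(image.unsqueeze(0))
        logits[0, class_idx].backward()                        # gradient of the predicted-class logit
        a, g = activations["a"], gradients["g"]
        weights = g.mean(dim=(2, 3), keepdim=True)             # global-average-pool the gradients
        cam = F.relu((weights * a).sum(dim=1, keepdim=True))   # weighted sum of feature maps + ReLU
        cam = F.interpolate(cam, size=image.shape[1:],
                            mode="bilinear", align_corners=False)[0, 0]
        return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    finally:
        fwd.remove()
        bwd.remove()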
Results A total of 1106 images from 588 patients were included in this study. The patients' mean age was 43.0 ± 12.4 years (range: 16-84 years). The patients' data are presented in Table 1. Non-neoplastic lesions comprised the majority class (343 cases from 250 patients, 31.0%) in the whole dataset, while CIN2 was the least common type (231 cases, 20.9%). The test dataset for the human performance evaluation comprised 117 images from 68 patients. Four-Class Classification Performance of Deep Learning Models and Human Pathologists The mean accuracies for the four-class classification (CIN3, CIN2, CIN1, and non-neoplasm) in the test dataset were 88.5% (95% CI, 86.3-90.6%) by DenseNet-161 and 89.5% (83.3-95.7%) by EfficientNet-B7, respectively (Table A1). The validation accuracy reached a plateau within 20 epochs during the model training, as shown in Figure 1. The overall accuracies for the four-class classification of human pathologists were 93.2% and 89.7%, respectively. The heatmaps for the confusion matrix of the best-performing models for the test dataset and human pathologists are presented in Figure 2. The per-class performances of the deep learning models are presented in Table 2, and Figure 3 depicts the per-class ROC curves for the best-performing CNN models. For both CNN architectures, the mean AUC was highest in discriminating non-neoplastic lesions (0.996 for DenseNet-161 and 0.996 for EfficientNet-B7). For both CNN architectures, the mean AUC was lowest in discriminating CIN2 lesions, but the individual AUCs remained high (0.947 for DenseNet-161 and 0.956 for EfficientNet-B7, respectively). In determining CIN3 lesions, EfficientNet-B7 showed a mean sensitivity of 97.5% (95.4-99.5%) and a mean specificity of 96.3% (94.1-98.6%). For the CIN1 lesions, EfficientNet-B7 presented a mean sensitivity of 85.2% (73.3-97.1%) and a mean specificity of 96.3% (95.1-97.6%).
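Per-class AUC values such as those reported above are typically computed one-vs-rest from the softmax outputs; the following sketch shows one way to do this, with placeholder names for the labels and predicted probabilities.

import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def per_class_auc(y_true, y_prob, class_names):
    # y_prob: softmax outputs of shape (n_samples, n_classes); one-vs-rest AUC per class.
    y_true = np.asarray(y_true)
    return {name: roc_auc_score((y_true == i).astype(int), y_prob[:, i])
            for i, name in enumerate(class_names)}

# Hypothetical usage:
# aucs = per_class_auc(y_test, softmax_outputs, ["non-neoplasm", "CIN1", "CIN2", "CIN3"])
# cm = confusion_matrix(y_test, softmax_outputs.argmax(axis=1))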
Figure 3. Per-class ROC curves for four-class classification by the best-performing CNN models. For DenseNet-161 (a) and EfficientNet-B7 (b) with best performance, AUC was higher in discriminating non-neoplasm and CIN3 than in classifying CIN2 and CIN1. Histologic Review of Misclassified Cases in Four-Class Classification Using Best-Performing CNN Models No false-positive cases were included in the best-performing CNN models, both DenseNet-161 and EfficientNet-B7 (Figure 2). False-negative cases were not observed by EfficientNet-B7; among 117 test cases, three CIN1s (2.6%) were classified as false-negative cases by DenseNet-161. After a histological review, it appeared that the scarcity of characteristic koilocytotic cells might have contributed to the misclassification (Figure A1a). Eight (9.2%) out of 87 CIN cases were unsuccessfully graded by DenseNet-161; one CIN3 (1.1%) and two CIN2 (2.3%) cases were downgraded as CIN2 and CIN1, respectively, while one CIN1 (1.1%) and four CIN2 cases (4.6%) were upgraded as CIN2 and CIN3, respectively (Figure 2a). EfficientNet-B7 misgraded five CIN2 cases (5.7%): four cases were classified as CIN1, while one case was classified as CIN3 (Figure 2b). None of the CIN1 cases were classified as CIN3, and none of the CIN3 cases were classified as CIN1. On histological review, the histology of CIN3 cases downgraded as CIN2 was not sufficient to be classified as carcinoma in situ. The cases showed basal/parabasal-type atypia throughout the full thickness of the epithelium (Figure A1b). CIN2 cases downgraded as CIN1 had atypia extending to the lower half of the epithelium with koilocytotic changes in the upper half and maturation in the uppermost layers (Figure A1c). The CIN1 case upgraded as CIN2 showed disoriented epithelium (Figure A1d). One of the CIN2 cases upgraded as CIN3 showed atrophy (Figure A1f). Three-Class Classification Performance of Deep Learning Models and Human Pathologists In the three-class classification discriminating the images into CIN2-3, CIN1, and non-neoplasm, the mean accuracies in the test dataset increased up to 91.4% (95% CI, 88.8-94.0%) by DenseNet-161 and 92.6% (95% CI, 90.4-94.9%) by EfficientNet-B7. The overall accuracies for the three-class classification of human pathologists were 95.7% and 92.3%, respectively.
Figure 4 shows the heatmaps of the confusion matrix of the best-performing models for the test dataset and human pathologists. The per-class performances of the deep learning models in the three-class classification are listed in Table 3. The mean AUCs for non-neoplastic lesions were 0.996 (95% CI, 0.992-0.999) for DenseNet-161 and 0.993 (95% CI, 0.985-1.000) for EfficientNet-B7. The mean AUCs for CIN2-3 and CIN1 were 0.981 and 0.974 for DenseNet-161 and 0.982 and 0.979 for EfficientNet-B7. In terms of determining CIN2-3 lesions, EfficientNet-B7 showed a mean sensitivity of 94.8% (92.7-96.7%) and a mean specificity of 93.4% (90.1-96.8%). Figure 5 shows the representative Grad-CAM images of non-neoplasms, CIN1, CIN2, and CIN3. Grad-CAM images were reviewed by a pathologist, and the region of interest of the deep learning model agreed with that of humans. The CNN model successfully detected squamous epithelium and recognized images from the transformation zone and exocervix, atrophic cervix, and cervicitis with erosion as non-neoplasms. In Grad-CAM images, CIN1, CIN2, and CIN3, characterized by the presence of koilocytotic cells or hyperchromatic atypical cells with a high nuclear/cytoplasmic ratio and increased mitotic activity, were depicted as highlighted areas. According to the distribution of abnormal cells, different layers of squamous epithelium were highlighted. Figure 5. Grad-CAM images by EfficientNet-B7. Normal squamous epithelium was highlighted in Grad-CAM images (a-d). Images from the cervix interpreted as non-neoplasm by EfficientNet-B7 include exocervix (a), metaplastic mucosa from the transformation zone (b), cervicitis and erosion (c), and atrophic mucosa (d). In CIN1, layers with koilocytotic cells were mainly highlighted (e). The highlighted areas extended to the upper two-thirds of the epithelium in CIN2 (f) and the full thickness of the epithelium in CIN3 (g). Normal endocervical glands ((g), black arrows) were not highlighted. Discussion In recent years, AI has been used in the field of pathologic image diagnosis, and many studies have shown promising results in detecting and diagnosing cancers in a variety of organs, including the stomach [29], colon [15], breast [30], prostate [16], head and neck [13], brain [14], and lungs [31].
As for cervical cancer, with the advancement in the management of preinvasive lesions, the increasing diagnostic workload of cervical biopsy calls for the development of high-performance algorithms with high sensitivity and specificity. Practically, many pathologists experienced more difficulty and burden in accurately classifying preinvasive cervical lesions than in distinguishing between invasive and non-neoplastic condition [7]. Therefore, we focused on developing an optimized CNN system for CIN grading. For the classification of premalignant lesions of the cervix, LSIL and HSIL have been the preferred terminology in both tissue and cytology specimens due to the improved reproducibility and biological relevance of the two-tier system [2]. A recent systemic review and meta-analysis of studies from 1973 to 2016 [32] indicated that among CIN2 managed conservatively, 50% regressed, 32% sustained, and 18% progressed to CIN3. The regression rate of CIN2 was higher (60%) in particular in women younger than 30 years and observation has been acceptable for CIN2 in the women [6,32]. Consequently, the subdivision of HSIL into CIN2 and CIN3 is essential for making treatment decision for young women. Hence, we investigated the four-class classification (CIN3, CIN2, CIN1, and non-neoplasm) performance as well as the three-class classification (CIN2-3, CIN1, and non-neoplasm) performance. Two CNN architectures, DenseNet-161 and EfficientNet-B7, were adopted in our study. EfficientNet-B7 is a recently developed heavy model and a state-of-the-art architecture; it showed better performance than DenseNet-161 [27]. However, DenseNet-161 also showed excellent performance and presented good cost-effectiveness [26]. In the four-class classification, the mean accuracies for DenseNet161 and EfficientNet-B were 88.5% and 89.5%, respectively, and the performance was similar to that of human pathologists (93.2% and 89.7%, respectively). The mean AUC values of both CNN models were considerably high in all four classes (Table 1). Furthermore, the mean accuracies of both models for three-class classification were increased to 91.4% and 92.6% by DenseNet-161 and EfficientNet-B7, respectively, which are almost the same levels as those of human pathologists (95.7% and 92.3%, respectively). Compared to the four-class classification, the three-class classification had imbalanced classes due to merging CIN2 and CIN3. The numbers of CIN2/3, CIN1, and non-neoplasm were 497, 266, and 343 and the CIN2/3:CIN1 ratio was 1.87. Although the ratio of CIN2/3 to CIN1 (or non-neoplasm) increased, the model learned the features of CIN1 and CIN0 and showed high accuracy for the minority classes. In fact, the per-class AUCs for CIN1 and non-neoplasm in EfficientNet reached 0.993 and 0.979, respectively. Consequently, the mean accuracies of the CNN models for three-class classification were increased than those for four-class classification. In other similar studies, the imbalanced datasets were used without a balancing strategy [33,34]. In the light of previous other studies and our experimental results, we concluded that the imbalance was not a critical issue in our study. However, in both four-class classification and three-class classification by EfficientNet-B7, the mean F1 scores of the minority classes, CIN2 and CIN1, respectively, were 79.1 and 86.8, which were lower than those of the majority classes. Balancing datasets and building large datasets might increase the F1 scores in the future research. 
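The per-class F1 scores mentioned above can be obtained directly from the predicted and true labels; the snippet below is a generic illustration with placeholder arrays, not the evaluation code used in the study.

import numpy as np
from sklearn.metrics import f1_score

# Placeholder labels; in practice these come from the test dataset and the model output.
y_true = np.array([0, 1, 2, 3, 1, 2, 0, 3])
y_pred = np.array([0, 1, 1, 3, 1, 2, 0, 2])
per_class_f1 = f1_score(y_true, y_pred, average=None)   # one score per class, in label order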
Through repeated validation and tests of the two CNN models, we determined the optimal image preprocessing conditions (a 640 × 480 pixel size and normalization of each RGB color channel based on the ImageNet dataset) under which the CNN models can achieve better performance. The most suitable resolution for deep learning of histopathological images is not yet known. Since preprocessing of high-resolution, large-scale histopathological images under memory limitations can induce the loss of important information, it must be handled carefully. Various methods to overcome this loss of information have been reported recently [35]. In the early stage of this study, we tried three different image resolutions for our four-class classification and observed that the resolution of 640 × 480 was not inferior to 1280 × 960 and 800 × 600 (data not shown). After this experiment, we decided to continue our experiments with the 640 × 480 image resolution, considering the training and evaluation speed. Since the deep learning models showed a level similar to human performance in the four-class classification at the 640 × 480 image resolution, we can conclude that the deep learning models perform effectively even after resizing the CIN images and that the resolution of 640 × 480 is appropriate for this study. In addition, we found that data augmentation and histogram equalization did not improve the model performance. To overcome a small number of training datasets in CNNs, image augmentation has been frequently used. In our pilot experiments, which applied horizontal flip and contrast-limited adaptive histogram equalization (data not shown), the model performance did not change significantly with these augmentation methods. Thus, we concluded that the augmentation methods are not likely to boost the model performance in this study. Approximately 90% of CIN1 cases regress without treatment, and less than 1% progress to invasive cancer, whereas the risk of progression of untreated CIN2 and CIN3 to cancer is estimated to be 0.5-1% per year [4]. Notably, CIN3 is a direct precursor of invasive cervical cancer, and active treatment is recommended. By contrast, observation is the preferred approach for CIN1. Therefore, it is much more critical to quickly distinguish CIN1 from CIN3 than CIN2 from CIN3. Despite some misclassification of CIN2, EfficientNet-B7 perfectly discriminated CIN1 from CIN3 in the four-class classification (Figure 2), and the results demonstrated its clinical applicability. We observed that the CNN models have a weakness in classifying CIN2 (75.2% and 73.0% sensitivities for DenseNet-161 and EfficientNet-B7, respectively). Considering that it is often challenging for pathologists to distinguish CIN2 from CIN1 and CIN3, and that inter-observer agreement is notoriously poor at this interface even among experts [4], the performance of these CNN models is almost similar to that of human pathologists even in this respect. Moreover, the difficulty in classifying CIN2 can be attributed to its inherent nature, which is intermediate in the morphological spectrum of CIN. Due to the ambiguity of CIN2 diagnosis based on H&E morphology, the LAST Project suggested that the addition of p16 immunohistochemical staining significantly improves the reliability of CIN2 diagnosis and advised the use of p16 staining to confirm the presence of a high-grade lesion when CIN2 is diagnosed based on an H&E slide [2].
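Looking back at the preprocessing conditions described at the start of this discussion, the sketch below shows one way such a pipeline could be expressed with torchvision; the 640 × 480 size and the ImageNet channel statistics come from the text, while the exact transform composition is an assumption for illustration rather than the authors' code.

```python
# Illustrative preprocessing: resize to 640 x 480 and normalize each RGB channel
# with ImageNet statistics. Horizontal flip is shown only as the pilot augmentation
# that was tried and later dropped.
from torchvision import transforms

IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

train_transform = transforms.Compose([
    transforms.Resize((480, 640)),              # (height, width) = 480 x 640 pixels
    transforms.RandomHorizontalFlip(p=0.5),     # pilot augmentation; did not help here
    transforms.ToTensor(),
    transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),
])

eval_transform = transforms.Compose([
    transforms.Resize((480, 640)),
    transforms.ToTensor(),
    transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),
])
```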
In future studies, analyzing H&E images along with images of p16 immunohistochemical staining would be helpful to increase the diagnostic accuracy of CNN models. For determining CIN1 lesions, the mean sensitivities were 82.1% and 85.2% for DenseNet-161 and EfficientNet-B7, respectively, which were lower than those of CIN3 and non-neoplasms. On histologic review, the scarcity of characteristic koilocytotic cells in CIN1, severe inflammation, and metaplastic changes might have contributed to the inaccuracy of the CNN classification. For more precise detection of koilocytotic cells, the CNN model needs to be improved. To reduce the false-positive rate, more variable non-neoplastic lesions, such as chronic cervicitis, metaplastic mucosa, and atrophy, should be included in the study set, and a repeat validation would be helpful. Automated screening machines have been developed for analyzing cervical cytology smears, and a few FDA-approved automated primary screening devices are available [17]. However, it is more difficult to develop an automated tool for cervical tissue histology due to the complexity of the patterns observed and the structural associations between different tissue components. Keenan et al. [21] developed a machine vision system for histological grading of CIN using the KS400 macro programming language. It was a scoring system that analyzed geometric data, and 62.7% of the CIN cases with captured images were correctly classified [21]. Several previous studies have used multiclass support vector machines and gray-level co-occurrence matrices to analyze whole slide images (WSIs) or selected images [22,25,36]. Despite some promising results, the small data size of less than 100 cases with insufficiently validated or curated images and the extremely complicated methodology limited the applicability of the study results. Huang et al. [23] proposed a method based on the least absolute shrinkage and selection operator and an ensemble learning support vector machine. They showed that the accuracy of normal-cancer classification was high (99.64%), but the accuracy of the LSIL-HSIL classification was 76.34%. A recent study that classified cervical tissue pathological images based on fusing deep convolution features has been published [37]. The researchers analyzed a dataset comprising small-sized images cropped from 468 WSIs, including those of normal tissues, LSIL, HSIL, and cancer. ResNet50v2 and DenseNet121 (C5) showed excellent performance, with an average classification accuracy of 95.33%. Pathologic classification is an image-based method, and the CNN is an optimized AI tool for image learning. Our study showed that the CNN is a robust instrument for pathologic classification, but some things must be considered. For a CNN to be developed and to work properly, collecting a large amount of accurate data is of utmost importance. Since a CNN reproduces what it has learned from its input very faithfully, the quality of the CNN output depends absolutely on the quality of the input data. In order to develop a clinically relevant CNN model for pathologic diagnosis, a superb dataset from expert pathologists must be constructed. Recently, Meng et al. provided a public cervical histopathology dataset for computer-aided diagnosis, called MTCHI [24]. Pathologic diagnosis is sometimes equivocal and might be challenging to perform for some lesions in the gray zone or lesions with reactive changes. Therefore, pathologists should continue to improve and to make objective pathological diagnoses.
In addition, high-quality H&E slide images are needed for AI to perform a pathologic diagnosis. Although staining and mounting are automated, embedding and sectioning in the preparation of pathology slides are still performed manually. Artifacts in the production process, such as tissue overlapping, tangential embedding, and poor sectioning, hinder the acquisition of focused images and cause AI to make diagnostic errors. We aimed to develop an artificial intelligence technique for classifying CIN from the WSIs of cervical biopsies, but some practical difficulties were observed. In the WSIs of tissues, grading of intraepithelial neoplasia or dysplasia is much more complicated than finding lesions or cancer. Since CIN is a morphological spectrum, cervical biopsy specimens show large differences in disease degree and a mix of lesions. This makes it difficult for pathologists to precisely annotate according to the CIN grade in small biopsies. Compared with other tissues such as the breast, colon, and stomach, the specimens used for cervical biopsy are tissue strips or irregular in shape and often include only a small amount of epithelium. Moreover, they are easily embedded in a disoriented or tangential manner. These were obstacles to making a standardized dataset using WSIs suitable for training and validation of the CNN model. In this study, we built a reliable dataset of CIN provided by three qualified pathologists and analyzed the CNN performance prior to its application to WSIs. To enable the future broad application of AI-based pathology in cervical biopsy, it is essential to build a large-scale multicenter dataset with a standardized protocol. Another limitation of our study is that there was still a gap between the training and validation accuracies, although we tried several strategies for image normalization, data augmentation, and loss function optimization. Novel approaches to these issues might improve the final model performance in the future. In conclusion, we built a reliable dataset for CIN classification and showed that EfficientNet-B7 and DenseNet-161 provided promising performance in classifying cervical lesions on digital histology images. In terms of accuracy, EfficientNet-B7 had a functional advantage over DenseNet-161. Grad-CAM images of the CNN models located the areas where CIN lesions can be found. Moreover, we realized that the accurate identification and classification of CIN by a CNN relies entirely on the standardized diagnosis of pathologists, and the professional knowledge and analytical experience of pathologists are the cornerstone of technical advancement. An exquisite AI tool trained using a well-established and standardized dataset would be helpful in improving pathology services worldwide. Informed Consent Statement: Patient consent was waived since the study used leftover specimens. Data Availability Statement: The data that support the findings of this study are available from the corresponding author upon reasonable request. Acknowledgments: We would like to thank the members of the Department of Pathology, Kangnam Sacred Heart Hospital, Hallym University College of Medicine, for technical support: Hyo Jung Kim, Da Hye Kim, and Eun Jung Kim. Conflicts of Interest: The authors declare no conflict of interest. Appendix A Table A1. Accuracies of deep learning models (four-class and three-class classification). Figure A1. Histology of misclassified cases by CNN models. A case with scarce koilocytotic cells but basal atypia was false-negative (a).
CIN3 showing basal/parabasal-type atypia throughout most, but not all, of the epithelium was downgraded to CIN2 (b). CIN2 (c) downgraded to CIN1 showed koilocytotic changes in the upper half and maturation in the uppermost layers but had atypia focally extending to the lower half of the epithelium (black arrow). In CIN1 upgraded to CIN2, the epithelium was disoriented (d). CIN2 with koilocytosis (e) and atrophic CIN2 (f) were upgraded to CIN3.
8,017.8
2021-09-09T00:00:00.000
[ "Medicine", "Computer Science" ]
Large negative dispersion in photonic crystal fiber by applying gold nanoparticles in square cladding In this paper, a new structure of photonic crystal fibers (PCFs) with large negative dispersion is presented in order to compensate for the positive dispersion at a wavelength of 1.55 μm. The proposed PCF has a square lattice structure. The basic cladding structure consists of four square rings, each containing a number of circular holes. In the final proposed structure, the two inner rings of the cladding are reshaped relative to the two outer rings. The first inner ring near the core has stellar-shaped holes with gold nanoparticles at their centers, and the second inner ring has circular holes with a smaller diameter than the two outer rings. The base material of this fiber is silica. The use of stellar-shaped holes with gold-nanoparticle cores produces a large negative dispersion, which compensates for the positive dispersion. Simulation results show that the minimum dispersion is −4593 ps/(nm.km) at a wavelength of 1.55 μm, which brings a significant improvement compared to other similar references. Introduction Photonic crystal fibers (PCFs) are one of the favorable topics that have attracted a great deal of attention in optical communication applications. A PCF is a two-dimensional photonic crystal containing a central defect region surrounded by multiple air holes. Many optical devices can be made using photonic crystals, such as optical switches (Takiguchi Apr. 2020; Rao et al. 2020), filters (Zare and Gharaati 2020; Mirjalili et al. 2020), and optical fibers. PCFs have different types of geometric structures, such as square or hexagonal lattices (Lee et al. 2018; Faisal et al. 2018). Compared to conventional optical fiber, PCF has unique features such as endlessly single-mode propagation, highly configurable dispersion, and high birefringence, and it is used in optical components, wavelength division multiplexing, and material processing. Photonic crystal fibers have the ability to modulate dispersion, which is very important in the design of dispersion-compensating fibers. In fact, if a fiber has a negative dispersion in a wavelength range, it can compensate for positive dispersion in this wavelength range. To minimize losses and reduce costs, the dispersion of the fiber should be as small as possible (Monfared and Mojtahedinia 2014). Dispersion in a fiber occurs when an optical pulse moves through an optical fiber and its power changes over time, resulting in the pulse spreading over a wider period of time. Chromatic dispersion is one of the properties of optical fiber that causes different wavelengths of light to propagate at different speeds along the optical fiber (Monfared and Mojtahedinia 2014). In (Khoobjou et al. 2021) and (Khoobjou et al. Nov. 2020), PCF structures with low losses and dispersion have been proposed. Long-distance transmission causes the pulse to become too wide and information to be lost. It is therefore necessary to use a dispersion compensation fiber in the transmission path. A dispersion fiber with a negative dispersion value re-compresses the broadened pulse. The idea of using PCF to compensate for dispersion is proposed in Kumar et al. (2016). Many structures have been proposed for dispersion compensation in photonic crystal fibers. In (Li et al. 2009), a highly negative dispersion PCF with a central index dip in a low-germanium-doped core is presented.
In (Biswas 2019), a modified hexagonal circular photonic crystal fiber with large negative dispersion is presented. A large negative dispersion of −1044 ps/(nm.km) at the wavelength of 1.55 μm was reported for the optimum geometrical parameters. In (Dhanu Krishna et al. 2018), a PCF having a hybrid lattice structure with low values of dispersion and confinement loss has been presented. A circular PCF with a dispersion of 103.50 ps/(nm.km) and a confinement loss of 5.97e-6 dB/m was created by arranging its first ring as an octagonal lattice of eight holes. A simple circle-based star-shaped PCF is presented in Ahmed et al. (2019). Its geometrical structure is very suitable for achieving ultra-low effective material loss and a very large effective area. A high core power fraction, single-mode operation over the whole investigated range, and a negligible scattering loss of 1.235e-15 dB/cm were calculated at f = 1.0 THz. Ref. (Monfared and Ponomarenko 2018) presents carbon-disulfide-filled photonic crystal fibers (CS2-PCFs) with ultra-high nonlinearity and tunable nearly-zero flattened dispersion. The nonlinear coefficient is 7940 W⁻¹ km⁻¹, the total loss is lower than 0.3 dB/m, and the dispersion is 0.00007 ps/(nm.km) with a dispersion slope of 0.0000018 near the 1550 nm wavelength. In (Kwasi Amoah et al. 2019), a PCF with three zero-dispersion wavelengths (ZDWs) is presented. This structure shows a very influential spectral density compared to that of single- or double-ZDW PCFs. The chromatic dispersion obtained is −220.39 ps/(nm.km) over the wavelength range of 1.53-1.8 μm. These characteristics can be used for applications such as supercontinuum generation, soliton pulse transmission, detection or sensing, and optical communication systems. In (Naghizade and Mohammadi 2019), a 1 × 4 photonic crystal fiber power splitter (PCFPS) which has very low dispersion and very low loss is proposed. An optofluidic material was added to some of the inner holes, in addition to changing their diameters, to obtain an optimal case of dispersion and loss in the PCF. The lowest dispersion, below 2.5 ps/(nm.km), and a loss below 0.025 dB/cm were reported in the paper. In (Fiaboe et al. 2019), an ultra-low confinement loss PCF with flattened dispersion is presented, which applies a hexagonal PCF structure with six rings of holes. There are three inner rings of elliptical holes arranged in a rhombic shape in its structure. The authors report very low dispersion (less than 10 ps/(nm.km)) at the higher communication wavelengths. Meanwhile, zero dispersion has been obtained at smaller optical wavelengths, and the confinement loss is ultra-low. A dual concentric cladding PCF was presented in Mohammadzadehasl and Noori (2018) that has low loss and near-zero ultra-flattened dispersion. An ultra-flattened dispersion of 1.69 ± 0.08 ps/(nm.km) was achieved in the wavelength range from 1 to 2 μm. The PCF has a loss below 10e-14 dB/km. This design presents low dispersion with the lowest confinement loss and the largest effective area. In this paper, a novel photonic crystal fiber with a large negative dispersion coefficient is presented. Gold nanoparticles are used in the center of the proposed stellar structure in the inner ring of the fiber. As a result, the dispersion coefficient of the final proposed structure is a large negative value, and this can compensate for a larger positive dispersion value. The rest of this paper is organized as follows: In Sect. 2, the proposed PCF structure is presented. Simulation results and discussion are presented in Sect. 3.
Finally, conclusions are presented in Sect. 4. The proposed structure One type of fiber dispersion is chromatic dispersion, which occurs because different wavelength components of a light pulse move at different speeds within the fiber. Chromatic dispersion is the most important dispersion factor in single-mode fibers and is calculated by Eq. 1 (Saitoh and Koshiba 2003; Begum et al. 2007), where D_c is the chromatic dispersion coefficient, Δt_c is the time spreading, L is the fiber length, Δλ is the spectrum width, λ is the wavelength, C is the speed of light in vacuum, and Re(n_eff) is the real part of the effective refractive index. The basic proposed structure of the PCF is shown in Fig. 1 (the basic structure of the square lattice PCF). The cladding has a square geometric shape. There are four square rings with circular holes in this structure. The diameter of the holes in the second ring (from the core side) is half the diameter of the other holes in the cladding. In this figure, d1 is the diameter of the small circular holes, d2 is the diameter of the big circular holes, and Ʌ is the distance between the centers of two circular holes. The parameters of the basic proposed fiber are given in Table 1 (values in μm): d1, the diameter of the air holes of the second ring, is 0.40; d2, the diameter of the air holes of the other rings (except the second ring), is 0.80; and Ʌ, the distance between holes in the square lattice structure, is 1.00. In the next step of developing the design, the final PCF is proposed in Fig. 2 (the final proposed fiber structure with a stellar-shaped inner ring and gold nanoparticles). As shown in Fig. 2, the first ring (from the core side) is changed relative to the basic design. This inner ring consists of five small circular holes, which we call the stellar structure. Its central circular hole is filled with gold nanoparticles. In Fig. 2, d1 is the diameter of the small circular holes, d2 is the diameter of the big circular holes, Ʌ is the distance between the centers of two circular holes, ds is the diameter of the stellar-shaped circular holes, and Ʌs is the distance between the centers of two circular holes in the stellar structure. The parameters of the final proposed structure are given in Table 2. By increasing or decreasing the values of d1, d2, Ʌ, ds, and Ʌs, the absolute value of the negative dispersion decreases and moves out of the optimal state. The values of these parameters were optimized using the PSO algorithm in Lumerical software; in the optimization configuration section of the software, the PSO algorithm is selected as the optimization solver. In this paper, the minimum dispersion value is obtained for the parameter values mentioned in Table 2. The minimum dispersion value at 1.55 μm is −4593 ps/(nm.km). Simulation and results The basic proposed structure and the final proposed structure were simulated with Lumerical software. To simulate the fiber, we first defined the structure, dimensions, and materials of the proposed optical fiber in Lumerical. The software applies numerical solution methods (such as the finite element method) to solve the equations governing the optical fiber (Snell's law, Maxwell's equations, etc.) and calculates and plots different curves, such as the dispersion curve. The mode field distribution and the refractive index of the basic square fiber are shown in Figs. 3 and 4, respectively.
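For reference, the chromatic dispersion relation cited as Eq. 1 is commonly written as below; this is a reconstruction based on the variable definitions given in the text and the cited Saitoh-Koshiba formulation, so the exact typeset form in the original may differ.

```latex
% Presumed form of Eq. 1: chromatic dispersion from the measured time spreading
% and from the second derivative of the real part of the effective index.
D_c \;=\; \frac{\Delta t_c}{L\,\Delta\lambda}
    \;=\; -\,\frac{\lambda}{C}\,
          \frac{\mathrm{d}^{2}\,\mathrm{Re}\!\left(n_{\mathrm{eff}}\right)}{\mathrm{d}\lambda^{2}}
```

On the same assumption, the short sketch below estimates D(λ) numerically from sampled values of Re(n_eff); the wavelength grid and index values are hypothetical placeholders, and this illustrates the relation only, not the authors' Lumerical workflow.

```python
# Numerical estimate of D(lambda) = -(lambda/C) d^2 Re(n_eff)/d lambda^2 from sampled data.
import numpy as np

C = 299_792_458.0  # speed of light in vacuum, m/s

def chromatic_dispersion(wl_um, n_eff_real):
    """wl_um: wavelengths in micrometres; n_eff_real: Re(n_eff) at those wavelengths.
    Returns D in ps/(nm.km)."""
    wl_m = np.asarray(wl_um, dtype=float) * 1e-6
    dn_dl = np.gradient(np.asarray(n_eff_real, dtype=float), wl_m)   # dRe(n_eff)/dlambda
    d2n_dl2 = np.gradient(dn_dl, wl_m)                               # d^2Re(n_eff)/dlambda^2
    D = -(wl_m / C) * d2n_dl2                                        # in s/m^2
    return D * 1e6                                                   # 1 s/m^2 = 1e6 ps/(nm.km)

wavelengths = np.linspace(1.40, 1.68, 29)                            # hypothetical sweep, um
n_eff = 1.444 - 0.002 * (wavelengths - 1.55) ** 2                    # hypothetical index curve
print(chromatic_dispersion(wavelengths, n_eff)[15])                  # value at 1.55 um
```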
Table 2 lists the parameters of the final proposed structure (values in μm): d1, the diameter of the air holes of the second ring, is 0.4; d2, the diameter of the air holes of the other rings (except the second ring), is 0.8; Ʌ, the distance between holes in the square lattice structure, is 1; ds, the diameter of the air and gold holes in the stellar structure, is 0.18; and Ʌs, the distance between holes in the stellar structure, is 0.2. Fig. 3 shows the mode field distribution of the basic square lattice fiber at the wavelength of 1.55 μm. The dispersion diagram of the basic proposed structure is shown in Fig. 5, where the dispersion versus wavelength is illustrated from 1.4 to 1.68 μm. According to the dispersion characteristic of the basic fiber in Fig. 5, the minimum dispersion is equal to −847 ps/(nm.km) at a wavelength of 1.57 μm. In the final proposed fiber structure, the inner ring has stellar-shaped holes with gold nanoparticles at their centers. The use of this structure produces a large negative value of dispersion. The refractive index is the ratio of the speed of light in vacuum to the speed of light in the medium. The complex refractive index is defined for lossy media, and the imaginary part of the refractive index expresses the loss of the medium. The lower the imaginary part of the refractive index, the lower the dispersion. The complex index of refraction n_eff is defined in Eq. 2 as n_eff = n + ik, where n is the real part and k is the imaginary part of the refractive index. The relationship between k and the wavelength is expressed in terms of the wavelength λ, the angular frequency ω, and the speed of light in vacuum C. Lumerical software calculates and plots the curve of k versus wavelength using the above equations (Figs. 4 and 7). The dispersion diagram of the final proposed PCF is shown in Fig. 8, where the dispersion versus wavelength is illustrated from 1.5 to 1.68 μm. According to Fig. 8, the minimum dispersion of this structure is more negative compared to the basic structure. The minimum dispersion has moved from −847 ps/(nm.km) at a wavelength of 1.57 μm in the basic structure to −4593 ps/(nm.km) at a wavelength of 1.55 μm. Table 3 compares the results of the final proposed structure with other similar studies. The dispersion of our proposed structure has a large negative value compared to these works; as shown in Table 3, the negative dispersion of the proposed structure at the wavelength of 1.55 μm is significantly larger in magnitude than in the other studies. In (Adams and Henning 1990), it has been shown that the losses at this wavelength are minimal, and for this reason this wavelength is mostly used in fiber-optic communications. The main purpose of our proposed structure is to have a large negative dispersion at a wavelength of 1.55 μm in order to compensate for the positive dispersion. The use of gold nanoparticles in the fiber cladding structure has caused this large amount of negative dispersion. In (Bulbul et al. Nov. 2020), a rectangle-based, porous-core photonic crystal fiber has been presented, in which rectangular air holes are used in the fiber core and cladding. Also, a novel PCF-based sensor has been presented in Al-Mamun Bulbul et al. (2021) to sense different chemicals and bio-components. The use of metal nanoparticles in the fiber cladding structure (as in our proposed structure) could help reduce the dispersion in these two references. Conclusion In this paper, a new PCF structure with a large negative dispersion value is presented. The geometric shape of the cladding is square.
In the original design, four rings of circular holes are used, with the hole diameter of the second ring (from the core side) being half the diameter of the other circular holes. This structure was simulated using Lumerical software. The simulation results show a minimum dispersion value of −847 ps/(nm.km) at the wavelength of 1.57 μm. The original design was completed by replacing the inner ring with five smaller circles arranged in a star shape. In this proposed structure, gold nanoparticles are used at the center of the stellar structure. The simulation result shows a minimum dispersion value of −4593 ps/(nm.km) at the wavelength of 1.55 μm, which is a significant decrease compared to the basic structure. The use of the stellar structure in the inner ring, as well as the gold nanoparticles at its center, creates a large negative dispersion in the PCF characteristic. As a result, we can use this proposed structure to compensate for the positive dispersion. Funding For this study, no funding was received. Availability of data and material All data (used in this study) are available inside the paper. Code availability The proposed structure was simulated using Lumerical software. Conflict of interest The authors declare that they have no conflict of interest.
3,522.4
2021-03-22T00:00:00.000
[ "Physics", "Engineering", "Materials Science" ]
Flouting Maxim in BBC Series "Sherlock: A Study in Pink" This research, entitled "Flouting Maxims in BBC Series Sherlock: A Study in Pink", aims at investigating the flouted maxims and analyzing the context of situation behind the flouted maxims produced by the characters in Sherlock: A Study in Pink. Sometimes a speaker blatantly and deliberately fails to fulfill certain maxims because the speaker wants to express an implicit meaning hidden behind the literal meaning. The data were taken from the utterances of the characters in "Sherlock: A Study in Pink". The observation method was used in collecting the data since the data were obtained from a spoken source, namely a movie. The method and technique of analyzing the data used in this study was the descriptive qualitative method. By applying Grice's cooperative principle and his theories on flouting, together with the theory of Context of Situation by Halliday and Hassan (1985), certain conversations show recurring uses of flouts and the reasons behind them in the context of situation. From the analysis, the most flouted maxim in the collected data is the maxim of relation, and the least flouted maxims are the maxims of quantity and quality. There are several utterances that have flouted more than one maxim. Another point is the context of situation behind the flouted maxims that occurred in the data. Based on the analysis, there are several contexts of situation that made the characters flout the maxims. By knowing the context of situation, the participants can easily infer the meaning behind the flouted maxim. As stated in the Oxford Dictionary (Hornby, 2010: 320), a conversation is an informal talk involving a small group of people or only two. A conversation can be considered a "success" when there is no misinterpretation or misunderstanding between the participants. Searle (1969:17) stated that the goal of spoken interaction or conversation is to communicate things to the participants by getting them to recognize the real intention behind every utterance. To have a good conversation, the participants must deliver the message clearly to the other participants. The participants sometimes use several kinds of long utterances to extend the message. If the speakers use ambiguous utterances, the listener will not understand what the speakers are talking about, and the listener will misunderstand the meaning. Within a conversation, situation plays an important role in delivering the meaning of the conversation, because situation makes a difference both in the way the participants of the conversation engage with each other and in the way they interpret other participants' utterances. Here is an example of how important situation is in a conversation: sarcasm would not be understandable to people who did not know the situation behind the utterance. Similarly, a person pouring their heart out to a dear friend is different from doing so with a young child. On the other hand, cooperativeness must be maintained by both participants in order to make conversations run smoothly in a given situation. The cooperative principle, also known as Grice's Maxims, is divided into four kinds of maxims. It describes specific rational principles observed by people who obey the cooperative principle; these principles enable effective communication. Grice stated that each participant of the conversation must obey four conversational maxims: the maxim of quantity, the maxim of quality, the maxim of manner, and the maxim of relevance.
(Wijana: 1996). In certain cases in a conversation, when the maxims are not obeyed, we cannot convey the information that should be understood by the other participants. Flouting a maxim is thus the reason why some conversations can still be understood by the participants. Flouting a maxim is a particularly salient way of getting an addressee to draw an inference and hence recover an implicature (Grundy, 2000:78). It means that when flouting occurs, it makes the listener think about the meaning of what has been said. Flouting is not entirely negative; there are some cases in which we have to flout a maxim to have a proper conversation. Problems of the Study Based on the background of the study, there are two problems that can be formulated. Research Method A research method is a systematic plan for conducting research. The research method deals with the methods used in analyzing this study. At this point, the writing procedure must be followed in order to obtain the expected and qualified outcome. The research method is divided into four points: (1) data source, (2) method and technique of collecting data, (3) method and technique of analyzing data, and (4) method and technique of presenting the result of data analysis. Data Source The data in this study were taken from a series broadcast on BBC One UK, entitled Sherlock: A Study in Pink, on 25 July 2010. This series was inspired by a short story published in 1887 entitled A Study in Scarlet. The story features one of the most famous characters of all time, Sherlock Holmes, and was written by Sir Arthur Conan Doyle. The screenplay of A Study in Pink was written by Steven Moffat, and the episode was directed by Paul McGuigan. Method and Technique of Collecting Data The method of collecting data was the documentation method. The data were obtained by downloading the movie from the internet. The techniques of collecting data in this study were: first, watching and listening to the movie carefully in order to comprehend the utterances and to see the relationships between the characters; second, reading the screenplay, which was downloaded from the internet, in order to catch any missing words; and third, selecting the relevant data by taking notes of the dialogues or utterances related to Grice's Maxims. This study also used simple random sampling, from probability sampling theory, as a method to choose the data discussed in chapter three. Method and Technique of Analyzing Data The method and technique of analyzing data in this study is the qualitative research method supported by the quantitative method. Descriptive analysis was conducted by calculating the total number of flouted maxims. To analyze the first problem, which is "What types of maxims are flouted in the BBC Series Sherlock: A Study in Pink?", this study used the theory of flouting of the four maxims by Herbert Paul Grice (1975). To analyze the second problem, concerning the context of situation behind the flouted maxims, this study used the theory of context of situation by Halliday and Hassan (1985). Method and Technique of Presenting the Result of Data Analysis The data were presented with the descriptive method. There are several steps in presenting the result of the analysis: first, the data were numbered and then presented in the form of dialogues; the data in the second section are accompanied by pictures of the scene where the dialogue occurred.
For the analysis, there are two points. The first point is the analysis of the first problem, which is the type of flouted maxim. The types of flouted maxims are described after the data are presented, in the form of paragraphs and in a descriptive method. The second point is the analysis of the context of situation behind the flouting; this part explains the context of situation and analyzes the second problem. Result and Discussion There are two points of analysis. The first is the analysis of flouting in the form of the four conversational maxims (maxims of quantity, quality, manner, and relevance) found from the observation of the movie and with a little help from the screenplay. The second point is the context of situation based on the theory of Halliday. The data are presented first, followed by the analysis of the first and second problems. Afghanistan. Sorry, how did you know...? Analysis: The maxims flouted in the utterances of Sherlock were the maxim of manner, the maxim of quantity, and the maxim of relevance. The first utterance that flouted the maxims was stated by Sherlock. He asked, "Afghanistan or Iraq?". The second utterance is where John asked him to be more specific, but instead he just repeated the same question, stating "Which was it - Afghanistan or Iraq?". Both of the utterances flouted the maxim of manner and the maxim of relation or relevance. The utterances flouted the maxim of manner because, first, the utterances are ambiguous. Based on the context of the situation in the previous paragraph, there are no leads as to what Sherlock was actually asking for. This is also based on the character background of Sherlock: as we know, he did not know anything about John before, yet he just asked about the two countries all of a sudden. For the viewers and the readers, this really flouts the maxim of manner because what exactly Sherlock was asking could not be known from the utterance. In the second utterance, too, when John asked him to be more specific, Sherlock just repeated almost the same question, making it more ambiguous for people who did not know the background of the story. The second reason why the utterances are considered to flout the maxim of manner is that the conversation is not conducted in an orderly manner. This especially applies to the first utterance by Sherlock, which was "Afghanistan or Iraq?". It is not in order because usually, when you meet a person for the first time, all you have to say might be a greeting, an introduction of yourself, or a question about their name or job. But in this conversation, Sherlock just straightforwardly asked about the nation or country where John had served as an army doctor, making the other speaker confused. Both of the utterances flouted the maxim of relation or relevance because, as can be seen from the context of the situation and the order of the conversation, the utterances from Sherlock can be seen as "sentences that come from nowhere". No leads can actually connect with the question from Sherlock Holmes. No relation can be made with the previous conversation, so for people who did not know about Sherlock Holmes' character, it would only give them confusion. As for the maxim of quantity, the utterance seems to flout this maxim too because the utterance gives less information than it should. It just straightforwardly asked about two nations without context.
For people who did not share this knowledge, it would be very confusing. They would wonder what he was actually asking about: is it about nationality, or about favorite countries? That is the reason why the utterances are considered to flout the maxim of quantity. LESTRADE: Why d'you keep saying suitcase? SHERLOCK: Yes, where is it? She must have had a phone or an organiser. Find out who Rachel is. LESTRADE: She was writing "Rachel"? SHERLOCK: No, she was leaving an angry note in German. Of course she was writing Rachel; no other word it can be. Question is: why did she wait until she was dying to write it? LESTRADE: How d'you know she had a suitcase? Analysis: The utterance from Sherlock stating "No, she was leaving an angry note in German. Of course she was writing Rachel; no other word it can be. Question is: why did she wait until she was dying to write it?" was the one that flouted a maxim based on Grice's (1975) theory. This quoted utterance is the one that flouted the maxim of quality. Grice stated that if the speaker wants to obey the maxim of quality, the speaker has to make a contribution that is true and must not state something believed to be false. The quoted utterance can be categorized as sarcasm. Sarcasm is a literary and rhetorical device meant to mock, often with satirical or ironic remarks, with the purpose of amusing and hurting someone or some section of society simultaneously, which means that in sarcasm the utterance's meaning is sometimes contrasted with the actual meaning the speaker intends to deliver. The utterance can clearly be categorized as flouting the maxim of quality because what Sherlock uttered is actually what he believed to be false. The utterance is considered one not believed by the speaker himself because of the context of the situation. As we know, the first person to suggest that RACHE was a German noun meaning "revenge" was Anderson, and Sherlock had found this opinion from Anderson to be untrue. Another reason is that after uttering "No, she was leaving an angry note in German", Sherlock added words emphasizing that he did not believe what he had said in the first utterance by saying "Of course she was writing Rachel; no other word it can be", which is the complete opposite of the first utterance he stated. His tone and expression are another reason why the utterance is said to be sarcastic and to flout the maxim of quality. Analysis: The tenor of the conversation involves three people at the same time. The mode of the conversation is a face-to-face conversation that happened in one of the laboratories inside St. Bartholomew's Hospital. The conversation happened between Sherlock Holmes and John Watson, the two prominent characters in the story, and their colleague Mike Stamford. This conversation is one of the highlight scenes in the series because it is the first time the two main characters meet; John Watson, who in the series will be known as the partner of the consulting detective Sherlock Holmes, shares a talk with him. After Sherlock had finished his experiment at the morgue, he came to one of the labs to do some experiments in order to solve the case. When he was in the lab, Mike came with one of his old friends, John Watson.
John and Mike were talking about getting John Watson a flatmate, or someone who wanted to share a flat with John, because he could not afford to pay the loan alone, and Mike had suggested one of his colleagues at Bart's, Sherlock Holmes, who was also searching for a flatmate. For this reason, both of them agreed to go to Bart's and find Sherlock Holmes. This was how the background story of the conversation started to evolve. Sherlock did not really pay attention to Mike or John and kept doing his laboratory research. After several minutes passed, Sherlock Holmes asked Mike Stamford to borrow his phone, but unfortunately Mike had left his phone inside his coat, so he did not bring it to the lab. John then let Sherlock know that he could borrow his phone. As Sherlock walked towards John, Mike introduced John as his old friend. Unexpectedly, Sherlock asked about Afghanistan or Iraq, which made John bewildered as Sherlock typed on John's phone. John then asked what he meant, or for the question to be repeated once more, by saying "Sorry?". Sherlock did not explain the question; instead, he repeated it once more, saying "Which was it? Afghanistan or Iraq". Timidly and with confusion, John said "Afghanistan. Sorry, how did you know ...?" John was confused about how Sherlock, a stranger at that time, knew about his past service in the army as an army doctor and asked exactly about the country where he had served. In the meantime, Mike had been smiling all the time because he knew Sherlock's "uniqueness". The context of situation above describes the meaning behind the flouting of the maxims of manner and relevance. Sherlock wants to be to the point. From the character background, we can see that Sherlock is not the type of person who likes to beat around the bush before starting a conversation. His ability to deduce someone based on what he sees also worsens the lack of communication in Sherlock's life. As normal humans, if we wanted to ask about a person's past life, we would probably ask after we got closer, but Sherlock, who has the ability to deduce a person's life, did not need to ask about trivial things and instead just asked about something he was not sure of, in this case, the country where John had served as an army doctor. Conclusion Based on the collected data, it can be concluded that the most flouted maxim in the collected data is the maxim of relevance or relation, and the least flouted maxims are the maxims of quantity and quality, with the same total of data. Another point is the context of situation behind the flouted maxims that occurred in the data. For the most flouted maxim, which is the maxim of relation or relevance, the context of situation behind it is mostly that the speaker does not want to continue or answer the given questions, or the speaker does not really care about the conversation. The context of situation behind the flouting of the maxim of manner is mostly that the character is a straightforward person. The context of situation behind the flouting of the maxim of quantity is mostly that the speaker wants to give more information or to describe their opinion. And the context behind the flouting of the maxim of quality is that the speaker wants to convince the other person by giving a temporary hypothesis, or the speaker wants to describe an opinion which is not true based on the storyline. 7. Reference Grice, Herbert Paul.
(1975), Logic and Conversation, New York: Academic.
4,125.8
2018-08-01T00:00:00.000
[ "Linguistics" ]
The Effect of Diversity Implementation on Precision in Multicriteria Collaborative Filtering This research was triggered by criticism of the emergence of homogeneity in the recommendations of collaborative filtering-based recommender systems, which put similarity as the main principle of the algorithm. To overcome the problem of homogeneity, this study proposes a novelty, i.e., the diversity of recommendations applied to multicriteria collaborative filtering-based document recommender systems. The development of recommendation diversity was carried out with two techniques: the first compares content similarity, and the second uses a variation of the criteria. The application of diversity, both content- and criteria-based, was proven to provide a sufficiently significant influence on the increase of recommendation precision. I. INTRODUCTION The development of collaborative filtering-based recommender systems always puts the aspect of similarity as the main reference in the algorithm, and the main parameter used to assess performance is the accuracy of prediction. Therefore, most studies on recommender systems are focused on improving the accuracy of predictions, including when developing a multicriteria collaborative filtering model [1] [2] [3]. The implication of implementing similarity is that the resulting recommendations are homogeneous in nature. This is the advantage of the collaborative filtering approach, but on the other hand it can also be a disadvantage. The homogeneity of recommendations is due to the process in the collaborative filtering algorithm that does not involve the description or the content of the recommendation object, so the system does not accommodate the existence of new items [4]. The adverse effect is the case where many objects whose content is very interesting for the users are never promoted to be part of a list of recommendations. Based on this fact, it is necessary to conduct a study focused on the development of recommendation diversity while remaining in the corridor of multicriteria collaborative filtering. Recommendation diversity is very important to take into account because it is closely related to the level of user satisfaction. In fact, it is not only important in recommender systems, but is also very important in developing models of information retrieval systems and social media, which are developing very rapidly. The ideas of diversity developed in this study are of two kinds, content-based diversity and criteria-based diversity, both applied to the multicriteria collaborative filtering model. Besides prediction accuracy, there is another parameter used to measure the performance of recommender systems. The parameter is recommendation precision, which is defined as a number that indicates the percentage of items that were given a high predictive value by the recommender system, as well as by the users. In this study, the influence of the implementation of recommendation diversity on the increase of precision in the multicriteria collaborative filtering applied to construct a scientific document recommender system will be measured. The paper is organized as follows. Section 2 describes the content-based diversity. Section 3 describes the criteria-based diversity. The testing of recommendation precision is presented in Section 4, while the discussion of the test results is written in Section 5. The paper is concluded by Section 6. II.
CONTENT BASED DIVERSITY The objects of recommendation in this study are scientific documents in text format, making it possible to analyze their content. The results of the analysis of a document can be compared with the contents of other documents. The results of the comparison of document contents generate similarity values that are then used as the basis for determining which documents need to be recommended to the users, with the guideline that the higher the content similarity, the lower the document diversity [6] [7]. The scenario for determining diversity based on document content can be explained as follows: a) The documents whose contents are analyzed are those already included in the Top-N list produced by the multicriteria collaborative filtering engine. b) Content analysis is sufficiently done on the document abstract. c) The content analytic process is meant to find out or measure the similarity. d) One of the documents with a high enough similarity value is chosen to be included in the list of recommendations. The analytic process of document content is done in two steps, i.e., the indexing process and the similarity measurement. B. Document Indexing A document index is a set of terms representing the content. Each document is represented by a bag-of-words. The process starts by transforming the document into a bag containing independent words. Each word is stored in a database that is arranged as an inverted index. The arrangement of the inverted index requires linguistic processing with the aim of extracting important terms by deleting stop-words and stemming. The definition of stop-words is 'words that have no relevance to the main subject, although the words often appear in many documents'. Examples of stop-words include a, an, all, also, after, although, because, beside, every, the, this, it, these, those, his, her, my, our, their, your, few, many, several, some, for, and, nor, but, or, yet, so, if, unless, on, off, over, of, during, etc. Meanwhile, stemming is an operation to obtain the root form of a word by deleting the prefix or suffix. With this technique, a group of related words, where the words in the group are variants, is obtained. As an example, the words write, written, writer, and writing are interchangeably used in terms of the general stem write. Forming the inverted index requires five steps, i.e.: a) The deletion of format and document markup with many tags and formats, such as HTML documents. b) Tokenization. The words in a sentence, paragraph, or page are separated into tokens, or pieces of a single word or stemmed word. Included in this step are the deletion of certain characters, such as punctuation marks, and the conversion of all tokens into lower case. c) Filtering, i.e., determining which terms will be used to represent the document in order to describe the document content and distinguish it from other documents. Terms with a very high frequency of appearance cannot be used for this purpose because they are unable to be discriminators between documents, and are often called poor discriminators. Moreover, terms often appearing in many documents also do not reflect the definition of the topic or subtopic of the documents. Therefore, frequently used terms can be considered stop-words and must be deleted. In order for the process of stop-word deletion to go fast, a stop-word book, or stop-list of the terms that will be deleted, must be arranged.
d) The retrieval of terms into a root form. A document can be expanded by searching for synonyms for certain terms within it. Synonyms are words that have a similar meaning but seem morphologically different. This step is similar to the stemming process, but what we want to find is a group of related words. The main difference is that a synonym does not share the use of the term, but is found based on a thesaurus. e) The weighting of terms. To do the weighting, a local or global weighting model, or a combination of both, can be selected. The model often used in several applications is a combined weighting given by the multiplication of the local weight of term frequency and the global inverse document frequency, written as tf.idf [8]. C. The Measurement of Content Similarity To measure the similarity of text-formatted documents, the bag-of-words needs to be converted first into the vector space model, with each document represented as a multidimensional vector whose dimensions correspond to the extracted terms in the database. Figure 1 shows an example visualization of a three-dimensional vector space model with the terms T1, T2, and T3 as well as two documents D1 and D2. The database of documents is represented by a term-document or term-frequency matrix, where each cell contains the given weight. A value of zero shows that the term does not appear in the document. Figure 2 is an example of a term-document matrix for a database containing n documents and t terms. Based on the term-document matrix formed and the tf-idf weighting, the numeric value of each document can be known, and thus the inter-document nearness can be calculated. The nearer the two vectors are, the more similar the two documents. The similarity of text document content can be calculated using the cosine similarity formula. For two vectors representing documents d_j and d_k, the content similarity between both documents is defined as the cosine of the angle between the two vectors [9]. To make understanding easier, an example of three documents given in vector representation was used, and the inter-document similarity values were calculated: (1) the similarity of D1 and D2, (2) the similarity of D1 and D3, and (3) the similarity of D2 and D3. From the three values of document similarity, it can be seen that document D3 had similarity with the other two documents. The smallest similarity value was obtained between document D1 and document D2, so the document prioritized for recommendation was D1. III. CRITERIA BASED DIVERSITY Referring to the construction of a document recommendation system using multicriteria collaborative filtering, there is actually room for engineering at the recommendation generation step to ensure the presence of diversity [2] [10]. The new concept of diversity is sufficiently based on the four individual criteria that were determined and used from the start, different from the concept of document content-based diversity, whose process is long and needs the indexing step. The construction of a document recommendation system using multicriteria collaborative filtering whose recommendation generation takes criteria-based diversity into account is shown in Figure 3. Figure 3 shows that the scenario run is still within the collaborative filtering paradigm with four individual document criteria and one overall criterion. Only the modification of the document selection procedures and the generation process is required, so that the recommended documents are more varied.
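As a concrete illustration of the content-based diversity step from Section II (tf-idf indexing, pairwise cosine similarity, and selection of the least-similar Top-N candidate), the following is a minimal sketch with hypothetical abstracts; it is not the authors' implementation, and the document identifiers and texts are placeholders.

```python
# Content-based diversity sketch: tf-idf vectors of abstracts, cosine similarity,
# and picking the Top-N candidate that is least similar to the rest.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

top_n_abstracts = {  # hypothetical Top-N documents from the MCF engine
    "D1": "collaborative filtering for scientific document recommendation",
    "D2": "multicriteria collaborative filtering and rating prediction for documents",
    "D3": "content analysis of scientific abstracts with tf-idf indexing",
}

ids = list(top_n_abstracts)
vectorizer = TfidfVectorizer(stop_words="english")           # stop-word removal + tf-idf weighting
tfidf = vectorizer.fit_transform(top_n_abstracts.values())   # term-document matrix

sim = cosine_similarity(tfidf)                                # pairwise cosine similarities
np.fill_diagonal(sim, 0.0)                                    # ignore self-similarity

# The document with the lowest total similarity to the others is the most
# content-diverse candidate to promote into the recommendation list.
most_diverse = ids[int(sim.sum(axis=1).argmin())]
print(most_diverse)
```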
In multicriteria collaborative filtering, the prediction process of ratings is done for each criterion [11]. So, by using the four individual criteria and one overall criterion, five prediction values are actually generated, and each can be used to generate a recommendation. For each criterion (topic, novelty, recency, author, and overall), a Top-N list of documents with the highest predictive values is generated. After this step, one document is taken from each list and put into the list of documents recommended to the users. Thus, there will be five document variations recommended based on different criteria, although it is still possible for the same document to emerge. IV. EXPERIMENT Initially, precision was used only in information retrieval systems and has always been associated with another metric called recall [5]. For measuring the precision in this study, the term is modified to "Precision in Top-N", which is defined as the percentage of documents with a high prediction value that become the N most relevant documents for the users. In the testing, a rating value is categorized as high if the value is larger than or equal to 4.0. The measurement of precision was done when the number of users and documents reached 200x400, while the rating value used was the value for the overall criterion. The variation of the testing was based on the neighborhood size, determined by the number of users with the highest similarity values. In the testing, three neighborhood sizes were selected: 5 users, 10 users, and 50 users. For the last option, the large neighborhood size makes the meaning of nearest-neighborhood biased and the computational load large, but measurement under this condition was still needed to assess the performance of the system. Meanwhile, the Top-N values used were 5, 10, and 15. The testing was done under two different conditions. The first was when the system did not yet apply the recommendation diversity, and the second was after the system applied the diversity. There were three variations of recommender systems run in the testing, namely: classic collaborative filtering (CF), multicriteria collaborative filtering using cosine-based similarity (MCF Cosinus), and multicriteria collaborative filtering using multidimensional distance-based similarity (MCF MD Distance). The results of the testing of precision before the implementation of diversity are presented in Table 1. From Table 1, it can be seen that the larger the neighborhood size used, the higher the precision given by all of the recommendation system models. TABLE I. PRECISION WITHOUT DIVERSITY IMPLEMENTATION. From a computational perspective, it can be concluded that the more members are accommodated in the collaborative process, the more relevant and appropriate the resultant recommendations are for meeting the needs of the users. Observation of the determined Top-N size also gave the same information, where the larger the Top-N size, the larger the precision value of all the models. The highest value of recommendation precision was reached by MCF MD Distance at the neighborhood size of 50 users and with a Top-N of 15 documents, i.e., 76.4%. The results of the measurement of recommendation precision after the content-based diversity was applied are presented in Table 2.
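Before turning to those results, the following is a minimal sketch of the "Precision in Top-N" metric defined above, i.e., the share of the Top-N recommended documents whose actual (overall) rating is at least 4.0; the document identifiers and ratings are hypothetical placeholders, not the study's data.

```python
# Illustrative "Precision in Top-N": fraction of the Top-N recommended documents
# whose actual overall rating is high (>= 4.0).
def precision_in_top_n(predicted, actual, n=10, threshold=4.0):
    """predicted: {doc_id: predicted rating}; actual: {doc_id: actual rating}."""
    top_n = sorted(predicted, key=predicted.get, reverse=True)[:n]
    relevant = sum(1 for doc in top_n if actual.get(doc, 0.0) >= threshold)
    return relevant / len(top_n)

predicted = {"D1": 4.7, "D2": 4.4, "D3": 3.9, "D4": 4.1, "D5": 2.8}
actual = {"D1": 5.0, "D2": 3.5, "D3": 4.0, "D4": 4.5, "D5": 3.0}
print(precision_in_top_n(predicted, actual, n=3))  # 2 of the top 3 are rated >= 4.0
```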
The precision values increased significantly, as can be clearly seen by comparing them with the recommendation precision measured when diversity was not applied, presented in Table 1.

TABLE II. PRECISION WITH CONTENT-BASED DIVERSITY

The concept of content-based diversity can be applied in all of the recommendation system models, while the concept of criteria-based diversity can only be applied in the multicriteria collaborative filtering model, with a process scheme illustrated in Figure 3. The results of testing the effect of applying the criteria-based diversity concept on the increase in recommendation precision also give sufficiently positive information; they can be seen in the recommendation precision measurements presented in Table 3. The results of the testing further affirm that the larger the neighborhood size used, the higher the precision given by all recommendation system models. Moreover, the larger the Top-N value selected, the larger the precision value for all models. The highest recommendation precision was reached by MCF MD Distance at a neighborhood size of 50 users and a Top-N of 15 documents, i.e., 77.5%.

V. DISCUSSION

The idea of recommendation diversity was introduced to provide added value, making it possible for users to get documents that are more relevant to their needs. It can be expected that after getting relevant documents, users will be satisfied and give those documents high ratings. This is consistent with the theory of consumer behavior, which explains that when people feel satisfied and happy with a service, they give it high and sustained appreciation. The more documents the users rate highly, the greater the increase in recommendation precision. For this reason, in this testing, recommendation precision was measured while recommendations were generated with content-based and criteria-based document diversity.

The two concepts of diversity give a special character to the process of generating recommendations, so that there is diversity within uniformity. The higher the level of content similarity, the lower the level of document diversity. The main implication of applying content diversity is that, among the documents with high ratings, some documents with relatively different contents are selected. Criteria-based diversity is determined simply by criteria variation: among the documents with high ratings, several documents favored by different criteria are selected.

VI. CONCLUSIONS

Based on the results of measuring recommendation precision, it can be concluded that the application of diversity in a multicriteria collaborative filtering-based document recommendation system had a positive effect, namely an increase in recommendation precision. This can be interpreted to mean that users fundamentally want varied recommendations, even when these are generated by a system built on the collaborative filtering concept, which is based on the principle of similarity. The results of the study indicate that every effort to develop recommender systems should accommodate the idea of diversity in order to produce recommendations that are more relevant and better able to meet the subjective needs of the users. Thus, the principle of similarity in collaborative filtering can be enriched with the feature of diversity.

Fig. 1. An Example of a Three-Dimensional Vector Space Model
Fig. 3. The Construction of the Multicriteria Collaborative Filtering (MCF) Recommender System Model Applying Criteria-Based Diversity
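The content-based diversity discussed above (picking, from among the highly rated documents, several whose contents differ from one another) can be read as a greedy re-ranking over the cosine similarities introduced earlier. The Python sketch below is one assumed interpretation of that step, not the paper's algorithm; `doc_vectors` are tf.idf vectors, `cosine_similarity` is the helper from the earlier sketch, and the 0.8 similarity cap is an invented threshold.

```python
def content_diverse_selection(candidates, predicted, doc_vectors, n, max_sim=0.8):
    """Greedily pick n documents: walk the candidates from highest predicted rating downward
    and accept a document only if its content similarity to every already-selected document
    stays below max_sim (an assumed threshold), so the final list mixes different contents."""
    ranked = sorted(candidates, key=lambda d: predicted[d], reverse=True)
    selected = []
    for doc in ranked:
        if all(cosine_similarity(doc_vectors[doc], doc_vectors[s]) < max_sim for s in selected):
            selected.append(doc)
        if len(selected) == n:
            break
    # If the similarity constraint leaves the list short, top it up with the best remaining documents.
    for doc in ranked:
        if len(selected) == n:
            break
        if doc not in selected:
            selected.append(doc)
    return selected
```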
3,757.6
2014-01-01T00:00:00.000
[ "Computer Science" ]
Air-Liquid-Interface Differentiated Human Nose Epithelium: A Robust Primary Tissue Culture Model of SARS-CoV-2 Infection The global urgency to uncover medical countermeasures to combat the COVID-19 pandemic caused by the severe acute respiratory syndrome-coronavirus 2 (SARS-CoV-2) has revealed an unmet need for robust tissue culture models that faithfully recapitulate key features of human tissues and disease. Infection of the nose is considered the dominant initial site for SARS-CoV-2 infection and models that replicate this entry portal offer the greatest potential for examining and demonstrating the effectiveness of countermeasures designed to prevent or manage this highly communicable disease. Here, we test an air–liquid-interface (ALI) differentiated human nasal epithelium (HNE) culture system as a model of authentic SARS-CoV-2 infection. Progenitor cells (basal cells) were isolated from nasal turbinate brushings, expanded under conditionally reprogrammed cell (CRC) culture conditions and differentiated at ALI. Differentiated cells were inoculated with different SARS-CoV-2 clinical isolates. Infectious virus release into apical washes was determined by TCID50, while infected cells were visualized by immunofluorescence and confocal microscopy. We demonstrate robust, reproducible SARS-CoV-2 infection of ALI-HNE established from different donors. Viral entry and release occurred from the apical surface, and infection was primarily observed in ciliated cells. In contrast to the ancestral clinical isolate, the Delta variant caused considerable cell damage. Successful establishment of ALI-HNE is donor dependent. ALI-HNE recapitulate key features of human SARS-CoV-2 infection of the nose and can serve as a pre-clinical model without the need for invasive collection of human respiratory tissue samples. Introduction Over the past 15 years, 90% of novel medical countermeasures that showed promise in preclinical animal and cell line models failed in human clinical trials: 50% for lack of efficacy, 30% for toxicity [1,2]. Importantly, the toxicity was not detected in non-human primates, the closest animal model to humans. This failure rate continues to this day. To improve our understanding of host-pathogen interactions, we need to advance our pre-clinical models to better reflect human physiology. Human adult stem cell-derived organoids fill the gap between animal and cell line pre-clinical models, and human clinical trials. Tissue stem cells 'remember' their tissue of origin, they generate the same cell types in a dish as they do in the body and recapitulate key features of architecture and function of the parent tissue [3][4][5][6][7][8]. Early studies by the Clevers [9] and Estes [10] laboratories exemplified the power of tissue stem cell derived organoids for modelling lung (respiratory syncytial virus (RSV)) and gut (human noroviruses (HuNoVs)) infection, respectively. In a similar vein, induced pluripotent stem (iPS) cell derived organoids proved invaluable for understanding the pathogenesis of Zika virus (ZIKV) in the brain [11,12]. Unlike respiratory and gastrointestinal tissues, which can be sourced from resected tissues from routine surgical procedures and non-invasive sample collection (e.g., nasal turbinate brush), human brain tissue is much harder to source, highlighting the importance of iPS technology for less accessible tissues. 
However, the advantages that these human organoids provide in modelling human viral infectious diseases remained largely overlooked in preference for decades-old virus culture systems. Early during the COVID-19 pandemic when medical countermeasures were being assessed, virologists used classical cell lines such as Vero cells for neutralisation and antiviral studies. The Vero cell line was derived from African Green Monkey kidney epithelial cells [13] and has been the workhorse of virology laboratories since the 1960s. Vero cells contain genomic deletions of genes involved in the antiviral interferon response [14] and are thus highly susceptible to infection by diverse viruses, including SARS-CoV-2, yielding high virus titres. However, SARS-CoV-2 viral entry into Vero cells differs markedly from entry into human epithelial cells [15]. The SARS-CoV-2 spike glycoprotein mediates viral entry into Vero and primary epithelial cells by binding to the human angiotensin-converting enzyme-2 (ACE-2) [16,17]. However, entry into primary epithelial cells requires proteolytic cleavage by the cellular protease TMPRSS2, which initiates direct fusion between cellular and viral membranes, whereas entry into Vero cells is via an endosomal pathway [15]. Thus, drugs like hydroxychloroquine, which inhibits endosomal acidification, showed robust efficacy in Vero cells [18,19] but lacked efficacy in human clinical trials [20]. Had the original studies been performed in human epithelial organoids, hydroxychloroquine would not have been considered a viable candidate for further clinical trials. Furthermore, primary human cell responses to SARS-CoV-2 infection are not recapitulated in animal models, human continuous cell lines, or Vero cells [21,22]. Consequently, results with antivirals and neutralising antibodies obtained in these models fail to reflect responses in humans. Additionally, deep sequencing of SARS-CoV-2 isolates and culture supernatant preparations demonstrated that propagation in cell lines, such as Vero cells, led to mutation of the viral genome, adapting the virus for growth in simple tissue culture cells [23,24]. Again, this confounds the validity of pre-clinical assays performed with such culture systems that are far removed from human tissue. Unsurprisingly, the global urgency to develop vaccines and antivirals to combat COVID-19 has exposed the shortcomings of decades-old tissue culture methods used by virologists and has seen an exponential increase in the adoption of organoids from many human tissues to understand pathogenesis and test therapies [25]. The first report to demonstrate that human adult stem cell (hASC) derived organoids are productively infected by SARS-CoV-2 was a collaboration between the Clevers and Haagmans laboratories in the Netherlands, who showed robust infection of primary gut and respiratory epithelium [26]. Within a year of this publication, Chen and colleagues reported on human pluripotent stem cell (hPSC) derived alveoli-like and colon epithelium-like organoid-based screens for SARS-CoV-2 inhibitors [27]. hPSC derived organoids have the advantage of being a renewable source, but they do not recapitulate human tissue architecture to the degree achieved by tissue stem/progenitor cell derived organoids [9,28,29]. To circumvent these caveats, we revisit a well-established and characterized upper respiratory epithelium model [30][31][32].
The human nasal epithelium (HNE) is considered the first site of SARS-CoV-2 infection and expresses high levels of ACE2 [33], the cell surface receptor for SARS-CoV-2 [16,17], and is thus potentially the ideal tissue for testing prevention of virus entry into the body. To this end, we have established and characterized an air-liquid-interface (ALI) differentiated HNE model for SARS-CoV-2 infection. We demonstrated robust SARS-CoV-2 infection of ALI-HNE established from low passage progenitors from most donors (6/9) and observed increased cell damage by the Delta variant clinical isolate compared to an ancestral clinical isolate from January 2020 [34]. Unlike other human tissues, the non-invasive sample required (nasal turbinate brush) and commercially available media makes ALI-HNE an attractive model system for respiratory viruses. SARS-CoV-2 Infects ALI-HNE Established from Adult and Child Donors To evaluate ALI-HNE culture as a model for SARS-CoV-2 infection, turbinate brush samples were collected from adult and child donors and infected with SARS-CoV-2 clinical isolates (Table 1). ALI-HNE established from conditionally reprogrammed epithelial cells generated from the nasal turbinate brush samples from an adult (PDI-1) and child (PDI-7) donor yielded pseudostratified epithelium several layers deep. As previously described, the well-developed apical cilia were detected by staining for acetylated α-tubulin (AcTub) [35]. The orthogonal view of the immunofluorescent confocal microscopy Z-sections, and the movie generated from the Z-sections (Videos S1 and S2), clearly demonstrated deep layering with apical cilia ( Figure S1). Light microscopy live cell time-lapse imaging revealed beating cilia (Videos S3 and S4). Both adult (Figure 1a,b) and child (Figure 1c,d) ALI-HNE cultures were susceptible to infection by the Australian ancestral SARS-CoV-2 clinical isolate, VIC01 [34], at an MOI of 0.02. Virus infected cells were detected by staining for viral nucleoprotein by immunofluorescent confocal microscopy ( Figure 1). Extended data for the immunofluorescent confocal microscopy and staining with control antibodies is shown in Figure S2. Infectious virus, quantified by TCID 50 on Vero cells, was detected in the apical wash harvested at the indicated times but not in the basal medium (Figure 1a,c). Staining for ZO-1 revealed well-developed tight junctions (Figure 1b,d). Interestingly, spheroid organoids were formed within the pseudostratified epithelium (shown by ZO-1 staining of PDI-1, Figure 1b, and Videos S3 and S4). The extent of spheroid formation within the ALI-HNE was donor dependent where only a few were seen in PDI-1 (Figure 1b), for example, and many in PDI-4 ( Figure 2a). SARS-CoV-2 Infection of ALI-HNE Is Dose Dependent To further characterise the infectivity of SARS-CoV-2 clinical isolates on ALI-HNE, we focused on adults as this is a less limited resource than child donors (Table 1). Haematoxylin and eosin staining of formalin fixed paraffin embedded PDI-4 filters confirmed deep pseudostratified differentiation with apical cilia and the formation of spheroids within the epithelium. Cilia were on the luminal side of the spheroids (Figure 2a). ACE2 was expressed at the apical surface of the ALI-HNE (Figure 2b), consistent with the expression pattern in primary intestinal epithelium [26]. The orthogonal view of the confocal Z-sections placed ACE2 at the base of the cilia. 
Extended data for the immunofluorescent confocal microscopy and staining with control antibodies is shown in Figure S3. (Figure 1b,d caption: immunofluorescent confocal microscopy staining for α-tubulin (AcTub, green) and nucleoprotein (NP, red); nuclei are blue (DAPI); scale bar 50 µm.) Notably, when PDI-4 and PDI-5 basal cells were embedded in BME2 matrix for spheroid organoid culture, and expanded and differentiated in PneumoCult™ Organoid media, the differentiated cells formed spheroids with the apical surface towards the matrix (PDI-4, Video S5). The spheroids with cilia on the outside of the organoid would sometimes spin (PDI-5, Video S6). Apical-out organoids provide an alternative HNE model as these types of cultures can be generated to be ever-expanding and cryopreserved at the expansion phase using previously described protocols [9]. Next, we tested the dose dependence of infection by infecting PDI-4 (Figures 3 and S4) and PDI-2 (Figure S5) ALI-HNE with VIC01 at an MOI of 0.02 and 0.002. Infectious virus was detected at the indicated times in the apical washes by TCID50 (see Figure S4 for PDI-4). The orthogonal view of the confocal Z-sections shows nucleoprotein was primarily apical at 48 h post infection (Figure S5b,e). Next, we tested ALI-HNE susceptibility to infection when passaged in T25 flasks to expand the basal cells. Passaged PDI-1 cells showed delayed virus production despite differentiating well at ALI, while PDI-2 failed to differentiate at ALI after passage (Table 1). PDI-6 failed to differentiate at ALI on several attempts, while PDI-3 cultures had mixed cell types (ciliated and non-ciliated cells) and infected poorly or not at all with VIC01 (Figure S6). Collectively, these data demonstrate that achieving good differentiation at ALI for robust SARS-CoV-2 infection requires low passage CRC basal cells.
The SARS-CoV-2 Delta Variant Infects ALI-HNE
The nasal epithelium is considered the first site of infection during SARS-CoV-2 pathogenesis, thus assessing infection of ALI-HNE cells by variants of SARS-CoV-2 as they emerge might provide insight on the transmissibility of mutant viruses, termed variants of concern (VOC). During the pandemic, we have tested the susceptibility of ALI-HNE to mutant SARS-CoV-2 variants (D614G, Alpha, and Beta) and demonstrated that each variation from the ancestral strain infects ALI-HNE (data not shown). Here, we focus on the Delta variant; it first emerged in India in late 2020 and has swept across the globe, rapidly outcompeting the pre-existing lineages deemed VOC [36,37]. The Delta spike was shown to facilitate enhanced cell-cell fusion kinetics and syncytia formation compared to Wuhan-1 [38,39]. Here we compared Delta and VIC01 infection of ALI-HNE at an MOI of 0.02. Infectious virus was not detected by TCID50 assay at 24 h or 48 h post-infection with either virus with this donor (PDI-13); however, productive infection was observed at 6 days with both SARS-CoV-2 lineages. Delta-infected cultures produced approximately 10-fold more virus than the VIC01-infected cultures (TCID50, Figure 4a). Delta infection was more cytopathic, with syncytia and extensive nuclear damage (Figure 4b). Extended data for the immunofluorescent confocal microscopy and staining with control antibodies is shown in Figure S7. These data are consistent with recent observations with Delta infection of HNE cells and hamsters [38,39].

Discussion
The failure to translate from a pre-clinical model to human clinical trial raises the cost per new drug to US$2.8 billion [1,2].
Furthermore, it means patients and clinical trial volunteers are treated with drugs that will not work in humans to prevent or treat disease. Alarmingly, toxicity in humans can be fatal as was seen with an HBV antiviral (fialuridine, FIAU) due to toxicity to a human mitochondrial gene, which was not seen in pre-clinical animal models, including non-human primates [40,41]. The promise of human tissue stem cell-derived organoids is to fill the gap between animal and cell line pre-clinical models and humans [3][4][5]8]. Organoids recapitulate key features of human tissue in a dish and thus offer physiologically relevant models of host-virus interaction. Indeed, the unprecedented global rush to develop medical countermeasures to combat the COVID-19 pandemic has revealed the shortcomings of classical tissue culture models used by virologists. Hydroxychloroquine showed promise as an antiviral in Vero cells, but did not prevent infection of primary epithelial cells and failed human clinical trial [18][19][20]. SARS-CoV-2 is thought to enter the body mainly via the upper respiratory tract, however, COVID-19 is a systemic disease affecting the gut, liver, heart, brain, endovascular system, etc. and organoids from diverse tissues and organs have been used to understand SARS-CoV-2 pathogenesis (reviewed in [42][43][44]). Given that current vaccines work systemically, SARS-CoV-2 must infect the body for the vaccines to reduce COVID-19 morbidity and mortality [45]. We anticipate that the goal of second and third generation medical countermeasures will be sterilizing vaccination and treatments against SARS-CoV-2 that prevent or control infection of the HNE, the main portal of entry into the body. To this end, we have shown that ALI-differentiated HNE were susceptible to infection by SARS-CoV-2 and recapitulated key features of human infection. The ALI-HNE express ACE2 on their apical surface; viral entry and release were via the apical surface. This is consistent with previous observations, demonstrating ACE2 and TMPRSS2 expression in human nasal epithelium [33,46]. Furthermore, we showed ALI-HNE established from adult and child donors were susceptible to infection. SARS-CoV-2 infection was dose dependent with faster viral production at an MOI of 0.02 than 0.002. Virus production was also donor dependent and required low passage nasal progenitor cells. Infection by the Australian ancestral strain, VIC01 [34], did not lead to overt cytopathic effect. In stark contrast, infection of the same ALI-HNE with the Delta variant led to extensive syncytial formation. Enhanced fusogenicity of the SARS-CoV-2 Delta variant that we observed is consistent with recent reports [38,39]. The enhanced pathogenicity phenotype of the Delta variant revealed by the ALI-HNE might underlie the increased transmissivity of this variant in human populations and its global predominance [36,37]. The Omicron SARS-CoV-2 variant emerged in South Africa in late 2021 [47] and is now rapidly spreading globally. Omicron has >30 mutations in the spike protein. Infection studies in lung organoids show compromised replication and pathogenicity [48]. The pathogenicity phenotype of the Omicron variant in ALI-HNE and donor-to-donor variation remains to be fully characterized, but a recent study using a commercial source of ALI-HNE shows rapid viral kinetics and the potential for TMPRSS2-independent entry [49]. 
The latter might underlie Omicron's enhanced intrinsic transmissibility via the upper respiratory tract, while the compromised replication and pathogenicity in the lower respiratory tract might underlie the decreased mortality and morbidity [47,50]. Collectively, these data show the ALI-HNE is a faithful model of SARS-CoV-2 infection and can predict the pathogenicity of mutant SARS-CoV-2 variants. The non-invasive sample required and the relatively straightforward culture conditions (i.e., standard humidified CO2 incubators and commercially available media) mean that variants can be screened in real time as they emerge once a biobank of basal progenitors is established. Our study shows that donors need to be pre-screened to ensure that their nasal turbinate brush samples yield well-differentiated ALI-HNE and support robust, productive SARS-CoV-2 infection. Adoption of this culture system into a 96-well format will facilitate high-throughput screens for medical countermeasures. Furthermore, in future studies, ALI-HNE from different donors coupled with omics analyses might reveal the molecular mechanisms underlying donor variation in drug responses and antibody neutralization. Organoid platforms like the ALI-HNE make personalized COVID-19 countermeasures a reality. Finally, the rapid adoption of organoids established from diverse human tissues is likely to not only reveal novel avenues to prevent or control systemic COVID-19, but also shed light on alternative routes of entry into the body. The occurrence of SARS-CoV-2 infection despite wearing a face mask to protect the respiratory route of entry raises the possibility that the virus can enter the body via an alternative route such as the eye [51][52][53]. Human ocular tissue expresses ACE2 and TMPRSS2 [54,55] and SARS-CoV-2 infection of human ocular tissue has been demonstrated [55]. Consequently, these studies in diverse human organoid models guide public health measures (e.g., personal protective equipment) as well as rigorously test the efficacy of COVID-19 medical countermeasures.

Procurement of Human Material and Informed Consent
Study approval was received from the Sydney Children's Hospital Network Ethics Review Board (HREC/16/SCHN/120) and the Medicine and Dentistry Human Ethics Sub-Committee, University of Melbourne (HREC/2057111). Written consent was obtained from all participants (or the participant's guardian) prior to collection of biospecimens.

Primary Nasal Epithelium Culture and Differentiation
De-identified, cryopreserved human nasal epithelial cells were received from the Molecular and Integrative Cystic Fibrosis Research Centre (miCF RC), University of New South Wales, New South Wales, Australia, where they were harvested from nasal turbinate brush samples with donor consent and cultured under conditionally reprogrammed cell (CRC) conditions as previously described [30,31,35,56]. To initiate mucociliary differentiation at the air-liquid interface (ALI), cryovials of cells (500,000 cells/vial) were thawed and seeded onto Transwell inserts (6.5 mm, Corning, Kennebunk, ME, USA; three inserts per vial) pre-coated with collagen type I (PureCol-S, Advanced BioMatrix, San Diego, CA, USA). Cells were incubated submerged in PneumoCult™-ExPlus (STEMCELL Technologies, Vancouver, BC, Canada) until confluent, typically 4-7 days, then switched to ALI conditions by removing the apical media and adding PneumoCult™ ALI medium (STEMCELL Technologies) to the basal chamber.
The basal medium was replaced 3 times per week for 3-4 weeks, during which time beating cilia and mucous production were monitored by light microscopy. To establish matrix-embedded organoids, cryovials of cells (500,000 cells/vial) were thawed and resuspended in 500 µL BME2 (Cultrex Reduced Growth Factor Basement Membrane Matrix, R&D Systems, Minneapolis, MN, USA) on ice, and 50 µL domes were added per well of 24-well plates. Plates were incubated at 37 °C for 20 min to allow the BME2 to set, then 500 µL of PneumoCult™ Airway Organoid Seeding Medium (AOSM, STEMCELL Technologies) was added per well to cover the domes. AOSM was changed every other day for 7 days. After 7 days, the medium was replaced with PneumoCult™ Airway Organoid Differentiation Medium (AODM, STEMCELL Technologies) to initiate mucociliary differentiation over 3-4 weeks. AODM medium was replaced 3 times a week. Beating cilia and mucous production were monitored by light microscopy. To expand the basal progenitors, cryovials of cells (500,000 cells/vial) were thawed and seeded onto tissue culture flasks (T-25, Greiner; 1 vial/flask) pre-coated with collagen I (PureCol S). The cells were maintained in PneumoCult™-ExPlus (STEMCELL Technologies) until 80-90% confluent, with a medium change every other day. Cells were detached with TrypLE™ Express Enzyme (Gibco, Life Technologies, UK), seeded into Transwells as above (150,000 cells/filter), and differentiated at ALI as above.

Live Cell Imaging
Images were captured on a Nikon TiE microscope running Nikon NIS Elements Version 5.2 using a 40× PlanApo NA0.75 objective. A CoolSNAP Myo CCD camera set to 4 × 4 binning to achieve 10 frames per second capture was used to generate 10 s movies of cilia beating in the cultures (Movies 3 and 4). Alternatively, images were captured on an Olympus CKX41 microscope running CellSens software using an SC30 camera.

SARS-CoV-2 Propagation and ALI-HNE Infection
Human SARS-CoV-2 clinical isolates BetaCoV/Australia/VIC01/2020 [34] (referred to as VIC01) and BetaCoV/Australia/VIC18440/2021 (referred to as Delta) were propagated on Vero cells (ATCC) in DMEM (Gibco), supplemented with 1 µg/mL TPCK-Trypsin (Trypsin-Worthington), HEPES, GlutaMAX, penicillin (100 IU/mL), and streptomycin (100 IU/mL) at 37 °C in a humidified CO2 incubator. Vero cells were seeded in cell culture flasks and infected at a multiplicity of infection (MOI) of 0.01. Supernatant was harvested 72 h later and clarified by low-speed centrifugation. Viral inoculum was aliquoted and stored at −80 °C until use. Infectious virus titers in stocks and samples were determined using TCID50; briefly, 10-fold serial dilutions were added to 2 × 10^4 Vero cells seeded 24 h prior in a 96-well plate. Plates were incubated at 37 °C in a humidified CO2 incubator for 3 days and then examined for cytopathic effect (CPE). The TCID50 was calculated according to the method of Reed and Muench [57]. Work with infectious SARS-CoV-2 virus was performed in a Class II Biosafety Cabinet under BSL-3 containment at the Doherty Institute for Infection and Immunity. To infect ALI-HNE with SARS-CoV-2, virus was added to the apical surface at an MOI of 0.02 or 0.002 in 30 µL of inoculum per insert (assuming ~300,000 cells at the ALI-HNE surface [35]). After virus adsorption for 2 h at 37 °C, the inoculum was washed off with PBS containing calcium and magnesium (PBS++). Two hundred microliters of PBS++ was then added to the apical surface and harvested after 10 min at 37 °C, before being stored at −80 °C.
Apical PBS++ washes were harvested in the same way at the indicated time points. Apical PBS++ wash samples and basal medium collected at the time of medium change were assayed for infectious virus by TCID50 as above.

Immunofluorescence and Confocal Microscopy
At the indicated experimental endpoints, the cells were washed thrice with PBS++ at room temperature. Cells were fixed with 4% paraformaldehyde (Electron Microscopy Sciences, Hatfield, PA, USA) for 30 min at room temperature. The fixative was aspirated and neutralized with 100 mM glycine in PBS++ for 10 min at room temperature. Cells were incubated with permeabilization buffer (PB, 0.5% Triton-X in PBS++) for 30 min on ice. The PB was washed off with 3 washes of PBS++, 5 min each. At this stage, the filters were excised from the inserts using a sharp scalpel, cut in half (for test and control primary antibodies), transferred to Eppendorf tubes, and incubated for 90 min at 4 °C in immunofluorescence buffer (IF, PBS++ with 0.1% bovine serum albumin, 0.2% Triton, 0.05% Tween 20) containing 10% normal goat serum (BB, block buffer). At the end of incubation, the BB was removed, and primary antibody diluted in BB was added. Following 48 h incubation at 4 °C, the primary antibody was washed off with IF buffer, three times, 5 min each. Fluorophore-conjugated secondary antibody and Hoechst, diluted in BB, were added and the tubes incubated for 3 h at room temperature. The list of antibodies used is given in Table S1. Secondary antibody was washed off with IF buffer, five times, 5 min each. Filters were transferred to slides, incubated for 30 min at room temperature with DAPI, washed once with PBS, and mounted in FluoroSave Reagent (EMD Millipore, Billerica, MA, USA); coverslips were sealed with nail polish. The confocal microscopy imaging was acquired on the Zeiss LSM 780 system. The acquired Z-sections were stacked and processed using ImageJ software. Orthogonal views were generated in ZEN 3.1 Software by ZEISS Microscopy.

Immunohistochemistry
The Transwell inserts were washed in PBS++, 3 times, 5 min each, and the well and insert flooded with 10% neutral buffered formalin (Australian Biostain, Traralgon, VIC, Australia). The fixative was rinsed off with PBS, 3 times, 5 min each. The inserts were dehydrated through a graded ethanol series (35%, 50%, 70%, 95%, 100%, 100%), 10 min each. This was followed by histolene for 10 min, then liquid paraffin (58 °C) was added to the wells and Transwell inserts and incubated for 1 h, after which time the paraffin was replaced and the 1 h incubation repeated. The plates were removed from the incubator to allow the paraffin to solidify; the membrane with paraffin attached was excised from the insert, embedded into a paraffin block, and processed using standard histological procedures. Sections (5 µm) were cut and hematoxylin and eosin staining was performed using a standard histological protocol.
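As a rough companion to the titration described in the methods above, the following Python sketch shows how a TCID50 endpoint can be computed by the Reed and Muench method [57] and how the inoculum for a target MOI can be sized. It is an illustrative sketch only, not the authors' analysis code; the CPE counts, well numbers, and function names are invented for the example, while the MOI of 0.02, the ~300,000 cells per insert, and the 30 µL apical inoculum come from the text.

```python
def reed_muench_log10_tcid50(dilution_log10, positive, total):
    """Reed-Muench endpoint: returns log10 of the dilution at which 50% of wells show CPE.
    dilution_log10: log10 of each 10-fold dilution, ordered from least to most dilute;
    positive/total: wells with cytopathic effect (CPE) out of wells inoculated at each dilution."""
    # Accumulate positives towards the more dilute end and negatives towards the less dilute end.
    cum_pos = [sum(positive[i:]) for i in range(len(positive))]
    cum_neg = [sum(t - p for t, p in zip(total[:i + 1], positive[:i + 1])) for i in range(len(positive))]
    pct = [100.0 * cp / (cp + cn) for cp, cn in zip(cum_pos, cum_neg)]
    # Interpolate between the dilution just above 50% infected and the one just below it.
    for i in range(len(pct) - 1):
        if pct[i] >= 50 > pct[i + 1]:
            prop = (pct[i] - 50) / (pct[i] - pct[i + 1])
            return dilution_log10[i] - prop * abs(dilution_log10[i + 1] - dilution_log10[i])
    raise ValueError("50% endpoint is not bracketed by the dilution series")

# Hypothetical CPE readout: 10-fold dilutions, 4 wells per dilution (numbers invented for illustration).
dils = [-1, -2, -3, -4, -5, -6]
pos = [4, 4, 3, 1, 0, 0]
tot = [4, 4, 4, 4, 4, 4]
log_titre = -reed_muench_log10_tcid50(dils, pos, tot)  # titre per inoculated volume is 1/endpoint dilution
print(f"Titre ~ 10^{log_titre:.2f} TCID50 per inoculated volume")

# Sizing the inoculum used above: MOI 0.02 on ~300,000 cells per insert, delivered in 30 µL apically.
cells_per_insert, moi, inoculum_ul = 300_000, 0.02, 30
tcid50_needed = cells_per_insert * moi
print(f"{tcid50_needed:.0f} TCID50 per insert, i.e. {tcid50_needed / inoculum_ul:.0f} TCID50/µL in the inoculum")
```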
6,378.4
2022-01-01T00:00:00.000
[ "Biology", "Medicine" ]