Does giving students feedback on their concept maps through an on-screen avatar or a humanoid robot make a difference? Active or engaged learning is often seen as a way to improve students' performance concerning STEM topics. When following such a form of self-directed learning, students often need to receive feedback on their progress. Giving real-time feedback on an individual basis is usually beyond the teacher's capacity; in digital learning environments, this opens the door for exploring automated feedback. In the current study, a posttest-only design was used to investigate the effect of providing students with different forms of automated feedback while they were creating a concept map about photosynthesis in an online inquiry learning environment. Participants were high school students (N = 138), divided over two experimental groups. In one group, feedback was given by a humanoid robot and in the other group via an avatar. The effects of the different feedback forms were compared for the two groups in terms of the frequency with which students consulted the feedback, concept map quality, and students' attitudes. Results showed that the robot group consulted feedback more often than the avatar group. Moreover, the robot group had higher scores on a scale measuring enjoyment than the avatar group. Both of these differences were statistically significant. However, the average quality of the concept maps created by both groups was similar.

… due to directive and non-interactive teaching methods, which has spurred a search for more engaging science learning experiences (e.g., [13]). These engaging or active learning approaches include, for example, inquiry learning with online labs and/or the use of interactive concept maps for knowledge expression. Overall, it is important for this active way of learning to be supported, so that students actually profit from being in charge of their own learning process [14]. For inquiry learning with online labs, this support may consist of online scaffolds [43], while students may be provided with feedback when creating a concept map [25]. In previous work we developed an online automated feedback mechanism for digital concept mapping in the context of an inquiry learning process, in which students received feedback through an on-screen avatar [1]. In the current study we introduce a humanoid robot that delivers the same feedback as the avatar does, to explore whether some of the disadvantages of the avatar (e.g., students ignoring the avatar) are overcome.

In inquiry learning, students follow an inquiry cycle that resembles a scientific inquiry process. In this cycle, processes such as setting up hypotheses, designing an experiment, and drawing conclusions are central [34]. Through these processes students are active processors of information and engage in extending and adapting their knowledge base, which is assumed to lead to deeper knowledge [12,17]. Inquiry processes, however, are rather complex, and require good structuring and guidance in the learning environment in order to be effective [27]. Additionally, for inquiry learning to be effective, students need adequate initial knowledge to build upon [19,20]. For example, to be able to create informative hypotheses, learners need to have sufficient knowledge of the variables in the domain involved [24]. One way to create such an initial knowledge base is to have students produce a concept map in the starting phases of the inquiry cycle.
Concept maps are used to display relationships between concepts. In other words, concept maps are two-dimensional diagrams that enable information to be organized by visualizing concepts and their organization [21]. Concept maps are used as a tool in teaching and learning, as well as in evaluating conceptual understanding and knowledge. In the context of inquiry learning, creating concept maps can be considered part of the orientation and conceptualization phases of the inquiry cycle, which occur at the beginning of an inquiry cycle [34]. Students re-activate prior knowledge by creating a concept map, and after receiving feedback, they can also construct a solid foundation for the subsequent inquiry process. However, giving feedback to each individual student takes too much time in a face-to-face educational environment and thus may not be doable for the teacher. Furthermore, the timing of feedback is also important. If a student's expression in a concept map needs corrective feedback, the feedback should be provided as soon as possible so that the learner can react accordingly. If feedback is not provided until the end of the task, learners will not be able to correct their concept map during the inquiry process [35]. Automated feedback may be a solution for this, and some tools have been designed for that purpose (e.g., [22]). More specifically for our context, Anonymous [1] developed an automated feedback tool that is part of the Go-Lab ecosystem, an online sharing and learning platform for inquiry-based learning [15]. Anonymous [1] demonstrated that their tool could effectively assess the quality of concept maps and provide accurate and helpful feedback on a number of specific shortcomings that are frequently visible in students' concept maps. However, their results showed that students with feedback available frequently did not consult it or did not fully utilize the feedback provided to them. In the study by Anonymous [1], feedback was given via a virtual agent (avatar). In the current study we investigate what happens if the feedback is given in a way that is more attractive and engaging than through an on-screen avatar, that is, via a humanoid robot.

The use of robots is becoming an increasingly common practice influencing different aspects of daily life [7]. More specifically, the use of humanoid robots has become popular in the educational field [37]. Research into the use of robots in education has highlighted the positive influence of robots on the cognitive and affective dimensions of learning, attributing this impact mostly to the robots' ability to display social behavior that encourages learners to participate in the learning process [5]. Research has also indicated an increase in learners' achievement of cognitive learning objectives following the robots' presentation of content [28]. Furthermore, it has also been noted that robots can display socially supportive behavior and provide personalized aid by naming the learners and referring to previous interactions [5]. Their distinguishing characteristics of repeatability (i.e., the ability to consistently perform specific tasks or behaviors), humanoid appearance, intelligence, sensing capability, flexibility, interaction, body motion capability and adaptability allow robots to interact with learners in varying roles, as teaching assistant, peer, teacher and/or teaching resource/material in the class [4,8].
In the literature, there are studies comparing robots and avatars in terms of various variables in different domains. Pan and Steed [33] conducted a study to compare users' trust in expertise in avatar-, video-, and robot-mediated interaction. They analyzed participants' advice-seeking behavior in limited-advice and risk situations as an indicator of trust. They found that participants were less likely to choose advice from the avatar, regardless of whether the avatar was an expert or not. In the study, the avatar scored the lowest on the trust assessment, while the robot and video were rated similarly. van den Berghe et al. [41] conducted a study in which children received programming training with either a non-humanoid robot or an avatar; the children learned to program either the robot or the avatar. According to the results of the study, although no differences in self-reported motivation or cooperation during the training were found, children showed higher learning outcomes when learning to program a robot rather than an avatar. Moreover, in the study by [11] on emotional storytelling using virtual and robotic agents, it was reported that the physically embodied robot garnered greater narrative attention from listeners compared to a virtual embodiment. Additionally, the study found that human voice narration was favored over the current text-to-speech technology. Furthermore, the results revealed a multifaceted relationship between the emotional content of the story, the facial expressions of the narrating agent, and the emotional responses of the listener. Notably, the empathetic engagement of the listener was demonstrated through observable facial expressions.

In the research literature, humanoid robots are used most frequently in foreign language teaching and in the field of special education [32]. However, robots can also be suitable supports for science instruction. Robots can provide motivation for students to learn science, relieve students' anxiety and create a fun learning environment. For instance, humanoid robots can be interactive feedback providers that may improve interest and motivation to learn scientific subjects. Only a few articles have explored this type of activity in science education (e.g., [3,9]). In the current study, students were asked to create a concept map in an early stage of an inquiry process and received feedback on their concept map either from an avatar or a humanoid robot. Both avatar and robot followed the same rules for generating and delivering feedback. Our study examines the following research question: What effect, if any, does provision of feedback by a humanoid robot have, as compared to an avatar, on students' consultation of the feedback (RQ1), the quality of their concept maps (RQ2), and their attitudes (RQ3)?

Method

In this study, students were asked to create a concept map on the topic of photosynthesis. Students were randomly assigned to one of two groups. While a humanoid robot provided feedback for the concept maps of the students in one condition (HRC), the feedback was provided by an avatar in the other condition (AC).

Participants

In total, 138 students (58% male) from two Dutch secondary schools participated in the experiment. The students were all in their second year and aged around 13 years. Participants from each class of each school were randomly assigned to one of the two conditions: the humanoid robot condition (n = 73, 64.4% male) or the avatar condition (n = 65, 50.8% male).
Learning Environment

An inquiry learning environment (ILS, Inquiry Learning Space) for the topic of photosynthesis in biology that was designed for our previous study (Anonymous [1]) formed the basis for the ILS in the current study. The ILS in the current study focused on the starting phases of the inquiry learning cycle. The ILS we used had a basic text-based introduction on photosynthesis as the starting phase; it then proceeded by presenting an example concept map about fruits and vegetables to demonstrate how to use the concept mapping tool, and in its third phase it offered a concept mapping tool to create the concept map about photosynthesis. Students were asked to create a concept map on photosynthesis from scratch, without any predefined concepts available. After that, the ILS presented students with a brief questionnaire about their experiences with the avatar or humanoid robot while they were creating their concept map. All materials were presented in the students' native language; what is presented in this article are translations.

The concept map was at the center of the current experiment. Figure 1 displays an 'expert' concept map for the topic of photosynthesis, which was used as the reference concept map in this study.

Feedback

Feedback was based on the algorithms from our previous study [1]. The concept mapping tool used those algorithms to evaluate every action by a student (adding new concepts, editing existing concepts, creating a relation between concepts, etc.) while creating their concept map. Both the humanoid robot and the avatar used the same algorithms. After each change to the concept map, a list of all possible feedback was created and sorted, and the student was presented with the most relevant feedback prompt. The types of feedback students could receive were: (a) a suggestion to add a specific concept to the concept map, (b) a suggestion to add a proposition (link between two concepts) to the concept map, (c) a question asking the student if an irrelevant concept in their concept map is really necessary, (d) a question asking if an irrelevant proposition is necessary, (e) a suggestion to change the direction of a proposed relation, (f) a suggestion to change a label, or (g) a suggestion to add an intermediate concept in a proposition. These suggestions and questions were based on a comparison between the reference map (Fig. 1) and the student concept map. In mapping the student concept map onto the reference map, synonyms (e.g., 'sugar' replacing 'glucose' and 'O2' replacing 'oxygen') and potential typos (based on Levenshtein distance, [26,29]) were taken into account.

The timing of the feedback was based on the student's actions in the concept map. This means that feedback was presented within a certain timeframe after a meaningful change (excluding, e.g., changing the position of a concept) or as a result of the student's responses to the preceding feedback. The relevance of feedback prompts was based on a combination of the type of feedback and the state of the student concept map. When starting their concept map, students were guided towards adding the main concepts and propositions. When the concept map had reached a certain threshold, feedback was aimed at making specific improvements to the concept map. As the feedback algorithm relied on being able to identify content in the student concept map, fixing potential typos and removing irrelevant information was prioritized.

Fig. 1 The reference concept map for photosynthesis
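To illustrate the synonym- and typo-tolerant matching step described above, the sketch below shows how a student's concept label could be mapped onto a reference concept. The function names, synonym table, and typo tolerance are hypothetical illustrations, not the actual Go-Lab implementation from Anonymous [1].

```python
# Hypothetical sketch: match a student's concept label to the reference map,
# tolerating known synonyms and small typos via Levenshtein (edit) distance.

SYNONYMS = {"sugar": "glucose", "o2": "oxygen"}  # example pairs mentioned in the text

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def match_concept(label, reference_concepts, max_typos=2):
    """Return the reference concept this label most likely refers to, or None."""
    norm = SYNONYMS.get(label.strip().lower(), label.strip().lower())
    best, best_dist = None, max_typos + 1
    for ref in reference_concepts:
        d = levenshtein(norm, ref.lower())
        if d < best_dist:
            best, best_dist = ref, d
    return best

reference = ["photosynthesis", "glucose", "oxygen", "carbon dioxide", "water", "light"]
print(match_concept("sugar", reference))   # -> "glucose" (synonym)
print(match_concept("oxigen", reference))  # -> "oxygen" (one-letter typo)
```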
In all cases, students had the option of immediately implementing the prompt (e.g., adding, removing, or relabelling concepts and propositions) or ignoring and suppressing it. Each specific prompt was only suggested to a student once. Feedback prompts (including all the possible prompts that were not presented), student concept maps, and changes to those concept maps were saved in the learning analytics logs for later use. The intention of the feedback was to assist students in developing effective concept maps. If a student successfully creates an effective concept map independently, then they may not require any feedback. Therefore, it is understandable that students who produce inadequate concept maps will receive more feedback, as this aligns with the purpose of providing constructive feedback.

Humanoid Robot Feedback

A NAO model humanoid robot was used as a feedback provider. The robot was placed on the table in a standing position next to the computer that the student used. A picture of the humanoid robot's setting is given in Fig. 2. The robot was in alive mode, which means that it tracked the student's face with its head to make eye contact. To alert students to available feedback (determined based on the feedback rules outlined in the previous section), the humanoid robot raised its right hand and waited for 10 s. If a student touched one of its hands within 10 s, the robot gave the feedback by speaking to the student in their native language. If the student did not touch one of the hands, this was taken as a sign that no feedback was wanted. In that case, the robot lowered its hand. If the feedback was a question requiring an answer from the student, the robot listened to the student's response. When the robot got a response, it transmitted the response to the concept mapping tool, which then acted according to the response. For instance, suppose that the student had the concept 'orange' in their concept map, and assume that the feedback from the robot was "Do you need the concept 'orange' in your concept map?". If the student responded orally "Yes", the tool did not perform any action; if the response was "No", the tool deleted the concept 'orange' from the concept map.

Avatar Feedback

The feedback process for the avatar was essentially identical to that for the humanoid robot, but the avatar had a different physical presence. To alert students to available feedback, the avatar popped up in the bottom-right corner of their computer screen. If the student clicked on the avatar, indicating that they would like to see the feedback, the feedback was presented next to the concept map as a speech balloon coming from the avatar. Possible responses were identical in function to those used in the HRC, though the specific wording might differ.

Students' Use of Feedback

To answer the first research question, interaction logs such as occurrences of feedback being offered and consulted were saved. We calculated how often students used the feedback that was offered, by dividing the number of times feedback was consulted by the number of times it was offered.
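A minimal sketch of the robot-side interaction loop described above (raise a hand, wait 10 s for a touch, speak the feedback, listen for a yes/no answer, forward it to the concept mapping tool). The robot and tool objects and their methods are hypothetical wrappers, not the actual NAO or Go-Lab interfaces used in the study.

```python
import time

def offer_feedback(robot, tool, prompt, wait_s=10):
    """Hypothetical feedback-offer cycle for one prompt (not the study's actual code)."""
    robot.raise_right_hand()                         # signal that feedback is available
    deadline = time.time() + wait_s
    consulted = False
    while time.time() < deadline:
        if robot.hand_touched():                     # student wants the feedback
            consulted = True
            robot.say(prompt.text)                   # speak the feedback in the student's language
            if prompt.expects_answer:                # e.g. "Do you need the concept 'orange'?"
                answer = robot.listen()              # "yes" / "no", or None if not understood
                tool.apply_response(prompt, answer)  # tool edits the concept map accordingly
            break
        time.sleep(0.1)
    robot.lower_right_hand()                         # no touch within wait_s: feedback declined
    return consulted                                 # logged for the consultation-rate analysis
```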
Concept Map Quality

In order to evaluate the quality of concept maps, the first author and research assistants used four criteria based on the relevant literature [38] in a hand-coding process. The criteria used were: (1) topic-relevant concepts, (2) relevant propositions, (3) correct concepts, and (4) correct propositions. The team performed the hand-coding process jointly and aimed to reach consensus on the quality of the concept maps. This strategy facilitated an iterative and collective assessment procedure, which enhanced both dependability and validity of scoring. Furthermore, the first author and research assistants compared the concept map with a reference concept map for correctness.

Topic-relevant concepts and propositions were simply the number of concepts and propositions (nodes and edges) present in the student concept map and relevant for the topic in general (in this case, photosynthesis). The inclusion of a greater number of relevant elements in a student concept map was assumed to be associated with greater understanding of the topic, with students being able to name more concepts relevant to the topic and identify more of the connections between them.

Evaluation of the remaining two criteria, correct concepts and propositions, was done by comparing the students' concept maps to a reference concept map (see Fig. 1). A concept was marked as correct when it was present in the reference map; a similar approach was used for propositions. It should be noted that when comparing propositions, the causal direction and label for the proposition were ignored. For example, the proposition "photosynthesis requires water" was considered equivalent to the proposition "water is used in photosynthesis", as both propositions connect photosynthesis and water.

For each criterion, we counted the number of concepts or propositions in each student concept map. Next, we classified the quality of the concept map based on a four-point scale ranging from excellent through good and fair to poor, with the use of the rubrics shown in the Appendix. To have a numerical indication of the quality of the concept map, each of these four criteria was transformed into a numerical score (see the Appendix). The total concept map quality score was determined as the average of the scores obtained for these four criteria.

For example, Fig. 4 shows a concept map created by a student (translated from the student's native language). This concept map contains 11 concepts and 10 propositions. Regarding the first criterion, we gave a rating of Excellent (10 points) because that rating applies when there are 9 or more relevant concepts, and the student's concept map contains 11 relevant concepts. Similarly, for the second criterion, we gave a rating of Excellent (10 points) because the map contains 10 relevant propositions. For the third criterion, we gave a rating of Fair (6 points), because although the majority (64%) of the concepts are correct, some concepts are incorrect, such as substances of interest, ground, sun, men, and animals, and some concepts are missing, such as roots and leaves. Finally, for the fourth criterion, we gave a rating of Good (8 points), because three of the 10 propositions are false (70% correct). For example, while the proposition linking glucose and plant is in the reference map, in the student map there are separate propositions linking photosynthesis and plant, as well as photosynthesis and glucose. Averaging the scores for the four criteria gives a total score of 8.5 for the student concept map in Fig. 4.
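As a sketch of the scoring step just described, each criterion's rating is converted to points and the four values are averaged. The Excellent/Good/Fair point values follow the worked example above; the value assumed for Poor is a placeholder, since the full rubric is in the Appendix and is not reproduced here.

```python
# Hypothetical sketch of the rubric-to-score conversion; the "poor" value is assumed.
POINTS = {"excellent": 10, "good": 8, "fair": 6, "poor": 4}

def concept_map_score(ratings):
    """ratings: one Excellent/Good/Fair/Poor rating per criterion."""
    return sum(POINTS[r.lower()] for r in ratings.values()) / len(ratings)

# The student map in Fig. 4: Excellent, Excellent, Fair, Good -> (10 + 10 + 6 + 8) / 4 = 8.5
fig4 = {"relevant_concepts": "Excellent", "relevant_propositions": "Excellent",
        "correct_concepts": "Fair", "correct_propositions": "Good"}
print(concept_map_score(fig4))  # 8.5
```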
Attitude Test

Sisman et al. [37] developed and validated a scale for measuring attitudes towards robots, which consists of the subscales Engagement, Intention, Enjoyment, and Anxiety. The scale items were created as five-point Likert-type questions (from 1 = strongly disagree to 5 = strongly agree); according to a reliability analysis, the internal consistency coefficient for the whole scale was 0.90 [37]. For this study, the scale was slightly adapted. The item "I enjoy lessons that are handled using a robot" implies that the lesson was taught by the robot. However, in this study the robot just gave feedback; it did not teach. Therefore, that item was removed. A new item, "I like getting feedback from the humanoid robot", was added to the Enjoyment subscale, in line with similar feedback studies in the literature [10]. This item directly refers to receiving feedback from the robot. A parallel version of the questionnaire for the use of an avatar was created based on the opinion of two domain experts. The new version of the scale is given in Table 1. The scale was translated into Dutch and checked and corrected by two English and Dutch language experts.

Cronbach's alpha for the total Attitude scale in this study was 0.92. For the Engagement subscale α = 0.79, for the Anxiety subscale α = 0.73, for the Intention subscale α = 0.81, and for the Enjoyment subscale α = 0.86. Moderate skewness and kurtosis were observed for the Enjoyment, Anxiety and Attitude (sub)scales (see Table 2).

Procedure

The experiment was conducted at two schools in the east of the Netherlands. In each school, two NAO humanoid robots were used. All students were randomly assigned to the HRC or AC groups before the experiment. The logistical arrangement differed slightly between the two schools. In the first school, two rooms were set up for the AC and one larger room for the HRC. The rooms for the AC each had one notebook computer, and the larger room for the HRC had two humanoid robots and two notebooks that were placed next to the robots. The larger room was used by two participants at the same time; the space was large enough to separate the participants and was arranged in such a way that they could not see or hear each other. In the second school, students in the AC worked in a room with two notebooks, so that two participants used the space at the same time. The other two rooms each had a notebook and a humanoid robot standing next to the notebook. In both schools, on the morning of the actual experiment, one of the researchers informed students in each participating class as a group about the goals of the experiment, the extent and purpose of the data gathered, and the students' right to withdraw at any time. The participants were also given an introduction to concept maps and the concept mapping tool in general. They were also instructed on how they would be called to the experiment individually throughout the day. The experiment was organized in such a way that four students (two from each condition) could participate in the study at the same time. The participating students were asked to direct the next students, or one of the researchers directed the next students to the relevant rooms. Students used their student number to log in to the ILS. One of the researchers or a research assistant gave a quick introduction about the experiment. Students were provided with a brief refresher on photosynthesis in the first phase of the ILS. In this section, there was a short paragraph about what photosynthesis is and the process of
photosynthesis accompanied by two visuals, one showing a very general impression of the role of photosynthesis in food production and the other showing a plant in the soil with a sun in the sky, indicating that in a leaf of the plant carbon dioxide goes in and oxygen comes out. Following that, one of the researchers or a research assistant introduced the use of the concept mapping tool for the example topic of fruits and vegetables. After this, students had 15 min to complete the concept map on photosynthesis. After 10 min, the student was informed by the experiment leader that there were 5 min left. After 15 min, students were instructed to stop working on their concept map and to fill out the questionnaire.

Ethical Consent

The experimental procedure was approved by the ethical committee of the University of Twente. Both secondary schools have agreements with all parents covering research for the purpose of improving education. The topic of the learning environment was aligned with students' regular curriculum to minimize the impact on students. Given these circumstances, students' passive consent was deemed appropriate. Students were informed of the purpose of the experiment, the extent and purpose of data gathered during the experiment, and their right to withdraw from the experiment at any time. Contact information for the researchers was provided directly to the students, and it was made clear that they could also direct any questions to their teacher, who would contact the researchers if needed. Data collection took place in …

Data Analysis

Multiple analyses were conducted. First, to analyze students' consultation of feedback, logfiles were analyzed with regard to feedback interaction frequency. As the data were not normally distributed, a Mann-Whitney U-test was used to investigate between-condition differences. Second, the quality scores for the concept maps were determined on the basis of the criteria presented above. To test for significant differences in total quality scores for the concept maps between the HRC and the AC, an independent samples t-test was carried out. Third, students' attitudes were compared between the HRC and AC conditions using analysis of variance (ANOVA; Attitude scale) or multivariate analysis of variance (MANOVA; on subscale level), as the four attitude subscales were moderately correlated. Subsequently, estimated marginal means were computed to investigate the direction of potential differences. Prior to all analyses, descriptive statistics were computed for all variables analyzed.

Results

The study was conducted at two schools, but for technical reasons data on avatar and robot feedback access were not available for the first school. For that reason, results for the attitude questionnaire and concept map quality are based on data from two schools, whereas data on the use of feedback are only available for one school.
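A compact sketch of the analysis plan from the Data Analysis section above, applied to a hypothetical per-student data frame (column names are assumptions); it is not the study's actual analysis code.

```python
import pandas as pd
from scipy import stats
from statsmodels.multivariate.manova import MANOVA

def run_analyses(df: pd.DataFrame):
    """df: one row per student with assumed columns: condition ('HRC'/'AC'),
    consult_rate, cm_quality, Engagement, Intention, Enjoyment, Anxiety."""
    hrc = df[df["condition"] == "HRC"]
    ac = df[df["condition"] == "AC"]

    # Feedback consultation rates were not normally distributed -> Mann-Whitney U-test
    mwu = stats.mannwhitneyu(hrc["consult_rate"], ac["consult_rate"], alternative="two-sided")

    # Concept map quality -> independent samples t-test
    tt = stats.ttest_ind(hrc["cm_quality"], ac["cm_quality"])

    # Correlated attitude subscales -> MANOVA with condition as the factor
    manova = MANOVA.from_formula(
        "Engagement + Intention + Enjoyment + Anxiety ~ C(condition)", data=df)

    return mwu, tt, manova.mv_test()
```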
Students' Accessing of Feedback

Students' access of feedback was calculated from the logfiles, where we determined how often students clicked the avatar to see the feedback or tapped the hand of the robot to hear the feedback. The consultation rates pertaining to the feedback messages were computed for both conditions. Descriptive statistics are given in Table 3. To determine whether the feedback consultation rate data were normally distributed, a Shapiro-Wilk test was performed. The result (W = 0.914, p < 0.01) showed that the data were not normally distributed. Therefore, we performed a non-parametric Mann-Whitney U-test to determine whether a significant difference existed between the HRC and AC conditions. A total of 62.31% of feedback messages were consulted by the students in the HRC, whereas only 19.51% of feedback messages were consulted by the students enrolled in the AC. There was a significant difference between the conditions in terms of consultation rate in favor of the HRC (W = 172, p < 0.001, r = 0.35). Statistical power was determined to be 0.51; sensitivity analysis yielded a required effect size of 0.44 to achieve 0.8 power.

Concept Map Quality

With skewness and kurtosis values between −1.5 and 1.5, these data followed a normal distribution [40]. On average, students scored 4.80 points (SD = 2.42); skewness and kurtosis were −0.08 and −1.01, respectively. The means and standard deviations of the total scores and the scores for each criterion are presented in Table 4 for both conditions. Results of independent samples t-tests are also given in Table 4. The results showed no significant difference between the total scores for the two conditions (p = 0.838), and no significant differences on the separate criteria. Therefore, we can conclude that there was no difference in terms of concept map quality between the two conditions. Sensitivity analyses yielded that, given 0.8 power, an effect size of 4.8 would have been required.

Attitudes Towards Robot and Avatar

The Attitude questionnaire was administered at both participating schools. Table 5 presents descriptive statistics summed over both schools. To test for significant differences in students' attitudes towards the robot and avatar, an ANOVA was carried out on the total score (Attitude), yielding no significant difference: F(1, 136) = 0.39, p = 0.532. Assuming statistical power of 0.8, sensitivity analyses yielded a required effect size of 0.24. As described in Table 6, the subscales correlate significantly. Consequently, MANOVA analyses with the subscales as dependent variables were performed, yielding a significant between-condition difference: F(4, 133) = 6.38, p < 0.001, V = 0.06. This significant between-condition difference stemmed from the Enjoyment subscale, where participants in the HRC had significantly higher scores than their peers in the AC: t(136) = −2.47, p = 0.015. No significant differences were found for the subscales Engagement, Intention and Anxiety. For the MANOVA analysis we achieved a statistical power of 0.62. Sensitivity analysis yielded a required Pillai's trace V of 0.08, given a statistical power of 0.8.
Conclusion and Discussion

In the current study, students had to construct a concept map and received feedback during the construction process. This is a common approach to help improve the quality of the concept map, and traditionally this feedback is given by humans, either the teacher (e.g., [36]) or fellow students (e.g., [16]). Feedback given by humans is hard for the receiver to ignore, but also time-consuming for the feedback giver. Therefore, alternatives in the form of automated feedback have been developed. However, this feedback, often presented through prompts or avatars, is easier for students to ignore than human feedback (Anonymous [1]). The current study sought to find out if offering students feedback from a humanoid robot meant that students consulted the feedback more frequently compared to feedback coming from an avatar, and looked into the effects of the humanoid robot versus an avatar on students' experiences and the quality of their concept maps.

Within the scope of RQ1, our results indicate that students tended to consult the robot feedback more frequently than the avatar feedback. This may be because the robot's embodied presence is more noticeable and inspires more trust than a non-physical object such as an avatar. Bainbridge et al. [2] investigated the impact of a robot's physical presence on human evaluations of the robot as a social collaborator. Their results indicated that individuals were more inclined to complete trust-related activities when they interacted with the physically present robot rather than through live video transmission, the latter being similar to a screen avatar. In addition, the fact that the humanoid robot raised its hand when there was feedback available may also have attracted the students' attention. This feature of the robot may have served as a visual and audible cue to alert students to engage with the feedback.

For RQ2, we had expected that students would produce higher quality concept maps if they were willing to follow the advice of an agent. Since students more often consulted the advice given by the robot, we might have expected higher concept map quality in the HRC condition. However, we found no significant difference between the quality of the concept maps created by students in the HRC versus the AC. Students generated fairly low-quality products (M = 4.85 for HRC, M = 4.76 for AC) regardless of the type of agent they interacted with. The most obvious explanation may be that students dismissed the recommendations offered by the robot, although in a few cases the robot advice may have been suboptimal. For example, this was the case when students spoke in a low voice and the robot did not understand the student's response. In that case, the robot prompted for a repetition. However, if no suitable reply was received even after two attempts, no automatic modification of the concept map was applied, leaving the concept map as it was.
Within the scope of RQ3, in terms of attitudes, no significant overall difference was found between students' attitudes towards the robot and the avatar. On the subscale level, students who interacted with the robot reported a higher level of enjoyment than students who received feedback through the avatar. This may have been because students perceived the robot as more entertaining than an avatar, due to its three-dimensional human-like appearance and interactive behavior. In relation to this, research has shown that humanlike social robots tend to evoke more positive emotional responses from users than systems lacking the humanoid form [6]. The presence of a robot may also play a significant role in students' motivation [23,42]. Therefore, based on the findings of previous studies and our current research results, it could be concluded that using humanoid robots to provide feedback to students may have advantages in terms of attracting student attention, increasing engagement with and utilization of the feedback, and enhancing students' enjoyment and motivation [39].

No significant differences in engagement and intention were found between those using the robot or the avatar, but the descriptive data here also showed an advantage for the HRC. Previous research has shown that the impact of the physical presence and embodiment of social robots on engagement and intention is complex and may depend on various factors such as robot behavior, task demands, and individual user characteristics [18]. This, of course, makes the proper design of robot instruction a challenging task. What may complicate matters could be that a robot may evoke more anxiety for some students than an avatar. We did not find a significant difference in anxiety between the two conditions, but the descriptive data suggested that the lack of physical presence in the avatar condition could create a less threatening atmosphere for some students, resulting in a lower level of anxiety. However, due to the lack of evidence for this, we can only speculate about this.
The limitations of our study include the relatively small sample size, the use of only one type of humanoid robot, and the short duration of our intervention. Future research could investigate the impact of different designs for robot behavior and various types of robot tutors on students' engagement, motivation, intention to use feedback, and anxiety levels in order to gain a more comprehensive understanding of how social robots can be most effectively implemented in educational settings. In this context, it may have been the case that the robot by its very nature was more attractive to students than the avatar. A study set-up that also included a condition with a robot-like avatar could have shed more light on this aspect. Also, the intervention in our study only lasted for 15 min, which is shorter than a typical instructional intervention. It should be tested whether students' results and attitudes are similar when a longer intervention is used. Furthermore, the study did not account for individual student characteristics such as prior experience with or attitudes towards technology, which may have affected students' engagement and motivation levels. Additional research could explore how the social robot tutor can adapt to individual differences to provide more personalized education and support. Another limitation is the absence of a control group, which would have allowed the effectiveness of the robot and avatar conditions to be compared with a traditional human-led tutoring approach or with a condition without feedback. Additionally, the limited intervention duration may not adequately reflect long-term impacts. The fact that students' anxiety about communicating with robots was not investigated in its own right can also be counted among the limitations of this study. In addition, the application of a pre-test to determine students' level of familiarity with robots could have been useful for evaluating factors such as the novelty effect. For future studies, we suggest that researchers consider practical issues such as the robot's inability to understand the speech of students who speak in a low voice, and the occasional failure of the touch sensors or of the communication link.

Our study contributes to the growing body of literature on the use of robots and avatars in educational settings. Our results suggest that robots can be more effective than avatars in promoting enjoyment of the learning process and can lead to more frequent consultation of feedback than an avatar does. When implementing robots in actual school practice, schools and teachers would, of course, still have to deal with aspects such as costs, setup, and maintenance. Future research may consider exploring the mechanisms behind the effects we found and ways to optimize the use of humanoid robots and avatars to enhance student learning experiences.
Acknowledgement

The authors express their sincere gratitude to the participants in this study for actively participating and being receptive towards a new technological innovation. Additionally, recognition is given to Firdevs Sirma, Busra Bozkurt, Tugba Sevval Gumus, and Senanur Hiz, the research assistants responsible for coding students' concept maps. A special acknowledgement is also extended to Karel Kroeze, who developed the feedback mechanism and avatar system utilized in this investigation and who also aided in the data collection. Furthermore, Joris de Vries (Het Noordik Lyceum) and Gerbrand Wilbrink Med (Canisius) are acknowledged for allowing their students' participation in this study along with providing valuable support throughout its duration.

Appendix: Numerical Indication of the Quality of the Concept Map

This appendix consists of the quantitative representation of the concept map's quality, evaluated against four criteria derived from the literature.

Fig. 2 Pictures from the humanoid robot setting
Fig. 4 Example of a student concept map
Table 1 Items from the attitudes questionnaires
Table 3 Feedback consultation rate per condition
Table 4 Concept map quality as total score and per criterion
A score of 1 is the lowest level of the measured construct and 5 is the highest. The Anxiety score was reversed to line up with the other subscales, with a higher score meaning a more favorable attitude.
Folding of Prion Protein to Its Native α-Helical Conformation Is under Kinetic Control*

The recombinant mouse prion protein (MoPrP) can be folded either to a monomeric α-helical or an oligomeric β-sheet-rich isoform. By using circular dichroism spectroscopy and size-exclusion chromatography, we show that the β-rich isoform of MoPrP is thermodynamically more stable than the native α-helical isoform. The conformational transition from the α-helical to the β-rich isoform is separated by a large energetic barrier that is associated with unfolding and with a higher order kinetic process related to oligomerization. Under partially denaturing acidic conditions, MoPrP avoids the kinetic trap posed by the α-helical isoform and folds directly to the thermodynamically more stable β-rich isoform. Our data demonstrate that the folding of the prion protein to its native α-helical monomeric conformation is under kinetic control.

Although protein folding is commonly thought to be controlled by thermodynamic preferences, it has been understood by many, including Anfinsen and others (1, 2), that kinetic issues can alter the folding landscape. Whereas most small globular proteins will refold spontaneously in vitro to a native conformation, in vivo folding often exploits auxiliary molecules and defined subcellular compartments to avoid the deposit of misfolded forms (3). Increasingly, a role for protein misfolding in a variety of neurodegenerative diseases has emerged. A common thread joining prion-based diseases and Alzheimer's disease, and possibly Parkinson's disease and frontotemporal dementia, is the conversion of a normal, cellular, monomeric isoform of a protein into a β-sheet-rich, polymeric form (4-6). When the deposited polymeric form is sufficiently ordered to bind Congo red and exhibit birefringence to polarized light, the pathologic term amyloid is used to cluster these and other maladies (7). Recent studies by Dobson and others (8-12) have demonstrated that a broad variety of proteins that rapidly fold into monomeric or oligomeric cellular forms under native-like conditions can also be refolded into β-rich, amyloid forms under conditions that destabilize the native state. So far, these proteins have not been associated with human deposition diseases. This finding has led to the suggestion that the ability to adopt alternative β-rich folds capable of forming amyloid is not a unique property of specific proteins associated with conformational diseases but reflects a general property of polypeptide chains (13). The interplay between protein concentration and the conformational preferences of the monomeric chain in driving the transition to a β-rich multimeric isoform remains to be more fully explored.

Glockshuber and colleagues (14) have shown that a fragment of the mouse prion protein folds very rapidly into the α-helix-rich conformation, with a half-life of 170 s as measured at 4 °C. Here, we report that a β-sheet-rich conformation of the mouse prion protein (MoPrP) is thermodynamically more stable than its native α-helix-rich conformation. The conformational transition from the α-helical to a β-sheet-rich isoform is controlled by a large energetic barrier that is associated with partial unfolding and oligomerization of an intermediate state. Under partially denaturing conditions, it is possible to avoid the kinetic trap that leads to the normal cellular isoform, PrPC, and fold the prion protein directly to a thermodynamically more stable, non-native β-isoform.
Our data demonstrate that folding the prion protein to its native α-conformation is under kinetic, not thermodynamic, control.

EXPERIMENTAL PROCEDURES

Protein Preparation - The expression and purification of recombinant MoPrP(89-231) was performed as described by Mehlhorn et al. (15).

Circular Dichroism - CD spectra were recorded with a J-720 CD spectrometer (Jasco, Easton, MD) scanning at 20 nm/min, with a bandwidth of 1 nm and data spacing of 0.5 nm, using a 0.1-cm cuvette. Each spectrum represents the accumulation of three individual scans after subtracting the background spectra. To monitor the refolding curves, MoPrP was diluted from 10 M urea to various concentrations of urea in 20 mM sodium acetate, in the absence or presence of 0.2 M NaCl, pH 3.6, and then incubated at room temperature for different periods of time. No change in pH value was detected during the time course of incubation. To monitor the kinetic trace of the conformational transition, α-MoPrP was rapidly mixed with 10 M urea in a 1:1 volume ratio, whereas to monitor the kinetics of refolding to β-MoPrP, MoPrP unfolded in 10 M urea was mixed with buffer, again at a 1:1 volume ratio. All kinetic experiments were carried out in 20 mM sodium acetate and 0.2 M NaCl, pH 3.6.

Analysis of the Kinetic Data - The rate constant and apparent rate order of refolding were calculated from Equation 1, in which C0 is the concentration of monomeric MoPrP at zero time, C is the concentration of monomeric MoPrP at time t, and n is the apparent order of the process. ΔE‡ was calculated from the Arrhenius relation, with k_obs measured experimentally and k_0 determined from the equation for a diffusion-controlled reaction, assuming that the reaction follows fifth-order kinetics.

Size-exclusion Chromatography - All separations were performed at 23 °C with a flow rate of 1 ml/min using a TSK-3000 high pressure liquid chromatography column.

* This work was supported in part by grants from the National Institutes of Health (AG0Z132, AG10770, and NS14069), as well as by a gift from the G. Harold and Leila Y. Mathers Foundation. I.B. was supported by the John Douglas French Foundation for Alzheimer's Disease Research. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

Thioflavin T Assay - To follow the kinetics of amyloid formation, 0.64 mg/ml β-MoPrP was incubated in 20 mM sodium acetate and 0.2 M NaCl, pH 5.5, constantly shaken at 36 °C. In the time course of incubation, aliquots of MoPrP were diluted 8 times with phosphate-buffered saline, pH 7.0, and the fluorescence was measured using an LS50B fluorimeter (PerkinElmer Life Sciences) at 482 nm (excitation at 450 nm, excitation slit 5 nm, emission slit 10 nm, 0.4-cm rectangular cuvettes) with 5 μM thioflavin T.

Congo Red Binding - Congo red (Sigma) was dissolved in 5 mM potassium phosphate, 150 mM NaCl, filtered 5 times through a 0.22-μm filter (Millipore, Bedford, MA), and adjusted to 0.2 mM. The difference spectra were obtained by subtracting the Congo red spectra in the absence of MoPrP from the Congo red spectra in the presence of 1.5 μM MoPrP amyloid, corrected for MoPrP scattering.

Electron Microscopy - Samples were absorbed on carbon-coated, 600-mesh copper grids for 30 s, stained with freshly filtered 2% ammonium molybdate or 2% uranyl acetate, and were viewed in a JEOL JEM 100CX II electron microscope at 80 kV at standard magnifications of 40,000.
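Equation 1 itself is not reproduced in the text above. For an apparent reaction order n, the standard integrated rate law and the Arrhenius relation referred to in this section would take the following form (a reconstruction based on the surrounding description, not a verbatim copy of the paper's equations):

```latex
% Integrated rate law for an apparent reaction order n (reconstruction of Equation 1):
\frac{1}{C^{\,n-1}} - \frac{1}{C_0^{\,n-1}} = (n-1)\,k\,t

% Arrhenius relation used to estimate the activation energy:
k_{\mathrm{obs}} = k_0 \exp\!\left(-\frac{\Delta E^{\ddagger}}{RT}\right)
\qquad\Longrightarrow\qquad
\Delta E^{\ddagger} = RT\,\ln\!\frac{k_0}{k_{\mathrm{obs}}}
```

The linearity of a plot of 1/C^(n-1) versus time for n = 5 is what is later taken as evidence of apparent fifth-order kinetics.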
RESULTS AND DISCUSSION

To estimate the thermodynamic stability of α-MoPrP, its urea-induced unfolding and refolding was measured using far-UV CD as a probe of its secondary structure. In a low salt buffer, pH 3.6, the urea-induced unfolding profile of the molar ellipticity at 222 nm shows a single cooperative transition between the α-isoform and the unfolded state (Fig. 1a). When α-MoPrP is unfolded in 10 M urea and then refolded by diluting the urea concentration, its refolding curve exhibits hysteresis, a phenomenon indicative of a non-two-state process (Fig. 1a). Both the unfolding and refolding limbs of the curve remain stable for at least 5 weeks when MoPrP is kept in a low salt buffer (20 mM sodium acetate). However, when refolding of MoPrP at 10 μM concentration is performed in a high salt buffer (0.2 M NaCl, 20 mM sodium acetate), the refolding curve undergoes a gradual time-dependent transformation from a single cooperative transition to a transition with local intermediates (Fig. 1b). If a similar experiment is performed at 30 μM MoPrP, the migration of the refolding curve occurs more rapidly (Fig. 1c). Unfolded MoPrP folds first to the α-helical form upon dilution from 10 M urea (Fig. 1d). During incubation for 5 weeks in the high salt buffer, it undergoes a slow conformational transition to the β-rich form, as illustrated by the change in the overall CD spectra as well as by the reduction of the CD signal at 222 nm (Fig. 1, b and d). The conformational transition from the α-helical to a β-sheet-rich isoform is accompanied by oligomerization, as judged by size-exclusion chromatography (SEC) (Fig. 1e). Immediately after dilution from 10 M urea, a new peak corresponding to an oligomer appears, in addition to the peak that represents a monomer. During the conformational transition, the population of monomer decreases whereas the fraction of oligomer grows. Although the square variance analysis of the oligomer peak indicates that there is certain heterogeneity of the oligomer species, electrospray mass spectrometry suggests that an octamer is the dominant multimeric assembly (data not shown).

The unfolding and refolding behavior of MoPrP demonstrates hysteresis, a time-dependent transformation of the single transition curve into a double transition curve, and a concentration dependence for this process. These observations challenge the application of either of the two possible classical three-state models used previously to estimate the thermodynamic parameters for PrP unfolding (16, 17). In contrast, a model with two independent transitions, one between the α-isoform and unfolded and the other between the β-isoform and unfolded, can be used to fit the data. We have observed that the refolding to the α-isoform is much faster than the refolding to the β-isoform, whereas the time-dependent accumulation of the β-isoform indicates that it is thermodynamically more stable than the α-isoform. Thus, MoPrP diluted out of urea folds predominantly to the α-isoform, with little β-isoform present. The presence of a β-isoform would account for the hysteresis between the unfolding and the refolding curves (Fig. 1b). With time, the refolding curve transforms from an apparent single transition to the double transition, demonstrating equilibration of the α- and the β-isoforms. Direct comparison of the thermodynamic stability of the α- and the β-isoforms illustrates that the α-isoform is not the lowest energy state.
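The two-state analysis applied in the following paragraphs is conventionally based on the linear extrapolation method. The relations below are the standard textbook forms, not the paper's exact fitting equations; note also that the oligomeric β-isoform equilibrium is concentration-dependent, so the actual treatment of that transition may differ in detail.

```latex
f_U = \frac{y_N - y}{y_N - y_U}, \qquad
K = \frac{f_U}{1 - f_U}, \qquad
\Delta G = -RT\,\ln K

\Delta G([\mathrm{urea}]) = \Delta G_{\mathrm{H_2O}} - m\,[\mathrm{urea}],
\qquad
C_{1/2} = \frac{\Delta G_{\mathrm{H_2O}}}{m}
```

Here y is the observed signal (e.g., ellipticity at 222 nm or the SEC-derived population), y_N and y_U are the folded and unfolded baselines, and C_1/2 is the urea concentration at the transition midpoint.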
First, we estimated the thermodynamic parameters for the α-isoform using the urea-induced unfolding curve and applying a classical two-state model (see Fig. 1a and Table I) (18). To evaluate the thermodynamic stability of the β-isoform, two parameters, the molar ellipticity at 222 nm and the fraction of the oligomer, were monitored in parallel as a function of urea concentration after re-equilibration of MoPrP for 5 weeks. Despite the fact that a small fraction of MoPrP remains trapped in the α-helical conformation even after 5 weeks, we have exploited the fraction of the β-oligomer as directly measured by SEC to analyze the "unfolded ↔ β-isoform" equilibrium using the two-state model (Fig. 2a). The unfolding curve measured by CD requires deconvolution, because it is composed of signals from the β-isoform, the unfolded state, and the α-isoform. Using the population of the monomer measured by SEC as a function of urea concentration and the thermodynamic parameters estimated previously for the "α-isoform ↔ unfolded" equilibrium, we calculated the contribution of the α-isoform to the CD curve (Fig. 2b). When this contribution is subtracted from the original curve, a curve reflecting the unfolded ↔ β-isoform transition results. As shown in Fig. 2c, the transition curves for the β-isoform are superimposable, with ΔG, m, and C1/2 determined from the two techniques equal within the uncertainty of the experiment (Table I). Both ΔG and C1/2 demonstrate that the β-isoform is thermodynamically more stable than the α-isoform (see Fig. 2c and Table I). Because both isoforms can be refolded directly from the unfolded state, we have used the unfolded state as a reference in the free energy diagram (Fig. 2d).

Although the β-isoform is thermodynamically more stable than the α-isoform, it might not be a true global energy minimum state, because the β-isoform can undergo an additional time-dependent transition to a polymeric amyloid form. Incubation of β-MoPrP at 37 °C with constant shaking leads to the formation of higher molecular weight aggregates that possess amyloid properties. The process of amyloid formation monitored by thioflavin T binding displays an apparent latent period and then an exponential accumulation of the aggregate (Fig. 3a). In addition to thioflavin T, the amyloid of MoPrP binds Congo red in a specific manner, as judged by birefringence of polarized light and the typical red shift of the absorbance spectra (Fig. 3b). Aggregated MoPrP forms numerous twisted fibrillar filaments as seen by electron microscopy (Fig. 3c).

Why is the thermodynamically more stable β-isoform not accessible during folding under native conditions? Previously, it has been shown that the folding of PrP to the α-isoform is an extremely fast, first-order process (14). Folding to the β-isoform is slower by several orders of magnitude and is concentration-dependent. To prevent the conformational conversion, the α-isoform has to be separated by a large energetic barrier from the β-isoform. Although the free energy diagram does not provide a view of the actual kinetic pathway for the conformational transition, several important observations can be made concerning the origin of the energetic barrier. First, the α-isoform has to unfold substantially en route to the β-isoform. As we have seen before, the α-isoform converts very slowly to the β-isoform at pH 3.6 in the absence of urea (Fig. 1b). This process can be accelerated by shifting the α-isoform ↔ unfolded equilibrium toward the unfolded state.
After jumping the urea concentration from 0 to 5 M, we observed a very fast loss of secondary structure by the α-monomer within the dead time of manual mixing, followed by an accumulation of a β-sheet-rich conformation (Fig. 4a). This result illustrates that a substantial portion of the energetic barrier requires partial unfolding of the α-isoform. The connection between the structural complexity of the pretransition state and the energetic barrier is demonstrated by previous observations that conversion of PrP-derived peptides with low structural complexity into β-rich isoforms occurs spontaneously and does not require partially denaturing conditions (19-21). Whether the transition state on the way from the α- to the β-isoform is predominantly unfolded under native conditions or whether it has residual β-sheet or α-helical structure remains to be established. A significant contribution to the energetic barrier seems to be associated with the process of oligomerization. As shown in Fig. 4b, the accumulation of a β-rich conformation is accompanied by oligomerization. The fact that both kinetic curves are superimposable illustrates that the two processes are coupled (Fig. 4a). MoPrP can be refolded directly to the β-isoform if the unfolded protein is diluted first to 5 M urea (Fig. 4b). When dialyzed out of urea and salt, β-MoPrP is stable for months at room temperature with no detectable conversion to the α-isoform.

Analysis of the kinetic traces indicates that the process of folding to the β-isoform represents a single transition with an apparent reaction order of 5, regardless of whether the refolding is initiated by dilution of urea from 10 to 5 M, by a jump of the urea concentration from 0 to 5 M, or whether the conformational transition occurs in the absence of urea. Such a high order of reaction suggests that the conformational transition will depend upon the concentration of the transition state. To estimate the energy of activation (ΔE‡) of the conformational transition, the Arrhenius relation can be used, in which k_obs is the rate constant of the conformational transition measured experimentally, and k_0 is the rate of the process under diffusion control. Under the experimental conditions employed (pH 3.6 and 10 μM MoPrP), we found that the α-isoform is separated from the β-isoform by an energy barrier of 20 kcal/mol (Fig. 4c). The energetic barrier is predicted to be much higher under physiological conditions because of the lower concentration of PrP and the higher thermodynamic stability of the α-isoform at pH 5-7 (Fig. 4d). For wild-type MoPrP, the calculated energy barrier of 35-45 kcal/mol is sufficient to prevent the process of conformational transition over the functional lifetime of the protein. Hence, a large energetic barrier prevents the conversion of the α-isoform to the thermodynamically more stable β-isoform.

From the kinetic perspective, the process of conformational transition can be facilitated by the reduction of the energetic barrier (22). Thus, single point mutations associated with inherited forms of prion diseases might reduce the energetic barrier by stabilizing the transition state. Additionally, if PrPSc provides a template for the conversion of PrPC to PrPSc by binding and stabilizing the transition state, this would also speed up the conformational conversion. Our results clearly indicate that the folding of native PrPC is under kinetic control.
The observations that many proteins are able to adopt alternative amyloid-like folds require us to revisit the role of kinetic traps in protein folding (8-12). If a β-rich, amyloid-competent structure is an intrinsic preference, especially at high protein concentration, then compartmentalization of partially folded intermediates and proteins that mediate unfolding and clearance of misfolded proteins play critical roles in cellular health. In addition, side-chain patterns that favor the formation of amyloid, such as alternating polar and nonpolar amino acid residues, will be avoided (23). Despite these strategies, some proteins, including PrP, Aβ, α-synuclein, parkin and tau, find a route to a β-rich, multimeric structure with unfortunate consequences. FIG. 4. a, the kinetic trace of the transition from the α-isoform to the β-isoform (10 μM MoPrP) induced by jumping the urea concentration from 0 to 5 M at pH 3.6, as monitored simultaneously by SEC (open squares) and CD (filled circles). b, the kinetic trace of folding to the β-isoform induced by jumping the urea concentration from 10 to 5 M at pH 3.6, monitored by CD. In the inset, the linearity of the fifth-order plot (1/[normalized signal]^(n−1) versus time) suggests that the process of folding may follow apparent fifth-order kinetics. c, free energy diagram of the conformational transition, representing the activation energy (kcal/mol) estimated at pH 3.6 and 10 μM MoPrP. TS represents the transition state. β-MoPrP undergoes an additional transition to the amyloid form, represented by the dotted line. d, the activation energy versus the concentration of MoPrP shown at different pH values, as calculated from the Arrhenius relation applying diffusion-controlled reaction rates. The physiological concentration range of MoPrP is shown by the shaded bar.
4,208.4
2001-06-08T00:00:00.000
[ "Biology", "Chemistry" ]
Simultaneously improved dielectric, optical and conductivity properties of SrLa1−xNdxLiTeO6 double perovskites In this study, SrLa1-xNdxLiTeO6 (x = 0.00, 0.25, 0.50, 0.75, 1.00) compounds were prepared using the solid-state method, and their structural, optical, dielectric and conductivity properties were investigated. The Rietveld refinement of x-ray diffraction (XRD) data shows that the compounds crystallize in monoclinic symmetry (i.e. the P21/n space group). The morphological scanning electron microscopy study reports a larger grain size when the dopant is added. The optical ultraviolet-visible light spectroscopy (UV-Vis) study reveals that the energy band gap decreases as the doping increases from x = 0.00 to 0.50. Dielectric studies using electrochemical impedance spectroscopy (EIS) characterization reveal the non-Debye trend of the real dielectric permittivity (ε′), with an enhancement of ε′ at 1 MHz from x = 0.00 to 0.50. ε′ increases and the dielectric loss tangent (Tan δ) decreases as the temperature is increased. The frequency-dependent conductivity (σ_AC) plot follows the universal power law at all temperatures, and the σ_AC behavior in SrLa1-xNdxLiTeO6 is due to the tunneling of polarons. Recently, tellurium-based oxides have attracted the attention of researchers due to their structural features and technological applications in various fields, such as microwave communication systems. The microwave dielectric properties of Te6+ double perovskites (i.e. A2MgTeO6 (A = Sr, Ca)) were reported by Dias et al.; their ε′ values for Sr2MgTeO6 and Ca2MgTeO6 are 14.31 and 13.23, respectively [22]. Besides, the same authors reported that the Sr2ZnTeO6 compound has an ε′ value of 14.1 [23]. These dielectric results could be associated with the tolerance factor, volume, density, and polarizability aspects. Recently, the focus has turned to mixed A-cation double perovskites (i.e. AA′BB′O6), which simultaneously display a layered ordering of A-site cations and octahedral ordering of B-site cations. AA′BB′O6 with a 1:1 B-site ordering has the potential to be excellent in dielectric [24] or ferroelectric/paraelectric aspects [25]. For example, Vilesh et al. discussed the dielectric properties of BaBiNaTeO6 and BaLaNaTeO6, which have ε′ values of 39.7 and 18.5 at 1 MHz, respectively. This property is attributed to the high density and large grain size of BaBiNaTeO6 and consequently its high ε′ [24]. Meanwhile, the dielectric properties of BaBiLiTeO6 and SrBiLiTeO6 have been studied, and the ε′ values of these compounds are 49.5 and 34.4 at 1 MHz, respectively. The latter has a smaller ε′ value because the stronger ionic bonding of Sr in SrBiLiTeO6 in turn yields a smaller polarizability [26]. However, no clear relationship exists between the grain size or densification and the dielectric properties in these works, because the small-grained BaBiLiTeO6 can still produce a higher ε′ than SrBiLiTeO6. These works indicate that applying two different cations at the A site in tellurium-based AA′BB′O6 double perovskites can significantly affect the physical properties of double perovskites. SrLaLiTeO6 reportedly consists of the perfect 1:1 B-site ordering of Te6+ and Li+ that is similar to that in SrBiLiTeO6 [26] and has an optical band gap of 4 eV [27]. This finding suggests that SrLaLiTeO6 also has potential for dielectric or ferroelectric applications.
To the best of our knowledge, no report on the dielectric and conductivity properties of pristine or doped SrLaLiTeO6 can be found in the literature. Investigating the effects of small-size Nd doping at the A site of SrLaLiTeO6 on its grain size, dielectric and optical properties is therefore an interesting endeavor. Hence, we report structural, optical, dielectric and conductivity studies on SrLa1-xNdxLiTeO6 (x = 0.00, 0.25, 0.50, 0.75, 1.00). Experimental. Polycrystalline powders of SrLa1-xNdxLiTeO6 (x = 0.00, 0.25, 0.50, 0.75, 1.00) were synthesised using the solid-state reaction method. High-purity (99.99%) strontium carbonate (SrCO3), lithium carbonate (Li2CO3), lanthanum oxide (La2O3), neodymium oxide (Nd2O3) and tellurium dioxide (TeO2) powders were used as raw materials. The chemical powders were mixed at stoichiometric ratios with a total mass of 3 g. The samples were then ground in an agate mortar with a pestle for 1 h to achieve good homogeneity. After grinding, the mixed powder sample was pressed into a pellet at a pressure of 4-5 kPa using a hydraulic press. The pellet was then placed on an alumina crucible and calcined in a box furnace at 850 °C for 10 h at a heating rate of 15 °C min−1 and a cooling rate of 1 °C min−1. Then, the samples were sintered in air at 900 °C for 10 h, followed by slow cooling at 1 °C min−1. This procedure is expected to keep the obtained stoichiometry near the desired oxygen stoichiometry [28,29]. The phase(s) of the final products were analyzed using XRD patterns collected by an x-ray powder diffractometer (PANalytical X'Pert PRO MPD) equipped with a Cu Kα source from 10° to 90°. The General Structure Analysis System (GSAS) and its graphical user interface (EXPGUI) [30,31] were used for Rietveld refinement [32] prior to visualization in the Visualization for Electronic and Structural Analysis (VESTA) program. The peak shape was modeled by a pseudo-Voigt function and refined together with the cell parameters, the scale factor, the zero shift, and the background function. For the Fourier transform infrared (FTIR) study, samples were prepared by thorough mixing with potassium bromide (KBr), and the infrared reflectance spectra ranging from 400 cm−1 to 1500 cm−1 were recorded using an FTIR-Raman Drift Nicolet 6700 instrument. The surface morphology of the sintered pellets was obtained by scanning electron microscopy (SEM) using a LEO model 982 Gemini instrument. The grain size was measured using the ImageJ software. The dielectric and impedance properties in the frequency range of 50 Hz to 1 MHz were collected using a HIOKI 3532-50 LCR HiTester connected to a computer, with the pellets in a sandwich electrode geometry. The optical study was performed using a PerkinElmer Lambda 750 spectrophotometer (Waltham, USA) over the photon energy (hν) range of 2 to 5 eV. Results and discussion. 3.1. Structural analysis. Figure 1 shows the XRD data of SrLa1-xNdxLiTeO6 (x = 0.00, 0.25, 0.50, 0.75, 1.00). Figure 2 shows the octahedral structure viewed along the ab plane, which consists of Te6+ or Li+ (B-site cations) alternately surrounded by six O2− atoms, whilst the much larger Sr2+ and La3+ (A-site cations) occupy the spaces between these Te(Li)O6 octahedral layers. Sr2+ and La3+ are located at (0.497, 0.506, 0.253) for Nd 0.00, (0.495, 0.513, 0.252) for Nd 0.25, (0.497, 0.518, 0.251) for Nd 0.50, (0.501, 0.523, 0.252) for Nd 0.75 and (0.500, 0.527, 0.251) for Nd 1.00. Li+ is placed at (0.5, 0, 0) and Te6+ at (0, 0.5, 0).
The tolerance factor (τ) is calculated by the following equation [27]: τ = [(R_a + R_a′)/2 + R_o] / {√2 [(R_b + R_b′)/2 + R_o]}, where R_a and R_a′ represent the radii of the A-site cations (Sr2+ and La3+/Nd3+, respectively), R_b and R_b′ represent the radii of the B-site cations (Li+ and Te6+, respectively), and R_o represents the radius of the oxygen anion (O2−). The ideal cubic structure has a τ of 1. The decrease in τ is caused by the small ionic size at the A sites. Hence, the crystal structure is distorted by the tilting of the B and B′ octahedra. The octahedral tilt (distortion) angle Φ is calculated as Φ = (180° − θ)/2, where θ represents the average Li-O-Te bond angle [34]. The octahedral tilting angle calculated for all compounds is 9.6°. The average Li-O-Te bond angle for all compounds is the same, suggesting that doping at the A site does not affect the B-site bond angle. The octahedral tilt in these compounds is attributed to the presence of Sr2+, La3+ and Nd3+ atoms at the A sites, because the tilt is needed to optimize the interatomic distances in the (Sr/La/Nd-O) bonds. The bond lengths of (Li-O) and (Te-O) are different, suggesting that the octahedral structure distorts and affects the bond lengths of the B cations. The bond between Li and O is longer than that between Te and O because Li+ has a larger ionic radius than Te6+, and Te-O has a stronger covalent bond than Li-O because of its hexavalent oxidation state. The crystallite size (D) is calculated using the Scherrer equation [35]: D = Kλ / (β cos θ), where K represents a constant (0.94), λ represents the x-ray wavelength, β represents the full width at half maximum, and θ represents the angle of the most intense peak in the XRD pattern. The calculated D values follow the same trend as the grain sizes discussed below. Selected refined Te-O bond lengths (Å) across the series (x = 0.00 to 1.00) are: Te-O1 (×2): 1.939(7), 1.936(4), 1.937(6), 1.931(7), 1.935(9); Te-O2 (×2): 1.939(6), 1.937(5), 1.937(5), 1.932(7), 1.936(9); Te-O3 (×2): 1.925(1), 1.925(6), 1.923(7), 1.921(10). Figure 3 shows the infrared patterns of SrLa1-xNdxLiTeO6 (x = 0.00-1.00). Figure 4 exhibits the surface morphology of SrLa1-xNdxLiTeO6. The compounds are formed by rough, agglomerated particles with nearly the same shape distribution. The SEM images of all samples show that the particles are crowded together and nearly 'glued' to each other, especially in Nd 0.75 and Nd 1.00. SrLa1-xNdxLiTeO6 possessed micrometer-sized grains with considerable porosity, and the porosity decreased as doping increased. The grain size range increased from 0.7-1.2 μm for Nd 0.00 to 0.9-2.0 μm for Nd 0.25 and 1.4-2.7 μm for Nd 0.50, then decreased to 1.1-2.0 μm for Nd 0.75 and 0.6-1.5 μm for Nd 1.00. The increase in grain size was consistent with the increase in crystallite size. The increase in grain size up to Nd 0.50 may be because the content of the smaller Nd cation increases as more dopant is added, lowering the free energy of the grain boundaries and thereby enhancing grain growth. The reduction of grain size in Nd 0.75 and Nd 1.00 was probably due to the increasingly dominant small-grained Nd-rich phase when the Nd content was high. The energy-dispersive x-ray spectroscopy (EDX) results show that all expected elements are present in the samples.
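For concreteness, a minimal sketch of the two structural quantities used above, the Goldschmidt tolerance factor for an AA′BB′O6 double perovskite and the Scherrer crystallite size, is given below; the ionic radii and peak parameters shown are illustrative assumptions, not the refined values of this work.

```python
import numpy as np

def tolerance_factor(r_a, r_a2, r_b, r_b2, r_o):
    """Goldschmidt tolerance factor for an AA'BB'O6 double perovskite."""
    return ((r_a + r_a2) / 2 + r_o) / (np.sqrt(2) * ((r_b + r_b2) / 2 + r_o))

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.94):
    """Scherrer crystallite size D = K*lambda / (beta * cos(theta)), in nm."""
    beta = np.radians(fwhm_deg)            # FWHM in radians
    theta = np.radians(two_theta_deg / 2)  # Bragg angle
    return k * wavelength_nm / (beta * np.cos(theta))

# Illustrative Shannon-type radii in angstroms (assumed values, not from this work)
tau = tolerance_factor(r_a=1.44, r_a2=1.36, r_b=0.76, r_b2=0.56, r_o=1.40)
# Cu K-alpha wavelength and an assumed peak width / position for the strongest reflection
D = scherrer_size(wavelength_nm=0.15406, fwhm_deg=0.15, two_theta_deg=31.5)
print(f"tolerance factor = {tau:.3f}, crystallite size = {D:.1f} nm")
```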
UV-Vis analysis. The UV-Vis diffuse reflectance measurements for SrLa1-xNdxLiTeO6 were taken over the wavelength range of 200 nm to 800 nm at room temperature, and figure 5(a) displays the obtained spectra as a function of wavelength (λ). Absorption peaks were observed at 300-400, 500-600, and 700-800 nm. In this plot, the peaks' intensity increased as the doping increased from Nd 0.25 to Nd 1.00. Meanwhile, the Nd 0.00 compound showed no peaks. Hence, these peaks may be related to the charge transfer transition between Nd3+ and O2− (valence and conduction band) states in the lattice. Figure 5(b) exhibits the UV-Vis diffuse spectra of SrLa1-xNdxLiTeO6 plotted according to the Kubelka-Munk equation [27]: F(R) = (1 − R)²/(2R), where R represents the diffuse reflectance. By taking the intercept of the extrapolation to zero absorption with the photon energy axis, the absorption edge value can be obtained. The band gap energy values can then be calculated by applying the Tauc relation [F(R)hν]^(1/n) = A(hν − E_g), where hν represents the photon energy and E_g represents the optical gap energy. In this plot, the values of E_g can be directly obtained by extrapolation without further calculation, as shown in figure 5(b). In the Tauc relation, n can be varied from n = 1/2 (direct allowed transition), n = 2 (indirect allowed transition), n = 3/2 (direct forbidden transition) to n = 3 (indirect forbidden transition). Fitting all the values of n in the Tauc relation indicates that n = 1/2 gives the best fit. Thus, the optical transition in these compounds is a direct allowed transition of electrons from the valence band to the conduction band, in which only photon absorption occurs because the crystal momentum in both bands is the same. The values of E_g obtained from the Kubelka-Munk and Tauc plots are tabulated in table 2. The values from both methods are comparable to each other. These results are also comparable to the report on the optical properties of SrLaLiTeO6 by Lal et al [42]. The band gap deformation potential for a physical response to a hydrostatic volume change is given by a_V = ∂E_g/∂ ln V, where V is the unit cell volume [43,44]. Therefore, the optical band gap should decrease with increasing dopant concentration if the deformation potential is positive. In this work, the optical band gap of the samples decreased as the Nd concentration increased up to x = 0.50, which is consistent with the unit cell volume trend. The increase of the band gap with further doping may be attributed to the increase of the unit cell volume. Furthermore, the smallest optical band gap was obtained in Nd 0.50, indicating the best conducting ability. This property may be due to the presence of polarons in these samples that contribute to the conducting ability. The production of polarons in SrLa1-xNdxLiTeO6 can be related to the formation of singly/doubly ionized oxygen vacancies, most probably caused by lithium evaporation during high-temperature sintering, described in Kroger-Vink notation [45] by the reaction O_O^x → ½O2(g) + V_O•• + 2e′, where V_O•• represents a doubly ionized oxygen vacancy. Oxygen vacancies thus contribute to the dielectric and conductivity behaviour observed in this work. However, the monoclinic structure and the lattice parameters of all samples are comparable to those of stoichiometric SrLaLiTeO6 according to a previous report [27]. Hence, the differences in oxygen content amongst the samples are small and should not have a major influence on the measured properties. Materials with a wide band gap that can emit light have helped fuel the development of semiconductors in recent years for optoelectronics applications, such as blue/green lasers or light-emitting diodes (LEDs) [46].
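Returning to the band-gap extraction described above, the sketch below applies the Kubelka-Munk transform and a Tauc-style linear extrapolation for a direct allowed transition (n = 1/2); the reflectance spectrum and fitting window are synthetic placeholders rather than the measured spectra.

```python
import numpy as np

def kubelka_munk(reflectance):
    """F(R) = (1 - R)^2 / (2R) from diffuse reflectance (0 < R <= 1)."""
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

def tauc_band_gap(energy_ev, f_r, n=0.5, fit_window=(3.9, 4.4)):
    """Estimate Eg by linearly extrapolating (F(R)*hv)^(1/n) to zero."""
    y = (f_r * energy_ev) ** (1.0 / n)
    mask = (energy_ev >= fit_window[0]) & (energy_ev <= fit_window[1])
    slope, intercept = np.polyfit(energy_ev[mask], y[mask], 1)
    return -intercept / slope  # intercept of the linear region with the hv axis

# Synthetic reflectance spectrum with an absorption edge near 4 eV (placeholder)
hv = np.linspace(2.0, 5.0, 300)
R = 0.9 - 0.6 / (1.0 + np.exp(-(hv - 4.1) / 0.08))
Eg = tauc_band_gap(hv, kubelka_munk(R))
print(f"Estimated direct band gap: {Eg:.2f} eV")
```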
The high values of the optical band gap obtained are comparable with that of GaN (3.5 eV and above), which is commonly used as a basis for solid-state LED production to provide an energy-saving, durable, and long-life alternative to incandescent bulbs [47,48]. EIS dielectric analysis. The real component of the dielectric constant (ε′) indicates the ability of the electrical dipoles in the samples to align with the external electric field. Figure 6(a) shows the variation of ε′ versus frequency between 50 Hz and 1 MHz for the SrLa1-xNdxLiTeO6 (x = 0.00, 0.25, 0.50, 0.75, 1.00) samples at 298 K. The specific response of ε′ to frequency is different for each sample according to the concentration of the Nd dopant. For frequencies below 100 Hz, the samples showed a nearly similar response, with ε′ dropping abruptly with frequency from a higher initial value. However, the specific response of each sample was different at frequencies higher than 100 Hz. For sample Nd 0.00, the response showed a weak decrease in ε′ with frequency. Increasing the Nd dopant in samples Nd 0.25, Nd 0.75, and Nd 1.00 caused a sharper decrease of ε′ up to 1 MHz. However, for sample Nd 0.50, the ε′ value was, surprisingly, nearly constant above 100 Hz. Figure 6(a) (inset) shows the variation of ε′ at frequencies of 250, 500 and 750 kHz, together with the grain size, for each Nd concentration (x = 0.00, 0.25, 0.50, 0.75, 1.00). Interestingly, this variation showed an increasing ε′ trend up to a peak value at Nd 0.50 before decreasing with a further increase in Nd dopant. This ε′ maximum apparently coincided with the grain size peak at Nd 0.50. Figure 6(b) shows the Tan δ variation for SrLa1-xNdxLiTeO6 (x = 0.00, 0.25, 0.50, 0.75, 1.00) at 298 K. The dielectric loss (Tan δ) represents the energy loss during the alternation of the electric field applied to the samples. This figure shows a drop of Tan δ at low frequencies (<500 Hz), followed by a relatively flat trend at intermediate frequencies (600 Hz-10 kHz). For samples Nd 0.00 and Nd 0.50, the loss slightly increased at 100 kHz and above. However, for samples Nd 0.25, Nd 0.75, and Nd 1.00, a large increase in Tan δ in the same frequency range can be observed. This increase indicates the possible presence of a relaxation peak at frequencies higher than 1 MHz. The initial drop of ε′ at low frequencies (<100 Hz) for all samples in figure 6(a) can be attributed to the heavy dipoles, which mainly consist of space charge dipoles. The space charge dipoles were probably due to defects at the grain boundaries that cannot follow the alternation of the electric field as the frequency increases. This drop was supported by the high value of Tan δ in the same frequency range for all samples, which indicates that energy loss took place. In this frequency range (<100 Hz), the Nd dopant did not seem to significantly affect the heavy space charge dipoles' contribution to the total polarisation. Defects in the crystal structure, i.e. oxygen vacancy formation, may be another reason for the presence of the heavy dipoles in this frequency range. The presence of oxygen vacancies in insulating compounds can reduce the phonon modes, causing space charge polarization [49]. This drop may also be due to DC conduction loss. However, the plot of ln ε′ versus ln ω does not show a slope of magnitude near (−1) that would prove the presence of DC conduction loss.
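The DC-conduction check mentioned in the last sentence can be carried out as a simple slope test on a log-log plot; the sketch below does this on synthetic data (a low-frequency slope near −1 would point to a DC-conduction contribution). The frequencies and permittivity values are placeholders, not the measured data.

```python
import numpy as np

def loglog_slope(freq_hz, eps):
    """Slope of ln(eps) versus ln(omega); a value near -1 suggests a DC-conduction
    contribution dominating the low-frequency dielectric response."""
    omega = 2 * np.pi * np.asarray(freq_hz)
    slope, _ = np.polyfit(np.log(omega), np.log(np.asarray(eps)), 1)
    return slope

# Placeholder data: a low-frequency dispersion much weaker than 1/omega
freq = np.logspace(1.7, 6, 50)          # ~50 Hz to 1 MHz
eps_prime = 40 + 200 / np.sqrt(freq)    # synthetic example, not measured values
print(f"low-frequency log-log slope ~ {loglog_slope(freq[:10], eps_prime[:10]):.2f}")
```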
At higher frequencies (>100 Hz), the ε′ response was most probably due to the combination of medium- and light-sized dipoles, which consist of orientational, ionic, and electronic dipoles. For sample Nd 0.00, the decline of ε′ was correlated with the increase of Tan δ, which can be seen at frequencies higher than 10 kHz. The effect of the Nd dopant can be seen at frequencies ranging from 100 kHz to 1 MHz where, for samples Nd 0.25, Nd 0.75 and Nd 1.00, increasing the Nd dopant caused a sharper decrease in ε′, which was related to the larger increase in Tan δ at frequencies higher than 100 kHz. Above 100 kHz, the medium-sized dipoles in Nd 0.25, Nd 0.75, and Nd 1.00 were not able to follow the external electric field efficiently up to 1 MHz, thus producing a low polarisation effect. However, the almost constant ε′ for Nd 0.50 was intriguing and can be related to the nearly constant Tan δ for the same sample. The absence of a similar increase in Tan δ with frequency for this sample indicates an abundance of medium-sized dipoles that can respond to high frequencies and produce a higher polarisation value. The variation of ε′ can be understood better by considering the ε′ trend in the inset of figure 6(a), which is consistent with the grain size trend with increasing Nd concentration. Larger grains allow the formation of larger medium-sized dipoles with a higher ε′ than smaller grains. In addition, larger grains are expected to carry a higher density of medium-sized dipoles than smaller grains. This phenomenon explains the consistency of the variation in grain size with that of ε′. A similar explanation of the effect of grain size on ε′ is reported in the literature, where ε′ is linearly related to the grain size [50]. Figure 7 illustrates the variation of the ε′ value for SrLa1-xNdxLiTeO6 as a function of temperature in the range of 298 K to 343 K. The pattern of the ε′ response is nearly the same in all samples. All samples showed a decreasing trend of ε′ at low frequencies (up to 300 Hz) from room temperature to 323 K, except for Nd 0.25 (from room temperature to 333 K) and Nd 1.00 (from room temperature to 313 K). However, the values exhibit an increasing ε′ pattern at the same frequencies for higher temperatures. Meanwhile, the ε′ value at intermediate and high frequencies presents a decreasing trend at all temperatures, and the ε′ value increases with temperature, especially at high frequencies. Figure 8 illustrates the variation of ε′ as a function of temperature for the Nd 0.50 sample at different frequencies. At frequencies higher than 100 Hz, the dispersion of the plot was clearly towards higher temperatures. This result shows that ε′ was strongly dependent on frequency at high temperatures. At most frequencies, the values of ε′ did not increase linearly with increasing temperature. The ε′ values decreased at a certain temperature for each frequency. At 100 Hz, the decrease of ε′ started at 303 K. At higher frequencies, the decrease of ε′ shifted towards higher temperatures, except at 1 MHz, where the ε′ values increased with temperature up to 343 K. At 1 MHz, the value of ε′ may decrease at much higher temperatures, following the trend of ε′ at lower frequencies. Figure 9 illustrates the variation of the Tan δ of all samples at various temperatures. Two maxima existed, at low and at higher frequencies.
As the temperature increased, the low-frequency maxima were enhanced, whilst the high-frequency maxima were reduced. Nonetheless, in most compounds, no low-frequency maxima existed at temperatures of 333 K and above. This phenomenon can be correlated with the trend at low frequencies in the graph of the ε′ response in figure 7 and most probably pertains to the resonance that occurred in all the samples. Overall, the declining trend in figure 7 indicates the reduction of the contribution of the heavy space charge when the frequency increases. However, starting at 333 K (343 K in Nd 0.25 and 323 K in Nd 1.00), an increasing trend of ε′ values at frequencies below 400 kHz can be observed in the compounds, which is contrary to the declining trend of ε′ values at low temperatures. This finding is most probably due to dielectric resonance [51]. At high frequencies, the strong temperature dependence of the ε′ value suggests that, as the temperature increases, more thermal energy weakens the structural bonds and is converted into kinetic energy that allows the space charge and orientational dipoles to follow the alternation of the electric field. These results suggest two possible effects of high temperature (323 K and above) on the studied samples. One is the weakening of the dipole formations at low frequencies due to dielectric resonance. The other is the provision of kinetic energy and the weakening of structural bonds, so that the movement of dipoles at higher frequencies is facilitated. Figure 8 shows that the weak temperature dependence of the ε′ value at low frequencies, i.e. below 1000 Hz, indicates that dielectric resonance occurred and hindered the dipoles from following the electric field alternation. As the frequency increases, the shifting of the ε′ peak towards higher temperatures indicates the effect of temperature in enhancing ε′ values by allowing more thermal energy to be converted into kinetic energy to aid dipole movement, before resonance occurs at higher temperatures. The largest rate of increase of ε′ with temperature is at 1 MHz, indicating the greatest delocalization of dipoles, which consist of orientational and lighter dipoles. Figure 9 shows that the presence of maxima at low frequencies can be due to the DC conduction loss discussed earlier. The presence of maxima can also be related to the movement of charge carriers through the grain boundaries. Given the higher resistance of grain boundaries compared with that of the grains, more energy is required for the motion of carriers through the grain boundaries. Hence, the energy loss is high. For temperatures of 333 K and 343 K (including 323 K in Nd 0.75 and Nd 1.00), the shape of the Tan δ curve is most probably related to dielectric resonance. Meanwhile, the presence of high-frequency maxima indicates the presence of relaxation peaks at high frequencies. Furthermore, the plot indicates the possibility of peaks shifting towards higher frequencies with temperature. This shift can result from bond weakening and hence freer movement of dipoles, which becomes more dominant at higher temperatures. Furthermore, this result can also be due to the scattering of thermally induced charge carriers [52]. Figure 10 shows the imaginary modulus plot, which is important for determining the charge-carrier mechanism or relaxation times within the studied compounds. However, figure 10 shows no definitive relaxation peaks for SrLa1-xNdxLiTeO6. A higher frequency range is probably required to determine the relaxation peaks.
The plot exhibits a long tail at lower frequencies, especially at higher temperatures. As the temperature increased, the presence of the long tail suggests that more delocalization of charge carriers occurred. This is understandable because, by conservation of energy, the additional thermal energy supplied is converted into kinetic energy that drives the movement of carriers. However, the tails do not start from zero values, indicating an electrode effect of the space charge. The impedance spectroscopy method is widely used to characterize the electrical properties of materials and provides data on both the resistive (real) and reactive (imaginary) components associated with the various transport mechanisms operating within the structure of the material. Figure 11(a) shows the Nyquist plots of the SrLa1-xNdxLiTeO6 compounds. The low-frequency region corresponds to the grain boundary resistance, R_gb [53]. The plots of all compounds are nearly straight lines, indicating their highly insulating or resistive nature. Nd 0.50 clearly shows the largest R_gb whilst Nd 0.00 shows the smallest R_gb, with 4.6×10^6 Ω and 1.3×10^6 Ω, respectively, which is consistent with their ε′ trend. The fitted semicircular arcs show a slight distortion, indicating a deviation from the Debye mechanism. To calculate the capacitance of SrLa1-xNdxLiTeO6, the relation C = ε′ε0A/d is used, where ε0 represents the vacuum permittivity, A represents the area of contact between the compound and the electrode, whilst d represents the thickness of the compound. For the AC electrical properties, AC conduction takes place only when an alternating electric field is applied to the compounds. Figure 12(a) displays the plot of the AC conductivity of Nd 0.50, which displays two distinct regions: a low-frequency dispersive region and a near-plateau region at higher frequencies. The value of σ_AC increased significantly from low frequency up to 100 kHz. At higher frequencies, a near-plateau region existed from 100 kHz to 1 MHz. The variation of conductivity in the low-frequency region was due to polarisation effects at the electrode-perovskite interfaces. The conductivity decreased as the frequency decreased due to the increased accumulation of charge at the electrode-perovskite interfaces. The conductivity was nearly frequency-independent at higher frequencies (plateau region). The extrapolation of the plateau region on the log σ_AC plot provides σ_DC [54] for SrLa1-xNdxLiTeO6. For Nd 0.50, the obtained DC conductivity is 4.4×10⁻⁹ S cm⁻¹ at room temperature. Meanwhile, σ_AC may reach higher values at frequencies above 1 MHz, following the same trend of σ_AC as in other ceramic double perovskites [53,55]. This is correlated with the probability of peaks at higher frequencies in the plots of M″ and Tan δ, as previously discussed. Hence, the AC conductivity of SrLa1-xNdxLiTeO6 can be related to Jonscher's universal power law [56]: σ(ω) = σ_DC + Aω^s, where A represents a pre-exponential constant, ω = 2πf represents the angular frequency and s represents the power-law exponent, with 0 < s < 1. In this equation, σ_DC is the DC conductivity, whereas Aω^s is the frequency-dependent (AC) contribution. The plot of ln ε′ against ln ω (not shown) indicates that the value of s obtained is less than 1. According to Funke [55], when s < 1 the charge carriers take a translational motion with sudden hopping between sites in the lattice, whilst when s > 1, localized hopping occurs between lattice sites.
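A minimal sketch of fitting Jonscher's power law to extract σ_DC and the exponent s is given below; the conductivity data are synthetic placeholders rather than the measured values for Nd 0.50.

```python
import numpy as np
from scipy.optimize import curve_fit

def jonscher(omega, sigma_dc, A, s):
    """Jonscher universal power law: sigma(omega) = sigma_dc + A * omega**s."""
    return sigma_dc + A * omega ** s

# Synthetic AC-conductivity data (placeholders, not the measured values)
freq = np.logspace(1.7, 6, 60)                      # ~50 Hz to 1 MHz
omega = 2 * np.pi * freq
sigma_true = jonscher(omega, 4.4e-9, 2.0e-13, 0.75)
sigma_meas = sigma_true * (1 + np.random.default_rng(1).normal(0, 0.03, omega.size))

popt, _ = curve_fit(jonscher, omega, sigma_meas,
                    p0=[1e-9, 1e-12, 0.7], maxfev=20000)
sigma_dc, A, s = popt
print(f"sigma_DC = {sigma_dc:.2e} S/cm, s = {s:.2f}")
```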
Hence, the charge carriers in SrLa1-xNdxLiTeO6 undergo translational hopping motion. To further confirm the AC conduction mechanism in SrLa1-xNdxLiTeO6, the variation of s with temperature is plotted. Figure 12(b) shows the variation of s at different temperatures. This figure indicates that s decreases with temperature up to 323 K and then increases again up to 343 K. Several mechanisms of AC conductivity have been suggested in previous studies, i.e. correlated barrier hopping [57,58], variable range hopping [59], small polaron hopping [53,60,61], quantum mechanical tunneling [62], non-overlapping small polaron tunneling [63] and overlapping large polaron tunneling (OLPT) [64]. Figure 12(c) shows that the most suitable model for all compounds is the OLPT model, which exhibits a decreasing trend of s values with temperature before starting to increase at a certain temperature; this is the same trend reported for other ceramics [65]. The AC conductivity for the overlapping large polaron tunneling model is expressed as σ_AC(ω) = (π⁴/12) e²(k_B T)² [N(E_F)]² ω R_ω⁴ / [2αk_B T + W_HO r_p/R_ω²], where α represents the decay parameter of the localized wave function, N(E_F) represents the density of localized states, k_B represents Boltzmann's constant, T represents the absolute temperature, W_HO represents the activation energy associated with charge transfer between overlapping sites, r_p represents the polaron radius and R_ω represents the tunneling length at frequency ω. For the frequency exponent, the s predicted by this model is expressed as s = 1 − [8αR_ω + 6W_HO r_p/(R_ω k_B T)] / [2αR_ω + W_HO r_p/(R_ω k_B T)]². The polarons can tunnel between two or more sites in the lattice structure, with more or less hopping movement between defects or energy wells. A large polaron is an excitation in which the carrier tunnels through the deformed lattice, with the change in polarisation providing the polaron energy. In the case of OLPT, the potential wells of large polarons at two sites overlap, reducing the polaron hopping energy [65]. Hence, this mechanism is consistent with the small activation energy of the polarons calculated using equation (9). Conclusion. SrLa1-xNdxLiTeO6 (x = 0.00, 0.25, 0.50, 0.75, 1.00) compounds were prepared using the solid-state reaction method and crystallised in the primitive monoclinic P21/n structure. The IR spectra exhibit the characteristic bands of Te(Li)-O bonds, confirming the 'fingerprint' structure of this double perovskite. The SEM images reveal that the grain size increases as dopant is added up to x = 0.50, whilst the grain size decreases for x = 0.75 and x = 1.00. The UV-Vis study shows a decrease of E_opt as the doping increases from Nd 0.00 to Nd 0.50. The frequency and temperature dependence of the real permittivity and the DC conductivity of SrLa1-xNdxLiTeO6 were characterized by impedance spectroscopy. Nd 0.50 recorded the highest ε′ value of 165 at room temperature. This behavior is explained on the basis of grain size enhancement. To further enhance the ε′ value, the sintering parameters could be altered to increase densification, as in other reports. Meanwhile, the AC conductivity plot follows the universal power law, and the nature of the AC conductivity can be explained using the OLPT model with polarons as charge carriers.
7,200.2
2020-08-07T00:00:00.000
[ "Materials Science", "Physics" ]
Radiomics Driven Diffusion Weighted Imaging Sensing Strategies for Zone-Level Prostate Cancer Sensing Prostate cancer is the most commonly diagnosed cancer in North American men; however, prognosis is relatively good given early diagnosis. This motivates the need for fast and reliable prostate cancer sensing. Diffusion weighted imaging (DWI) has gained traction in recent years as a fast non-invasive approach to cancer sensing. The most commonly used DWI sensing modality currently is apparent diffusion coefficient (ADC) imaging, with the recently introduced computed high-b value diffusion weighted imaging (CHB-DWI) showing considerable promise for cancer sensing. In this study, we investigate the efficacy of ADC and CHB-DWI sensing modalities when applied to zone-level prostate cancer sensing by introducing several radiomics driven zone-level prostate cancer sensing strategies geared around hand-engineered radiomic sequences from DWI sensing (which we term as Zone-X sensing strategies). Furthermore, we also propose Zone-DR, a discovery radiomics approach based on zone-level deep radiomic sequencer discovery that discovers radiomic sequences directly for radiomics driven sensing. Experimental results using 12,466 pathology-verified zones obtained through the different DWI sensing modalities of 101 patients showed that: (i) the introduced Zone-X and Zone-DR radiomics driven sensing strategies significantly outperformed the traditional clinical heuristics driven strategy in terms of AUC, (ii) the introduced Zone-DR and Zone-SVM strategies achieved the highest sensitivity and specificity, respectively, for ADC amongst the tested radiomics driven strategies, (iii) the introduced Zone-DR and Zone-LR strategies achieved the highest sensitivities for CHB-DWI amongst the tested radiomics driven strategies, and (iv) the introduced Zone-DR, Zone-LR, and Zone-SVM strategies achieved the highest specificities for CHB-DWI amongst the tested radiomics driven strategies. Furthermore, the results showed that the trade-off between sensitivity and specificity can be optimized based on the particular clinical scenario we wish to employ radiomics driven DWI prostate cancer sensing strategies for, such as clinical screening versus surgical planning. Finally, we investigate the critical regions within sensing data that led to a given radiomic sequence generated by a Zone-DR sequencer using an explainability method to get a deeper understanding of the biomarkers important for zone-level cancer sensing. Introduction Prostate cancer is the most commonly diagnosed type of cancer in North American men, excluding non-melanoma skin cancer, and is one of the leading causes of cancer death in North American men [1,2], accounting for an estimated 164,690 new cases and 29,430 deaths in the United States in 2018. However, prognosis is relatively good given sufficiently early detection, motivating the need for fast and reliable cancer screening methods. Diffusion weighted imaging (DWI) is a magnetic resonance imaging (MRI) technique that is gaining traction as a noninvasive method for prostate cancer sensing. Several DWI sensing modalities have been proposed to better differentiate between healthy and cancerous tissue. Currently, apparent diffusion coefficient (ADC) imaging is the most commonly used DWI sensing modality for cancer sensing.
Computed high-b value diffusion weighted imaging (CHB-DWI) is a different DWI sensing modality that has shown improved distinction between healthy and cancerous tissue [3,4], but this method has not been widely adopted. The current clinical practice is a clinical heuristics driven strategy, where a heuristics based threshold derived from the observations made in past clinical studies is leveraged to detect areas that likely contain cancer [5]. Although this approach leverages important clinical findings, it lacks the ability to characterize more complex spatial traits such as textural or morphological traits that can differentiate between healthy and cancerous prostate tissue, and may be limited in its ability to sense prostate cancer. Radiomics driven cancer sensing methods have been shown to be a promising prognostic tool [6][7][8], but rely on predefined and hand-engineered quantitative radiomic features. Recently, discovery radiomics [9] was introduced to uncover abstract radiomic features, directly from the wealth of sensing data, that capture highly unique tumour traits and characteristics beyond what can be captured using existing radiomics driven sensing approaches. Previous studies have focused on slice-level prostate cancer sensing. However, it can potentially be very beneficial to grade tissue on a smaller scale at a zone level, as tumors in different zones can have different characteristics. Furthermore, zone-level cancer sensing can also help isolate the precise location of cancer for more targeted biopsy and treatment. In this study, we investigate comprehensively the efficacy of two DWI sensing modalities, ADC and CHB-DWI, for radiomics driven zone-level prostate cancer sensing. In order to do this investigation, we introduced several radiomics driven strategies (which we will term as Zone-X sensing strategies) for zone-level DWI prostate cancer sensing geared around hand-engineered radiomic sequences that have not been previously introduced in past literature for the purpose of zone-based prostate cancer sensing, which is an important contribution of this study. Furthermore, for another important contribution of this study, we also introduce Zone-DR, a discovery radiomics driven strategy based on zone-level deep radiomic sequencer discovery. Demonstrating the efficacy of such radiomics driven strategies that leverage DWI sensing modalities can hopefully lead to their widespread adoption for improved zone-based prostate cancer sensing, as using a non-invasive sensing method can reduce the number of negative surgical biopsies and improve the detection of tumors. Diffusion Weighted Imaging for Cancer Sensing In this study, two DWI sensing modalities are leveraged: i) apparent diffusion coefficient (ADC) imaging, and ii) computed high-b diffusion-weighted imaging (CHB-DWI). The details of these DWI sensing modalities are specified below, and illustrative examples are displayed in Figure 1. Apparent Diffusion Coefficient Imaging DWI measures the sensitivity of tissue to the Brownian motion of water molecules by applying pairs of opposing gradient pulses on either side of refocusing pulses in spin-echo sequences [10]. The duration and amplitude of these gradient pulses are represented by a b-value, and the diffusion-weighted signal S decays exponentially according to equation (1): S = S0 exp(−bD). (1) In (1), S0 is the signal intensity with no diffusion weighting (b = 0) and D is the ADC.
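As a minimal illustration of equation (1), the sketch below estimates D and S0 from a set of multi-b acquisitions by a log-linear least-squares fit and, anticipating the computed high-b DWI described in the next subsection, synthesizes a signal at a higher b-value from the fit; the b-values and signals shown are placeholders, not data from this study.

```python
import numpy as np

def fit_adc(b_values, signals):
    """Least-squares fit of ln(S) = ln(S0) - b*D for the mono-exponential decay in (1)."""
    slope, intercept = np.polyfit(np.asarray(b_values, float), np.log(signals), 1)
    return -slope, np.exp(intercept)          # (D, S0)

# Placeholder single-voxel acquisition: b-values in s/mm^2 and noisy signals
b = np.array([0, 100, 400, 800, 1000])
S = 1200 * np.exp(-b * 1.1e-3) * (1 + np.random.default_rng(2).normal(0, 0.01, b.size))
D, S0 = fit_adc(b, S)

# Synthesize a computed high-b signal from the fitted parameters (illustrative only)
S_hat_2000 = S0 * np.exp(-2000 * D)
print(f"ADC ~ {D:.2e} mm^2/s, S0 ~ {S0:.0f}, computed S(b=2000) ~ {S_hat_2000:.0f}")
```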
DWI is typically performed for different b-values, allowing ADC images to be computed by fitting the signal curve with parameters S0 and D via least-squares or maximum likelihood methods [11]. Cancerous tissue presents with a lower ADC relative to surrounding healthy tissue, allowing it to be identified in estimated ADC images [10]. Computed High b-Value Diffusion Weighted Imaging It has been shown that DWI using high b-values (i.e., b-values greater than 1000 s/mm2) allows for improved distinction between healthy and cancerous tissue due to the higher signal intensities exhibited by cancerous tissue [3,4]. However, due to hardware limitations, high b-value DWI is difficult to achieve at a high enough signal-to-noise ratio to provide quality acquisitions for diagnostic purposes. To attain a higher signal-to-noise ratio, computed high b-value DWI was introduced, where a set of low b-value acquisitions is used to estimate higher b-value acquisitions [12]. For a diffusion weighted signal S_i at a b-value of b_i, the general equation is S_i = S_α exp(−(b_i − b_α)D), where S_α is the reference signal at a b-value of b_α and D is the ADC. An estimate D̂ of the ADC may be formulated as a Bayesian estimation problem, where S is a set of DWI measurements, D is the ADC, P(S|D) is the probability of S given D, and S_i is a single DWI measurement corresponding to b-value b_i [3]: D̂ = arg max_D P(S|D) = arg max_D ∏_i P(S_i|D). Once the ADC has been estimated, the estimate D̂ can be used to compute signals Ŝ_i at any b-value. Radiomics Driven Cancer Sensing Existing radiomics driven cancer sensing methods for prostate cancer typically rely on quantitative, hand-engineered radiomic sequences derived from mono- or multi-parametric MRI. Notably, existing feature-based methods typically define features and perform grading on a per-pixel or per-region-of-interest basis. A comprehensive review of existing radiomics driven methods for prostate cancer sensing was authored by Lemaître et al. [13]. The examined methods used hand-engineered radiomic feature sets derived from some combination of T2-weighted (T2w) imaging, dynamic contrast enhanced (DCE) imaging, diffusion-weighted imaging (DWI), and magnetic resonance spectroscopic imaging (MRSI). Additionally, a number of methods make use of apparent diffusion coefficient (ADC) imaging. For radiomics driven cancer sensing and detection, the examined methods utilize the following classifiers, or some combination thereof: linear/quadratic discriminant analysis, logistic regression, naïve Bayes, AdaBoost, random forest, support vector machines (SVMs), relevance vector machines (RVMs), neural networks, Markov random fields, and conditional random fields. Duda et al. [14] proposed a semi-automatic texture analysis method that combined features from various modalities, including DWI. For each modality, features were computed based on first order statistics, autocorrelation, gradients, fractals, co-occurrence matrices, and run length matrices. Litjens et al. [15] extracted various features, such as second-order statistical and Gabor features from T2w images, multi-scale blobness from ADC maps, and curve fitting and pharmacokinetic features from DCE images, to perform prostate gland segmentation, cancer likelihood mapping, and cancerous region classification. Ozer et al. [16] combined parametric images derived from DCE imaging, T2w imaging, and ADC imaging.
To classify the sensing data, the use of relevance vector machines (RVMs) with a Bayesian framework was proposed and subsequently evaluated against the performance of SVMs with the same framework. Ozer et al. [17] later extended their work to select a threshold value for increased segmentation performance, and further compared it with a representative Markov random field approach. Artan et al. [18] engineered feature vectors from median-filtered multispectral MRI sensing data consisting of axial-oblique fast spin-echo (FSE) T2w images, echo-planar DWI, multi-echo FSE images, and DCE images. These features were used to develop a cost-sensitive SVM for automated prostate cancer grading, which showed improved accuracy over conventional SVMs. A conditional random field was then added to the cost-sensitive SVM framework, further improving localization accuracy. Vos et al. [19] developed a fully-automated prostate cancer sensing method using a supervised two-stage classification approach. Lesion candidates were analysed via a combination of histogram analyses of axial T2w images, pharmacokinetic maps, contrast-enhanced T1w images, and ADC maps with texture-based features. Vos et al. then discriminated prostate cancers from benign abnormalities by their heterogeneity. Khalvati et al. [20] proposed a multi-parametric MRI texture feature model for radiomics driven prostate cancer sensing. The texture feature model comprises 19 low-level texture features extracted from each MRI modality, and is based on the model proposed by Peng et al. [21]. More recently, Khalvati et al. [8] extended this texture feature model to include additional MRI modalities and low-level features, as well as feature selection. Chaddad et al. [22] proposed the extraction of features from the joint intensity matrices (JIMs) of T2w imaging data and DWI data. Using Haralick texture features extracted from both the JIMs and co-occurrence matrices, random forest classifiers were used to compare the utility of classification using features derived from JIMs to that of features derived from co-occurrence matrices. It was also shown that combining features from JIMs and co-occurrence matrices further improved cancer grading performance. Apart from hand-engineered radiomic sequences for prostate cancer, such radiomic sequences can be drawn from other types of cancer sensing, such as lung cancer. Narayanan et al. [23] explored the performance of SVM on a large set of 503 features and shortlisted these features to 300 based on a feature ranking algorithm. Recently, Narayanan et al. [24] introduced a novel optimization method for selecting features from computed tomography (CT) and chest radiographs (CRs) for clustering and classification of lung cancer. The proposed method adapts the feature selection process based on the task at hand. In addition to hand-engineered radiomic sequences, the notion of discovery radiomics has previously been applied to prostate cancer sensing as well as to other problems in radiomics driven sensing [9,[25][26][27][28]. Chung et al. [25] proposed a fully-automated discovery radiomics system for sensing prostate lesion candidates using multi-parametric MRI (MP-MRI). Radiomics features were extracted using a discovered radiomics sequencer consisting of 17 convolutional sequencing layers and 2 fully-connected sequencing layers, and the discovered sequences were evaluated against the hand-engineered radiomic sequences described in [8,21]. More recently, Chung et al.
[26] introduced a Layered Random Projection (LaRP) sequencer, which was evaluated using the same framework as [25]. Karimi et al. [27] extended the methods of [25,26] to use a mixture of convolutional sequencers in order to mitigate the effects of class imbalance. This approach was shown to improve specificity and accuracy when compared to [25]. Image Data and Pre-Processing DWI sensing data (including the ADC and CHB-DWI imaging modalities) of 101 patients was acquired using a Philips Achieva 3.0 T MRI machine at Sunnybrook Health Sciences Centre in Toronto, Ontario. Institutional research ethics board approval and written informed consent were waived by the Research Ethics Board of Sunnybrook Health Sciences Centre. The 3D MRI volume of each patient was divided into 2D slices along the coronal plane, resulting in 18 to 34 slices per patient. Each slice was manually annotated by an expert radiologist, producing a segmentation map of the prostate and its 10 anatomical zones [29]. An example of how the prostate is split into zones is shown in Figure 2. Each zone was labeled with a PI-RADS score between 1 and 5, representing a very low, low, intermediate, high, or very high likelihood that clinically significant cancer is present [30]. 72 patients had one or more zones with a PI-RADS score of 3 or above, and these individuals were referred to obtain biopsies of the potentially cancerous regions. Samples were collected and a histopathological assessment was performed by a trained pathologist using the Gleason grading system [31] to assess the prognosis of patients. In total, 42 zones received a Gleason score of six, 78 received a score of seven, 12 received a score of eight and three received a score of nine, indicating the presence of a cancerous tumor (positive-grade) in 41 patients. Prostate zones were extracted from the image slices based on a zone-level map by cropping the area corresponding to each zone. For ADC, pixels not contained in a zone were masked with the value of 3,949, the maximum ADC value, as lower ADC values indicate a higher likelihood of cancer. For CHB-DWI, pixels not contained in a zone were masked with zero values. This process resulted in extracted zones of varying dimensions, so all examples were resized to 32 by 32 pixels using bilinear interpolation. In total, 12,466 prostate zones were extracted, comprising 12,331 negative-grade and 135 positive-grade, pathology-verified zones. Five stratified splits of the dataset were created at the patient level such that the percentage of positive-grade zones in each split was preserved. To obtain these splits, five folds were randomly selected at the patient level, separately for patients without a cancerous tumor and for patients with a cancerous tumor verified by biopsy. This resulted in four groups of 20 patients and one group of 21 patients. Each group has at least 8 patients with a cancerous tumor present. These five splits were used to perform 5-fold cross validation. Clinical Heuristics Driven Strategy To act as a baseline reference for comparison, the widely used clinical heuristics driven strategy was evaluated in this study for the purpose of zone-level prostate cancer sensing. More specifically [32], for the heuristics driven strategy evaluated in this study, the following clinical heuristics presented in past studies [3,4] were used: • When leveraging ADC sensing, any zone with an ADC value less than 1000 (×10⁻⁶ mm²/s) is considered cancerous.
• When leveraging CHB-DWI sensing, any zone with a CHB-DWI value greater than 1000 is considered cancerous. The estimated probability of cancer in each ADC zone was computed as p_ADC = 1 − (min ADC value of zone)/M_ADC, (6) where M_ADC is the maximum ADC value. The estimated probability of cancer in each CHB-DWI zone was computed as p_CHB = (max CHB-DWI value of zone)/M_CHB, (7) where M_CHB is the maximum CHB-DWI value. With these probabilities, a receiver operating characteristic curve is constructed to determine the area under the curve and the optimal threshold value for each sensing modality. Zone-X: Radiomics Driven Sensing Strategies In this study, we introduce several radiomics driven DWI sensing strategies (which we refer to as Zone-X sensing strategies) in the form of support vector machines (SVM), logistic regression, and random forest techniques using hand-engineered radiomic sequences for the task of zone-level cancer sensing. It is important to note that the Zone-X sensing strategies explored in this study have not been previously introduced in past literature for the purpose of zone-based DWI prostate cancer sensing, and this is thus an important contribution of this study. The Zone-X sensing strategies investigated in this study are briefly discussed below. Support Vector Machines (SVM) Initially proposed and optimized by Vapnik et al. [33], SVM is a supervised learning classification algorithm that performs binary classification by determining the optimal hyperplane between two classes via the maximal margin of separation. The extended use of the kernel trick for computation and of soft margins allows for the creation of nonlinear classifiers that can be applied in real-world applications. Random Forest Proposed by Ho in 1995, random forest is an implementation of decision tree classifiers that prevents overfitting [34]. Ho suggested that subsets of the feature space could be used to generate decision trees and form an ensemble, to obtain a more accurate prediction. Advantages of random forests include minimal tuning of hyperparameters and the ability to view the relative importance of input features. Logistic Regression A standard classification technique, logistic regression is the estimation of the parameters that fit the logistic model to the data. Algorithms to do so include gradient descent and maximum-likelihood estimation. However, the logistic regression technique is limited to performing linear classification and is prone to overfitting. In this study, a zone-level hand-engineered radiomic sequence based on Khalvati et al. [8] is modified for the purpose of zone-level prostate cancer sensing and leveraged in the proposed Zone-X sensing strategies. This zone-level radiomic sequence consists of four first-order statistical features (mean, standard deviation, skewness, kurtosis), 18 Haralick features in four directions, Kirsch features in eight directions, and Gabor features in four directions and three scales within a zone, for a 96-dimensional zone-level radiomic sequence; a small illustrative sketch of this type of feature extraction is given below. Zone-DR: Discovery Radiomics Driven Sensing In this study, a two-part discovery radiomics strategy (which we term Zone-DR) for zone-level radiomic sequencer discovery for the purpose of zone-level prostate cancer sensing is also introduced. An overview of the proposed Zone-DR approach is shown in Figure 3.
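As flagged above, the following is a minimal sketch of assembling a hand-engineered zone-level radiomic sequence (four first-order statistics plus Haralick-style co-occurrence features in four directions); it is illustrative only and does not reproduce the exact 96-dimensional sequence of [8], and the zone image used here is a random placeholder rather than an extracted prostate zone.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from scipy.stats import skew, kurtosis

def zone_radiomic_sequence(zone, levels=32):
    """Illustrative hand-engineered radiomic sequence for one 32x32 zone image:
    four first-order statistics plus two Haralick-style features in four directions."""
    zone = np.asarray(zone, dtype=float)
    first_order = [zone.mean(), zone.std(), skew(zone.ravel()), kurtosis(zone.ravel())]

    # Quantize to a small number of gray levels for the co-occurrence matrix
    q = np.digitize(zone, np.linspace(zone.min(), zone.max() + 1e-6, levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    texture = np.concatenate([graycoprops(glcm, "contrast").ravel(),
                              graycoprops(glcm, "homogeneity").ravel()])
    return np.concatenate([first_order, texture])

# Placeholder zone image standing in for a 32x32 extracted prostate zone
rng = np.random.default_rng(3)
print(zone_radiomic_sequence(rng.integers(0, 255, (32, 32))).shape)  # (12,)
```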
The two-part discovery radiomics strategy comprises: (i) machine-driven sequencer design discovery to discover the radiomic sequencer design, and (ii) data-driven sequencer parameter discovery to discover the parameters of the radiomic sequencer. This discovery radiomics strategy was leveraged to discover zone-level radiomic sequencers for both the ADC and CHB-DWI sensing modalities using a wealth of zone-level sensing data of the prostate gland across an archive of patient cases. Using the final discovered radiomic sequencers, zone-level sensing acquisitions for the current patient are passed through the discovered radiomic sequencers to produce radiomic sequences that characterize the tissue phenotype within the input zone based on the information captured in the respective sensing modality. The generated radiomic sequence can then be used to grade the input prostate zone. The details of the different steps of the proposed Zone-DR sensing strategy are described below. Figure 3. Overview of Zone-DR. For this study, radiomic sequencers were discovered via a two-part discovery radiomics strategy (machine-driven sequencer design discovery to discover the radiomic sequencer design, followed by data-driven sequencer parameter discovery to discover the parameters of the radiomic sequencer) for both the ADC and CHB-DWI sensing modalities using a wealth of zone-level sensing data of the prostate gland across an archive of patient cases. Using the final discovered radiomic sequencers, zone-level sensing data of a current patient are passed through the discovered radiomic sequencers to produce radiomic sequences that characterize the tissue phenotype within the input zone based on the information captured in the respective modality. The generated radiomic sequence can then be used to grade the input prostate zone. Machine-Driven Radiomic Sequencer Design Discovery The radiomic sequencers leveraged in this study for the proposed Zone-DR are highly compact deep convolutional radiomic sequencers, each designed and discovered via discovery radiomics specifically to generate radiomic sequences, given zone-level sensing data captured from a specific sensing modality (ADC or CHB-DWI), that quantitatively characterize the tissue phenotype associated with prostate cancer for zone-level cancer grading. Since this is the first study on zone-based prostate cancer sensing via discovery radiomics, it is important to investigate and identify an appropriate zone-level radiomic sequencer design for the sensing modalities being explored. To achieve this goal, we employ a machine-driven radiomic sequencer design discovery strategy, in which we leverage generative synthesis [35], starting from a deep convolutional radiomic sequencer design prototype, to discover the best deep radiomic sequencer design for zone-based grading, with the area under the curve (AUC) metric as the universal performance function for each modality.
More specifically, the deep convolutional radiomic sequencer design prototype leveraged in this study for the machine-driven radiomic sequencer design discovery strategy was designed with sequencer efficiency and generalization capabilities in mind, inspired by [36]: a kernel size of 3 was leveraged in the sequencer design for computational efficiency while achieving strong grading performance, maxpool operations were allowed to be added at any depth of the sequencer design, and a sequencing operation was incorporated at the end of the sequencer design to generate the output radiomic sequences given the input zone-level sensing data. The final discovered zone-level deep radiomic sequencer designs that achieved the best AUC performance are shown in Figure 4. A number of observations can be made about the discovered radiomic sequencer designs. First, it can be observed that the deep radiomic sequencer designs discovered via the machine-driven radiomic sequencer design strategy exhibit noticeable architectural heterogeneity, as they are tailored specifically for zone-level prostate cancer sensing. Second, it can also be observed that the discovered ADC sequencer design is noticeably less complex than the CHB-DWI sequencer design discovered via the machine-driven sequencer design discovery process. This suggests that a less complex radiomic sequencer design is sufficient to characterize the tissue phenotype captured via the ADC sensing modality compared to that needed for the CHB-DWI sensing modality. Data-Driven Radiomic Sequencer Parameter Discovery The zone-level deep radiomic sequencers designed via the aforementioned machine-driven design discovery process then undergo a radiomic sequencer parameter discovery process to discover all the parameters of the sequencer, given sensing data, so as to capture the phenotype of cancerous tissue. More specifically, an iterative adaptive gradient descent optimization method (in this study, the Adam optimizer) is leveraged with a categorical cross-entropy loss function, shown in Equation (8), to discover the parameters of the radiomic sequencers: L = −(1/N) Σ_n Σ_c y_{n,c} log(ŷ_{n,c}), (8) where y_{n,c} is the ground-truth grade indicator and ŷ_{n,c} is the predicted probability for zone n and grade c. To account for grade imbalance, grade weights of 1 and 150 are used for learning characteristics about the negative-grade and positive-grade zones, respectively. The hyperparameters that discovered the best parameters of the radiomic sequencers are shown in Table 1. Results To evaluate the efficacy of the introduced Zone-X and Zone-DR strategies for zone-level DWI prostate cancer sensing, an empirical five-fold performance analysis using the following performance metrics was conducted for the proposed Zone-X and Zone-DR sensing strategies, along with the baseline clinical heuristics driven sensing strategy: (i) area under the curve (AUC), (ii) sensitivity, and (iii) specificity. To determine the optimal zone-level grading threshold for each zone-level prostate sensing strategy, denoted here as θ̂, we solve the following optimization problem: θ̂ = arg max_θ [tpr(θ) + (1 − fpr(θ))], where fpr(θ) and tpr(θ) denote the false positive rate and true positive rate, respectively. Note that while for this study the weights on 1 − fpr(θ) and tpr(θ) are equal in the above optimization formulation, one can change the weights to determine a zone-level grading threshold that favors the zone-level sensing strategies towards either higher sensitivity or higher specificity.
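A minimal sketch of the threshold selection described above, using scikit-learn's ROC utilities and allowing unequal weights on tpr(θ) and 1 − fpr(θ), is given below; the labels and predicted probabilities are synthetic placeholders rather than outputs of the studied sensing strategies.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def optimal_threshold(y_true, y_prob, w_tpr=1.0, w_spec=1.0):
    """Return the threshold maximizing w_tpr*tpr + w_spec*(1 - fpr) over the ROC curve."""
    fpr, tpr, thresholds = roc_curve(y_true, y_prob)
    score = w_tpr * tpr + w_spec * (1.0 - fpr)
    return thresholds[np.argmax(score)]

# Synthetic zone-level grades and predicted cancer probabilities (placeholders)
rng = np.random.default_rng(4)
y = rng.integers(0, 2, 500)
p = np.clip(0.5 * y + rng.normal(0.3, 0.2, 500), 0, 1)
theta = optimal_threshold(y, p)
print(f"AUC = {roc_auc_score(y, p):.2f}, optimal threshold = {theta:.2f}")
```

Increasing the weight on tpr(θ) pushes the operating point toward higher sensitivity (e.g., for screening), while weighting 1 − fpr(θ) more heavily favors specificity (e.g., for surgical planning).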
Figures 5 and 6 present a quantitative comparison of the AUC, sensitivity, and specificity achieved using the introduced Zone-X and Zone-DR sensing techniques, as well as the baseline clinical heuristics driven sensing strategy, for the ADC and CHB-DWI sensing modalities, respectively. Here, we refer to the proposed Zone-X radiomics driven zone-level prostate cancer sensing strategies leveraging logistic regression, random forest, and support vector machines as Zone-LR, Zone-RF, and Zone-SVM, respectively. One particular advantage of the Zone-DR sensing strategy is that it leverages discovered radiomic sequencers for which the critical regions that led to a given radiomic sequence can be visualized using explainability methods such as [37]. Therefore, in addition to the empirical results illustrating the efficacy of the introduced radiomics driven zone-level prostate cancer sensing strategies, we further visualize the critical regions leveraged by the proposed Zone-DR sensing strategy on several example CHB-DWI positive and negative samples using GSInquire, a state-of-the-art explainability method that has been demonstrated to better reflect decision-making processes than other popular explainability methods [37]. The visualizations are shown in Figure 7. Discussion A number of observations can be made from Figures 5 and 6. First, for both the ADC and CHB-DWI sensing modalities, the introduced Zone-X and Zone-DR prostate cancer sensing strategies outperform the clinical heuristics driven sensing strategy in terms of the AUC performance metric (e.g., the clinical heuristics driven strategy achieved an average AUC of ∼0.79, while Zone-DR achieved an average of ∼0.86 for the ADC sensing modality). This suggests that zone-level radiomic sequences that provide a better characterization of tissue traits are important for distinguishing between positive and negative zones. Second, the Zone-DR sensing strategy achieved the highest sensitivity of the radiomics driven sensing strategies when using the ADC sensing modality, while the Zone-SVM strategy achieved the highest specificity of the radiomics driven sensing strategies when using ADC. Third, the Zone-LR and Zone-DR sensing strategies achieved the highest sensitivities of the radiomics driven sensing strategies when using the CHB-DWI sensing modality. Fourth, the Zone-LR, Zone-SVM, and Zone-DR sensing strategies achieved the highest specificities of the radiomics driven sensing strategies when using the CHB-DWI sensing modality. Fifth and finally, it was observed that the use of CHB-DWI led to higher specificity, while the use of ADC led to higher sensitivity, making the choice of sensing modality relevant for different clinical scenarios. For example, maximizing specificity is important when planning surgical removal of the prostate, where false positive rates should be minimized to avoid unnecessary surgeries. On the other hand, for cancer screening, maximizing sensitivity may be preferable to avoid missing patients with cancer. Upon visual inspection of Figure 1, the cancerous tissue is more apparent in the CHB-DWI prostate slice than in the ADC prostate slice. In addition, Table 1 shows that the CHB-DWI sensing modality is easier to leverage, and the results in Figures 5 and 6 indicate that CHB-DWI achieves higher accuracy, as well as a better balance between specificity and sensitivity, than the design for the ADC modality.
These results show that the CHB-DWI sensing modality provides superior cancer tissue characterization when used within a radiomics driven sensing strategy. Based on the experimental results, it can be observed that the introduced Zone-X and Zone-DR radiomics driven DWI prostate cancer sensing strategies can provide significantly improved cancer sensing performance. As mentioned earlier, an interesting benefit of the Zone-DR sensing strategy is that the critical regions that led to a given radiomic sequence generated by the discovered radiomic sequencer can be visualized using explainability methods. Based on Figure 7, which provides visualizations of Zone-DR critical regions for both positive and negative zones, it can be observed that the critical regions leveraged by Zone-DR are consistent with the marked signal hyperintensity characteristics in CHB-DWI used by radiologists when conducting PI-RADS assessments. More specifically, for the positive examples (Figure 7a), Zone-DR focuses on regions exhibiting signal hyperintensity in the CHB-DWI sensing data. When cancer is not present in a zone (Figure 7b), Zone-DR focuses on a larger region, as shown by the critical region identified by GSInquire as the driving factor for Zone-DR for the negative example. Conclusions In this study, we introduced and demonstrated the efficacy of several radiomics driven sensing strategies (the Zone-X sensing strategies) using the ADC and CHB-DWI modalities for zone-level prostate cancer sensing. We additionally introduced Zone-DR, a discovery radiomics sensing strategy based on zone-level deep radiomic sequencer discovery that discovers radiomic sequences directly for radiomics driven sensing. The introduced Zone-X and Zone-DR sensing strategies were able to achieve noticeably higher performance than the clinical heuristics driven strategy with respect to the AUC performance metric. Furthermore, the experimental results showed that the trade-off between sensitivity and specificity can be optimized according to the particular clinical scenario in which radiomics driven DWI prostate cancer sensing strategies are to be employed, such as clinical screening versus surgical planning. These promising results suggest that radiomics driven DWI sensing strategies such as the proposed Zone-X sensing strategies, and discovery radiomics driven DWI sensing strategies such as the proposed Zone-DR sensing strategy, can potentially be powerful tools for aiding radiologists in zone-level prostate cancer screening.
6,686.4
2020-03-01T00:00:00.000
[ "Medicine", "Engineering" ]
Development of a protein signature to enable clinical positioning of IAP inhibitors in colorectal cancer Resistance to chemotherapy-induced cell death is a major barrier to effective treatment of solid tumours such as colorectal cancer, CRC. Herein, we present a study aimed at developing a proteomics-based predictor of response to standard-of-care (SoC) chemotherapy in combination with antagonists of IAPs (inhibitors of apoptosis proteins), which have been implicated as mediators of drug resistance in CRC. We quantified the absolute expression of 19 key apoptotic proteins at baseline in a panel of 12 CRC cell lines representative of the genetic diversity seen in this disease to identify which proteins promote resistance or sensitivity to a model IAP antagonist [birinapant (Bir)] alone and in combination with SoC chemotherapy (5FU plus oxaliplatin). Quantitative western blotting demonstrated heterogeneous expression of IAP interactome proteins across the CRC cell line panel, and cell death analyses revealed a widely varied response to Bir/chemotherapy combinations. Baseline protein expression of cIAP1, caspase-8 and RIPK1 robustly correlated with response to Bir/chemotherapy combinations. Classifying cell lines into 'responsive', 'intermediate' and 'resistant' groups and using linear discriminant analysis (LDA) enabled the identification of a 12-protein signature that separated responders to Bir/chemotherapy combinations in the CRC cell line panel with 100% accuracy. Moreover, the LDA model was able to predict response accurately when cells were cocultured with Tumour necrosis factor-alpha to mimic a pro-inflammatory tumour microenvironment.
Thus, our study provides the starting point for a proteomics-based companion diagnostic that predicts response to IAP antagonist/SoC chemotherapy combinations in CRC. Introduction Resistance to apoptosis is a classical hallmark of cancer, which can manifest clinically as lack of response to chemotherapeutic agents such as 5-Fluorouracil (5FU) and oxaliplatin [1]. Cancer cells can acquire such resistance through increasing their expression of (and therefore dependence on) anti-apoptotic proteins such as inhibitor of apoptosis proteins (IAPs), making such proteins attractive therapeutic targets [2]. IAPs exert their anti-apoptotic function via two distinct mechanisms [3]. Firstly, cIAP1 and cIAP2 can divert typically death-inducing Tumour necrosis factor-alpha (TNF-α) signals to prosurvival signalling through the ubiquitination of RIPK1, leading to the activation of Nuclear factor kappa-B transcriptional programmes that enhance cell proliferation, propagate further inflammatory signalling via transcriptional upregulation of cytokines including TNF-α itself and drive the transcription of anti-apoptotic proteins, including the caspase-8 regulator FADD-like IL-1β-converting enzyme inhibitory protein (FLIP) and the IAPs themselves [4,5]. Secondly, IAPs [specifically X-Linked inhibitor of apoptosis protein (XIAP)] can directly inhibit the complete processing and subsequent activation of caspase-3/7 and caspase-9, thereby abrogating the cell's ability to execute apoptosis [6,7]. IAP antagonists interact with IAPs through binding to their BIR domains and promote apoptosis induced by extracellular death ligands such as TNF-α by inhibiting cIAP-induced RIPK1 ubiquitination [3,8]; this leads to the formation of an intracellular death signalling complex termed complex II, in which RIPK1 recruits Fas-associated death domain (FADD), which in turn recruits procaspase-8 and FLIP. Depending on the relative amounts of procaspase-8 and FLIP in this complex, procaspase-8 can form homodimers, which drive apoptosis [9,10]. In the absence of procaspase-8, this complex can instead activate necroptosis [11], an alternative form of programmed cell death in which RIPK1 recruits and activates RIPK3, eventually leading to phosphorylation of Mixed lineage kinase domain-like pseudokinase (MLKL); phosphorylated MLKL oligomerises and forms pores in the plasma membrane that result in pro-inflammatory cell lysis [12]. IAP antagonists also inhibit XIAP, facilitating apoptosis mediated by the intrinsic mitochondrial apoptotic pathway and the execution phase of apoptosis mediated by caspase-3/7 [13,14]. The clinical realisation of the potential of IAP antagonists has been hampered by the lack of predictive biomarkers of response to enable patient stratification. Using a large panel of colorectal cancer (CRC) cell lines representative of the clinical diversity seen in CRC, herein we have developed a strategy to predict responses to IAP antagonist-based therapy by quantifying the expression of key apoptotic proteins. Results and Discussion Response of CRC cell lines to IAP antagonist-based therapy We assembled a panel of 12 CRC cell lines with a range of genotypes and microsatellite statuses (Table 1) that were treated with a model IAP antagonist, birinapant (Bir, TL32711), alone and in combination with 5FU and oxaliplatin ('OX5FU', to mimic the clinical FOLFOX regimen). In addition, we included treatment arms in which a pro-inflammatory tumour microenvironment was modelled using recombinant TNF-α.
Cell death was quantified after treatment for 24, 48 or 72 h; the average cell death was also calculated over the three time points (Fig. 1). Overall, the cell lines were resistant to Bir alone, with GP5D demonstrating the greatest apoptotic response of just 25% averaged across the three time points (Fig. 1). Although TNF-α alone induced nonsignificant levels of cell death, its addition to Bir significantly increased apoptosis; again, GP5D cells were most sensitive to Bir/TNF-α, with ~60% apoptosis at 24 h and ~80% at 48 h. A range of responses was observed in the models, with only the COLO320 model completely resistant to Bir/TNF-α at all time points. Compared to Bir/TNF-α, OX5FU-induced apoptosis had slower kinetics, with the greatest degree of cell death observed at 72 h; however, over half of the models exhibited a high level of resistance to this treatment (which mimics the clinical standard-of-care regimen), with < 10% cell death even at 72 h. It was notable therefore that the apoptotic response to chemotherapy of a number of models was significantly enhanced by co-treatment with Bir, including LoVo, RKO and HCT116 (Fig. 1). In the context of TNF-α, there were further increases in cell death observed in cells co-treated with Bir and OX5FU in several models, including SW620, p53 null HCT116, HT29 and LIM1215. (Fig. 2A legend: RIPK3 is absent from HeLa; therefore, CRC cell line absolute RIPK3 abundance was calculated using HT29 as a reference. Western blot images and densitometry are representative of three independent experiments, and quantification represents the mean ± SEM of three independent experiments.) Apoptotic protein expression profiling In order to better understand the heterogeneity of response to IAP inhibitors in CRC, the absolute expression of a panel of key IAP interactome and other major apoptotic regulators (Fig. 2A) was determined by quantitative western blotting. Absolute protein levels were quantified by initially determining the concentrations of these proteins within a reference cell line (HeLa or HT29) by comparison to recombinant proteins, as described previously [16,18]. We generated a concentration range of recombinant proteins against which the concentration of target proteins in the reference cell line could be calculated (Fig. 2B). Protein concentrations in each CRC model were then determined by western blotting, with comparison of each protein in each model against the protein levels in the reference cell line (Fig. 2C). This allowed us to obtain absolute concentrations for each protein in each CRC model (Table 2). Not surprisingly, given the heterogeneity of CRC and the heterogeneity of responses to Bir, the levels of key proteins of the extrinsic apoptotic pathway (procaspase-8, FLIP(L), FLIP(S), FADD, RIP1, RIP3, Cellular inhibitor of apoptosis protein 1/2, XIAP) were variable across the panel of cell lines (Fig. 2B, Table 2). cIAP1 was present in all models and expressed at a consistently higher level than cIAP2 and XIAP. RIPK1 was the most highly expressed protein in the panel, whereas RIPK3 was expressed at a much lower level overall and was absent or almost completely absent in five models. The key adaptor protein for complex II, FADD, was similarly expressed across the cell line panel. Notably, the key apoptotic effector of the extrinsic pathway, caspase-8, was expressed at a relatively high level in all models.
The long splice form (FLIP(L)) of the canonical regulator of caspase-8, FLIP, was expressed at a relatively low level in comparison with procaspase-8, and the short splice form FLIP(S) was absent in several models; this is consistent with the findings of us and others that FLIP expression can be > 100-fold lower than procaspase-8, although it can still compete effectively with procaspase-8 for recruitment to effector complexes such as the DISC [26]. We also used previously established [16,18] concentrations in HeLa cells for B-cell lymphoma 2 (Bcl-2) family proteins, procaspase-3 and procaspase-9, APAF1 and Second mitochondria-derived activator of caspases (SMAC) to determine the levels of these key regulators of mitochondrial apoptosis in the CRC panel ( Fig. 2A,B, Table 2). The Bcl-2 family regulate mitochondrial outer membrane permeabilisation (MOMP), a critical commitment point in apoptosis induction [27]. For the Bcl-2 family of proteins, it was notable that BID, which mediates cross-talk between the extrinsic and intrinsic mitochondrial pathways [28], was expressed at a consistent, relatively high level across the panel. The key anti-apoptotic proteins Bcl-XL and Myeloid cell leukaemia differentiation protein 1 (Mcl-1) were expressed in all models albeit to varying degrees, whereas Bcl-2 itself was undetectable in half the models. While one of the requisite effectors for initiating MOMP, BCL2 antagonist/killer (BAK), was expressed in all models, BCL2-associated X-protein (BAX) was very low or absent in several models. A key component of the apoptosome, Apoptosis-associated factor-1 (APAF-1), was expressed variably across the panel, whilst procaspase-9, with which it forms the apoptosis-inducing apoptosome downstream of cytochrome c release from the mitochondria, was expressed in all models, albeit at a relatively low level compared with APAF-1, suggesting that procaspase-9 levels are likely the limiting factor for apoptosome formation. SMAC, like cytochrome c, is a key regulator of apoptosis downstream of MOMP that once released from mitochondria binds to and inhibits or promotes the degradation of IAPs [29]. Indeed, first-generation IAP antagonists such as Bir are based on the critical N-terminal domain of SMAC and are sometimes called SMAC mimetics [14,30]. SMAC was expressed in every model in the panel. Finally, the proform of the key executioner caspase, caspase-3, was expressed at a similar level in all models, suggestive of comparable potential to engage the executioner phase of apoptosis across the panel. p53 is a key regulator of apoptosis signalling genes [31]. We therefore compared the expression of each protein in the isogenic p53 wild-type (WT) and Null HCT116 models (Fig. 3A). Overall, the majority of proteins were constitutively expressed at a similar level in the presence and absence of p53. The canonical p53 target BAX was indeed higher in the p53 WT model. FADD and RIPK1 were also higher in the p53 WT setting, and consistent with our recent findings, FLIP was expressed more highly at the protein level in p53 WT HCT116 cells [32] as was XIAP. Moreover, hierarchical clustering did not reveal any differential patterns of protein expression between mutant and WT p53 models (Fig. 3B), suggesting that p53 status is not a major determinant of baseline apoptotic potential in CRC. Correlation between IAP antagonist response and apoptotic protein expression We next assessed whether the level of any of the individual proteins correlated with apoptosis induction. 
Linear correlation was determined using Pearson's correlation. In addition, Spearman's correlation was considered for the discovery of other monotonic dependencies and for the exclusion of outlier-driven linear correlation cases (Fig. 4). None of the IAP concentrations were correlated with Bir+TNF-α treatment; therefore, IAP levels alone did not separate responders to this combination. However, absolute levels of procaspase-8 positively correlated with response to Bir+TNF-α (Fig. 4A), and this effect was robust across all three time points (Fig. 4B). cIAP1 expression was found to be significantly correlated with response to Bir+OX5FU, as was response to OX5FU+TNF-α; this suggests that sensitivity to chemotherapy is related to cIAP1 expression, and indeed, there was a nonsignificant trend for cIAP1 expression and response to OX5FU (Pearson's r = 0.52, P = 0.08). Moreover, the increase in cell death between Bir+OX5FU and OX5FU treatments (Fig. 5A) or between Bir+OX5FU and Bir treatments (Fig. 5C) also showed a linear dependence on cIAP1 levels; however, this relationship was not found to be significant in Spearman's correlation test (Fig. 5A,C) and was lost following the addition of TNF-α (Fig. 5B). Notably, RIPK1 was found to be a more prominent indicator of the Bir+OX5FU response. Its level was significantly correlated with Bir+OX5FU response using both Pearson's and Spearman's tests (Fig. 4A), especially after 24 h (Fig. 4B). In addition, RIPK1 was also found to be correlated with the increase in cell death response between Bir+OX5FU and OX5FU treatments (Fig. 5A). Minimal apoptotic protein profile that separates treatment responses by means of linear discriminant models From our protein quantification experiments, 19 proteins that characterise the extrinsic and intrinsic apoptotic pathways showed a wide range of variance and amplitude over the 12 cell lines (Fig. 2B, Table 2). To identify proteins and minimal characteristic protein profiles that separated responses, we performed a principal component analysis (PCA) (Fig. 6A-C). PCA discovered 11 component axes (principal components, PCs) that transformed the 19-protein set into a system of coordinates that optimally captured the protein expression variance of our cell line profiles (Fig. 6A). Based on the PC eigenvalues, we found that the first seven PCs cumulatively captured 91% of the variation we measured in the protein profiles. Thus, these seven PCs were kept as essential components for the subsequent analysis. Interestingly, we noted that the first PC, which explained 20% of the entire variance in the protein profiles, showed a very strong dependence on the core components of the Ripoptosome complex: FADD, caspase-8 and RIPK1 (Fig. 6B). It is known that, downstream of IAP inhibitor treatment, the cytosolic Ripoptosome is the signalling platform required for the activation of the initiator caspase, procaspase-8, thus triggering the extrinsic apoptosis pathway response [33][34][35]. Moreover, Ripoptosome assembly on its own contributes to a high variability in the cell death response, as shown in experimental and mathematical modelling studies [36,37]. Thus, our results suggested that the capacity to initiate extrinsic apoptosis through the Ripoptosome might be a prominent process that is affected by protein expression differences between our cell lines.
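The protein-by-protein correlation screen described at the start of this section can be expressed compactly; the sketch below is an illustrative Python re-implementation (the original analysis was run in R), and the function and column names are placeholders for this example rather than names taken from the study.

```python
# Illustrative correlation screen (not the authors' code): test each protein's
# absolute concentration against treatment-specific cell death with both
# Pearson's and Spearman's tests, mirroring the analysis described above.
import pandas as pd
from scipy.stats import pearsonr, spearmanr

def correlation_screen(protein_levels: pd.DataFrame, response: pd.Series) -> pd.DataFrame:
    rows = []
    for protein in protein_levels.columns:
        pearson_r, pearson_p = pearsonr(protein_levels[protein], response)
        spearman_r, spearman_p = spearmanr(protein_levels[protein], response)
        rows.append({"protein": protein,
                     "pearson_r": pearson_r, "pearson_p": pearson_p,
                     "spearman_r": spearman_r, "spearman_p": spearman_p})
    # proteins whose levels track the response appear at the top of the table
    return pd.DataFrame(rows).sort_values("pearson_p")
```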
To identify a minimal protein set, the contribution of each protein was averaged over the first seven essential PCs (Fig. 6C). From this, we could down-select 12 proteins that are involved in the extrinsic (FADD, caspase-8, RIPK1, RIPK3, FLIP(S), BID) and in the intrinsic (XIAP, caspase-9, caspase-3, BCL-XL, BAK, SMAC) apoptosis pathways. To find the relationship between these proteins and cell death responses, we categorised cell lines into three response groups according to treatment-specific cell death averaged over the three time points (Table 3). The groups 'resistant', 'intermediate' and 'responsive' were assigned for cell death responses of < 10%, 10-30% and > 30%, respectively. Thus, the specific cell death responses were discretised for further comparison with the minimal protein set by means of linear discriminant analysis (LDA) [38][39][40][41]. LDA provides a linear classification model that transforms the minimal protein set into a new subspace of component axes that maximises the separation of the response groups. These axes are called linear discriminant functions (LDs) and were generated for each treatment separately (Fig. 6D). An important property of LDA is that its performance is independent of the protein data standardisation because the algorithm relies on variation between response groups only. Therefore, the LDs are linear functions of the protein concentrations directly and can be used for response classification in any data set expressed in concentration terms. The scatter plots showed clear separation between all response groups in the optimal LD spaces calculated for the OX5FU, Bir+OX5FU and Bir+TNF-α treatments (Fig. 6D), with 100% accuracy in response classification for all 12 cell lines (Fig. 6E). Descriptive accuracy of the LDA models was achieved for most of our treatments with the minimal set of 12 proteins. Only one classification error occurred, for the Bir+OX5FU+TNF-α treatment in the classification of HT29 cells. In this case, the LDA model underestimated the HT29 response, suggesting for it an intermediate rather than a responsive group (Fig. 6D,E). This inaccuracy was found to be associated with the overall weaker separation between response groups for the triple-treatment combination LDA model. This was likely due to the underrepresentation of the intermediate and resistant response groups (2 and 1 cell lines representing these two groups, respectively) or to an influence of other key signalling pathways not covered in the present study. Nevertheless, cell lines that are resistant to Bir+OX5FU and, at the same time, responsive to Bir+TNF-α all show a high level of response to Bir+OX5FU+TNF-α. Additionally, all cell lines responsive to Bir+OX5FU were also responsive to Bir+OX5FU+TNF-α. Therefore, the Bir+OX5FU+TNF-α LDA model (Fig. 6D), which did not separate well, is dispensable, and the triple-treatment responsive cell lines can be identified from the use of the Bir+OX5FU and Bir+TNF-α LDA models. Clinical Positioning of IAP inhibitors in CRC Overall, our analyses suggest that benefit from treatment with IAP inhibitors in CRC can be predicted using a panel of 12 proteins plus knowledge of whether the tumour is inflamed (indicating the presence of TNF-α). As depicted in Fig. 7, if the protein profile indicates that a tumour is in the 'Bir+OX5FU responsive' group, the patient would be predicted to benefit from treatment with an IAP inhibitor alongside FOLFOX. If not, consideration would be given to whether the patient's tumour expresses TNF-α.
If inflammation is present and the patient's tumour falls into the 'Bir+TNF-α responsive' group, the patient would be predicted to benefit from treatment with an IAP inhibitor. If not, the benefits of combination treatment with an IAP antagonist would be predicted to be minimal. To make the above approach clinically feasible, simple protein quantification approaches are needed. In this regard, huge advances in multiplex immunohistochemistry have been made. For example, a multiplexed fluorescence microscopy method for the quantification of multiple proteins from formalin-fixed paraffin-embedded tissues was recently described that can quantify up to 60 proteins in single 5-µm sections [42,43]. The ability to do this analysis in single sections means that the problems of changing cellularity associated with the sequential sectioning needed for traditional IHC (one section per protein) are overcome. This and similar methodologies can also identify the presence of specific immune cell populations in tumours and hence whether or not a tissue is inflamed. Conversion of protein intensities into absolute concentrations would require the incorporation of standards into the analysis; this could be achieved using formalin-fixed cell lines with known protein concentrations embedded alongside the tumour sample. Thus, these emerging approaches could be employed to convert experimentally derived proteomics-based predictive algorithms such as we have developed here into real-world companion diagnostics. CRC cell line panel Cell lines were selected as a representation of the molecular genetic diversity seen in CRC based on the mutation status of known driver genes (including TP53, KRAS, BRAF and PIK3CA), transcriptional profiling and microsatellite stability status, as published in Medico et al. [15]. HeLa, DLD1, LS174T, LoVo, HT29, GP5D, LIM1215, HCT116, RKO, SW620, COLO320 and LS513 cell lines were purchased as authenticated stocks from ATCC (Teddington, UK). HCT116 p53−/− was a gift from B. Vogelstein (Johns Hopkins University, USA). Immediately after purchase, early passage stocks of each cell line were frozen down. After thawing, cells were kept for a limited number of passages and were regularly screened for the presence of mycoplasma using the MycoAlert Mycoplasma Detection Kit (Lonza, Basel, Switzerland). Cell death assay Sensitivity of CRC cell lines to each treatment was determined by Annexin V/propidium iodide high-content microscopy. Cells were seeded at appropriate densities and treated for 24, 48 or 72 h with TL32711 (Bir; Selleck Chemicals, Munich, Germany) alone, chemotherapy alone ['OX5FU': 5FU (Medac, Chobham, Surrey, UK) plus 2 μM oxaliplatin (Accord, Barnstaple, Devon, UK)], or a combination of the two, in the presence or absence of 10 ng·mL⁻¹ TNF-α (Alomone Labs, Jerusalem, Israel). Cells were stained with propidium iodide (Sigma, UK), FITC-labelled Annexin V (Thermo Fisher, Waltham, MA, USA) and Hoechst 33342 to visualise nuclei (Thermo Fisher). Cell death was quantified (AV+ and/or PI+) on an Array Scan XTI high-content microscope (Thermo Scientific). To control for variabilities in basal cell death across the CRC panel, treatment-specific cell death was calculated as follows: {cell death in treatment group} − {cell death in control group}.
Statistics Pairwise comparisons of the mean treatment responses with the Wilcoxon test were performed using the ggpubr v0.3.0 package [19] together with the ggplot2 v3.3.0 package [20] in the R environment [21]. Spearman's and Pearson's correlation coefficients were calculated to identify significant correlations between cell line sensitivities (treatment-specific cell death) and the absolute concentrations of each IAP interactome protein. Correlation and hierarchical clustering heat maps were produced with the R package clusterProfiler v3.12.0 [22]. PCA was applied for dimensionality reduction of the characteristic protein profile in the cohort of 12 colon cancer cell lines (Fig. 6A). For the PCA, we standardised protein concentrations (z-scores of the 19 proteins). The contribution of each protein to a given PC (Fig. 6B) was calculated from the correlation coefficients between the protein and the PC, termed the loadings of the variables, as follows:

Contribution(i, j) = 100% × loading(i, j)² / Σ_{j=1..19} loading(i, j)²,

where 1 ≤ i ≤ 11 indexes the PCs and 1 ≤ j ≤ 19 indexes the proteins [23]. The contribution of each protein over the first seven PCs (Fig. 6C) was then calculated as an eigenvalue-weighted average:

Contr(j) = Σ_{i=1..7} λ(i) × Contribution(i, j) / Σ_{i=1..7} λ(i),

where λ(i) is the eigenvalue of PC_i. With the assumption of an average uniform expected contribution for all 19 proteins, the significance threshold was set to 5.26%. The PCA algorithm was implemented using MATLAB 2017b [24]. LDA was then performed on the minimised protein profile (12 proteins), using absolute protein concentrations and categorical response groups assigned based on specific cell death averaged over the three time points (24, 48 and 72 h). LDA runs were performed for each treatment independently with the R package MASS v7.3-51.5 [25].
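As a concrete illustration of this pipeline, the sketch below re-expresses the protein-contribution calculation and the LDA step in Python (the study itself used MATLAB for PCA and the R MASS package for LDA). The array shapes, variable names and the eigenvalue-weighted averaging over seven PCs follow the reconstruction above and should be read as an assumption-laden sketch rather than the authors' code.

```python
# Illustrative Python re-expression of the PCA-contribution and LDA steps
# described above (the original analysis used MATLAB and R). `conc` is a
# cell-lines x 19-proteins array of absolute concentrations; `cell_death_pct`
# holds treatment-specific cell death averaged over the three time points.
import numpy as np
import pandas as pd
from scipy.stats import zscore
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def protein_contributions(conc: np.ndarray, n_keep: int = 7) -> np.ndarray:
    X = zscore(conc, axis=0)                       # standardise the 19 proteins
    pca = PCA().fit(X)
    lam = pca.explained_variance_                  # eigenvalues lambda(i)
    loadings = pca.components_ * np.sqrt(lam)[:, None]
    contrib = 100.0 * loadings**2 / (loadings**2).sum(axis=1, keepdims=True)
    weights = lam[:n_keep] / lam[:n_keep].sum()    # eigenvalue-weighted average
    return weights @ contrib[:n_keep]              # per-protein contribution in %

def fit_lda(minimal_conc: np.ndarray, cell_death_pct: pd.Series):
    # response groups from the <10%, 10-30%, >30% cell death cut-offs
    groups = pd.cut(cell_death_pct, bins=[-np.inf, 10, 30, np.inf],
                    labels=["resistant", "intermediate", "responsive"])
    lda = LinearDiscriminantAnalysis().fit(minimal_conc, groups)
    return lda, lda.transform(minimal_conc)        # LD coordinates for plotting
```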
5,411
2021-03-04T00:00:00.000
[ "Biology" ]
Heat Prediction of High Energy Physical Data Based on LSTM Recurrent Neural Network High-energy physics computing is a typical data-intensive calculation. Each year, petabytes of data need to be analyzed, and the demands on data access performance keep increasing. The tiered storage system scheme for building a unified namespace has been widely adopted. Generally, data is stored on storage devices with different performance and price characteristics according to access frequency. When the heat of the data changes, the data is migrated to the appropriate storage tier. At present, heuristic algorithms based on human experience are widely used in data heat prediction. Because the computing models of different users differ, the accuracy of such predictions is low. A method for predicting future access popularity from file access characteristics using an LSTM deep learning algorithm is proposed as the basis for data migration in hierarchical storage. This paper uses real data from the high-energy physics experiment LHAASO as an example for comparative testing. The results show that, under the same test conditions, the model has higher prediction accuracy and stronger applicability than existing prediction models. Introduction Large-scale scientific experiments in fields such as particle physics, particle astrophysics, and radiation sources are inseparable from large-scale data processing and analysis. High-energy physics scientific computing is a typical data-intensive application, characterized by observing rare events in massive data and further searching for new scientific discoveries [1]. The I/O access performance of the storage system is therefore important for computing efficiency. The ever-growing volume of scientific experimental data also places higher requirements on mass storage systems, such as capacity, reliability, scalability, and cost-effectiveness. High-energy physics experiments generate a huge amount of data every year that needs to be stored for a long time. Historically, the field has used hierarchical data management methods based on tape and disk systems. In the future, we plan to introduce solid-state drives (SSDs) as a separate fast storage layer and build a three-tier storage system together with ordinary mechanical hard disks and tape libraries. This raises the problems of data classification, data placement, and data migration. In the traditional hierarchical storage management process for high energy physics at IHEP, data file migration sometimes requires the administrator to specify and manually confirm the list of files to migrate. This is heavily dependent on experience, incurs high labor costs, and leaves the overall storage system efficiency low. The increase in data volume leads to continuous growth of the system scale, which in turn causes the complexity of traditional data migration models to increase dramatically, making it difficult to manage efficiently based on human experience. Therefore, the automatic migration of data between all levels of storage is another major problem facing researchers. Research status Effective migration can help data to be distributed in a reasonable storage hierarchy and improve storage system efficiency. Incorrect migration brings additional read and write load, which affects normal system I/O. Common data migration strategies for hot file selection include LFU (least frequently used) and MRU (most recently used); common strategies for cold file selection include LRU, FIFO, and file aging.
These methods are essentially based on access history statistics of the storage system, and historical data access frequency is one of the important indicators. In recent years, deep learning technology has become popular. Its training methods differ greatly from traditional algorithms, breaking the limitations of traditional neural networks on the number of hidden layers and the number of nodes per layer, and providing strong self-learning and nonlinear mapping capabilities. Among the various deep neural network models, the Recurrent Neural Network (RNN) introduces the concept of time into the design of the network structure and combines network nodes with storage capabilities, giving the model a human-like memory [2]. Recurrent neural networks can abstract input time series signals layer by layer and extract features [3]. They are currently used to model time series data in speech recognition [4], machine translation, power load prediction, fault prediction, and other fields, where many breakthroughs have been made, but their application to data access prediction is very limited. For data access heat prediction in tiered storage, no similar case has been reported. High energy physics computing environment High energy physics computing is a process of observing rare events in massive data. At present, clusters are widely used in the field of high-energy physics to reduce system cost and improve scalability. The computing center of the Institute of High Energy Physics of the Chinese Academy of Sciences separates computing clusters from storage clusters, and has built job scheduling clusters using HTCondor, high-performance storage clusters using the EOS and Lustre distributed file systems, and high-speed interconnection networks. LSTM (long short-term memory neural network) As one of the sub-fields of artificial intelligence, machine learning focuses on methods that let computers learn by themselves. It can be divided into supervised learning, unsupervised learning, and reinforcement learning. Deep learning is a machine learning method based on the Deep Neural Network (DNN). It evolved from the traditional neural network (NN); by borrowing from the human neuron mechanism, it simulates the process of thinking and cognition and has powerful non-linear mapping and generalization capabilities. Most neural networks belong to the class of Feed Forward Neural Networks (FNN); no matter how many hidden layers the network has, the neurons in each layer only accept the input of the connected neurons in the previous layer, and the output produced is only passed to the next connected neurons. The advantage of this model is that it can produce output results in real time, but the disadvantage is that it can only use the information of the current moment. The correlation of information at different times is discarded by the FNN model, so it cannot be used to process time series data. Jordan and Elman proposed the Recurrent Neural Network (RNN) model [5]. The model has memory: the relevant data of the hidden layer neurons at the previous moment are also used as input to the model at the next moment. The data in the hidden layer neurons are updated at every step, thus acting as a memory unit. This design ensures that the RNN can, in principle, learn data correlations over arbitrary time spans and better handle time series data of different lengths.
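To make the recurrence concrete, the following minimal numpy sketch shows one step of an LSTM cell: the hidden state and memory cell from the previous moment are fed back in together with the current input. The weight-dictionary layout is an illustrative convention for this sketch, not a structure taken from the paper.

```python
# Minimal numpy illustration of the recurrence described above: the previous
# hidden state (h_prev) and memory cell (c_prev) are combined with the current
# input (x_t) through the standard LSTM gates.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # W, U, b hold per-gate parameters: input (i), forget (f), output (o), candidate (g)
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])
    g = np.tanh(W["g"] @ x_t + U["g"] @ h_prev + b["g"])
    c_t = f * c_prev + i * g      # memory cell carries long-range information forward
    h_t = o * np.tanh(c_t)        # hidden state passed on to the next time step
    return h_t, c_t
```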
Design and Implementation As shown in Figure 2, the data access heat prediction system interacts with existing high-energy physical storage systems such as Lustre and EOS, and consists of feature collection nodes, a central database, and model training nodes. An I/O log collection component is deployed on each file storage server (FST). After filtering out irrelevant information, it is stored in the central key-value database in the format of <timestamp, parameter field, value>. File access characteristic data is calculated, integrated, normalized, batch processed, and written into the online data queue for model training. Model training is based on deep learning frameworks such as Tensorflow [6] and sklearn [7]. The trained model structure is stored in the local file system for persistent storage. The data migration system of the computing center periodically scans the file list in tapes, mechanical hard disks, and solid-state hard disks in the background and performs migration actions based on the output of the file access prediction system and migration conditions set by the administrator. When constructing the access feature vector, various types of file operation records need to be filtered in the file system log and stored persistently. The EOS storage system of the computing center has tens of millions of files and petabytes of data. Hundreds of thousands of file access logs are recorded every day. They need to be organized in three dimensions: file name, operation type, and time window. This article uses the column-oriented key-value distributed database HBase to store the file access features. The rowkey design is shown below. Rowkey byte 0 is a file operation type field, such as file opening, closing, reading and writing. The first byte to the 16th byte are the file name hash value field. The hashed file name has a uniform length, thereby increasing the probability of data being distributed evenly in each region, and achieving load balancing to improve query efficiency. The 17th to 20th bytes are file operation time fields. The 21st to 23rd bytes are extension fields, which record the username, file operation permissions, and so on. Model Output The file access frequency prediction problem can be regarded as a type of continuous variable prediction problem. Traditionally, this kind of problem can use the method of regression analysis [8] to make corresponding file migration decisions based on the prediction results to minimize the migration cost. However, in the actual storage scenario, there are a large number of files, and the timing access rules of different files are very different. It is impossible to train a regression model for each file, and it has the disadvantages of complicated calculation and poor adaptive ability. Assuming that the hierarchical storage system is divided into n storage levels, there may be n different migration decisions for each file (migrate into this layer or keep it in the original storage layer). In order to reduce the impact of migration on the user's normal data access in hierarchical storage, changes in the file access frequency within a small range do not change which storage level the file should be migrated to. This article predicts the popularity of the file, that is, the interval within which the access frequency falls. In this way, the prediction problem can be reformulated as a classification problem. At the same time, this article uses the file's access level (0,1, ..., n-1) to label each access feature sequence in the training set. 
The n heat tags are converted into sparse vectors consisting of 0s and 1s using one-hot encoding: a file with heat level k is represented by an n-dimensional vector whose k-th element is 1 and whose other elements are 0. Model Training Model training mainly uses the hidden layer of the LSTM network as the research object. In the model input layer, the original file access feature sequence is defined as Fo = (f1, f2, ..., fT), where ft is the access feature vector of a file in time window t. The dynamic time window segmentation method is used to process Fo: if the length of the dynamic time window is set to L, the model input after segmentation is the set of subsequences (ft, ft+1, ..., ft+L−1). The LSTM model uses the cross-entropy loss function as the loss function in the training process, defined as Loss = −Σi Σc yi,c log(ŷi,c), where yi,c is the one-hot heat label of sample i and ŷi,c is the predicted probability that sample i belongs to heat level c. We set minimization of this loss function as the training target of the model. We used a randomization seed to initialize the weights and biases in the LSTM network, set the number of hidden layers and hidden nodes of the LSTM network (layers and nodes), set the initial learning rate and the number of training steps, and used the Adam stochastic gradient optimization algorithm [9] to update the parameters in the network. In general, LSTM-based file access prediction model training and prediction can be summarized as follows: Input: Fo, Y, L, layers, nodes, seed, learning rate, and number of training steps. Among them, LSTMcell represents the neurons of the LSTM network, LSTMnet represents the hidden layer structure of the LSTM network, and LSTMforward represents the forward propagation process of the LSTM network. Experiments This article uses the data of the high-altitude cosmic ray observation experiment LHAASO [10] stored in EOS, a large-scale storage system in Daocheng, Sichuan, as an example. It first introduces how the file access data set required in the experiment was prepared and how the LSTM model was trained to predict file access frequency. The file access frequency threshold γ corresponding to the popularity is then used to test the prediction accuracy of the LSTM model under different thresholds, so that its advantages and disadvantages can be compared with those of other current prediction models; the hardware and software configuration of the experimental platform is also described. The data set used in this article comes from the access I/O logs of files in the EOS storage system. The number of files that were active in the past 30 days is 5,842,207. Model training and test data sets were then generated. The EOS storage cluster system logs of the computing center are periodically captured by the monitoring system into the ElasticSearch database. During the data preprocessing stage, file access characteristics are extracted from the logs and stored in the HBase database. Taking the experimental data of LHAASO as an example, the access features extracted from the file access logs of the previous 27 days are set as the input of the prediction model. The access frequency Freq of the last 3 days is divided into multiple intervals, which serve as the output of the prediction model. Generally, a file with a Freq of 0 is defined as a cold file in high-energy physics storage. The data migration system periodically dumps such files to the tape library. To further distinguish warm files from hot files, this paper defines the access frequency threshold γ. A file with Freq less than or equal to the threshold γ is defined as a warm file. The migration system periodically migrates such files to the mechanical hard disk (HDD) layer. Files with Freq greater than γ are defined as hot files, and the migration system periodically migrates such files to the SSD layer.
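A minimal sketch of this training setup is given below, assuming the TensorFlow/Keras framework mentioned earlier; the heat-label derivation from Freq and γ, the layer sizes, and all hyperparameter values are placeholders for illustration, not the configuration used in the paper.

```python
# Sketch (under stated assumptions) of the described setup: windowed access-feature
# sequences as input, one-hot heat labels derived from Freq and the threshold gamma,
# stacked LSTM layers, categorical cross-entropy loss, and the Adam optimizer.
import numpy as np
import tensorflow as tf

def heat_labels(freq, gamma=3):
    # 0 = cold (Freq == 0), 1 = warm (0 < Freq <= gamma), 2 = hot (Freq > gamma)
    levels = np.where(freq == 0, 0, np.where(freq <= gamma, 1, 2))
    return tf.keras.utils.to_categorical(levels, num_classes=3)

def build_heat_model(L, n_features, n_heat_levels=3, layers=2, nodes=64, lr=1e-3, seed=42):
    tf.random.set_seed(seed)  # randomization seed for weight/bias initialization
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.LSTM(nodes, return_sequences=layers > 1,
                                   input_shape=(L, n_features)))
    for depth in range(1, layers):
        # every stacked layer except the last returns the full sequence to the next LSTM
        model.add(tf.keras.layers.LSTM(nodes, return_sequences=depth < layers - 1))
    model.add(tf.keras.layers.Dense(n_heat_levels, activation="softmax"))
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Example usage: X has shape (num_samples, L, n_features); Y = heat_labels(freq_last_3_days)
# model = build_heat_model(L=27, n_features=X.shape[2])
# model.fit(X, Y, epochs=20, batch_size=256)
```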
Taking γ = 3 as an example, the proportion of cold files in the LHAASO experimental training data set is about 95.8%, the proportion of warm files is about 3.06%, and the proportion of hot files is about 1.13%. Conclusion and Outlook This paper introduced a method for predicting file access popularity based on an LSTM deep learning model for hierarchical storage. It covered data set preparation, file access feature construction, and model training and prediction. Compared with migration methods based on administrator experience and statistics, the LSTM model can more accurately predict changes in the heat of file access, thereby providing a more effective basis for file migration. This paper applies deep learning methods to the field of data migration in hierarchical storage. As this was a preliminary attempt, many aspects need further exploration and research.
3,283
2020-01-01T00:00:00.000
[ "Computer Science" ]
Distinction between Pore Assembly by Staphylococcal α-Toxin versus Leukotoxins The staphylococcal bipartite leukotoxins and the homoheptameric α-toxin belong to the same family of β-barrel pore-forming toxins despite slight differences. In the α-toxin pore, the N-terminal extremity of each protomer interacts as a deployed latch with two consecutive protomers in the vicinity of the pore lumen. N-terminal extremities of leukotoxins as seen in their three-dimensional structures are heterogeneous in length and take part in the β-sandwich core of soluble monomers. Hence, the interaction of these N-terminal extremities within structures of adjacent monomers is questionable. We show here that modifications of their N-termini by two different processes, using fusion with glutathione S-transferase (GST) and bridging of the N-terminal extremity to the adjacent β-sheet via disulphide bridges, are not deleterious for biological activity. Therefore, bipartite leukotoxins do not need a large extension of their N-terminal extremities to form functional pores, thus illustrating a microheterogeneity of the structural organizations between bipartite leukotoxins and α-toxin. INTRODUCTION Staphylococcal bipartite leukotoxins and α-toxin belong to a single family of structurally related β-stranded pore-forming toxins ( Figure 1). Leukotoxins are constituted by a class S protein (32 kd) that binds to the surface of target cells prior to a class F protein (34 kd) distinct in sequence [1]. Then, these proteins oligomerise to octamers or hexamers [2][3][4][5], induce Ca 2+ -activation [6], and form monovalent cationselective β-barrel transmembrane pores [7]. Each protomer contributes to the pore by transforming a central β-sheet domain into a β-hairpin [8,9]. Five loci encoding leukotoxins are characterized [9]. Several of these loci, encoding the Panton-Valentine leucocidin (PVL), gamma-hemolysin, [10] and LukE-LukD [11], may be present together and are expressed in a single isolate. The S and F components can then combine to give a specific leukotoxin. However, they do not combine with α-toxin, also present in almost all isolates, for an action onto natural target cells [12], that is, human polymorphonuclear cells or erythrocytes. Panton-Valentine leucocidin, which is composed of LukS-PV and LukF-PV, is associated with furuncles [13] and pneumonia [14]. Bipartite leukotoxins show a complementary spectrum for lytic functions towards human blood cells including lymphocytes and erythrocytes, accounting for the bacterial virulence. Furthermore, leukotoxins and α-toxin differ from each other by the respective cationic and anionic selectivities of their pores [6,15,16]. X-ray diffraction and other techniques have been used to study the heptameric pore of α-toxin [17][18][19][20][21]. The crystal structure of the assembled α-toxin [20] revealed that the transmembrane β-barrel that forms the pore corresponds to the stem domain of a mushroom-shaped structure. It also revealed that the N-terminal extremity, also called the amino latch, of each protomer interacts with two adjacent protomers in the mouth of the pore lumen. The 3D structures of the soluble form of two F monomers, LukF-PV from the Panton-Valentine leucocidin (Figure 1(a)) [8] and HlgB from gamma-hemolysin [22], have also been determined. They showed that the central stem domain is prefolded as three β-strands stacked to the core of the soluble proteins. 
This core is formed of a β-sandwich to which the N-terminal extremity of the soluble F proteins contributes one strand. Despite the high similarity between the F and S structures, the N-terminal extremity of LukS-PV appeared shorter [23]. The F and S proteins must undergo a number of conformational modifications during oligomerization and β-barrel formation to evolve into a functional transmembrane pore [24,25]. Since it is unclear whether the N-terminal extremity unfolds to interact with adjacent subunits, and since modifying the N-termini for bioengineering purposes could considerably influence their function, we report that some biological activity is retained by recombinant proteins after glutathione S-transferase (GST) fusion and expression in Escherichia coli. We also constrained this portion of the proteins to the β-sandwich by engineering disulfide bonds, and investigated the residual functions of the new leukotoxins in order to verify whether the N-terminal extremities of leukotoxins must explore a large conformational space for their pore-forming activity. Choice, construction, and purification of modified proteins We considered the alignment of the N-terminal LukS-PV and LukF-PV sequences, extracted from the alignment of the entire sequences (Figure 1(a)) [23]. We chose two pairs of LukF-PV amino acids, T5 and T21, and S8 and K20, for cysteine substitutions because of their respective close locations in the structure (Figure 1(a)) [8]. The distance between the Cβ atoms of T5 and T21 (5.7 Å) and the angles between the thiol groups may allow the formation of disulfide bonds that would bridge the first two β-strands of the protein (SSBond software, [26]). The other pair of amino acids seemed even more favorable for disulphide bridge formation, with a distance between Cβ atoms of 4.1 Å. The situation was less obvious for LukS-PV, which has no counterpart to LukF-PV T5 (Figure 1(a)). Furthermore, the first three amino acids of the shorter LukS-PV N-terminal extremity could not be traced in the crystal structures of either the wild-type [23] or the recombinant protein (unpublished results). Therefore, this N-terminal extremity may be poorly structured. To allow bridging with R16, which can be aligned with T21 of LukF-PV, we finally chose to substitute either LukS-PV N2 or D1 by a cysteine, and to introduce another cysteine residue upstream from D1 (called LukS-PV-1C) (Figure 1(a)). GST∼LukS-PV and GST∼LukF-PV fusion proteins were purified for functional analysis by chromatography on glutathione-sepharose 4B followed by hydrophobic interaction chromatography (HIC; alkyl-superose, i.e., Resource ISO, GE Healthcare, USA). GST∼LukS-PV and GST∼LukF-PV eluted at 0.73 M and 0.78 M (NH4)2SO4, respectively. All double cysteine-mutated proteins were purified by affinity chromatography on glutathione-sepharose 4B followed by cation-exchange FPLC chromatography (CEC) using an NaCl gradient from 0.05 M to 0.7 M [24]. The GST-tag was removed with the PreScission protease (Amersham Biosciences); nevertheless, an octapeptide remains at the N-terminus, which does not hamper the biological activities of the toxins [8,24]. Hydrophobic interaction chromatography (HIC) was further applied using an (NH4)2SO4 gradient from 1.3 M to 0.5 M to improve purification. LukS-PV mutants eluted at 0.15 M NaCl and 1.02 M (NH4)2SO4, whereas those of LukF-PV eluted at 0.1 M NaCl and 0.95 M (NH4)2SO4.
To avoid the formation of disulfide links between the free sulfhydryl groups of the cysteine residues of the fusion proteins and GSH, all buffers used for the CEC and the HIC contained 1 mM DTT (except for the GST fusion proteins). Controls for homogeneity were performed using SDS-PAGE, and the proteins were then stored at −80 °C. We labelled two fully functional mutants, LukF-PV S27C and LukS-PV G10C, with fluorescein 5-maleimide (Molecular Probes, Leiden, The Netherlands) [24]. Oxidation and accessible thiol titration The cysteine mutants were first reduced with 10 mM DTT before treatment with CuSO4/1,10-phenanthroline (Sigma, USA). Briefly, proteins at a concentration of 20 μM were dialysed in 50 mM Hepes, 0.5 M NaCl, 10 mM DTT, pH 7.5, and further equilibrated against 50 mM Hepes, 0.5 M NaCl, pH 7.5 to undergo oxidation. Proteins were then adjusted to 2 mL (20 μM) of the same buffer complemented with 1.5 mM CuSO4 and 5 mM 1,10-phenanthroline and incubated for 2 hours at 4 °C. Protein solutions were then equilibrated against 50 mM Hepes, 0.5 M NaCl, 1 mM EDTA-Na2, pH 7.5, and could then be frozen without loss of disulfide bonds or biological activity. For the titration of free thiols, proteins were precipitated in 5% (w/v) trichloroacetic acid, left for 5 minutes at 0 °C, pelleted by centrifugation and washed three times with the same solution. The precipitate was dissolved in 400 μL of N2-saturated 0.2 M Hepes, 0.2 M NaCl, 1 mM EDTA-Na2, 2% (w/v) SDS, pH 8.0. The remaining precipitated material (< 10% of total proteins) was removed by centrifugation, and 300 μL of the supernatant was added to 30 μL of 10 mM 5,5′-dithiobis(2-nitrobenzoic acid) (DTNB) (ε = 13,600 M⁻¹·cm⁻¹). After 10 minutes of reaction at room temperature, the amount of titrated thiols was estimated from the OD at 412 nm, and the molarity was compared to the protein concentration determined by the Lowry method. Human polymorphonuclear cells (PMNs) and flow cytometry measurements Human PMNs from healthy donors were purified from buffy coats as previously reported [27] and suspended at 5 × 10⁵ cells/mL in 10 mM Hepes, 140 mM NaCl, 5 mM KCl, 10 mM glucose, 0.1 mM EGTA, pH 7.3. Flow cytometry measurements from 3000 PMNs were carried out using a FACSort flow cytometer (Becton-Dickinson, Le Pont de Claix, France) equipped with an argon laser tuned to 488 nm [28]. We evaluated intracellular calcium by flow cytometry of cells previously loaded with 5 μM Fluo-3 (Molecular Probes) in the presence of 1.1 mM extracellular Ca2+, by measuring the increase in Fluo-3 fluorescence. Pore formation and monovalent cation influx were revealed by the penetration of ethidium through the pores; cells were incubated for 30 minutes with 4 μM ethidium prior to the addition of toxins in the absence of extracellular Ca2+ [24,28]. Fluo-3 and ethidium fluorescence were measured using Cell Quest Pro software (Becton, Dickinson and Company). The results from at least four different donors were averaged and expressed as percentages of a control of human PMNs treated with the wild-type (WT) PVL. Base level values were obtained for each series of data from a control without addition of toxin; these were systematically subtracted from the other assays. Standard deviation values never exceeded 10% of the obtained values; they were removed from the figures for clarity.
The dissociation constant (kD[S]) of LukS-PV for the PMN membrane and that of LukF-PV for the PMN membrane-bound LukS-PV (kD[F]) were previously reported to be 0.07 nM and 2.5 nM, respectively [29]. LukS-PV and its mutants were applied at 1 nM and LukF-PV and its derivatives at 10 nM, i.e., in excess over their ligand(s). The binding properties of the modified proteins onto PMNs were estimated through competition experiments carried out in the absence of extracellular Ca²⁺. Fluorescein-labelled LukS-PV G10C (0.1 nM) and LukF-PV S27C (2 nM), and increasing concentrations of the respective mutants (from 1 to 1000 nM), were added 15 minutes before measuring the fluorescence retained at the surface of the PMNs. For LukF-PV competitions, the PMNs were initially incubated for 10 minutes with 1 nM LukS-PV. The IC₅₀ value corresponds to the concentration of nonfluorescent competitor needed for 50% inhibition of cell fluorescence, and was determined from the best fit of independent triplicates of the residual cell fluorescence [24]. The apparent inhibition constant, kIapp, was then calculated from the IC₅₀ value. Determination of pore radius Pore formation disrupts the cell membrane. It results in an increase of the cell size because the osmolarity of the medium is lower than that of the cytoplasm. The relative variations in PMN size were assessed by measuring the forward light scatter (FSC) of cells (5 × 10⁵ cells/mL) treated with WT or mutant LukS-PV (20 nM), WT or mutant LukF-PV (100 nM), and 30 mM PEG polymers (1000, 1500, 2000, and 3000 Da) of different hydrodynamic radii (0.94, 1.12, 1.22, and 1.44 nm, resp.) [25]. FSC values were collected 15 minutes after toxin application. If the PEG molecules are similar to or larger than the pore lumen, they cannot pass through the pores and therefore cannot balance the osmolarity between the two compartments; in that case the FSC variations are weak. If the PEG molecules are smaller than the lumen, they can pass through the pore, thereby balancing the osmolarity between the two compartments, which results in larger FSC variations. Identification of oligomers Denaturing polyacrylamide gel electrophoresis (SDS-PAGE) was carried out on oxidized leukotoxin-treated human PMNs. Preparations at 5 × 10⁷ cells/mL in 10 mM Hepes, 140 mM NaCl, 5 mM KCl, 10 mM glucose, 0.1 mM EGTA, pH 7.3, were incubated with 100 nM of the LukS-PV and LukF-PV derivatives in the presence of 10 μL/mL of a mammalian cell-tissue antiprotease cocktail (Sigma, USA). After a 45-minute incubation at 22 °C, the biological activity was assessed by optical microscopy; the cells were washed twice and then resuspended in 1 mL of the same buffer containing the antiprotease cocktail (1 μL/mL) as above. The cells were ground in a FastPrep apparatus (QBiogene, Bio101, Illkirch, France) using FastPrep Blue tubes for an orbital centrifugation (10 seconds, 3600 rpm, room temperature). The membranes were harvested by ultracentrifugation for 20 minutes at 20,000 × g at 4 °C. The membrane pellets were resuspended in 100 μL of the same buffer complemented with 2 μL of antiprotease cocktail containing 1% (w/v) saponin (Sigma), incubated for 30 minutes at room temperature, and then centrifuged for 30 minutes at 22,000 × g. The supernatants were adjusted to 1 mM glutaraldehyde (in the above buffer) and incubated for 10 minutes at 50 °C.
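The competition analysis above fits residual cell fluorescence against competitor concentration to extract an IC₅₀ and then converts it into an apparent inhibition constant. Since the exact equation is not reproduced here, the sketch below assumes a standard four-parameter logistic fit and a Cheng–Prusoff-type correction using the labelled-tracer concentration and dissociation constant; the data points are invented, and the correction may differ in detail from the original analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(conc, top, bottom, ic50, hill):
    """Four-parameter logistic curve for residual cell fluorescence."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical competition data: competitor concentration (nM) vs residual fluorescence (%).
conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
fluo = np.array([97, 92, 78, 55, 31, 15, 8], dtype=float)

(top, bottom, ic50, hill), _ = curve_fit(logistic4, conc, fluo, p0=[100, 5, 50, 1])

# Assumed Cheng–Prusoff-type correction:
#   K_I,app = IC50 / (1 + [L*] / K_D)
# with [L*] the labelled-tracer concentration and K_D its dissociation constant
# (0.1 nM and 0.07 nM for LukS-PV G10C*, per the values quoted in the text).
tracer_conc, tracer_kd = 0.1, 0.07
ki_app = ic50 / (1.0 + tracer_conc / tracer_kd)
print(f"IC50 ≈ {ic50:.1f} nM, apparent K_I ≈ {ki_app:.1f} nM")
```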
One third volume of loading buffer (0.5 M Tris-HCl pH 8.5, 2% (w/v) SDS, 0.04% (w/v) bromophenol blue, 30% (v/v) glycerol) containing 100 mM ethanolamine to block the cross-linking reaction was added and assays were heated to 100 • C for 5 minutes. Finally, 10 μL of the solution was loaded onto Trisacetate, pH 8.1 polyacrylamide 3-8% (w/v) gels (Invitrogen, Calif, USA). Proteins were subjected to electrophoresis for 75 minutes at 150 V at room temperature in 50 mM Tris, 50 mM Tricine, pH 8.2, 0.1% (w/v) SDS, and then electroblotted onto nitrocellulose membranes for 1 hour at 30 V in 25 mM Tris, 192 mM glycine, pH 9.3, 20% methanol using a transfer Xcell II blot module (Invitrogen). The leukotoxins oligomers or components were characterized by immunoblotting using affinity-purified rabbit polyclonal antibodies and a peroxidase-labeled goat antirabbit antibody using ECL detection (Amersham Biosciences, Saclay, France) as previously described [24]. The apparent molecular masses were estimated from protein migration according to Precision Plus Protein Standards (Bio-Rad, USA). GST-fusion proteins remain biologically active GST∼LukS-PV and GST∼LukF-PV were purified to homogeneity and appeared as homogeneous proteins with apparent molecular masses of 57 and 60 kd, because of the fusion with the 26 kd GST (Figure 2(a)). The apparent binding of leukotoxin derivatives with cell membranes was assessed in competition with an increased amount of the functional fluorescein-labelled ( * ) LukS-PV G10C * or LukF-PV S27C * . Apparent k I of 0.039 nM was found for the WT LukS-PV binding to membranes and a value of 3.6 nM was found for the WT LukF-PV binding to LukS-PV-membrane complexes (data not shown). In these conditions, values of k Iapp determined for GST∼LukS-PV and GST∼LukF-PV (0.2 and 3.4 nM, resp.) remained close to the values obtained for the WT proteins. However, when GST∼LukF-PV was applied to the bound GST∼LukS-PV, a k Iapp of 17.6 nM was recorded, this marked difference may be due to some steric hindrance caused by GST molecules. Combinations of LukS-PV with GST∼LukF-PV or of GST∼LukS-PV with LukF-PV led to Ca 2+ induction in treated cells comparable to the WT proteins (Figure 2(b)). However, the combination of both fusion proteins required a longer time, that is, 4 minutes instead of 2 minutes, to reach a fluorescence maximum. Subsequent decrease of fluorescence is mainly due to the release of the fluorescent probe by pores and disrupted cell membranes. The permeability to monovalent cations mediated by the pores was measured via the entry of ethidium and its combination with nucleic acids. Despite ethidium fluorescence being less sensitive than Ca 2+ assay, these two influxes were demonstrated to be independent [7,24]. Fusion proteins tested alone did not generate entry of Ca 2+ (not shown) and ethidium (Figure 2(c)). Ethidium entry showed a higher variation than what was observed for calcium, but the fusion proteins retained significant biological activity (Figure 2(c)). The activity of GST∼LukS-PV + LukF-PV lies between those obtained for PVL and for PVL with a 1 : 10 dilution of LukS-PV, whereas the activity of LukS-PV + GST∼LukF-PV was similar to that of PVL with the 1 : 10 dilution of LukS-PV (Figure 2(c)). The combination of both fusion proteins showed a considerably reduced ethidium influx, even if compared with the 1 : 10 dilution of LukS-PV (Figure 2(c)). This difference can again be explained by the steric hindrance caused by GST molecules. 
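The apparent molecular masses quoted for the fusion proteins come from comparing band migration with the Precision Plus standards. One common way to make that comparison, sketched below with invented migration values, is a linear fit of log(molecular mass) against relative migration.

```python
import numpy as np

# Hypothetical calibration: Precision Plus standards, relative migration (Rf) vs mass (kDa).
rf_std  = np.array([0.15, 0.27, 0.40, 0.55, 0.70, 0.85])
kda_std = np.array([250, 150, 100, 75, 50, 37])

# Linear fit of log10(mass) versus relative migration.
slope, intercept = np.polyfit(rf_std, np.log10(kda_std), 1)

def apparent_mass(rf):
    """Apparent molecular mass (kDa) from a band's relative migration."""
    return 10 ** (slope * rf + intercept)

# Example: bands assigned to the two GST fusion monomers (made-up Rf values).
for label, rf in [("GST~LukS-PV", 0.66), ("GST~LukF-PV", 0.63)]:
    print(f"{label}: ~{apparent_mass(rf):.0f} kDa")
```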
After osmotic protection with PEG molecules, all couples involving at least one fusion protein showed decays of the forward light scatter values comparable to those of the WT toxin (Figure 2(d)). The inflexion point calculated for each curve clearly indicated permeability to PEG molecules with a hydrodynamic radius cutoff between 1.12 and 1.2 nm. Thus, the diameter of the leukotoxin pores was not affected by the GST fusions [25]. We aimed at identifying the oligomers formed by the fusion proteins (67 and 60 kDa, resp.) within the human PMN membranes to confirm pore formation. When retrieved from cell membranes, the oligomers formed by LukS-PV and LukF-PV proved very sensitive to detergents (1% saponin and 0.04% SDS), as we only detected mixtures of monomers and dimers (Figure 3, lane 3). Therefore, we used a cross-linking agent, glutaraldehyde, applied after the saponin treatment, to help stabilize the oligomers removed from the membranes in the presence of an excess of cell membrane proteins. Cross-linking proved efficient for both combinations of fusion proteins (Figure 3, lanes 4-6), but essentially showed tetramers in all cases, although hexamers might be suspected for the LukS-PV + GST∼LukF-PV combination (Figure 3, lane 4, see arrow). The use of anti-LukS-PV and anti-LukF-PV antibodies, alone or in combination, allowed the detection of similar high molecular mass oligomers containing the GST fusion leukotoxins (data not shown). The material was specific for the leukotoxins, since no immunoreactive product was observed when analyzing human PMNs alone (Figure 3, lane 2). The intensity of these bands decreased rapidly as the number of subunits increased. The increased steric hindrance due to the fused GST probably contributes to the intrinsic instability of leukotoxin pores when extracted from membranes and may impair the resolution of the complete oligomers (Figure 3, lanes 4-6). Therefore, we decided to characterize the oligomers only for the WT proteins and the double-cysteine mutants (see below). Integrity of double-cysteine mutants The double-cysteine mutants were analyzed in their oxidized forms (Figure 4). Among the LukS-PV and LukF-PV double-cysteine mutants obtained in either reducing or oxidizing conditions, only oxidized LukS-PV N2C-R16C and oxidized LukF-PV T5C-T21C contained very small amounts of homodimers (< 8%). The presence of these dimers did not cause a significant loss of biological activity. Binding of double-mutated leukotoxins Using conditions similar to those used for the fusion proteins, we determined the apparent binding constants of the different mutants to their respective ligands. For the LukS-PV mutants, we found kIapp values ranging from 0.052 nM to 0.069 nM, similar to those of WT LukS-PV. Values for the binding of LukF-PV T5C-T21C to the LukS-PV mutant-membrane complexes (kIapp = 2.8-5.2 nM) were also close to that of the WT protein (kIapp = 2.5 nM). In contrast, the binding of LukF-PV S8C-K20C was more affected. Indeed, binding of LukF-PV S8C-K20C to LukS-PV D1C-R16C and N2C-R16C gave kIapp values of 18.5 ± 2.5 and 12 ± 2 nM, respectively. Binding became even worse with LukS-PV and LukS-PV(-1C)-R16C, with kIapp = 37 ± 5 and 24 ± 6 nM, respectively. Ca²⁺ entry Staali et al. [7] and Baba Moussa et al. [24] showed that the Ca²⁺ influx and the ethidium entry promoted by leukotoxins can be selectively inhibited.
We evaluated the different oxidized LukS-PV and LukF-PV double mutants for their ability to activate PMNs and provoke Ca²⁺-channel opening (Figure 5). In our system, we assayed LukS-PV at 0.1 or 1 nM in combination with LukF-PV at 2 or 20 nM, and compared the activity of associations of LukS-PV proteins/mutants at 1 nM with LukF-PV proteins/mutants at 20 nM. Table 1 summarizes the kinetics of cell-associated Ca²⁺ fluorescence for the different combinations. When the mutants were first combined with the heterologous WT protein, kinetics of Ca²⁺ influx comparable with those of the WT toxin were observed for combinations involving LukS-PV(-1C)-R16C or N2C-R16C and LukF-PV at 20 nM (Figure 5(a)). In contrast, the kinetics for 1 nM LukS-PV D1C-R16C combined with LukF-PV decreased to a level lower than that obtained for the WT combination involving 2 nM of LukF-PV. The same was observed for LukS-PV + LukF-PV S8C-K20C, whereas the combinations involving LukF-PV T5C-T21C were almost as active as the WT toxin (Figure 5(b), Table 1). Altogether, these data indicate that constraining the N-terminal extremities might not be detrimental to biological activity. Ethidium entry induced by pore formation Pore formation and ethidium entry promoted by the oxidized mutated proteins were more variable than Ca²⁺ entry (Figure 6). The kinetics of ethidium entry for the different combinations are summarized in Table 2; those for LukS-PV D1C-R16C were dramatically affected. The kinetics of ethidium entry induced by LukS-PV combined with the LukF-PV mutants were intermediate between those produced by the same concentration of LukF-PV and by its 1 : 10 dilution. Combinations of reduced mutants with heterologous WT proteins or mutants did not show a large difference in activity compared with the use of oxidized mutants (data not shown). For combinations of mutants with each other (Figure 6(c)), the associations of reduced mutants always induced higher ethidium influxes than those obtained with the associations of oxidized mutants (Table 2). Among the combinations of oxidized mutants, the pairs involving LukS-PV(-1C)-R16C or N2C-R16C combined with LukF-PV T5C-T21C showed intermediate activity (Table 2), while only LukS-PV N2C-R16C combined with LukF-PV S8C-K20C could be considered comparably active (Figure 6(d)). It should be noted that all combinations of oxidized mutants that were less prone to induce Ca²⁺ influx were also less effective in pore formation. Pore radii The pore radii of the most active oxidized leukotoxins were evaluated by osmotic protection with calibrated PEG molecules (Figure 7). PMNs were protected against osmotic disruption by PEG molecules having a hydrodynamic radius between 1.12 nm and 1.22 nm. A similar value of about 1.2 nm was obtained for the WT toxin, indicating similar pore radii for these toxins. Panton-Valentine leucocidin oligomers Several experiments were carried out to check the recovery, the stability, and the occurrence of oligomers formed by PVL. A control of cells without PVL did not give evidence of cross-reacting material (Figure 8, lane 1). Detection of oligomers formed by 4 ng of each PVL component in solution was highly dependent on the glutaraldehyde concentration (Figure 8, lanes 3 and 4), while the components showed little tendency to form dimers spontaneously (lane 2). Indeed, the use of 0.3 mM glutaraldehyde yielded various oligomers containing at least 2 to 10 units, whereas 3 mM applied to these purified proteins dramatically weakened the signals (lane 4).
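The kinetic comparisons summarized in Tables 1 and 2 rest on the times needed to reach given fractions of the maximal fluorescence signal. A simple way to extract such threshold-crossing times from a kinetic trace is linear interpolation between sampled points, sketched below on an invented ethidium-entry trace.

```python
import numpy as np

def time_to_fraction(t, signal, fraction):
    """Time at which a monotonically rising trace first reaches `fraction` of its plateau."""
    target = fraction * signal[-1]
    idx = np.argmax(signal >= target)          # first sample at or above the target
    if idx == 0:
        return t[0]
    # Linear interpolation between the two bracketing samples.
    t0, t1 = t[idx - 1], t[idx]
    s0, s1 = signal[idx - 1], signal[idx]
    return t0 + (target - s0) * (t1 - t0) / (s1 - s0)

# Hypothetical ethidium fluorescence trace (minutes, arbitrary units).
t = np.arange(0, 31, 1.0)
signal = 100 * (1 - np.exp(-t / 8.0))

for frac in (0.05, 0.50, 1.00):
    print(f"time to {frac:.0%} of plateau: {time_to_fraction(t, signal, frac):.1f} min")
```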
Application of 30 ng of PVL per assay to PMNs allowed only a few dimers to be observed under the described conditions without any glutaraldehyde treatment (Figure 8, lane 5). It should be noted that a non-boiled, but SDS-containing, assay resulted in insufficiently denatured material and, even when treated with saponin and glutaraldehyde, did not give interpretable results (Figure 8, lane 6). Saponin treatment of the cells greatly helped the release of protein-containing oligomers, despite the fact that saponin brought along contaminating proteins (50% of the total bulk of proteins). Each assay on cells involved an original volume of 1 mL, further concentrated to 100 μL. Cells treated with PVL and saponin, followed by cross-linking with 3% (v/v) glutaraldehyde, allowed oligomer species to be identified provided that the samples were boiled (Figure 8, lanes 7-11), whether the saponin treatment was carried out at 4 °C or 23 °C. In fact, the Schiff reaction promoted by glutaraldehyde is not stable in aqueous solution, but cross-linking certainly occurs in our assays through the Michael addition reaction onto amine groups (R-NH₂). Considering the excess of proteins in the lysates, the observed oligomers are assumed to be preformed oligomers, since the cross-linking reaction cannot be considered specific to the PVL proteins. This addition reaction was stopped by ethanolamine. Table 2: Times (min) to reach (a) 5%, (b) 50%, and (c) 100% of the entry of ethidium into cells through pores formed by combinations of PVL (taken as reference) and its mutants. DISCUSSION Although staphylococcal α-toxin and the bipartite leukotoxins fold in a comparable way, differences exist in the sequences and structures of these two subfamilies of toxins that imply differences in function [30]. Another distinction is in stoichiometry. Indeed, α-toxin is known to form heptamers in target cells [20] and hexamers in some other membranes [31]. Recently, octamers were identified in synthetic bilayers and in the PMN membrane by using cross-linking between mutated components of gamma-hemolysin [5,32]. Moreover, the presence of strictly alternating S and F proteins was recently proven in the case of leukotoxin pores [33]. Considering the sequence alignment of the S and F components of the leukotoxins and of α-toxin [23], the differences between these proteins are located at the N-termini, at each side of the central domain, and in the last fifty C-terminal residues. The significance of these differences from α-toxin is not yet understood. It has been shown that deleting the first two residues or labelling the N-terminal extremity of α-toxin dramatically reduces its biological activity [34,35]. It is also noteworthy that leukotoxin oligomers are less stable than those of α-toxin, hampering their crystallization. An investigation using infrared spectroscopy did not give evidence of any significant modification in the β-strand content when passing from the soluble state to the pore formed in planar lipid membranes [4]. In this work, we bring out arguments that N-terminal extremities of leukotoxins that keep part of the β-sandwich core during pore formation still lead to functional toxins, and that a large unfolding of these extremities is therefore not self-evident. Despite the fact that the recombinant expression system used in this study adds an N-terminal octapeptide, the proteins produced retain a biological activity comparable to that of the native toxins [24,25]. Moreover, the GST fusions of LukS-PV or LukF-PV also proved their binding efficacy.
LukS-PV∼GST + LukF-PV was only from 3 to 5 folds less efficient than the corresponding WT proteins, and the LukS-PV + LukF-PV∼GST was also a bit less effective than the WT couple. The binding of LukF-PV∼GST was affected when tested on its GST fusion counterpart protein and the resulting biological activity significantly decreased, but remained biologically active. Thus, the decrease in binding might be due to steric hindrance produced by the fusion, especially in the case of the GST couple. Despite a higher sensitivity of the Ca 2+ entry assay, the decrease of Ca 2+ influx induced by the GST couple correlated with its lower pore-forming activity. It can be assumed, therefore, that the decrease in pore-forming activity results from the reduced binding of LukF-PV∼GST (Figure 2(c)). Taking into account all these features, it becomes less realistic that Nterminal extremities of leukotoxins extensively unfold to interact with residues of adjacent monomers located within the lumen of the pore (see Figure 1). Comparatively, the expressed and purified GST∼ α-toxin has no significant activity against rabbit red blood cells, whereas when GST is cleaved, the resulted octapeptide-α-toxin shows a lytic activity diminished by more than 10 folds than that of the native α-toxin purified from S. aureus. This constitutes a functional difference with the Panton-Valentine leucocidin. Moreover, at least LukF-PV was not sensitive to deletions less than 10 amino acids [36,37]. In the second approach used in this study, N-terminal extremities of both PVL components were locked to the protein core by disulfide bonds via site-directed mutagenesis. Different locations for cysteine substitution were chosen for each protein. The very first residues and R16 of LukS-PV appeared as good candidates for substitution by cysteines and subsequent bridging. Indeed, although the first two Nterminal residues are absent in the three-dimensional structure of LukS-PV [23], three positions were chosen by analogy with LukF-PV which produced internal disulfide bonds in majority. Thus, formation of homodimers during assisted oxidation could not be responsible for a great decrease in the biological activity of the mutated toxins. Oxidation with 30 mM H 2 O 2 could alternatively be preferred to Cu 2+ oxidation for some mutants, but in any case, excess of oxidant was removed before any biological assays. The binding of oxidized LukS-PV mutants to human PMNs was not affected, but that of LukF-PV S8C-K20C to LukS-PV mutants was diminished from 6 to 15 folds compared to that of WT proteins. This may suggest some modifications in the overall structures and some adverse compatibilities between new structures. In fact, when combined with LukS-PV, LukF-PV S8C-K20C showed diminishing both in Ca 2+ induction (> 10 folds) and in pore formation (less than 10 folds), but activities largely decreased for combinations with LukS-PV
6,743.8
2007-02-28T00:00:00.000
[ "Biology", "Chemistry" ]
High energy Coulomb-scattered electrons for relativistic particle beam diagnostics A new system used for monitoring energetic Coulomb-scattered electrons as the main diagnostic for accurately aligning the electron and ion beams in the new Relativistic Heavy Ion Collider (RHIC) electron lenses is described in detail. The theory of electron scattering from relativistic ions is developed and applied to the design and implementation of the system used to achieve and maintain the alignment. Commissioning with gold and 3He beams is then described as well as the successful utilization of the new system during the 2015 RHIC polarized proton run. Systematic errors of the new method are then estimated. Finally, some possible future applications of Coulomb-scattered electrons for beam diagnostics are briefly discussed. I. INTRODUCTION Instrumentation providing accurate information on particle beam properties and behavior in accelerators is essential for their operation. The challenge of performing reliable and often delicate measurements in the harsh particle accelerator environment provides strong incentives for exploring new approaches. We describe a new type of beam diagnostic device for high energy particle accelerators based on the Coulomb scattering of electrons by the beam particles. Measuring the deflection of low energy electron beams by the macroscopic fields generated by the high energy particle beam to be characterized, the so-called electron wire, was proposed in the late 1980s and early 1990s [1,2,3,4] and later implemented in several systems, including the use of electron ribbons instead of the pencil beams [5,6,7,8]. The system we describe here is a new noninvasive beam diagnostic tool also based on the Coulomb interaction of low energy electrons with relativistic particle beams, but in this case the interaction is due to small impact parameter collisions of a small fraction of the electrons with individual beam particles leading to large momentum transfers. This mechanism is the so-called Rutherford scattering, named after Ernest Rutherford who, in 1911 [9], discovered the atomic nucleus by studying the scattering of alpha and beta particles from stationary targets. Our targets, the ion beam particles, far from being stationary, are moving at relativistic velocities. The theory describing the interaction is the same in the frame of reference co-moving with the particle beam. Using this theory, we can predict the energies and angular distribution of the scattered electrons by coordinate transformation to the laboratory frame. In the laboratory frame, some of these electrons acquire energies up to several MeV, making them easy to detect even after traversing thin vacuum windows, thus allowing the use of simple scintillation detectors in air. Based on these ideas, we developed a non-invasive diagnostic tool called electron beam backscattering detector (eBSD) [10] to accurately align the electron and proton beams in the new Brookhaven National Laboratory (BNL) electron lenses for the partial compensation of the head-on beam-beam effects that limit the luminosity [11]. In the following sections we review the theory of electrons scattered form relativistic ions, we then describe the principle of applying this phenomenon to the alignment of electron and ion beams in the RHIC electron lenses and we describe the implementation of the backscattered electron detectors. 
We then report on the commissioning of the systems with gold and helium beams and, finally, we describe in detail the successful utilization of this new diagnostic instrument during the 2015 RHIC polarized proton-proton operations run (henceforth: "run"). Based on the experience and data from these runs, we then provide an analysis and estimates of systematic errors of this alignment method. During the first commissioning run of these systems [10,12] it was discovered that energetic scattered electrons are also generated by the interaction of the relativistic particles with the electrons of residual gas atoms. We mention here the possible use of these electrons in other non-invasive beam diagnostic instruments not requiring a low energy electron beam, and we suggest the possibility of using instruments similar to the eBSD for the alignment of other configurations involving ion and electron beams such as hollow beams for collimation or for halo monitoring, and long range beam-beam compensators [13]. Finally, we suggest that detecting the electrons scattered from an "electron wire" may be an alternative, and more intensity-independent way to obtain profiles as compared to the present measurements of small deflections caused by the macroscopic fields generated by the beam. Time resolved measurements of this type, in addition to providing transverse beam profiles, could also provide longitudinal bunch profiles and diagnostics for "head-tail" perturbations. II. THEORY OF ELECTRONS SCATTERED FROM MOVING TARGETS To first order in the fine structure constant, the Coulomb scattering of relativistic electrons by nuclei is described by the Mott formula which in the rest frame of the nucleus is written [14]: where Ω is the solid angle, σ the cross section, Z the atomic number, MA the mass of the nucleus, e the elementary charge, E and p the energy and momentum of the electron in the frame of the nucleus and θ the electron scattering angle in that frame. The first term is the classical Rutherford cross section and the two bracketed terms come from the 1/2 spin of the electron and the nuclear recoil respectively. A small correction for the nuclear magnetic moment has been neglected. Another correction that has been neglected is the one for Bremsstrahlung, which can be significant. A more complete theoretical treatment will probably be required to make good quantitative predictions, especially for even higher energy protons or ions. Values of this cross section are computed at small angular intervals and then relativistic transformations to the laboratory frame of the cross sections, the angles and the energies lead to results such as plotted in Fig. 1. Such plots are useful for rough estimates of counting rates, but detailed comparisons are difficult due to the complicated nature of the spiraling electron trajectories in our particular application (see next section). FIG. 1. The solid lines show calculated energies and approximate scattering cross sections for 5 keV electrons backscattered by 250 GeV protons. The dotted lines correspond to the same quantities, but for 10 eV electrons as a qualitative indication of energetic electrons generated by the interaction of the beam with the residual gas and/or with low energy electrons captured in the potential well of the beam. For the same example, the laboratory angle is plotted in Fig. 2 as function of the angle in the proton frame of reference, both angles being shown in this case with respect to the original electron propagation direction. 
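The cross-section values behind Fig. 1 follow from the Mott formula with the recoil correction described in the text. Since the equation itself is not reproduced here, the sketch below assumes its standard form: the Rutherford term multiplied by the spin factor [1 − β² sin²(θ/2)] and by the recoil factor 1/[1 + (2E/MAc²) sin²(θ/2)]. The evaluation angle and the subsequent transformation to the laboratory frame are only indicated, not reproduced in detail.

```python
import numpy as np

COULOMB_EV_M = 1.44e-9   # e^2/(4*pi*eps0) expressed as 1.44 eV*nm = 1.44e-9 eV*m
M_E_EV = 0.511e6         # electron rest energy, eV
M_P_EV = 938.272e6       # proton rest energy, eV

def mott_recoil(theta, z, e_total_ev, m_nucleus_ev):
    """dSigma/dOmega (m^2/sr) in the nucleus rest frame, assuming the standard
    Mott form: Rutherford x [1 - beta^2 sin^2(theta/2)] x recoil factor."""
    p = np.sqrt(e_total_ev**2 - M_E_EV**2)          # electron momentum, eV/c
    beta = p / e_total_ev
    s2 = np.sin(theta / 2.0) ** 2
    rutherford = (z * COULOMB_EV_M / (2.0 * beta * p)) ** 2 / s2 ** 2
    spin = 1.0 - beta ** 2 * s2
    recoil = 1.0 / (1.0 + 2.0 * e_total_ev * s2 / m_nucleus_ev)
    return rutherford * spin * recoil

# A ~5 keV electron is essentially at rest compared with a 250 GeV proton, so in
# the proton rest frame its total energy is roughly gamma_p * m_e c^2 (~136 MeV).
gamma_p = 250e9 / M_P_EV
e_electron_rest_frame = gamma_p * M_E_EV

for deg in (0.5, 2.0, 10.0):
    ds = mott_recoil(np.radians(deg), z=1,
                     e_total_ev=e_electron_rest_frame, m_nucleus_ev=M_P_EV)
    print(f"theta = {deg:>4.1f} deg : dSigma/dOmega ~ {ds:.2e} m^2/sr")
# The strongly forward-peaked rest-frame distribution is then transformed
# (energies and angles) to the laboratory frame to obtain curves such as Fig. 1.
```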
We see that this is a rather extreme example of relativistic beaming, also referred to as the "headlight effect". Electrons scattered forward in the proton frame at angles larger than ~0.1 degrees appear in the laboratory at angles larger than 90 degrees, i.e., they are backscattered. III. RHIC ELECTRON LENS BEAM ALIGNMENT The partial compensation of the head-on beam-beam effect in RHIC is necessary for mitigating the limit imposed by this effect on the achievable proton-proton beam luminosities [11]. Electron lenses (e-lenses) [15,16,17,18], consisting of low energy (in our case ~5 keV), high intensity (~1 A), magnetized electron beams [19], can provide the precise nonlinear focusing properties necessary to effect such compensations. A schematic view of the two electron lenses that are being used for this purpose in RHIC is shown in Fig. 3. The precise alignment and overlap of the electron and ion beams is an important prerequisite for achieving maximum compensation and for avoiding deleterious effects on the proton beam emittance [20,21,22]. Over the 2.4 m interaction region in the up to 6 T solenoid, the centers of these beams, as small as 300 μm rms wide, need to be separated by less than 50 μm. The precision achievable with the installed beam position monitoring system is not sufficient for ensuring this result, especially in view of electronic offsets that are not identical for electrons and ions [23,24]. Besides, in order to generate Beam Position Monitor (BPM) signals, the dc electron beam needs to be modulated, and this modulation may affect electron beam stability during operation. As mentioned above, electrons in the electron beam that are backscattered by the relativistic protons can provide a signal proportional to the electron-proton luminosity, which can be used to maximize the overlap of the two beams [10]. The lensing effect of an e-lens on the relativistic protons is due to the macroscopic electric and magnetic fields produced by the Gaussian-shaped electron beam. In other words, it is the collective long-range Coulomb interaction of the electrons with individual ions that affects the trajectory of these ions. The vast majority of the electron trajectories are only slightly affected transversely [25], since their trajectories are confined by a strong solenoid magnetic field. There is, however, a finite probability for ion-electron collisions with impact parameters that are so small as to produce a significant electron scattering angle, imparting at the same time considerable momentum and energy to the scattered electrons. Large scattering angles, correlated with high energies, result in energetic electrons spiraling backwards (towards the electron gun) along the magnetic field lines. Some of these backscattered electrons are intercepted and counted by a scintillation detector placed in air, behind a thin vacuum window. Figure 4 shows the simulated projected trajectories of two electrons backscattered by 250 GeV protons in a 6 T solenoid at angles of ±50°, one upwards and the other one downwards. As the electrons spiral towards the detector, the radii of their trajectories grow as they encounter lower fields. The upward drift of the trajectory envelopes of the backscattered electrons is due to the horizontal bend in the field [26]. The higher energy of the scattered electrons as compared to the electrons in the primary beam makes this drift appreciable for the former, while it is negligible for the latter.
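The growth of the spiral radius as the backscattered electrons move into weaker field follows from the cyclotron relation r = p⊥/(eB). The sketch below evaluates it for an illustrative backscattered electron; the kinetic energy and pitch angle are plausible example values, not numbers taken from the simulation of Fig. 4.

```python
import numpy as np

E_CHARGE = 1.602176634e-19   # C
M_E = 9.1093837015e-31       # kg
C = 2.99792458e8             # m/s

def transverse_momentum(kinetic_ev, pitch_deg):
    """Relativistic transverse momentum (kg m/s) for a given kinetic energy and pitch angle."""
    gamma = 1.0 + kinetic_ev * E_CHARGE / (M_E * C**2)
    p = M_E * C * np.sqrt(gamma**2 - 1.0)
    return p * np.sin(np.radians(pitch_deg))

def cyclotron_radius(kinetic_ev, pitch_deg, b_tesla):
    return transverse_momentum(kinetic_ev, pitch_deg) / (E_CHARGE * b_tesla)

# Illustrative electron: 300 keV kinetic energy, 50 degrees to the field lines.
for b in (6.0, 0.3):
    r = cyclotron_radius(300e3, 50.0, b)
    print(f"B = {b:>4.1f} T  ->  spiral radius ~ {r*1e3:.2f} mm")
# The radius scales as 1/B, so the spiral widens as the electron leaves the
# 6 T solenoid and travels through the weaker guiding fields.
```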
This upward drift of the scattered electrons is helpful in separating the electron trajectories from the primary electron beam, thus facilitating their detection. FIG. 3. Perspective and top views of one of the RHIC electron lenses [11,12], showing the backscattered electron detector location close to the electron gun. In reality, the light guide and photomultiplier tube are enclosed in a heavy and light-tight magnetic shield to protect the PMT from the stray fields of the nearby magnets. FIG. 4. Computer-simulated trajectories (blue and green curves) of two scattered electrons generated inside the 6 T solenoid (see text). Only the first 200 mm of this 2400 mm long superconducting solenoid is included at the left (from Z = 1000 mm to Z = 1200 mm). Three weaker solenoids (not shown) guide the electron beam from the cathode towards the 6 T region (see Fig. 3). Each RHIC electron lens is equipped with an eBSD consisting of a small plastic scintillator (7.4 × 7.9 × 20.6 mm³) attached to a 1.2 m long light guide leading to a small magnetically shielded photomultiplier tube (PMT, Hamamatsu R3998-02) [27]. The signals from this PMT reach the instrumentation rack through a ~90 m long, 50 Ω coaxial cable and are amplified and connected to a fast discriminator, the output pulses of which are used to determine the counting rates. The long light guide is necessary to keep the PMT far enough from the adjacent magnets so as to enable adequate shielding. This scintillation detector assembly is mounted in air in a vertical shaft, at the bottom of which there is a 0.1 mm thick titanium alloy vacuum window facing the scintillator. The vertical position of the detector shaft can be selected so as to locate the bottom of the scintillator at any position from 1 mm to 25.4 mm from the edge of the primary electron beam. This position adjustment can be used as an intensity range selector. An insulated tungsten block (35 × 4.9 × 7.6 mm³) with current detection provides some protection against electron beam heating, should the position interlock and limit switch fail. Such a failure actually occurred during commissioning, and a RHIC vacuum failure was avoided thanks to the tungsten block, even though indirect heating was sufficient to melt the scintillator. Since the scintillation detector was in air, it was easily replaced. FIG. 5. Not-to-scale schematic of the eBSD scintillation detector and its housing. Electrons backscattered by the relativistic ion beam reach the plastic scintillator after traversing a thin titanium alloy vacuum window. The light from the scintillator is converted to electrical signals by means of a well-shielded photomultiplier tube. A tungsten block protects the back of the detector cavity from accidental heating by the electron beam. A not-to-scale schematic and a cutaway drawing of the detector housing are shown in Figs. 5 and 6, respectively. The design and fabrication of the 100 μm thick Ti-6Al-4V alloy window was a critical aspect of this project, since the electron energy loss had to be minimized while guaranteeing the integrity of the RHIC vacuum system. Fig. 7 shows the stopping power [28] and the calculated energy loss in a 0.1 mm thick titanium alloy window. We see that the energy loss in a 0.1 mm thick titanium window is acceptable for electrons of a few hundred keV and up. The design and dimensions of the window and detector housing are shown in Fig. 8, and the corresponding stress analysis is presented in Fig. 9. There is a safety factor of 3.9 at atmospheric pressure.
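A back-of-the-envelope version of the window energy-loss estimate multiplies a mass stopping power by the areal density of the window. The stopping-power value below (~1.3 MeV cm²/g for electrons of a few hundred keV in titanium) is only a rough tabulated magnitude, not the curve actually used for Fig. 7.

```python
# Approximate energy loss of an electron crossing the 0.1 mm Ti-6Al-4V window.
RHO_TI64 = 4.43          # g/cm^3, density of Ti-6Al-4V
THICKNESS_CM = 0.01      # 0.1 mm window
STOPPING_POWER = 1.3     # MeV cm^2/g, rough value for ~0.5 MeV electrons in titanium

areal_density = RHO_TI64 * THICKNESS_CM          # g/cm^2
delta_e_mev = STOPPING_POWER * areal_density     # thin-absorber approximation

print(f"areal density ~ {areal_density*1e3:.1f} mg/cm^2")
print(f"energy loss   ~ {delta_e_mev*1e3:.0f} keV per traversal")
# A loss of a few tens of keV is small compared with the several-hundred-keV
# backscattered electrons of interest, consistent with the statement that the
# 0.1 mm window is acceptable for electrons of a few hundred keV and up.
```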
The window was pressure tested up to three atmospheres without bursting. The detector cavity and the 0.1 mm thick window shown in Fig. 8 were fabricated using Ti-6Al-4V alloy, which provided the desired strength and a relatively small electron energy loss (see Fig. 7). The stress analysis shown in Fig. 9, as well as the pressure tests, proved that the safety factor is larger than 3. IV. COMMISSIONING WITH GOLD AND ³He BEAMS IN RHIC The commissioning of the eBSDs was started during the 2014 100 GeV/nucleon gold-gold run. The first proof-of-principle horizontal and vertical beam separation scans are shown in Figs. 10 and 11. FIG. 10 Horizontal and vertical beam separation scan obtained by steering the 5 keV electron beam with respect to the 100 GeV/nucleon gold beam. The measured widths are both about 25% larger than the sums in quadrature of the gold and electron beam widths. This small discrepancy could be due to residual angular misalignments, to small ion beam motions, or to errors in the gold beam beta function estimates. Soon after obtaining these results, a beam alignment optimization system was implemented, based on automatically maximizing both eBSD counting rates as a function of the horizontal and vertical positions and angles. This system is based on a program (LISA) [29] that was developed many years ago and has been used since then to maximize the ion-ion luminosities for the RHIC experiments by maximizing the coincidence rates from the zero degree calorimeters (ZDCs) [30]. After the gold run was completed, there was a brief opportunity for commissioning the eBSD system with a ³He beam. This was important since gold scattering cross sections are much larger than the cross sections for protons, and therefore the gold beam tests were not representative of the situation with protons. The cross section for ³He is ~4 times larger than for protons (see eq. 1), but the intensity was smaller. The counting rates for ³He were similar to the ones expected for protons. Horizontal and vertical separation scans are shown in Fig. 11. These data are from a manual LISA eBSD scan obtained by displacing the gold beam by means of a set of steering correctors forming closed horizontal and vertical bumps. FIG. 11 Manual beam separation scans obtained by stepwise steering the ³He beam with closed bumps, utilizing part of the algorithm developed for the automated alignment optimization system based on the LISA program [29]. During the ³He run the vertical positioning mechanism was utilized for the first time, since the counting rates with gold had always been so large that the fully retracted position had to be used. Figure 12 shows the counting rate as a function of detector position for a 100 GeV/nucleon ³He beam. V. UTILIZATION OF THE eBSDs DURING THE 2015 RHIC POLARIZED PROTON RUN During this two-month run, the eBSDs were used routinely as the main alignment and monitoring tools for the electron and proton beam overlap in both electron lenses, without any system failures. To ensure optimal pulse height discriminator settings, rejecting low amplitude noise while minimizing any impact of gain shifts, a pulse-height analysis system, shown schematically in Fig. 13, was implemented. FIG. 13 Schematic of the pulse height analysis system used to optimize the discriminator setting. The Multichannel Analyzer (MCA) used here consists of an Ortec Multichannel Buffer (MCB) connected to a computer running the Maestro analysis and control software [32]. Figure 14 shows pulse height spectra and selected discriminator settings.
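Two of the quick checks in this commissioning discussion are easy to reproduce: the Z² scaling of the Rutherford cross section across gold, ³He, and protons, and the expected width of a separation scan as the quadrature sum of the two beam widths. The beam widths used below are illustrative numbers, not the values behind Fig. 10.

```python
import math

# Relative Rutherford cross sections scale as Z^2 (leading term of Eq. 1).
for ion, z in [("proton", 1), ("3He", 2), ("Au", 79)]:
    print(f"{ion:>6}: relative cross section ~ Z^2 = {z**2}")
# -> 3He is ~4x the proton value; gold is several thousand times larger,
#    which is why the gold tests were not representative of proton running.

# Expected rms width of a separation scan: quadrature sum of the two beam widths.
sigma_ion_mm, sigma_e_mm = 0.30, 0.40      # illustrative rms widths
expected = math.hypot(sigma_ion_mm, sigma_e_mm)
print(f"expected scan width ~ {expected:.2f} mm rms")
# A measured width ~25% larger than this would point to angular misalignment,
# beam motion, or beta-function errors, as discussed for Fig. 10.
```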
The pulse-height resolution is poor, mainly due to the small photon collection efficiency through the thin, 1.2 m long light guide. However, excellent signal-to-noise ratios are achieved by adequate selection of the discriminator setting. The stability of the system was such that only one slight readjustment was performed during the entire period. By the end of the run, there was a ~12% pulse height reduction measured with a precise light pulse generator [31]. This slight pulse height reduction is illustrated in Fig. 15 which shows screen shots of counting rates as a function of pulse height from the pulse height analyzer program, Maestro [32]. All settings were identical and the pulse height of the light pulser peak is reduced by about 12% after two months of continuous use. This reduction may indicate slight radiation damage of the 1.2 m long light pipe and/or of the fiber carrying the light to its far end. It would not reflect any reduction in the scintillation efficiency. Rather than using this light pulser as a reference, it would have been better to install a very weak radioactive source for continuous, end-to-end gain verification. Modest reductions in pulse height can be easily compensated by adjusting the PMT high voltage. If necessary, the detector assembly can be easily replaced. FIG. 14 Pulse height spectra from the scintillation counter used in one of the RHIC electron lenses. In the upper spectrum all pulses are accepted, while in the lower one, pulse amplitudes below the discriminator setting are rejected. The discriminator effectively suppresses the high intensity low amplitude noise (see the red curve, which is the original histogram plotted with a scale change of a factor 1000) . The small remaining peak to the left in (b) may be an artifact of the pulse height analyzer gating system. FIG.15 . Pulse height spectra screen shots obtained with the pulse-height analyzer software [32]. The logarithm of counting rates are displayed as a function of pulse heights. The peaks to the right were obtained using identical settings of a precision light pulse generator. The slight shift of this peak indicates a ~12% light transmission loss due perhaps to slight radiation damage of the 1.2 m long light guide or of the fiber carrying the light signal to the far end of the light guide. The optimization of the beam alignment was largely automated by using the LISA algorithm described above [29]. To simplify the angular adjustments, the algorithm was modified, so as to rotate the beams around the centers of the respective lenses rather than around the proton-proton crossing point which is 3.3 m away from the center of the lenses. No interactions occur at this crossing point because the two beams are at different heights being separated by 10 mm or approximately 20 sigmas. During this run, compensation with close to maximum electron current was only used at the beginning of each store and was then reduced in steps as shown in Fig. 16. This optimized compromise FIG.16. The total proton intensities (a), RHIC electron lens electron current intensities (b), and corresponding eBSD counting rates (c) are shown as a function of time. The electron beam intensities are reduced in steps to optimize the benefits of the RHIC electron lenses [11]. The two colors indicate the results from each of the two RHIC rings. provided the best integrated luminosity by utilizing the electron lenses when most necessary for compensation, while minimizing their impact on beam lifetime and emittance [11]. 
During this period, a system that sorts the eBSD signals according to their arrival time was tested [10]. For this purpose, time digitizers [33] were started by the eBSD signals and stopped by a signal synchronized to the RHIC revolution frequency. While this system was not utilized, we mention it here because it may be used in the future and may also be of interest for other applications (see next section). The left side of the peaks represents electrons arriving early, which tend to originate closer to the gun, while the right side of the peaks, with a gentler slope, corresponds to later electrons from the other end of the interaction region. This effect could, in principle, be used as an aid for angular tuning of overlapped electron and proton beams. VI. SYSTEMATIC ERROR ESTIMATES While experimental results with proton-proton collisions [11] are compatible with having achieved perfect overlap between the electron and proton beams, possible deviations are difficult to estimate from these measurements. We explore in this section to what extent maximizing the eBSD counting rates may result in imperfect overlap. We identify two sources of systematic errors, one in the horizontal and one in the vertical alignment, and we estimate the magnitude of these errors with simulations for the specific example of the ³He tests, for which we have the necessary data. Table 1 lists the relevant parameters. FIG. 18 Magnetic field profile (a) and electron beam trajectory (b), starting at the electron gun and ending at the electron collector. In the magnified view (c) we show the ±1 sigma electron beam and ion beam profiles in the region where the electron beam starts curving away on its way to the collector. For the horizontal alignment, there is an obvious bias due to the fact that, at the entrance and exit of the electron-ion beam overlap region, the electron beam deviates from the straight trajectory on its way from the electron gun to the electron collector. As shown in Fig. 18, there are regions at both ends of the overlap region where the electron beam becomes larger and curves away from the ion beam trajectory. Backscattered electrons from these regions will make an asymmetric contribution to the counting-rate curve when scanning one beam across the other. In Fig. 18(c) we show a magnified view of the area where this asymmetry arises, at the electron collector side of the interaction region. The electron beam transport is symmetric around the center of the solenoid, since the magnetic field is symmetric (see Fig. 18(a)). In Fig. 18(c) we also show a ±1 sigma ion beam profile centered with respect to the electron beam in the solenoid. We estimate the relative counting rate variation as a function of offset by computing the convolutions of these two beams (assumed to be Gaussian) in 0.1 mm steps. The result is shown in Fig. 19 and compared to the simulated counting rate profile in the absence of the asymmetric contribution. FIG. 19 Simulated relative counting rates as a function of horizontal electron-ion beam offset for the case illustrated in Fig. 18. Also shown (in red) is the ideal convolution of the two Gaussian beams in the absence of the asymmetric contributions. The counting rate asymmetry as a function of horizontal beam offset is clearly visible. The peak position was calculated with a quadratic fit over a ±0.2 mm range, following the procedure used by the automatic adjustment software [29].
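The peak-finding step just described, a quadratic fit over a ±0.2 mm window of a slightly asymmetric counting-rate curve, is easy to sketch numerically. In the code below the asymmetric end-region contribution is only mimicked by a small linear weight; this is a stand-in, not the trajectory-based model used for Fig. 19, and the resulting shift should only be read as an order-of-magnitude illustration.

```python
import numpy as np

SIGMA_E, SIGMA_ION = 0.375, 0.46   # rms widths (mm) quoted for the electron and ion beams
ASYM = 0.10                        # 1/mm, stand-in linear weight mimicking the end-region asymmetry

def counting_rate(offset_mm):
    """Overlap of the two Gaussian beams, weighted by a small linear asymmetry."""
    y = np.linspace(-3.0, 3.0, 6001)
    dy = y[1] - y[0]
    g_e = np.exp(-0.5 * (y / SIGMA_E) ** 2)
    g_ion = np.exp(-0.5 * ((y - offset_mm) / SIGMA_ION) ** 2)
    return float(np.sum((1.0 + ASYM * y) * g_e * g_ion) * dy)

# Scan in 0.1 mm steps, then locate the maximum with a quadratic fit over +-0.2 mm,
# mimicking the procedure of the automatic adjustment software.
offsets = np.arange(-1.0, 1.0 + 1e-9, 0.1)
rates = np.array([counting_rate(d) for d in offsets])
window = np.abs(offsets - offsets[np.argmax(rates)]) <= 0.2 + 1e-9
a2, a1, a0 = np.polyfit(offsets[window], rates[window], 2)
peak = -a1 / (2.0 * a2)
print(f"fitted peak offset ~ {peak:.3f} mm")
# A weight of this size displaces the fitted maximum by a few hundredths of a
# millimetre, the same order as the shift found in the full simulation.
```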
The peak offset is 0.018 mm for this example, where the rms widths of the electron and ion beams are 0.375 mm and 0.46 mm, respectively. This shift is not expected to have any measurable consequences for the e-lens performance in this case. If much larger deviations should occur in other situations, corrections could be computed and applied. In the vertical direction, there is a bias introduced by the dependence of the detection efficiency on the vertical position of the Coulomb interaction point. In other words, a backscattered electron originating from a point located at some distance below the axis common to the electron and ion beams will have a slightly different detection probability compared to another electron originating at a point located above that axis. This effect could be measured by vertically displacing the interaction region, i.e., both the electron and the ion beams together, and monitoring the associated counting-rate changes. While this approach may be attempted in the future, we will here arrive at an estimate based on the measured counting-rate dependence on the vertical position of the detector shown in Fig. 12. The slope of this curve at the operating position at 24 mm is ~6.8% per mm. This detection efficiency slope at the detector location can be converted to an equivalent efficiency slope in the interaction region. To that effect, we take into account the adiabatic invariance of the flux through the electron orbits [26], which leads to similar projections of the electron orbits onto planes perpendicular to the field at the detector position and in the interaction region. These similar projections differ by a scale factor equal to the ratio of the square roots of the magnetic field strengths at these two locations. In the present case that scale factor is 4.0 (see Table 1). That in turn means that the slope of the detection efficiency translated to the interaction region will be 6.8 × 4 = 27.2 %/mm. The counting rate as a function of ³He beam position will be the usual convolution of two Gaussians, but modified here by the efficiency, which we approximate as a linear function of the position with the slope of 27.2%/mm obtained above. Using this value for the parameter k, and the vertical rms beam sizes σHe and σe from Table 1, we use Eq. 2 to calculate values proportional to the eBSD counting rate as a function of the ³He beam position. The approximation made in this estimate is to neglect the small variations in vertical drift corresponding to variations in longitudinal electron velocity. These variations are small because of electron momentum conservation and because of the small angles between the electron trajectories and the magnetic field lines in the region of the bend. For the example shown in Fig. 4, these angles are around ~8°. The result is an approximately Gaussian curve of rms width σ ≈ √(σHe² + σe²), with its maximum displaced by 0.028 mm. This is less than 10% of the rms width of either beam. A correction is not necessary in this case. In other situations, in particular for beams of much larger widths, corrections could be computed or measured as outlined above, and applied by introducing a position correction after maximizing the counting rate. VII. FUTURE POSSIBILITIES In this section, we briefly present a few preliminary ideas on how the detection of energetic scattered electrons could be used in other beam diagnostic applications.
These ideas are based on the previous extensive use of electron beams as diagnostic tools, documented in the literature, and on the results and experience gained during the design, implementation and application of this new approach. A) eBSDs used with hollow electron beams as possible halo monitors and beam alignment tools. Hollow electron beams have been tested as collimators or halo collimators in the Tevatron [34,35,36,37] and are being considered as an option to complement the LHC collimation system [13]. Here, we suggest that the backscattered electrons from the proton electron collisions could be detected and used for halo diagnostics and for centering the proton beam [38]. The arrangement would be similar to the BNL electron lenses, but additional thought is required to determine the best way to merge the beams and to separate them after the interaction region without producing unduly large background counting rates in the detectors. A schematic illustration of the interaction between the halo protons with the electrons in the hollow beam is shown in Fig. 20. In reality, two or four equidistant detectors surrounding the beam would probably be used. The core of the proton beam will also produce energetic electrons by collision with the atomic electrons of the residual gas. This is the principal source of background and will determine the ultimate sensitivity for halo detection. For a rough estimate of this background we note that a 4 keV electron current density of 1 A/mm 2 has an electron density equal to the electron density in 2.15×10 -6 Torr of H2 at room temperature. For an example of a round Gaussian ion beam of rms width σ we conclude that for a hollow, 1 A/mm 2 electron beam extending from 4σ to 5σ the signal-to-background ratio would be approximately 1 if the vacuum is 7×10 -10 Torr at room temperature. A better vacuum and/or a more intense electron beam will improve this signal-to-background ratio. Exceptionally good vacuum in a room-temperature chamber should be achievable in a beam pipe section pumped by a continuous longitudinal cryo-pumped antechamber as mentioned for example in reference [39]. If that is impractical, a Non Evaporable Getter (NEG) coated and activated beam pipe would be excellent too. The use of a warm chamber with an adjacent distributed cryo pump is appealing since the quantity of interest to reduce the background is the gas density which, at constant pressure, is inversely proportional to the absolute temperature. A technique that can be used to extend the dynamic range of these measurements involves modulating or pulsing the electron beam. Depending on counting statistics, results could be obtained with signal to background ratios as small as a few percent. Figure 21 shows schematically the topology of three possible implementations. The first one seems elegant and appealing but it may be difficult to implement an annular cathode surrounding the proton beam. The second one has the same geometry as the existing electron lenses, but the ion beam intersects the electron beam on the collector side producing unwanted background. Finally, the third option solves these problems by locating the gun with the annular cathode to one side, and uses an annular collector surrounding the proton beam which should be possible to implement. This appears to be a viable option for a system that could be used as a halo monitor and as a beam alignment tool. 
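The background argument for the hollow-beam halo monitor compares the electron density of the probe beam with that of the residual-gas electrons. The arithmetic, sketched below, roughly reproduces the quoted equivalence between a 1 A/mm² beam of 4 keV electrons and H₂ at about 2×10⁻⁶ Torr; the calculation is non-relativistic and the gas temperature is assumed, so small differences from the quoted figure are expected.

```python
import math

E_CHARGE = 1.602176634e-19   # C
M_E = 9.1093837015e-31       # kg
K_B = 1.380649e-23           # J/K
TORR_TO_PA = 133.322

def beam_electron_density(current_density_a_per_mm2, kinetic_ev):
    """Electron number density (m^-3) of a monoenergetic beam, n = J / (e v)."""
    v = math.sqrt(2.0 * kinetic_ev * E_CHARGE / M_E)   # non-relativistic is adequate at 4 keV
    j = current_density_a_per_mm2 * 1e6                # A/m^2
    return j / (E_CHARGE * v)

def h2_electron_density(pressure_torr, temperature_k=293.0):
    """Electron number density (m^-3) of H2 gas: two electrons per molecule."""
    n_molecules = pressure_torr * TORR_TO_PA / (K_B * temperature_k)
    return 2.0 * n_molecules

n_beam = beam_electron_density(1.0, 4e3)
n_gas = h2_electron_density(2.15e-6)
print(f"beam electron density:              {n_beam:.2e} m^-3")
print(f"H2 electron density at 2.15e-6 Torr: {n_gas:.2e} m^-3")
# The two densities come out within a few tens of percent of each other,
# supporting the equivalence used in the signal-to-background estimate.
```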
Aligning electron "wires" proposed as LHC longrange beam-beam compensators [40] may be achieved in a similar way, and without the complication of annular cathodes and collectors. The beams would be aligned by first overlapping them and then separating them by a known distance. If the electron beam remains partly in the halo of the proton beam, continuous monitoring would also become possible. FIG. 21 Three possible configurations for using eBSDs for halo monitoring and beam alignment of hollow electron beam systems. The central long solenoid is the same strong field superconducting solenoid in each case, similar to the 6T ones used in the Brookhaven RHIC electron lenses. The smaller and weaker room-temperature solenoids, indicated schematically by the small rectangles, guide the electron beam from the gun to the central solenoid and from there to the collector (see Fig. 3). Some of these guiding solenoids have been omitted for clarity. Only two eBSDs are shown in the bottom configuration, but there could be four at 90 o intervals for continuous halo monitoring and beam centering. The bottom configuration seems to be the most feasible one (see text). B) Concept of a Coulomb Scattering Electron Wire (CSeW) beam profile monitor. Electron beams that are not collinear with the relativistic ion beam will also generate energetic scattered electrons that can in principle be used for beam diagnostics. An example is schematically shown in Fig. 22. A ribbon shaped electron beam propagates at a right angle to the ion beam guided by a weak magnetic field (2B) that affects the ion beam only slightly. This slight perturbation is compensated by the field B generated by the other two split solenoids. FIG.22 Schematic illustration of a Coulomb Scattering Electron Wire beam profile monitor (see text). The trajectories of the scattered electrons are bent in the field of the central split solenoid and some of them reach a scintillation detector through a vacuum window (not shown in the picture). Maximum intensity corresponds to optimal overlap. The beam profile can be explored by stepwise deflections of either the electron or the ion beam. In contrast to conventional electron wire profile monitors [6,7,8], the profile is determined here by measuring the counting rates of the scattered electrons and not by detecting the deflection of the electron trajectories, largely suppressed here by the transverse magnetic field. Potential advantages are that the measured profiles are largely independent of the beam intensity and that profiles are obtained directly as deflection-dependent counting rates. For relatively long bunches, the arrival time of the scattered electrons can be used to measure the time structure and head-tail position differences for each bunch. Two such systems, one horizontal and one vertical, would provide a rather complete characterization of the bunch through non-destructive measurements. C) Electrons scattered from residual gas atoms for beam diagnostics. The interaction of particle beams with residual gas atoms and molecules is often used for measuring beam profiles such as in ionization profile monitors (see e.g. [41]) and fluorescence profile monitors (see e.g. [42]). Recently, a beam-gas vertexing technique [43] was used to characterize LHC beam properties by high precision tracking of particles from nuclear interaction with a small amount of gas injected into the vacuum chamber [44,45]. 
We suggest here that detecting energetic scattered electrons is another good way to exploit the interaction with residual gas for beam diagnostic purposes. The cross sections for Coulomb interactions are orders of magnitude larger than for nuclear cross sections. Much less gas will therefore be required. As an example, we show a conceptual design for a beam position monitor for eRHIC [46], the proposed BNL ERL-based electron-ion collider. This is only one of several possibilities for the difficult task of monitoring the position of up to 24 side-by-side electron beams circulating in the same vacuum chamber and separated in time by as little as 2 ns. As shown schematically in Figure 23, two fast, position-sensitive channel-plate detectors detect the scattered electrons through sets of parallel plate collimators which are necessary to define the plane of the trajectories. Thin foils in front of the detectors stop low energy electrons from generating spurious signals. A second set of detectors and collimators, at right angles to the first one, could, in principle, be located in the same chamber. The detection by the channel plates is fast and the position resolution will be defined by the acceptance angle of the collimators. FIG. 23 Concept of one of the possible approaches to detect several side-by-side orbits in one of the fixed-field alternating gradient (FFAG) arcs of the electron-ion collider, eRHIC, which will be proposed as a successor to RHIC. Fast, position-sensitive channel-plate detectors respond to energetic electrons collimated by parallel plate collimators (see text). VIII. CONCLUSIONS Most instruments used for particle accelerator beam diagnostics are of the analog type where often small signals are transmitted through long cables, amplified and digitized. The few instances when particle detecting and counting techniques can be used, offer the advantages of greater dynamic range and greater noise immunity which is particularly important in the harsh environment of high energy particle accelerators. The detection of energetic electrons generated through Coulomb scattering by relativistic ions offers new possibilities for relativistic ion beam diagnostics. The fact that these electrons can traverse vacuum windows with relatively minor energy loss allows the convenient use of simple detectors such as scintillation detectors that are cumbersome to use in vacuum. The easy replacement of the detectors without disturbing the vacuum is also an important advantage. We have shown here the successful application of such a system, used for the alignment of electron and ion beams in the RHIC electron lenses at BNL. A counting rate dynamic range of about five orders of magnitude has been utilized so far. Given the fast response of the utilized scintillators and counting electronics, larger dynamic ranges are available. A likely improvement for future systems of this type will be the utilization of silicon photomultipliers which are not sensitive to magnetic fields. Shorter light guides and less magnetic shielding should simplify the design and improve the performance. We also outlined ideas for other possible beam diagnostic applications based on energetic electrons produced by Coulomb scattering by relativistic ions. sensitivity improvement and the possibility mentioned in section VI of displacing both beams simultaneously to better evaluate a systematic error may be implemented in the future. This work was supported by Brookhaven Science Associates, LLC, under Contract No. 
DE-AC02-98CH10886 with the U.S. Department of Energy.
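A minimal numerical sketch of the counting-rate profile scan described for the CSeW concept above, assuming a Gaussian ion beam and a thin ribbon electron beam; all parameter names and values below are placeholders, not quantities from the text.

```python
import numpy as np

# Illustrative model (not from the paper): the CSeW counting rate at a given
# ribbon position is proportional to the overlap of the thin electron ribbon
# with the transverse ion-beam density, so stepping the deflection traces out
# the projected profile convolved with the ribbon width.

def counting_rate(deflection_mm, sigma_ion_mm=1.0, ribbon_width_mm=0.2, peak_rate=1e5):
    """Expected scattered-electron rate versus ribbon position (toy model)."""
    x = np.linspace(-5 * sigma_ion_mm, 5 * sigma_ion_mm, 2001)
    ion_profile = np.exp(-0.5 * (x / sigma_ion_mm) ** 2)            # projected ion density
    ribbon = np.abs(x - deflection_mm) < ribbon_width_mm / 2.0      # ribbon acceptance
    overlap = np.trapz(ion_profile * ribbon, x)
    on_axis = np.trapz(ion_profile * (np.abs(x) < ribbon_width_mm / 2.0), x)
    return peak_rate * overlap / on_axis

# A stepwise deflection scan reconstructs the profile from counting rates alone.
scan = np.arange(-3.0, 3.01, 0.25)
rates = np.array([counting_rate(d) for d in scan])
rms_width = np.sqrt(np.sum(scan**2 * rates) / np.sum(rates))
print(f"estimated rms width: {rms_width:.2f} mm")
```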
8,600.2
2016-01-19T00:00:00.000
[ "Physics" ]
Infrared detection of (H2O)20 isomers of exceptional stability: a drop-like and a face-sharing pentagonal prism cluster † Water clusters with internally solvated water molecules are widespread models that mimic the local environment of the condensed phase. The appearance of stable (H2O)n cluster isomers having a fully coordinated interior molecule has been theoretically predicted to occur around the n = 20 size range. However, our current knowledge about the size regime in which those structures become energetically more stable has remained hypothetical, based on simulations, in the absence of precisely size-resolved experimental measurements. Here we report size- and isomer-selective infrared (IR) spectra of (H2O)20 clusters tagged with a sodium atom by employing IR excitation modulated photoionization spectroscopy. The observed absorption patterns in the OH stretching region are consistent with the theoretically predicted spectra of two structurally distinct isomers of exceptional stability: a drop-like cluster with a fully coordinated (interior) water molecule and an edge-sharing pentagonal prism cluster in which all atoms are on the surface. The drop-like structure is the first experimentally detected water cluster exhibiting the local connectivity found in liquid water.

Water clusters are of relevance to many fields of science like atmospheric and astrophysical chemistry 1,2 and have been reported to be trapped inside inorganic crystals, 3 polymers 4 and in crystal structures of hydrophobic proteins.5
In an aqueous solution the chemical reactivity is coupled to the subtle balance between the solvent-solute and solvent-solvent interactions that ultimately control the fleeting hydrogen bonding network. Water clusters provide a medium for probing the underlying solvent-solvent interactions in those aqueous environments and quantifying the properties of the hydrogen bonding interactions at the molecular level. Contrary to the fully connected network in liquid water, the structures of the first few water clusters exhibit arrangements in which all atoms are on the surface of the cluster, [6][7][8][9][10][11][12] a consequence of the need to maximize the number of hydrogen bonds in combination with the large surface-to-volume ratio that is characteristic of a small water cluster. [14][15][16] However, all water molecules in these clusters are under-coordinated, having fewer than four neighbors, the number found on average in bulk water. Previous theoretical studies 17,18 have identified a cluster size regime starting at around 17 molecules for which the transition from the "all-surface" to the "interior" structures, viz. the appearance of stable cluster isomers with an interior, fully coordinated water molecule, may occur. The fact that experimental, size-resolved probes of either the infrared (IR) "fingerprint" region (3000 to 4000 cm−1) 14,15,[19][20][21] or the broadband rotational spectrum 11,22 have so far been limited to smaller sized clusters has prevented the direct experimental probing of this important size regime. Therefore, these previous theoretical studies have been unchallenged by experiments, in contrast to the case of aqueous ionic clusters.23,24 Here we report size- and isomer-selective infrared (IR) spectra of (H2O)20 clusters tagged with a sodium atom in the OH stretching region. The ensuing analysis will demonstrate that the isomer composition of this cluster size is consistent with predictions of high level electronic structure calculations.

Hydrogen bonding interactions of fully coordinated water molecules that are characteristic of condensed environments consist of arrangements in which each molecule, on average, both donates and accepts two hydrogen bonds (double donor-double acceptor configuration: DDAA).25 This connectivity leads to a complex, three-dimensional network of four-coordinated water molecules that is transient in the liquid 26 and static in solid water.27 Therefore, a special interest arises in identifying the emergence of water clusters which possess fully coordinated water molecules. In this context, the neutral (H2O)20 clusters have previously served as a reference system for benchmarking theoretical methods, see ref. 28-31 and references cited therein. Additional reasons for studying clusters of this size are their potential connection to the "magic number" protonated water clusters, 23,29,32 the dodecahedral cage network of one of its families of isomers that is the building block of the structure I (sI) clathrate lattices 33,34 as well as their use as a model for cloud droplets in atmospheric reactions.2

The frequencies of the OH stretch oscillators can be used as a sensitive probe of the strength of the underlying hydrogen bonds: the stronger the intermolecular bond, the weaker the corresponding O-H covalent bond is, which leads to sizeable red shifts of such oscillators in the IR spectrum.35
To this end, vibrational spectroscopy of size-selected, neutral clusters is a powerful tool that probes the characteristic absorption patterns of the hydrogen bonded OH stretching vibrations 16 which, in conjunction with theoretical calculations, can be used to assign the underlying hydrogen bonded network via the "structural-spectral correspondence" principle.36 By applying such an experimental approach, the emergence of four-coordinated water molecules 37 for n > 18 and the onset of crystallinity in larger clusters 38 have been characterized in previous experimental studies. We have recently reported an experimental approach which allows one to record precisely size-resolved infrared spectra in the OH stretching region of sodium tagged, neutral water clusters.38,39 The viability of the new method for arriving at structure assignments was indicated for sizes above n = 20, 39 but not rigorously proven. In the present work, the approach is further refined and coupled with first principles electronic structure calculations to assign the measured spectra for n = 20.

In the experiment, a molecular beam containing the clusters is formed in a skimmed supersonic expansion. The clusters are then doped with a single sodium atom in a pick-up cell and subsequently irradiated with IR photons, inducing a signal enhancement of the UV photoionization, which is delayed by 80 ns. A single scan of the OH stretch region (2850 to 3800 cm−1) with a tuneable IR laser system provides simultaneously measured, size-resolved IR spectra for the complete cluster size distribution. The problem of UV photon induced fragmentation can be suppressed already in small clusters 40,41 due to the special ionization mechanism of sodium doped, hydrogen bonded clusters.42 For larger clusters (n > 18) a spectral contamination by the sodium atom was not observed.38 In this study we used neon (Ne) and argon (Ar) instead of helium as seeding gas. The reason is the better suitability of Ne and Ar expansions to cool water clusters 41 and to relax them into their thermodynamic minima.43 Further experimental details are found in 38,40,42 and in the ESI,† in which also the reproducibility of the main features in the presented spectra is shown.

The electronic structure calculations were performed at the second-order Møller-Plesset perturbation theory (MP2) level of theory with Dunning's augmented, correlation-consistent basis sets (aug-cc-pVnZ) 44 of double through quadruple (n = D, T, Q) zeta quality, allowing for the extrapolation of the energetics to the Complete Basis Set (CBS) limit (see ref. 28 for details). Geometries were optimized with the double and triple zeta basis sets, whereas single point energies were obtained with the larger quadruple zeta set at the triple zeta geometries. Harmonic vibrational spectra and subsequent zero-point energy (ZPE) corrections were obtained with the aug-cc-pVDZ basis set. These calculations were conducted on isomer B in the present study; the results for isomers A and C were taken from previous work.28 All electronic structure calculations were carried out with the NWChem suite of codes.45 A "family" of isomers denotes clusters with the same oxygen network, differing only in the position of the hydrogen atoms according to the Bernal-Fowler ice rules.46
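The text refers to ref. 28 for the details of the CBS extrapolation; purely as an illustration of what such an extrapolation typically looks like, here is a common two-point scheme for the correlation energy (an assumption, not necessarily the scheme used in this work), written as a small Python function with made-up input values.

```python
# Illustrative only: a widely used two-point extrapolation of the correlation
# energy, E_corr(n) = E_corr(CBS) + A / n**3, applied to triple-zeta (n = 3) and
# quadruple-zeta (n = 4) results. The scheme and the numbers are placeholders,
# not values from the paper (its ref. 28 gives the actual protocol).

def cbs_two_point(e_corr_tz: float, e_corr_qz: float, n_low: int = 3, n_high: int = 4) -> float:
    """Extrapolate the correlation energy to the complete-basis-set limit."""
    return (n_high**3 * e_corr_qz - n_low**3 * e_corr_tz) / (n_high**3 - n_low**3)

print(cbs_two_point(-5.4321, -5.4876))  # made-up correlation energies in hartree
```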
The stability of the edge-sharing pentagonal prisms, first suggested using classical force fields, 47 was confirmed as the most stable family of minima by high level electronic structure calculations. The lowest isomer of this family 28 is shown in Fig. 1 (isomer A), together with the predicted infrared spectrum in the OH stretching region. It is a highly symmetric structure (as regards the oxygen atom network) and features in total eight water molecules in DDAA configurations, which are all placed at the surface of the cluster. Additionally, there are six water molecules having a single donor-double acceptor motif (DAA), which show a characteristic single peak in the most red-shifted part of the OH stretching spectrum.

An energetically competitive (cf. Table 1) drop-like 31 isomer of (H2O)20 is also considered (isomer B in Fig. 1). This isomer features a total of six fully coordinated and interconnected water molecules, five of which are on the surface, and one DDAA-like fragment at the center of the cluster. Four out of the former five surface fragments are hydrogen bonded to the latter interior one. The surface of isomer B consists of four 4-, six 5- and two 6-member rings. Interconnected 4- and 6-member rings cover roughly one half of the surface area, whereas interconnected 5-member rings cover the other half. The most red-shifted part in the "fingerprint" OH stretching region of the IR spectra provides a definitive feature used for the structural identification of isomer B. There exists a characteristic pattern of seven DAA oscillators which arrange in three peaks of different intensities, caused by the different environments. The dominant band is predicted to be more red-shifted than the corresponding single peak found in the edge-sharing pentagonal prism, isomer A. Other families of (H2O)20 isomers that have been discussed in the past, like those of the face-sharing pentagonal prisms (isomer C in Fig. 1), the fused cubes, and the dodecahedron, are predicted to be significantly less stable 28 (cf. Table 1). Note that the ordering of the isomers in the various families does not change upon either increasing the basis set to CBS or including harmonic ZPE corrections, as seen from Table 1. The small energy difference between isomers A and B, obtained at the highest level of electronic structure theory considered in this study including ZPE corrections, suggests that we expect only two different cluster networks to contribute to the IR spectrum in the OH stretching region of (H2O)20, each showing one characteristic feature in the region of DAA oscillators.

In the upper panel of Fig. 3 the experimental IR spectrum generated in the Ne expansions is compared with a simulated one, in which the ratio of isomer A (35%) and B (65%) is chosen such that similar intensities for the characteristic DAA peaks are obtained. Note that the frequencies in all simulated spectra are scaled by 0.96 to account for the harmonic approximation. The summed-up spectrum agrees well with the experimentally observed absorption pattern over the complete OH stretching region. In the experiment the strong double-donor single-acceptor (DDA) modes appear at slightly lower frequencies (3400 cm−1) compared to the predicted harmonic spectra. Such deviations are expected due to the uniform scaling of the harmonic frequencies.18
Fig. 1 The most stable isomers of (H2O)20, as identified in this and previous work, 28,31 and their predicted IR spectra in the OH stretch region. (A) Edge-sharing pentagonal prism, (B) drop-like, (C) face-sharing pentagonal prism.

The assignment of the DAA double peak to isomers A and B would become even more definitive upon depopulation of one of the isomers in the experiment. This was indeed achieved by changing the seed gas and stagnation pressure. The middle panel of Fig. 3 shows the spectrum obtained in Ar-seeded expansions, compared to the predicted IR spectrum of isomer B alone. Again, the experimentally observed OH stretch absorption pattern is consistent with the theoretical prediction, thus providing further credence to the structural assignment. The DAA peak characteristic of isomer A is completely suppressed in this experiment, and the structure-sensitive DAA region agrees well with the predicted absorption pattern of isomer B. Both experiment and calculation consistently show the separation of DAA oscillators, appearing below 3180 cm−1, and DDAA oscillators, appearing above 3200 cm−1. The high intensity of the low-lying DDAA oscillator is probably caused by a Fermi resonance with the overtone of the OH bending motion appearing around 3200 cm−1, as recently shown by Suhm and co-workers for small water clusters in spontaneous Raman scattering experiments.48 Similar to the expansion with Ne as seed gas, the strong DDA features appear at 3400 cm−1, indicating that the calculated DDA part of the spectrum is a bit too compressed. The absence of isomer A in the Ar-seeded expansions could indicate a slightly higher stability of isomer B. However, a kinetic effect also seems plausible, which would indicate a better relaxation of clusters to the thermodynamic minima in the Ne experiment. The cluster temperatures are estimated to be around 100 K for the Ne-seeded expansions; the Ar-seeded experiment is expected to produce colder clusters (see discussion in ref. 39 for temperature estimates). The isolation of isomer B achieved in the Ar-seeded expansion allows for an additional consistency test of our spectral assignments, facilitated by subtracting the spectrum recorded using Ar as seed gas from the one taken in the Ne-seeded expansion. Ideally, this difference spectrum, shown in the lower panel of Fig. 3, should reveal the characteristic absorption pattern of isomer A: the single DAA peak around 3140 cm−1 and a broad intensity gap between this peak and the absorption maximum of the DDAA oscillators, predicted to be around 3300 cm−1. This highly characteristic feature is indeed found in the difference spectrum, with the gap being larger than predicted from the harmonic frequencies. Again, the (anharmonic) effect of the Fermi resonance with the overtone of the OH bending motion might explain this observation. Here, the close-lying DDAA oscillators seem to be blue-shifted, away from the resonance around 3200 cm−1, as found for the in-phase, hydrogen-bonded OH stretching motion in the cyclic water pentamer.48
The consistency of the difference spectrum with the calculated one of isomer A alone suggests that indeed the isomer families A and B dominate the isomer composition of Na(H2O)20 in the experiments. Since isomer A has a highly symmetric oxygen network, the experimental IR spectrum should be sensitive to the influence of our sodium atom tag. We analyzed this effect and found only a weak influence on the general IR absorption pattern for sodium doped versions of isomer A (see ESI†). However, satellite peaks around the single DAA peak are predicted. Such spiky features in the DAA region are found in the Ne seed gas experiment (and in the difference spectrum), in which isomer A is present (see Fig. 3). The simulations further indicate that DDAA oscillators are slightly blue-shifted in the Na tagged cluster. This effect would be an alternative explanation for the relatively large absorption gap between the DAA and DDAA features of isomer A and the missing gap between DDAA and DDA oscillators in all spectra.

In the discussion above we have presented theoretical and experimental evidence that isomers A and B, the edge-sharing pentagonal prism and a drop-like structure, are the most stable configurations of (H2O)20. The assignment is anchored on the energy ranking based on high-level first principles electronic structure calculations (cf. Table 1) and the consistency between isomer selective, experimental IR spectra and theoretical predictions (Fig. 3). With regard to the ongoing tedious experimental and theoretical exploration of the (H2O)6 structures, initiated almost two decades ago, 8,11,14,15,49 the current study represents a significant step towards expanding the size regime of neutral water clusters that can be probed experimentally. The drop-like cluster is the first experimentally detected cluster with a fully coordinated interior water molecule featuring the local environment of liquid water. This feature was theoretically expected to occur first in water clusters with an odd number of water molecules, 17 starting with n = 17. The similarity of the spectral envelope for n = 19, 20, 21 (see Fig. 2) may indicate that the cluster sizes neighboring n = 20 also feature large fractions of isomers with interior water molecules. The DAA region and the corresponding free OH stretch features are similar for n = 21 in Fig. 2 and the spectrum assigned to the drop-like isomer of n = 20 in the middle panel of Fig. 3. We note that a centred dodecahedron was discussed as the global minimum for n = 21 17 and that one half of the surface of isomer B shows this topology of interconnected five-member rings. In the future the approach presented in this work may allow for further structural assignments of water clusters. However, at the moment it is not clear whether one or two energetically distinguished isomer families, as found for n = 20, exist for water clusters containing more than 20 water molecules.39
In Fig. 2 the IR spectrum of the sodium tagged (H2O)20 cluster is shown together with simultaneously measured ones for the cluster sizes n = 19, 21. Here, we concentrate on the marked region of the most red-shifted part of the spectrum, the DAA stretch motion. Only for the Na(H2O)20 cluster do we observe a characteristic double peak, which corresponds to the specific DAA features of isomers A and B.

Fig. 3 Comparison of experiment with theory. Maximum intensity normalized OH stretch spectra of Na(H2O)20 recorded in Ne (upper panel, offset 0.4) and Ar (middle panel, offset 0.8) seeded expansions are compared with simulated spectra of a mixture of isomer A (35%) and isomer B (65%). Expansion conditions: water at 323 to 353 K (0.12 to 0.47 bar) in Ar at 1.0 to 1.7 bar stagnation pressure; see Fig. 2 for Ne-seeded expansions and the supplement for more details. Photoionization was performed at 360 or 388 nm.
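As an illustration of the kind of comparison described above, one can build a broadened composite spectrum from two predicted stick spectra with the 0.96 frequency scaling and the 35%/65% weights, and form a Ne-minus-Ar style difference spectrum. This is only a sketch: the stick positions, intensities and line width below are placeholders, not the computed values for isomers A and B.

```python
import numpy as np

# Sketch (not code from the paper): broaden two harmonic stick spectra with
# Gaussians, apply the 0.96 frequency scaling, mix them 35% (isomer A) / 65%
# (isomer B), and form a "Ne minus Ar" style residual. All stick data are fake.

def broadened(freqs_cm, intensities, grid, fwhm=20.0, scale=0.96):
    """Normalized sum of Gaussians representing a broadened, scaled stick spectrum."""
    sigma = fwhm / 2.3548
    spec = np.zeros_like(grid)
    for f, i in zip(freqs_cm, intensities):
        spec += i * np.exp(-0.5 * ((grid - scale * f) / sigma) ** 2)
    return spec / spec.max()

grid = np.linspace(2850, 3800, 2000)                             # OH stretch region, cm^-1
isomer_a = broadened([3270, 3430, 3560], [1.0, 0.8, 0.5], grid)  # placeholder sticks
isomer_b = broadened([3230, 3400, 3520], [1.0, 0.9, 0.6], grid)  # placeholder sticks

ne_like = 0.35 * isomer_a + 0.65 * isomer_b   # composite compared with the Ne spectrum
difference = ne_like - 0.65 * isomer_b        # residual highlighting the isomer A pattern
print(grid[np.argmax(difference)])            # frequency where isomer A dominates
```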
4,342.8
2014-11-19T00:00:00.000
[ "Physics", "Chemistry" ]
The TileCal Online Energy Estimation for the Next LHC Operation Period The ATLAS Tile Calorimeter (TileCal) is the detector used in the reconstruction of hadrons, jets and missing transverse energy from the proton-proton collisions at the Large Hadron Collider (LHC). It covers the central part of the ATLAS detector (|η| < 1.6). The energy deposited by the particles is read out by approximately 5,000 cells, with double readout channels. The signal provided by the readout electronics for each channel is digitized at 40 MHz and its amplitude is estimated by an optimal filtering algorithm, which expects a single signal with a well-defined shape. However, the LHC luminosity is expected to increase, leading to pile-up that deforms the signal of interest. Due to limited resources, the current hardware setup, which is based on Digital Signal Processors (DSP), does not allow the implementation of sophisticated energy estimation methods that deal with the pile-up. Therefore, the technique to be employed for online energy estimation in TileCal for the next LHC operation period must be based on fast filters such as the Optimal Filter (OF) and the Matched Filter (MF). Both the OF and MF methods envisage the use of the background second order statistics in their design, more precisely the covariance matrix. However, the identity matrix has been used to describe this quantity. Although this approximation can be valid for low luminosity LHC, it leads to biased estimators under pile-up conditions. Since most of the TileCal cells present low occupancy, the pile-up, which is often modeled by a non-Gaussian distribution, can be seen as outlier events. Consequently, the classical covariance matrix estimation does not correctly describe the second order statistics of the background for the majority of the events, as this approach is very sensitive to outliers. As a result, the OF (or MF) coefficients are miscalculated, leading to a larger variance and a biased energy estimator. This work evaluates the usage of a robust covariance estimator, namely the Minimum Covariance Determinant (MCD) algorithm, to be applied in the OF design. The goal of the MCD estimator is to find a number of observations whose classical covariance matrix has the lowest determinant. Hence, this procedure avoids taking into account low-likelihood events to describe the background. It is worth mentioning that the background covariance matrix as well as the OF coefficients for each TileCal channel are computed offline and stored for both online and offline use. In order to evaluate the impact of the MCD estimator on the performance of the OF, simulated data sets were used. Different average numbers of interactions per bunch crossing and bunch spacings were tested. The results show that the estimation of the background covariance matrix through MCD significantly improves the final energy resolution with respect to the identity matrix which is currently used. Particularly, for high occupancy cells, the final energy resolution is improved by more than 20%. Moreover, the use of the classical covariance matrix degrades the energy resolution for the majority of TileCal cells.
1 Introduction

The ATLAS [1] Tile Calorimeter (TileCal) [2] is the detector used in the reconstruction of hadrons, jets and missing transverse energy from the proton-proton collisions at the Large Hadron Collider (LHC) [3]. It is composed of one central and two extended barrels covering the most central part of the ATLAS detector (|η| < 1.6). The energy deposited by the particles is read out by approximately 5,000 cells, with double readout channels. Figure 1 shows the TileCal readout granularity, which consists of three radial layers (A, BC and D) with ∆η × ∆φ = 0.1 × 0.1 (0.2 × 0.1 in the outermost layer). There is also the E layer, which consists of four scintillator plates per module and is located in the extended barrels. The signal provided by the readout electronics [4] for each channel is digitized at 40 MHz and its amplitude is estimated by an optimal filtering algorithm, which expects a single signal with a well-defined shape. With the increase of the luminosity during Run2, the pile-up signal deforms the signal of interest. For illustration purposes, Figure 2 shows the readout window with the expected signal pulse shape (black line) plus an out-of-time signal (red line). The resultant signal (magenta line) corresponds to the signal that is acquired by the front-end electronics.

Due to limited resources, the current Digital Signal Processors (DSP) based hardware setup does not allow the implementation of very sophisticated energy estimation methods; therefore, the online energy estimation must be based on fast filters such as the Optimal Filter (OF) [5]. The OF method uses the background second order statistics in its design, more precisely the covariance matrix. The identity matrix was used during Run1 to describe this quantity, although this approximation (valid for low luminosity LHC) can lead to larger variance and biased estimators under pile-up conditions, as will be shown in Section 3.2. This work evaluates the use of the background covariance matrix in the design of the OF weights to improve the energy estimation performance under pile-up conditions. The next section describes the TileCal energy estimation algorithm and how we plan to improve it for LHC Run2. Section 3 shows the results using a simulated data set, where different versions of the OF method are evaluated. Finally, the conclusions are presented in Section 4.

TileCal Energy Estimation

The OF method was implemented to operate both online and offline in TileCal during Run1. The algorithm is based on a weighted sum of the received ADC samples, aiming at minimizing the variance of the estimated amplitude. The vector s contains the received ADC samples, N is the number of samples available and the vector w corresponds to the OF weights, which are computed offline. The variance of the amplitude parameter to be minimized is given by w^T C w, where the matrix C corresponds to the background covariance matrix.

The current implementation of the OF in TileCal is called OF2, and it performs the optimization procedure subject to constraints in which the vectors g and g' are the TileCal reference pulse shape and its derivative, respectively. The first constraint (Equation 3) sets the energy scale factor, while the additional second and third constraints (Equations 4 and 5) are added to make the estimation procedure immune against phase and baseline fluctuations, respectively. The weights w can then be found by solving the linear system obtained from this constrained minimization, where λ, ξ, ν are the Lagrange multipliers.
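A minimal numerical sketch of the constrained minimization just described, assuming the standard OF2 constraints (unit response to the reference pulse, zero response to its derivative and to a constant baseline); the pulse shape, covariance matrix and number of samples below are placeholders rather than TileCal values.

```python
import numpy as np

# Sketch of an OF2-style weight computation: minimize w^T C w subject to
# w.g = 1 (energy scale), w.g' = 0 (phase immunity), w.1 = 0 (baseline immunity).
# Pulse shape, covariance and sample count are placeholders, not TileCal data.

N = 7
t = np.linspace(-75.0, 75.0, N)                    # ns, assuming 25 ns sampling
g = np.exp(-0.5 * (t / 30.0) ** 2)                 # placeholder reference pulse shape
g_prime = np.gradient(g, t)                        # its derivative
C = np.identity(N)                                 # replace with the background covariance

# KKT system for the equality-constrained quadratic problem:
# [[2C, A^T], [A, 0]] [w; lambda] = [0; b], with A = [g; g'; 1] and b = (1, 0, 0).
A = np.vstack([g, g_prime, np.ones(N)])
K = np.block([[2.0 * C, A.T], [A, np.zeros((3, 3))]])
rhs = np.concatenate([np.zeros(N), [1.0, 0.0, 0.0]])
w = np.linalg.solve(K, rhs)[:N]                    # the OF2 weights

fake_event = 123.0 * g + 5.0                       # amplitude 123 on a baseline of 5
print(np.dot(w, fake_event))                       # ~123: baseline-immune estimate
```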
Since the electronic noise can be modeled as an uncorrelated Gaussian process, the covariance matrix C was replaced by an identity matrix during LHC Run1 data taking. If the third constraint (Equation 5) is removed from the optimization procedure, we call this method OF1. The difference is that OF2 computes the baseline value on an event-by-event basis, while OF1 relies on the stability of the baseline and subtracts a fixed value from each incoming ADC sample (see Equation 7). The constant value ped is computed through a special run and stored in a database. It is worth mentioning that OF1 achieves performance similar to the Gaussian Matched Filter (MF) based approach, which was recently proposed for TileCal energy estimation [6]. The MF method is derived from the likelihood ratio test and, consequently, can be designed for any background model. However, for a non-Gaussian background, its design leads to a nonlinear digital circuit, which is difficult to implement.

Online Energy Estimation for Run2

As the pile-up introduces correlation between the samples, the use of the background covariance matrix is expected to improve the performance of the OF method under pile-up conditions. Moreover, in TileCal, the occupancy of most of the readout cells is low, and the pile-up signal can be considered an outlier for the majority of the cells. Additionally, since the covariance matrix is very sensitive to outliers, alternative ways of computing this quantity must be considered. The classical covariance matrix estimation [7] takes into account the whole dataset regardless of the presence of outliers. As a result, the OF weights can be miscalculated, leading to a larger variance and a biased energy estimator. Alternatively, the Minimum Covariance Determinant Estimator (MCDE) [8] provides a more careful way of computing the covariance matrix (illustrated in the sketch after the data set description below). The MCDE algorithm randomly takes a subset of the noise events and computes its classical covariance matrix and its determinant. The algorithm repeats this procedure 500 times (by default) and selects the subset that resulted in the lowest determinant. This subset contains the events that have the lowest covariance between the samples (ADC digits) and therefore consists of the most probable events, discarding most of the outliers (high energy pile-up in this case).

Results

This section presents the data set used; the performance evaluation is carried out in terms of the estimation error for both the OF1 and OF2 methods. The OF1 and OF2 weights were designed using the identity matrix as well as the covariance matrix computed with the MCDE algorithm.

Data set

The data set consists of 100,000 events for the TileCal E3 (η = 1.3) and E4 (η = 1.5) cells, which are the highest occupancy cells in TileCal. The events are full ATLAS Monte Carlo simulations of Minimum Bias (MB) events with 25 ns bunch spacing and an average number of p-p interactions per bunch crossing (<µ>) of 40 [9]. Only MB pile-up signals (both in-time and out-of-time) and electronic noise are present in the events.
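A small sketch of the robust-covariance idea described above, using scikit-learn's MinCovDet as a stand-in MCD implementation; this is not necessarily the tool used by the authors, and the simulated "events", occupancy and pile-up pulse shape are all made up.

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance, MinCovDet

# Background "events" of 7 ADC samples each; a small fraction carries a fake
# out-of-time pile-up pulse acting as an outlier. MinCovDet is one readily
# available MCD implementation (an assumption, not the authors' tool).

rng = np.random.default_rng(0)
n_events, n_samples = 10_000, 7
noise = rng.normal(0.0, 1.5, size=(n_events, n_samples))        # electronic noise
has_pileup = rng.random(n_events) < 0.05                        # assumed 5% occupancy
pulse = 40.0 * np.array([0.0, 0.1, 0.6, 1.0, 0.7, 0.3, 0.1])    # placeholder pile-up shape
events = noise + np.outer(has_pileup, pulse)

C_classical = EmpiricalCovariance().fit(events).covariance_
C_robust = MinCovDet(random_state=0).fit(events).covariance_    # MCD-based estimate

# The robust estimate stays close to the pure-noise covariance, while the
# classical one is inflated by the pile-up outliers.
print(np.linalg.det(C_classical), np.linalg.det(C_robust))
```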
Performance evaluation

The designed OF1 and OF2 were applied to the data set described previously. Since the events considered above comprise only background, the energy estimation is expected to be centered at zero, with the smallest possible dispersion. These distributions can be seen as the error introduced by each method into the final energy estimate under this pile-up condition. Figure 3 shows the energy estimation for the E3 cell. It can be noticed that the use of the covariance matrix in the computation of the OF1 weights gives the best performance. Additionally, the positive tail in the distribution is due to the high-order statistics of the in-time pile-up signals, which cannot be accessed through the covariance matrix alone. Figure 4 shows the energy distribution for the E4 cell, which is the highest occupancy channel in TileCal. In order to summarize the parameters of each distribution, Table 1 shows the mean and RMS values for both the E3 and E4 cells. As expected, the OF method becomes biased and also increases its variance under pile-up. The best performance is achieved by OF1 using the covariance matrix. In the case of these two highest occupancy cells, the bias (mean value) could be stored in a database and subtracted from each estimate as a correction. This bias is not present for the other TileCal cells. It is worth mentioning that the mean and RMS were computed, like the covariance matrix, using a robust approach [8].

Figure 5 summarizes the improvement in the energy resolution (RMS of the energy distribution) obtained using the covariance matrix with respect to the identity matrix for all cells in the central and extended barrels. The improvement is computed as the relative reduction of the RMS with respect to the identity-matrix case (Equation 8; see also the short sketch after the figure captions below). For cells located in the layers BC and D, which are not highly affected by the pile-up, the improvement is minimal in the TileCal central barrel (|η| < 1). In those cells, the identity approximation remains a reasonable description of the background for the majority of the events considered in this pile-up condition. For the cells located in layers A and E in the extended barrel regions, on the other hand, the improvement becomes substantial, especially for the E cells, which are closer to the beam and more exposed to the pile-up.

Conclusions

We present a study on the use of the background covariance matrix in the algorithm that will be used for TileCal online energy estimation during the next LHC run. The background covariance matrix can be used to reduce the uncertainties and bias due to the pile-up under high luminosity conditions. The results show that the OF using a fixed baseline value and the correct background covariance matrix presented the best performance in terms of estimation error. Most of the cells located in the central barrel did not show improvements, due to their relatively low occupancy in the considered scenario. However, in some E cells in the extended barrel, the improvement is as large as 60% with respect to the identity matrix. Both the baseline value and the background covariance matrix are computed offline and stored in a database. The OF weights are also computed offline and loaded into the DSPs for the online energy estimation.

Figure 1. TileCal cell segmentation for half barrel and one extended partition.
Figure 2. Illustration of the pile-up effect in TileCal.
Figure 3. Energy distribution for cell E3 using both OF1 and OF2.
Figure 4. Energy distribution for cell E4 using both OF1 and OF2.
Figure 5. Improvement as a function of η obtained with OF1 designed with the covariance matrix, with respect to OF1 designed with the identity matrix.
Table 1. Summary of the mean and RMS values of the energy distributions for the E3 and E4 cells (in MeV).
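Equation 8 itself is not reproduced in this extraction; a natural reading of the percentages quoted above, stated here only as an assumption, is the relative RMS reduction of the background-only energy distribution:

```python
# Hypothetical form of the quoted improvement (Equation 8 is not shown in this
# extraction): the relative reduction of the RMS of the background-only energy
# distribution when the MCD covariance matrix replaces the identity matrix.

def improvement_percent(rms_identity_mev: float, rms_covariance_mev: float) -> float:
    return 100.0 * (rms_identity_mev - rms_covariance_mev) / rms_identity_mev

print(improvement_percent(250.0, 100.0))  # placeholder RMS values -> 60.0
```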
3,095.4
2015-05-22T00:00:00.000
[ "Physics" ]
Relative entropy dimension of topological dynamical systems We introduce the notion of relative topological entropy dimension to classify the different intermediate levels of relative complexity for factor maps. By considering the dimension or "density" of a special class of sequences along which the entropy is encountered, we provide equivalent definitions of relative entropy dimension. As applications, we investigate the corresponding localization theory and obtain a disjointness theorem involving relative entropy dimension.

1. Introduction. Since entropy was introduced by Kolmogorov from information theory, it has played an important role in the study of dynamical systems. Shannon's entropy (information entropy) is known to be the average information content in information science. Entropy (resp. relative entropy) is an important conjugacy invariant for a dynamical system (resp. a factor map). Since zero entropy systems make up a dense G_δ subset of all homeomorphisms, there are several kinds of works about conjugacy invariants of zero entropy systems, for example, sequence entropy [19,15], maximal pattern entropy [13], entropy dimension [2,6,3] and slow entropy [17]. But there are no relative conjugacy invariants for zero entropy factor maps. We are attempting to systematically study relative zero-entropy invariants, especially relative invariants that can classify the different intermediate levels of relative complexity.

Entropy dimension for a topological dynamical system, first introduced by M. De Carvalho in [2], measures the superpolynomial but subexponential growth rate of the number of open sets that cover the space, taken from the sequence of iterated open covers. S. Ferenczi and K. K. Park introduced the entropy dimension in [6] to measure the complexity of entropy zero measurable dynamics. It measures the growth rate of H(⋁_{i=0}^{n-1} T^{-i}P). D. Dou, W. Huang and K. K. Park in [3] introduced the notion of the dimension (upper and lower) of a subset of Z with density 0. They used the dimension of a special class of sequences, which were called entropy generating sequences, to measure the complexity of a topological system and showed that the topological entropy dimension can be computed through the dimensions of entropy generating sequences. We would like to define these notions in the relative setting and study their properties.

In positive entropy systems, the "independence" point of view enables a better understanding of how the complexity of a system is produced. One can find an infinite subset W ⊂ Z_+ in the case of positive entropy such that along the sequence W the symbolic names are "independent" and there exists c > 0 with lim inf_{n→∞} |W ∩ [1, n]|/n > c. We prove that if a system (X, T) relevant to a factor map has positive relative entropy dimension, then there exists a relative entropy generating sequence S ⊂ Z_+ which is a union of disjoint finite sets along which the dynamics are "independent". Given a sequence S ⊂ Z_+, we define the dimension of the sequence and show that the relative entropy dimension of the system is the supremum of the dimensions of the relative entropy generating sequences.

H. Furstenberg in [7] first introduced the concept of disjointness to characterize the difference in dynamical behavior between two systems. Two well-known examples are that in measurable dynamics K-mixing systems are disjoint from ergodic zero entropy systems and that weak mixing systems are disjoint from group rotations.
In the case of topological settings, these properties are explored in [1,10,12,11]. D. Dou, W. Huang and K. K. Park in [3] introduced the notion of the dimension set D(X, T) ⊂ [0, 1] of a zero entropy topological system (X, T) to measure the various levels of topological complexity of subexponential growth rate. They investigated the property of disjointness within entropy zero systems via the dimension set and proved that, under some conditions on the dimension sets and minimality, two dynamical systems with disjoint dimension sets are disjoint. This is a refinement and also a generalization of the result that uniformly positive entropy (u.p.e.) systems are disjoint from minimal and entropy zero systems. Relevant results for measure-theoretic settings can be found in recent work by D. Dou, W. Huang and K. K. Park in [4]. In this paper we would like to introduce the notions of relative dimension tuples and the relative dimension set and prove that two extensions with disjoint relative dimension sets for all orders are disjoint over the same dynamical system under some conditions. This can also be regarded as a generalization of the result that an open rel.-u.p.e. extension of all orders is disjoint from any relative minimal and relative zero entropy extension in [14].

The paper is organized as follows. In Sect. 2, we give the definitions and some basic properties of relative entropy dimension. In Sect. 3, we consider the dimensions of the relative entropy generating sequences and investigate the interrelations among these dimensions. In Sect. 4, we give the notions of relative dimension tuples and dimension sets and prove a disjointness theorem with respect to the relative dimension set.

2. Relative entropy dimension. In this paper, a topological dynamical system (TDS, for short) is a pair (X, T), where X is a compact metric space endowed with a self-homeomorphism T. Before we introduce the notion of relative entropy dimension for a TDS, we recall some definitions. Given a TDS (X, T), denote by C_X the set of finite covers of X and by C_X^o the set of finite open covers of X. Given two covers U, V ∈ C_X, we say that U is finer than V (written U ⪰ V) if each element of U is contained in some element of V, and we let U ∨ V = {U ∩ V : U ∈ U, V ∈ V}. Clearly, U ∨ V ⪰ U and U ∨ V ⪰ V. Given integers m ≤ n and a cover U ∈ C_X, let U_m^n = ⋁_{i=m}^{n} T^{-i}U. Let (X, T) and (Y, S) be two TDSs. Suppose that (Y, S) is a factor of (X, T) in the sense that there exists a continuous surjective map π : (X, T) → (Y, S) such that π ∘ T = S ∘ π. The map π is called a factor map from X to Y. Let π : (X, T) → (Y, S) be a factor map and U ∈ C_X^o. For E ⊆ X, let N(U, E) denote the smallest cardinality of a subcover of U which covers E, and define N(U|π) = sup_{y∈Y} N(U, π^{-1}(y)). For α ≥ 0, we define D̄(T, U, α|π) = lim sup_{n→∞} (1/n^α) log N(U_0^{n-1}|π). It is clear that D̄(T, U, α|π) does not decrease as α decreases, and D̄(T, U, α|π) ∉ {0, +∞} for at most one α ≥ 0. We define the relative upper entropy dimension of U relevant to π, D̄(T, U|π), as the critical value of α at which D̄(T, U, α|π) drops from +∞ to 0. Similarly, with D̲(T, U, α|π) defined through the lim inf, D̲(T, U, α|π) does not decrease as α decreases, and D̲(T, U, α|π) ∉ {0, +∞} for at most one α ≥ 0. We define the relative lower entropy dimension of U relevant to π, D̲(T, U|π), analogously. If D̄(T, U|π) = D̲(T, U|π) = α, then we say U has relative entropy dimension α. Clearly 0 ≤ D̲(T, U|π) ≤ D̄(T, U|π) ≤ 1, and if h(T, U|π) > 0, then the relative entropy dimension of U is equal to 1.

Definition 2.1. Let π : (X, T) → (Y, S) be a factor map between TDSs. The relative upper (resp. lower) entropy dimension of the TDS (X, T) relevant to π is the supremum of D̄(T, U|π) (resp. D̲(T, U|π)) over all U ∈ C_X^o. When D̄(T|π) = D̲(T|π), we just call the common value the relative entropy dimension of (X, T) relevant to π, denoted by D(T|π).
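The display formulas in this part did not survive extraction; the following LaTeX sketch records how such quantities are typically written in the setting of [3], as a reading aid only (the exact conventions of this paper may differ in details), together with the dimension of a sequence used for entropy generating sequences in the next section.

```latex
% Sketch following the conventions of Dou--Huang--Park [3]; details may differ here.
\[
  \overline{D}(T,\mathcal{U},\alpha\mid\pi)
    \;=\; \limsup_{n\to\infty} \frac{1}{n^{\alpha}}
      \log N\!\Big(\textstyle\bigvee_{i=0}^{n-1} T^{-i}\mathcal{U}\,\Big|\,\pi\Big),
  \qquad
  \overline{D}(T,\mathcal{U}\mid\pi)
    \;=\; \sup\{\alpha \ge 0 : \overline{D}(T,\mathcal{U},\alpha\mid\pi) > 0\},
\]
% with lower versions defined via liminf. For a sequence S \subset \mathbb{Z}_+,
% the (upper) dimension used for entropy generating sequences is
\[
  \overline{D}(S) \;=\; \limsup_{n\to\infty}
    \frac{\log \lvert S \cap \{1,\dots,n\}\rvert}{\log n}.
\]
```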
When (Y, S) is a trivial system, we recover the classical entropy dimension(see [3] ), and in this case we shall omit the restriction on π. The following two propositions are the basic properties of relative entropy dimension. Hence lim sup This implies D(T, U ∨ V|π) < α. Since α is arbitrary, we have By (1) we have the result. For (4), from (1) the first inequality is obvious. If α > max{D(T, U|π), D(T, V|π), min{D(T, U|π), D(T, V|π)}, without loss of generality we assume α > D(T, U|π) and α > D(T, V|π). Then As a direct application of Proposition 2.2, we have Let (X, T ) be a TDS. A cover {U, V } of X which consists of two non-dense open sets of X is called a standard cover of X. Denote by C s X the set of all standard covers of X. The following proposition shows that the relative upper entropy dimension with respect to standard covers determine the relative upper entropy dimension of the system. Proposition 2.4. Let π : (X, T ) → (Y, S) be a factor map between TDSs. Then Proof. We follow the argument in the proof of Proposition 1 in [1]. In the following, we will investigate the dimension of a special kind of sequence, which we call the relative entropy generating sequence. Let π : (X, T ) → (Y, S) be a factor map between TDSs and U ∈ C o X . We say an increasing sequence of integers S = {s 1 < s 2 < · · · } is a relative entropy generating Denote by E(T, U|π) the set of all relative entropy generating sequences of U relevant to π and by P(T, U|π) the set of sequence S = {s 1 < s 2 < · · · } of Z + with the property that In other words, P(T, U|π) is the set of increasing sequence of integers along which U has positive relative entropy. Definition 3.1. Let π : (X, T ) → (Y, S) be a factor map between TDSs and U ∈ C o X . We define Similarly, we can define D e (T, U|π) and D p (T, U|π) by changing the upper dimension into the lower dimension. Similarly, we can define D e (T |π) and D p (T |π). Let k ≥ 2 and B be a nonempty finite subset of Z + . Assume U is the cover of {0, 1, · · · , k} B = z∈B {0, 1, · · · , k} consisting of subsets of the form z∈B {i z } c , where 1 ≤ i z ≤ k and {i z } c = {0, 1, · · · , k} \ {i z } for each z ∈ B. For S ⊆ {0, 1, · · · , k} B we let C S denote the minimal cardinality of subcovers of U one needs to cover S. Note that we shall use natural logarithms unless we explicitly indicate otherwise. The proof of this lemma is completely similar to the proof of Lemma 3.7 in [3]. With the help of above lemma, we have the following theorem. Let τ j−1 < η j < τ j for j ∈ N. By Lemma 3.4, there exists N j ∈ N such that for every finite set B with |B| ≥ N j and N ( i∈B T −i U|π) ≥ e a 2 |B| τ j , we can find W ⊆ B with |W | ≥ |B| ηj and {A 1 , A 2 } which is independent along W relevant to π. This shows F ∈ E(T, U|π). Note that lim sup U|π). This finishes the proof. If U c 1 is a singleton then we put U 1 1 = U 1 . If U c 1 has at least two different points y and y , fix 1 > 0 with 1 ≤ d(y,y ) 4 , and construct a cover of U c 1 by open balls with radius 1 centered in U c 1 ; call it A. Since U c 1 is compact, there exist A 1 , A 2 , · · · , A u ∈ A such that , · · · , u. By the choice of 1 , each closed set F i is a proper subset of U c 1 with diam(F i ) ≤ 1 2 diam(U c 1 ). Since By the above a), we have ∞ j=1 (U j 1 ) c = {x 1 }, · · · , ∞ j=1 (U j n ) c = {x n } for some x 1 , . . . , x n in X. Moreover, since {U 1 , . . . , U n } covers X and x i ∈ (U i ) c , we have (x i ) n 1 ∈ X (n) \ ∆ n (X). Finally, we show D(x 1 , . . . , x n |π) ≥ D(T, U|π). 
Since (x_i)_{i=1}^n ∈ X^(n) \ ∆_n(X), without loss of generality, we assume x_1 ≠ x_n. There exists k_0 ∈ N such that B(x_1, 1/k_0) ∩ B(x_n, 1/k_0) = ∅. For any k ≥ k_0, there exists j(k) ∈ N large enough such that … . This finishes the proof of the lemma. Proposition 4.3. Let π : (X, T) → (Y, S) and π' : (Z, R) → (X, T) be two factor maps.
3,105.2
2019-08-22T00:00:00.000
[ "Computer Science", "Mathematics" ]
On the Thermodynamics of Particles Obeying Monotone Statistics The aim of the present paper is to provide a preliminary investigation of the thermodynamics of particles obeying monotone statistics. To render the potential physical applications realistic, we propose a modified scheme called block-monotone, based on a partial order arising from the natural one on the spectrum of a positive Hamiltonian with compact resolvent. The block-monotone scheme is never comparable with the weak monotone one and is reduced to the usual monotone scheme whenever all the eigenvalues of the involved Hamiltonian are non-degenerate. Through a detailed analysis of a model based on the quantum harmonic oscillator, we can see that: (a) the computation of the grand-partition function does not require the Gibbs correction factor n! (connected with the indistinguishability of particles) in the various terms of its expansion with respect to the activity; and (b) the decimation of terms contributing to the grand-partition function leads to a kind of "exclusion principle" analogous to the Pauli exclusion principle obeyed by Fermi particles, which is more relevant in the high-density regime and becomes negligible in the low-density regime, as expected.

Introduction

In recent years, the investigation of exotic models has significantly increased in the hope of making some progress in solving long-standing unsolved problems involved in the physics of complex models. In this regard, we certainly must mention the question of providing a satisfactory mathematical description of quantum electrodynamics, which obtains predictions via the renormalisation technique that are in surprisingly perfect accordance with the experiments. Along the same line, we can identify the models which aim to study and unify the strong interactions (i.e., quantum chromodynamics) with the electroweak ones. All these models are called "standard" models and have the same strengths and weaknesses. That is, they are in good accordance with the experiments but not satisfactory from the mathematical point of view. The long-standing problem of unifying these three fundamental forces present in nature with the remaining one, that is, gravitation, which was recently addressed through the use of so-called noncommutative geometry (e.g., [1]), is very far from being solved, even in a partial form. We should also mention the potential relevance of the investigation of models enjoying exotic commutation relations for some other disciplines, such as information theory and quantum computing, both of which are connected with, and relevant for, concrete applications.

Among such models, we can certainly find those associated with the so-called q-particles, or quons, q ∈ (−1, 0) ∪ (0, 1), from the perspective of the extension to the so-called anyons, corresponding to the case when the parameter q assumes values at some roots of unity, and plektons. Such exotic q-particles are naturally associated with the following commutation relations:

a(f) a†(g) − q a†(g) a(f) = ⟨f, g⟩ 1, f, g ∈ H, (1)

H being the one-particle space, where the creators and annihilators act on the corresponding Fock spaces. The quons can certainly be seen as an interpolation between the particles obeying the Fermi statistics (i.e., q = −1) and those obeying the Bose statistics (i.e., q = 1), passing through the q = 0 value describing the classical particles and, thus, obeying the Boltzmann statistics.
We can observe that for q = ±1, the commutation rules (1) are reduced to the well-known ones associated with the Bose and Fermi particles, respectively (see, e.g., [2]). All such models are relevant to the so-called quantum probability. In fact, the Boltzmann case q = 0, describing the statistics of classical particles concerning the physical meaning, is also known as free because it naturally arises from a particular case of quantum probability called free probability (see, e.g., [3]). In the setting of quantum probability, the various generalisations of the commutation rules (1) allow one to introduce and investigate exotic quantum stochastic processes (see, e.g., [4]). As is well-known in the Bose and Fermi cases, all the above-mentioned commutation rules naturally arise from the so called second quantisation, which is associated with the so-called grand-canonical ensemble. The various functors of the second quantisation allow one to construct the corresponding Fock spaces. The Fock space encodes the statistics that the involved particles obey and allows the involved commutation relations to be faithfully represented. We can also remark that the grand-canonical ensemble allows one to investigate one of the most fascinating phenomena occurring in the condensate state of the matter, involving bosons, that is, the Bose-Einstein Condensation in the fundamental state (see, e.g., [2,5]). In [6], it is shown that such a phenomenon of condensation also appears in the case of Bose-like quons, that is, when q ∈ (0, 1]. Returning to exotic commutation relations and their applications in quantum probability, we can cite the Boolean and Monotone ones. As in the case of all the models mentioned above, they satisfy commutation relations falling into the general form described in [7] (Corollary 3.2), and are therefore associated with a suitable Fock space. The Boolean Fock space is the simplest non-trivial example of second quantisation, because only one particle can be created and/or annihilated. In fact, it describes the absorption of a single photon, at most, from an apparatus (see [8]). The monotone statistics of particles, independently introduced in [9,10], do not seem to have any evident physical application. The arising Fock space is easily constructed, as described in Section 2, using the monotonic prescription induced by a totally ordered orthonormal basis of the one-particle Hilbert space. Now, we can also point out the well-known deep connection between the second quantisation scheme and the equilibrium statistical mechanics (see, e.g., [2,5]). Indeed, starting from a Fock space constructed by taking into account the statistics of the involved particles, we can compute, at least in principle, the so-called grand-partition function. Such a crucial function is supposed to encode all the thermodynamic properties enjoyed by a large number (of the order of the Avogadro number N A ∼ 10 23 ) of involved particles. This is certainly true for the Bose and Fermi cases and also, due to its simplicity, for the boolean one, in which the indistinguishability of the particles plays no role. Due to the Gibbs paradox (cf. [5]), the computation of the grand-partition function in the Boltzmann case, in terms of the associated full Fock space, deserves a suitable correction due to the supposed indistinguishability of the involved particles (see below). 
The general case of the quons (i.e., q ∈ (−1, 0) ∪ (0, 1)) is differently solved in [11], since the necessarily "deformed" statistics that such exotic particles obey are completely unknown. The case of non-interacting particles obeying monotone statistics, simply called monotone particles in the following pages, is unclear for two reasons. The first one is that the concrete physical applications of such a model are completely unknown. The second one is that we do not know whether the monotone scheme directly encodes the principle of the indistinguishability of particles, the latter being a fundamental prescription for the development of equilibrium statistical mechanics. Taking into account all the previous considerations, it becomes natural to address the investigation of the thermodynamic properties of particles obeying the monotone prescription, which are encoded in the monotone Fock space. Unfortunately, since there is no natural, total order of the one-particle subspace on an orthonormal basis, this investigation deserves an appropriate preliminary analysis. A simple one-particle physical system confined in a finite volume is essentially described by a Hamiltonian H, which is a self-adjoint positive operator with compact resolvent acting on a separable Hilbert space H. The statistics (i.e., Bose/Fermi or Boltzmann) of very large systems formed of a number of the order of the Avogadro number of non-interacting particles is encoded in the corresponding Fock space. Since there is a natural order of the eigenvectors of H induced by the corresponding eigenvalues (i.e., the energy levels of the system under consideration), one is tempted to use such an order to implement the monotone scheme for models of statistical mechanics. This can be carried out only when the eigenvalues of H all have a multiplicity of 1 or, in simple terms, when any energy level of the model is non-degenerate. Unfortunately, this is not the case for all concrete models when the degeneracy of all the energy levels increases to infinity in the thermodynamic limit (i.e., when, in particular, the volume of the system tends to occupy the whole environment), in which case the so-called "passage to the continuum" can be performed (see, e.g., [5,12]). The passage to the continuum is the fundamental tool used to investigate the thermodynamic properties of more realistic models in which the one-particle Hamiltonian has a continuum spectrum, such as a free particle living in R 3 . In the present paper, we propose a method that can be used to overcome this basic difficulty and, thus, take into account the possible degeneracy of the energy levels. Indeed, we simply generalise the monotone prescription to index sets, which are merely partially ordered according to those arising from the spectrum of a positive, compact resolvent Hamiltonian with possible degenerate energy levels. This model, called block-monotone in the following pages, which is expected to be more suitable for physical applications, is described in Section 3. Since the grand-partition function of a system associated with (block-)monotone particles is not directly computable in most infinite dimensional cases, the remaining part of the paper is devoted to a detailed analysis of a simple model formed of infinite uncoupled quantum harmonic oscillators. Since the corresponding Hamiltonian has non-degenerate eigenvalues, such a computation falls into the monotone scheme. The grand-partition function for such a model is computed in Section 4. 
Section 5 is then devoted to the explicit computation of the statistical weights appearing in the expansion of the monotone grand-canonical partition function and to a refined study of the high- and low-density regimes. Such an investigation leads to the following relevant facts. First of all, the computation of such statistical weights suggests that the Gibbs correction factor n!, connected with the indistinguishability of particles in the various terms of the expansion with respect to the activity, directly appears in the low-density regime, that is, when the temperature of the system becomes increasingly higher. This suggests that the monotone scheme directly encodes the indistinguishability of the involved particles. Secondly, the decimation of the terms contributing to the grand-partition function provides a kind of "exclusion principle" analogous to the Pauli exclusion principle observed for Fermi particles. Such an exclusion principle appears to be more relevant in the high-density regime and becomes negligible in the low-density regime, as expected. The last part of Section 5 is devoted to a refined comparison between the grand-partition functions relative to the Boltzmann and monotone models, allowing us to estimate the correction to the relevant thermodynamic quantities, such as the average number, in the low-density regime. To conclude the present introduction, we point out that this preliminary investigation seems to provide a promising perspective concerning the potential physical applications of the block-monotone scheme, to which we plan to return in a future work.

Preliminaries

One-particle Hamiltonian. We start with a system whose Hamiltonian H is a self-adjoint positive (i.e., σ(H) ⊂ [0, +∞)) operator with compact resolvent, acting on a separable Hilbert space H, called the one-particle space. In such a situation, the spectrum σ(H) is formed of isolated points, accumulating at +∞ if H is infinite dimensional. In addition, the multiplicity g(ε) of each eigenvalue ε ∈ σ(H) is finite. In summary, by considering the resolution of the identity of H, we obtain H = Σ_{ε∈σ(H)} ε P_ε, where P_ε is the finite-rank spectral projection onto the eigenspace of ε, with Tr(P_ε) = g(ε). Let k_B ≈ 1.3806488 × 10^−23 J K^−1 be the Boltzmann constant, and β := 1/(k_B T) the "inverse temperature". Assuming that e^{−βH} is trace class for each β > 0, we can define the partition function ζ := Tr(e^{−βH}).

The grand-partition function. Here, we define the grand-partition function in a relatively general framework, for a gas comprising non-interacting particles obeying rather general statistics, and thus potentially suitable for physical applications. The knowledge of such grand-partition functions plays a crucial role in the so-called equilibrium statistical mechanics. The standard method for such an analysis is the so-called second quantisation (see, e.g., [2,5]). Indeed, for the one-particle Hilbert space H, we define the so-called full Fock space F_0(H) ≡ F, given by F := ⊕_{n≥0} H^⊗n, with the convention that the 0-fold tensor product H ⊗ · · · ⊗ H is CΩ, where Ω is the so-called vacuum vector. The number operator N has a clear meaning (see e.g., [2]). For a linear operator A with domain D ⊂ H, we define dΓ_o(A)(u_1 ⊗ · · · ⊗ u_n) := Σ_{k=1}^{n} u_1 ⊗ · · · ⊗ Au_k ⊗ · · · ⊗ u_n on elementary tensors of vectors in D, and extend it to the whole Fock space by linearity. If A is self-adjoint, the closure dΓ(A) of dΓ_o(A) will still be self-adjoint (see, e.g., [2]). Note that dΓ(I_H) provides the number operator. Now, let P be a self-adjoint projection acting on F.
For a fixed positive operator H (i.e., a Hamiltonian) and parameters β > 0 (the inverse temperature) and µ ∈ R (the chemical potential), such that P e^{−β dΓ(H−µI)} P ≡ P e^{−β(dΓ(H)−µN)} P is trace class, we define the grand-partition function (Formula (2)) as the trace of this operator, Z := Tr(P e^{−β(dΓ(H)−µN)} P). The most important cases, describing the thermodynamics of Bose and Fermi gases, are those in which P is the self-adjoint projection onto the completely symmetric or the completely antisymmetric subspace (with respect to the natural action of the permutations on F), respectively. When P is the identity operator I ≡ I_F, one might expect the corresponding grand-partition function to describe the thermodynamics of classical particles, that is, those obeying the Boltzmann statistics. Unfortunately, this is not the case, as explained in [11,13]. Below, we outline how it is possible to recover such a grand-partition function in the Boltzmann case. We can also remark that the q-deformed Fock space F_q(H) (e.g., [4,14] for the arising ergodic properties) could be used to compute the grand-partition function for the so-called quons and, thus, their thermodynamics. Unfortunately, in this case, too, the second quantisation method does not work. The grand-partition function for the free gas of quons is entirely computed in [11] without using the q-deformed Fock space. The so-called Boolean (e.g., [15]) and monotone (see below) models might also be described as outlined above. This is certainly true for the Boolean case, in which F_boole(H) = C ⊕ H and, thus, there is no question about the indistinguishability of the particles. We will see, in the forthcoming analysis, that this is also the case for the monotone model and its generalisations addressed in the present paper. Monotone Fock space. For the reader's convenience, we outline some basic facts regarding monotone Fock spaces and their fundamental operators (see [9,10,16] for more details). Here, we define a generalisation of the monotone particles which is suitable for physical applications. However, as a particular case arising directly from quantum physics, we study, in some detail, the thermodynamics of a simple model satisfying the monotone statistics/commutation relations. For the interested reader, we point out the existence of new investigations concerning exotic commutation relations that are related to those previously mentioned and might have potential physical applications (see [17]). For k ≥ 1, we denote by I_k := {(i_1, i_2, . . . , i_k) | i_1 < i_2 < · · · < i_k, i_j ∈ N, j = 1, . . . , k} the class of all strictly increasing sequences of natural numbers of length k. For k = 0, we take I_0 := {∅}. For k ≥ 0, the Hilbert space H_k := l^2(I_k) is called the k-particle space. In particular, the 0-particle space H_0 = l^2(I_0) is identified with the complex scalar field C. The monotone Fock space is then defined as F_m := ⊕_{k=0}^{∞} H_k. With any increasing sequence α = {i_1, i_2, . . . , i_k} of natural numbers, we canonically associate the vector e_α; the collection of all such vectors provides the canonical basis of F_m. For each pair of such sequences α = {i_1, i_2, . . . , i_k}, β = {j_1, j_2, . . . , j_l}, we say that α < β if i_k < j_1. By convention, I_0 < α for each α ≠ I_0. In other words, if {e_n | n ∈ N} is the canonically ordered basis of l^2(N) (or any ordered basis of a separable Hilbert space), the n-particle space is generated by the vectors e_α ≡ e_{j_1} ⊗ e_{j_2} ⊗ · · · ⊗ e_{j_n} whenever α = {j_1, j_2, . . . , j_n} with j_1 < j_2 < · · · < j_n.
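To make the strictly increasing index condition concrete, here is a minimal enumeration sketch (not from the paper; a truncated one-particle basis {e_1, . . . , e_N} is assumed) of the index tuples spanning the k-particle subspaces of the monotone Fock space; over such a truncation they are exactly the k-element combinations of {1, . . . , N}.

```python
from itertools import combinations

def monotone_basis(N, k):
    """Strictly increasing index tuples of length k drawn from {1, ..., N}."""
    return list(combinations(range(1, N + 1), k))

if __name__ == "__main__":
    for k in range(4):
        basis = monotone_basis(3, k)
        print(f"k = {k}: dim = {len(basis)}, tuples = {basis}")
    # The weakly monotone space discussed next would also allow repeated indices
    # (i_1 <= ... <= i_k), i.e. combinations with replacement.
```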
If we relax the last condition by merely assuming that j_1 ≤ j_2 ≤ · · · ≤ j_n, we will obtain the so-called weakly monotone Fock space (see, e.g., [16]). Note that F_m and F_wm are the ranges of the self-adjoint projections P_m and P_wm acting on the full Fock space F. Even if this is not used in the forthcoming analysis, we can report the structure of the monotone creation and annihilation operators and the generating commutation relations, which are nevertheless useful for applications to quantum probability. Indeed, the monotone creation and annihilation operators a_i^† and a_i are given, for any i, by the formulas (3) and (4). One can verify that ||a_i^†|| = ||a_i|| = 1 (see e.g., [16], Proposition 8). Moreover, a_i^† and a_i are mutually adjoint and satisfy the following relations: In addition, the following commutation relation, understood in the weak operator topology of B(F_m) and falling into the general class of commutation relations treated in Corollary 3.2 of [7] for applications in quantum probability, is also satisfied. For some properties of monotone systems, including their ergodic properties, see, e.g., [18] and the literature cited therein. The grand-partition function for the Boltzmann case. The Boltzmann (or classical) case is very particular because, in Boltzmann statistics, the Gibbs paradox (e.g., [5]) takes place and, consequently, we should suitably correct the statistical weights. As for the computation of Z_{±1} in the Bose and Fermi cases (e.g., [2,5]), it might be natural to use the full Fock space F(H) and the grand-canonical Hamiltonian K := dΓ(H) − µN, as explained above, provided that e^{−βK} is trace class for fixed β and µ. This corresponds to taking P = I_F in (2). For the reasons explained in [11,13], such a formula is unrealistic. However, the correct formula should be Z_0 = e^{ζ e^{βµ}}. It is interesting to see that, if one corrects (5) with the weight n! in the denominator of the series, thus taking into account the indistinguishability of particles, as is customary to avoid the Gibbs paradox, we obtain the correct formula. Harmonic oscillator. Since we provide a detailed study of the simple model formed of non-interacting harmonic oscillators, for the reader's convenience, we report some basic facts that are used in the following analysis. Consequently, relative to the Boltzmann statistics, we compute the corresponding grand-partition function at the inverse temperature β and activity z = e^{βµ}, µ being the chemical potential. Indeed, where K denotes the Hooke constant of the spring and m the mass of the involved particle, it is well-known that the spectrum of the Hamiltonian of this model consists of the non-degenerate eigenvalues ε_n = ℏω(n + 1/2), n = 0, 1, 2, . . ., where ω := √(K/m) is the given frequency. In this way, the partition function is given by ζ = Σ_{n≥0} e^{−βε_n} = e^{−βℏω/2}/(1 − e^{−βℏω}). After taking into account the Gibbs correction (e.g., [5,11,13]), for the grand-partition function, we obtain Z_0 = e^{zζ}.
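As a quick numerical sanity check of the Boltzmann formulas just recalled, the sketch below (an assumption of units ℏω = 1 is made, so ε_n = n + 1/2) computes the one-particle partition function ζ by truncated summation and verifies that the Gibbs-corrected expansion Σ_n (zζ)^n/n! indeed sums to Z_0 = e^{zζ}.

```python
import math

def zeta_ho(beta, n_max=2000):
    """Truncated one-particle partition function of the harmonic oscillator."""
    return sum(math.exp(-beta * (n + 0.5)) for n in range(n_max))

def Z0_boltzmann(beta, z, n_max=60):
    """Gibbs-corrected Boltzmann expansion sum_n (z*zeta)^n / n!."""
    zt = zeta_ho(beta)
    return sum((z * zt) ** n / math.factorial(n) for n in range(n_max))

if __name__ == "__main__":
    beta, z = 1.0, 0.3
    zt = zeta_ho(beta)
    print(zt, math.exp(-beta / 2) / (1 - math.exp(-beta)))  # closed form for zeta
    print(Z0_boltzmann(beta, z), math.exp(z * zt))          # both values agree
```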
The set I is naturally partially ordered: if k_j, l_j are in the same subset I_j, there is no pre-assigned order between them; conversely, if k_1 ∈ I_{j_1} and k_2 ∈ I_{j_2} with j_1 < j_2, then k_1 < k_2. Such a picture is suggested by the potential physical applications. In fact, a positive Hamiltonian H with compact resolvent acting on a separable Hilbert space H, as described in Section 2, induces a natural order, as shown above, on the natural basis of H formed of the eigenvectors associated with the eigenvalues {ε_j} of H, where the finite cardinalities of the involved subsets are given by the degeneracies g_j of the eigenvalues ε_j. The picture arising from this analysis is called block-monotone. The corresponding block-monotone Fock space F_bm and the relative creation and annihilation operators are easily constructed as follows. Let {e_j | j ∈ I} be an orthonormal basis of H equipped with the previously described partial order. Typically, such a partial order is induced by a positive Hamiltonian with compact resolvent. As noted above, F_bm is a subspace of the full Fock space F ≡ F_0, and its n-particle component is generated as follows: it is spanned by the elementary (orthonormal) tensors e_{k_1} ⊗ · · · ⊗ e_{k_n} with the condition k_1 < k_2 < · · · < k_n relative to the partial order defined above. The block-monotone creation and annihilation operators assume the same form as in (3) and (4), respectively, according to the above partial order. We denote by P_bm the self-adjoint projection of the full Fock space onto F_bm. This allows us to compute the grand-partition function Z_bm according to (2). We can now explicitly compute such a grand-partition function for the simplest non-trivial finite dimensional case, where H is generated by the orthonormal basis {(e_1, e_2), e} of eigenvectors of a Hamiltonian whose eigenvalues are h, k, with multiplicities 2 and 1, respectively. We express such a grand-partition function in terms of the activity z = e^{βµ}. In this simple situation, the block-monotone Fock space F_bm ends with the two-particle subspace and is given by: Correspondingly, the grand-partition function is given by: We end the present section by noting that, if all the I_j are singletons (or empty sets), the block-monotone scheme reduces to the usual monotone one. In the previous example, the comparison between the block-monotone scheme and the monotone one does not depend on the order that we fix on the first subset of eigenvectors of H, leading to: In the forthcoming analysis, we study, in some generality, a non-trivial situation of this kind. Conversely, the block-monotone scheme is never comparable with the weakly monotone one. Indeed, the weakly monotone version of the last example provides five additional elements in the basis of the two-particle subspace, plus non-trivial contributions, of the form z^n a_n(h, k, β), involving all the n-particle subspaces. The Grand-Partition Function for Monotone Particles Since the explicit computation of the monotone grand-partition function is not available for most infinite dimensional cases, we reduce the matter to the simple case of the one-dimensional quantum harmonic oscillator briefly described in Section 2. The spectrum of the involved Hamiltonian (7) is formed of multiplicity-one eigenvalues. Therefore, in such a case, the block-monotone model described in Section 3 reduces to the usual monotone one.
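Before turning to the infinite dimensional case, here is a small enumeration check (toy parameter values assumed) of the finite dimensional block-monotone example just described: block I_0 = {e_1, e_2} with energy h (degeneracy 2) and block I_1 = {e} with energy k. Since indices within a block are incomparable, an allowed n-particle tensor must take its factors from strictly increasing blocks, each chosen block contributing a factor degeneracy × e^{−β·energy}.

```python
import math
from itertools import combinations

def block_monotone_Z(blocks, beta, z):
    """blocks: list of (energy, degeneracy) pairs, listed in increasing block order."""
    Z = 1.0  # vacuum contribution
    for n in range(1, len(blocks) + 1):
        for chosen in combinations(range(len(blocks)), n):
            weight = 1.0
            for j in chosen:
                eps, g = blocks[j]
                weight *= g * math.exp(-beta * eps)
            Z += (z ** n) * weight
    return Z

if __name__ == "__main__":
    h, k, beta, z = 1.0, 2.0, 0.7, 0.5
    Z = block_monotone_Z([(h, 2), (k, 1)], beta, z)
    # direct expansion: vacuum + three one-particle vectors + the two allowed
    # two-particle vectors e_1 (x) e and e_2 (x) e
    direct = (1 + z * (2 * math.exp(-beta * h) + math.exp(-beta * k))
                + z ** 2 * 2 * math.exp(-beta * (h + k)))
    print(Z, direct)  # the enumeration and the direct expansion coincide
```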
Denoting by Z_m such a monotone grand-partition function relative to the quantum harmonic oscillator, we obtain the following. In addition, 0 ≤ Z_m ≤ Z_0, where Z_0 is the Boltzmann grand-partition function given in (6), and thus Z_m converges for all z ≥ 0 and β > 0. Proof. We start with the second half by noting that each n-particle monotone contribution is dominated by the corresponding Gibbs-corrected Boltzmann one, since every strictly increasing n-tuple of indices occurs n! times among all n-tuples; therefore, Z_m ≤ Z_0 < +∞. Concerning the first half, taking into account the exclusion rule arising from the monotone assumption, it is straightforward to verify that, for the contribution relative to the n-particle subspace, n ≥ 1, Tr(P_m e^{−β dΓ(H)} P_m ↾ H ⊗ · · · ⊗ H (n times)) = Σ_{0 ≤ k_1 < k_2 < · · · < k_n} e^{−βℏω(k_1 + · · · + k_n + n/2)} = e^{nβℏω/2} Π_{k=1}^{n} 1/(e^{kβℏω} − 1). Now, we can compare both grand-partition functions Z_m and Z_0. Indeed, after defining Z_# = Σ_{n≥0} a_n^{(#)}(β) z^n, with # standing for "0" and "monotone", we can address two physically interesting cases, β ↓ 0 (high-energy/low-density regime) and β ↑ +∞ (low-energy/high-density regime). For the high-energy case, the asymptotics of a_n^{(m)}(β) reduce to those of a_n^{(0)}(β); Equations (8) and (9) explain that, in some sense, at least in this simple case of the harmonic oscillator, in the limit of high energies (i.e., β ↓ 0), the monotone grand-partition function Z_m behaves, term-by-term, in the same way as Z_0, corresponding to classical particles. In the next section, we provide a more refined analysis concerning this fact. From the appearance (i.e., in (8)) of the Gibbs correction term n!, we can immediately argue that the monotone Fock space naturally takes into account the indistinguishability of particles. Now, we move on to the limit of low energies (typically when dealing with the so-called ground states), β ↑ +∞; the corresponding asymptotics of the weights are collected in Remark 1. Relative to the Boltzmann partition function, according to the reasoning above, we obtain the analogous expansion. Now, using the Stirling formula, when n ↑ +∞ and β ↑ +∞, retaining only the leading terms, we obtain the heuristic Formula (10). Remark 2. In the high-density regime described by β, n ↑ +∞, that is, when the state of the matter is very condensed, the monotone particles obey a kind of exclusion principle analogous to the Pauli exclusion principle for fermions. The heuristic Formula (10) seems to confirm the existence of such a principle. The Low-Density Regime We discuss the low-density regime corresponding to β, z ≈ 0 by showing that, in such a limit, Z_m(β, z) ≈ Z_0(β, z) for β, z → 0. In order to progress to the investigation of the low-density regime, (11) reads: and (15) provides a useful condition if and only if 1 − f(β, z) > 0. It is now convenient to define t := z^2/4, with the limitation 0 ≤ t < 1, obtaining: By passing to the logarithm and restoring the variables z and β, we obtain: On the other hand, if for some 0 < γ < 1, again, in terms of x and t, we have: Restoring the original variables, the above computation simply means that: when z and, necessarily, also β in the chosen region tend to 0. Thus, we prove the following: Proposition 3. For each fixed 0 < γ < 1 and (z, β) in the region R delimited by 0 ≤ z < 2 and by the condition Concerning the low-density regime, Proposition 3 has the following meaning: for β, z ≈ 0 in the region defined by (16), where the minus sign is explained in Remark 2 as an analogue of the Pauli exclusion principle for monotone particles. Such a monotone exclusion principle tends to become very relevant in the high-density regime (β ↑ +∞, see Remark 2), whereas it tends to vanish in the low-density regime (Proposition 3).
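A quick numerical check (assuming units with ℏω = 1) of the term-by-term comparison just discussed: the monotone weights a_n^{(m)}(β) = e^{nβ/2} Π_{k=1}^{n} (e^{kβ} − 1)^{−1} from the displayed trace formula are compared with the Gibbs-corrected Boltzmann weights a_n^{(0)}(β) = ζ(β)^n/n!. The ratio tends to 1 as β ↓ 0, which is the appearance of the n! factor, and stays below 1 for large β, reflecting the "monotone exclusion principle".

```python
import math

def a_monotone(n, beta):
    """n-particle weight of the monotone grand-partition function (hbar*omega = 1)."""
    out = math.exp(n * beta / 2)
    for k in range(1, n + 1):
        out /= math.expm1(k * beta)   # e^{k*beta} - 1
    return out

def a_boltzmann(n, beta):
    """Gibbs-corrected Boltzmann weight zeta(beta)^n / n!."""
    zeta = math.exp(-beta / 2) / (1 - math.exp(-beta))
    return zeta ** n / math.factorial(n)

if __name__ == "__main__":
    for beta in (1.0, 0.1, 0.01):
        ratios = [a_monotone(n, beta) / a_boltzmann(n, beta) for n in (1, 2, 3, 5)]
        print(beta, [round(r, 4) for r in ratios])
    # ratios approach 1 as beta -> 0; for beta = 1 they are already < 1 for n >= 2
```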
In the latter case, the function 1 − f (β, z) provides the correction for the thermodynamic potentials relative to those of the monotone case, compared with the analogous ones relative to Boltzmann case. Moreover, using (17), we can determine that the average number of particles is where the second addendum in the last member is, indeed, negative because of (16). We conclude by noting that Proposition 3 allows one to compute the asymptotics of Connes' spectral action (cf. [1]), associated with the average number of monotone particles, as described in [19] for q-particles, but with the condition described in (16). We postpone the investigation of this aspect for a forthcoming analysis. Conclusions The previous analysis concerning monotone models suggests that their potential applications to physics appear to be meaningful and fruitful, even if the explicit computation of the thermodynamic quantities seems to be rather complicated. Therefore, it is natural to systematically address the investigation of the statistical properties of monotone systems, even if the involved grand-partition function is not completely computable for most infinite dimensional systems. We thus list some natural questions which could be addressed in future investigations below. The first one concerns the models in which the involved energy levels are degenerate. A simple example to address is the case when the degeneracy of the energy levels is uniform. A concrete but more complicated example is the isotropic harmonic oscillator in a d-dimensional environment, d > 1. Such a degeneracy is not uniform but can easily be computed, obtaining: The second natural question to be addressed is the investigation of the free gas formed of monotone particles. This is connected with the degeneracy and can be addressed either by considering the free particles confined in a box and then "passing to the continuum", as in Sections 4 and 5 of [19], or by removing the harmonic potential (i.e., performing an appropriate limit as the Hook constant changes to 0). For d = 1, the second approach seems to be directly applicable because, as pointed out above, the energy levels are nondegenerate, whereas the first approach already involves a non-trivial degeneracy in the case d = 1. The third but not least significant question is the systematic investigation of the statistical properties of monotone systems, with particular attention given to low-and highdensity regimes. As explained above, the detailed investigation of the low-density regime also involves the explicit computation of the asymptotics for β ↓ 0, which is connected to Connes' spectral action (e.g., [1]) for monotone systems and, thus, to noncommutative geometry. The study of the high-density regime, particularly at zero temperature (i.e., in the limit β ↑ ∞), could explain the quantitative effect of the decimation induced by the monotone prescription. This is simply the effect that we previously called the "monotone exclusion principle", that is, the analogue of the Pauli exclusion principle occurring in the case of fermions at zero temperature. For example, we can argue that the statistical properties of the (block-) monotone systems might have reasonable applications to complex systems which are absorbing (or emitting) quanta of energy. The system that we have in mind is an atom which is capturing a photon and then passing into an excited state or even emitting, again, a photon reaching a more stable state. 
Another example concerns the nuclei of fissile material (such as uranium-235) in a nuclear plant, which capture thermal neutrons and undergo nuclear fission. In both cases, the relevant subject might not be the absorbed particles (bosons in the former case and fermions in the latter case) but the complex system which is absorbing (or emitting) the energy, according to the order of the eigenvalues of its Hamiltonian. These considerations might suggest that the block-monotone prescription has some role in the investigation of such complex systems from a statistical point of view. We conclude by pointing out that such a monotone exclusion principle should not allow for the occurrence of condensation phenomena of monotone particles in the ground state. It would be interesting to provide a rigorous proof of this conjecture for the free gas of monotone particles in a d-dimensional (or, more concretely, in the Euclidean 3-dimensional) space. Conflicts of Interest: The authors declare no conflict of interest. Appendix A The main aim of the present work is to investigate whether particles obeying the monotone prescription have potential physical applications. For this reason, we did not pursue the proof of Proposition 2 in much detail at this stage. In particular, in the sketch of the proof, we used the negativity of the second derivative of ∆_n(x) for n = 2, 3, . . . and for all x > 1. Even though this property is reasonable and intuitive, it is difficult to provide an analytic proof of it. For this reason, we numerically computed such a second derivative for different values of n, reporting the plots in Figure A1. On the other hand, while it is easy to see that ∆_n(1) = n(n − 1)/4, in order to estimate the last addendum in (13), we fitted ∆_n(1)^2 with a polynomial of degree 4, obtaining negligible errors (see Figure A2).
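A small sketch of the Appendix A fit: since ∆_n(1) = n(n − 1)/4, the quantity ∆_n(1)^2 = (n^4 − 2n^3 + n^2)/16 is itself a degree-4 polynomial in n, so a quartic fit is expected to leave negligible residuals; the exact form of the fitted polynomial is not reproduced in the text, and the coefficients recovered below are simply those of the closed-form square.

```python
import numpy as np

n = np.arange(2, 50, dtype=float)
delta_sq = (n * (n - 1) / 4.0) ** 2        # Delta_n(1)^2

coeffs = np.polyfit(n, delta_sq, deg=4)    # quartic fit, as in Appendix A
residual = np.max(np.abs(np.polyval(coeffs, n) - delta_sq))

print("fitted coefficients:", np.round(coeffs, 6))  # ~ [1/16, -1/8, 1/16, 0, 0]
print("max fit error:", residual)                   # ~ 0 up to rounding
```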
7,768.6
2023-01-22T00:00:00.000
[ "Physics" ]
E-learning's usability measurement toward students with myopia visual impairment. Usability in e-learning is closely related to user interaction, so the e-learning interface needs to be taken into account. In e-learning, the user interface is an important factor in the visual interaction between users and learning resources. Related to this, the user's vision can affect the performance of e-learning usage, especially regarding usability problems, so a study about usability related to the user's vision needs to be done. This study measures the usability of e-learning using the USE Questionnaire, supported by eye-tracking technology. The results were compared between users with visual impairment and users with normal vision. The results obtained show that the usability variables correlate with each other. The four variables which correlate with the task completion variable (Task) are Ease of Learn, Ease of Use, Satisfaction, and Usefulness. This study shows that myopia visual impairment does not cause difficulty in task completion. This is demonstrated by the relatively similar user interactions between normal-sighted users and visually impaired users. The tendency to focus on web page elements per task for both groups of respondents falls on elements such as announcement lists, course lists, navigation bars, drop-down menus, links and login text-boxes. Introduction According to ISO 9241-11 of 1998, usability is a measure of the extent to which software can be used by particular users to achieve specified goals with efficiency, effectiveness, and satisfaction in a given context of use. It is also influenced by the interface display, since the interface is an essential element of all web applications [1]. Nowadays many technology innovations depend on the user interface, through which users are facilitated to control and interact [2]. Usability is reviewed through an evaluation called usability measurement. Usability measurement can be supported by eye tracking. A person's eye movements are assessed to know what an individual sees at a specific time and the order and direction in which their eyes move from one point to another [3]. Eye movements and reading patterns, as well as pupil diameter, are indicators of thought processes and mental states that occur during visual information extraction [4]. Many published studies focus on the usability of a website with the goal of web design optimization for potential user groups [5]. E-learning is a technological innovation in which teaching and learning activities use computers, memory, and computer networks [6]. Web-based e-learning is one of the influential learning resources that education institutes use today. The interface of e-learning is an aspect that needs attention, as the interface is the interaction point between the user and the learning source [6]. The visualization, as the main part of the interface, will affect the user [7]. Through the interface, users can get information, especially learning materials, and furthermore an appropriate interface design will enhance the use of e-learning. Students with visual impairment have difficulties in obtaining information visually [8]. From the pre-survey done at Atma Jaya Yogyakarta University, about 126 of 227 respondents are students with visual impairment. The visual impairments that many students at UAJY have are nearsightedness (myopia) and astigmatism: there are 89 students with nearsightedness, 9 students with astigmatism, and 28 students with both.
This suggests that students with the visual impairments of myopia and astigmatism form a significant group among students. This research aims to suggest alternative ways of measuring the usability of an e-learning website interface and to compare the user's interaction with the website interface between users with visual impairments and users with normal vision using eye-tracking technology. Hopefully, improvement of the e-learning system will increase students' interest in digital learning and slowly convert paper-based learning, in order to contribute to the development of a low-carbon society. Literature Review E-learning has become a great influence on learning activities nowadays. It provides great flexibility in study methods, whether instructor-led or self-study courses. The learning process can happen anywhere and anytime using computer and internet technology, without significantly reducing the educators' role [9]. For example, Moodle was examined by a previous exploratory study aiming to gain insight into Moodle's usability, which found that Moodle has high attractiveness and fulfills the satisfaction of its users. Nevertheless, more research is needed to improve users' comfort when interacting with the interface of learning software [10]. Web interface design plays an important role in students' interactions. Its appearance will affect how students access the content and interact with other users, especially through the properties of web page elements. Some issues have been identified with labels, color, and font; changes of color combination and increases of font size are needed to improve the visibility of web page elements for students [11]. Another needed improvement concerns the combination of easy navigation elements such as arrows with the visualization of bullets or numbers [12]. Usability implementation is not limited to the web platform. For example, usability theory based on ISO 9241-11 has been implemented for designing an m-learning interface [13]. The influence of emotional aspects has been analyzed for interface design running on a mobile platform, with the data-collection questionnaire analyzed using the one-way ANOVA method [7]. Another study assessed the usability of a portal website, analyzing the fit of its concepts and web interface design for instructors and students, as well as conformity with ideal conditions; the tested criteria followed the usability criteria of the ISO standards, including ease of use, ease of learning, steps of use, the time it takes users, and consistency of website elements [5]. The interaction of the user's eyes with the interface of a learning system needs to be examined to see the users' behavior and interest and to determine whether their attention remains centered on the learning system [3]. Research has also examined the relationship between learning-support technology and the experience of online learning by students with visual impairments; the researchers adopted a user-centered cognitive approach, used to understand students' thinking processes when experiencing learning difficulties and to represent problems in terms of the students' needs and abilities in online interaction [14]. Other researchers have also aimed to find out the usability of a university website and to identify problems encountered by students who have visual impairments. The usability criteria tested include ease of use, ease of use regarding usage steps, user time, and consistency of website elements; utilization and user satisfaction of the website were obtained by questionnaire and interview [8].
A case study on the implementation of guidelines for web pages with full accessibility for users with visual impairments has also been conducted [15]. The research reported by the authors used a quasi-experimental method with an eye-tracker tool and a usability measurement questionnaire. The usability criteria being tested adopt the criteria used in the USE Questionnaire (Usefulness, Satisfaction, and Ease of Use), namely Usefulness, Ease of Use, Ease of Learn, and Satisfaction. Methodology The sampling method used is purposive sampling. This method aims to capture significant variations of groups with specific characteristics. The samples used in this study are a group of 30 students with visual impairment and a group of 30 students with normal vision, drawn from Atma Jaya Yogyakarta University students of the Faculty of Industrial Technology and the Faculty of Economics, in consideration of their highly frequent usage of the e-learning website. The number of 30 respondents per group meets the minimum rule of the Student's t table and also aims for an optimal heatmap visualization result from the eye-tracking data processing [16]. Students willing to be respondents were provided with informed consent and personal data sheets. The steps and procedures of respondent data collection are as follows: 1. The experimenter explains the research activities to the respondent, gives instructions on the task, and asks for written consent to undergo the experiment. 2. The respondent enters the experiment room and the eye-tracker tool is calibrated to the respondent's eyes. 3. The respondent performs the tasks without the help of the experimenter. Five tasks are performed, adapted from a previous study [8]; they include content search, accessibility or navigation, and file retrieval. Results and Discussion Descriptive statistical analysis was used for the results of the questionnaire processing. The independent variables of this research are user demography items such as Faculty, Gender, Duration of internet usage for assignments, Frequency of e-learning usage, and Vision condition. The dependent variables are the duration of work on the assignment and the questionnaire results covering Ease of Learn, Ease of Use, Satisfaction, Usefulness, and Task. Validity and reliability tests were done to determine the questionnaire items to be processed into the variables to be tested. The original questionnaire comprised 32 questions, with 53 incoming data records. The validity test used comparison with the R table; for degrees of freedom = 51 the table value is R = 0.2706. Some items have R-values less than the table value, namely the TASK1, TASK3, TASK6, and TASK7 items; these items were eliminated when forming the variables to be tested. Cronbach's alpha shows a value above 0.7 for each questionnaire item, so the questionnaire has good reliability. The outlier test was done by converting the data values into Z-scores and applying an outlier threshold of 1.96. Six respondents were classified as extreme or outlier data: respondents number 6, 10, 13, 31, 41, and 46. Data numbers 6, 10, and 13 are outliers and number 31 is extreme data in the Usefulness variable; in the Satisfaction variable, data number 46 is an outlier with a Z-score of -2.90; data number 41 is an outlier in the Task variable with a Z-score of -3.14. There are no outliers in Ease of Learn and Ease of Use.
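A minimal sketch (with synthetic Likert-style responses, not the study's data) of the three screening steps just described: item validity against the R-table threshold of 0.2706 (df = 51), Cronbach's alpha for reliability, and z-score based outlier flagging with the 1.96 threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
items = rng.integers(1, 8, size=(53, 8)).astype(float)  # 53 respondents, 8 Likert items

# 1) validity: correlation of each item with the rest-of-scale total
totals = items.sum(axis=1)
r_values = [np.corrcoef(items[:, j], totals - items[:, j])[0, 1]
            for j in range(items.shape[1])]
valid_items = [j for j, r in enumerate(r_values) if r >= 0.2706]
kept = items[:, valid_items] if len(valid_items) >= 2 else items

# 2) reliability: Cronbach's alpha over the retained items
k = kept.shape[1]
alpha = k / (k - 1) * (1 - kept.var(axis=0, ddof=1).sum() / kept.sum(axis=1).var(ddof=1))

# 3) outliers: respondents whose standardized scale score exceeds |1.96|
scores = kept.mean(axis=1)
z = (scores - scores.mean()) / scores.std(ddof=1)
outliers = np.flatnonzero(np.abs(z) > 1.96)

print("valid items:", valid_items, "alpha:", round(alpha, 3), "outliers:", outliers)
```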
The normality of the data distribution was tested using the Shapiro-Wilk method, with the result that four of the five variables tested have non-normal distributions. The variables with significance values less than 0.05 are Ease of Use, Ease of Learn, Satisfaction, and Task. The Usefulness variable has a relatively normal distribution, with a significance value of 0.2. Since the data tend not to be normally distributed, non-parametric analysis tools were used, as they do not require normally distributed data. There are no differences in the interface elements focused on during the stages of task completion. The video recordings of respondents' behaviour show that the respondents' habits in using the e-learning website are more influential in the steps of task completion. For example, the navigation bar was used for searching for a specific announcement in Task 1, which is actually findable in the middle of the page. Highlighted web page elements containing selection items might become important anchor points for both groups of respondents. This kind of representation could influence the user's ability to read and remember page content, which affects users' preference for using the system [17]. Also, in a previous study, the visual attention of users shifted from content to selection items with fixed positions and hierarchical structure [18]. Users' visual attention tends to focus on the upper-left part of the page; the upper-left part of a webpage, where the navigation bar is placed, is more attractive [19]. In the completion of Task 3, some respondents still used the drop-down menu on the right side to access their profile web page instead of using the navigation bar. Although both elements contain selection items, some respondents' habit of using the drop-down menu could have a stronger effect, and the drop-down menu in the right column has an iconic representation using the profile name and photo, which might attract the user's attention [20]. Spearman's correlation coefficients of the Usefulness variable with the four other variables are high, with significance values less than 0.05: toward the Ease of Use variable the significance is 0.000, toward the Satisfaction variable it has the same value, and Usefulness toward Ease of Learn is significant at 0.004. The Ease of Use and Ease of Learn variables also have high correlations with the users' Satisfaction. The significance values of the four usability variables toward the Task variable are below alpha = 0.05; the most significant is Ease of Use with a significance of 0.000, followed by Ease of Learn with 0.002, users' Satisfaction with 0.011, and Usefulness with 0.029. This shows that the tested usability variables correlate with the task work done by the respondents. Conclusion The four variables correlate with the respondents' task completion variable (Task): Ease of Use with a significance of 0.000, followed by Ease of Learn with 0.002, Satisfaction with 0.011, and Usefulness with 0.029. The interaction of respondents with normal eyesight and respondents with visual impairment with the user interface of the e-learning website did not show significant differences. The tendency of focus on web page elements per task for both groups of respondents is relatively the same.
The visual impairments myopia and astigmatism do not cause difficulties with the tasks. Suggestions for further research are that the number of respondents should be increased to avoid non-normal distribution of the statistical data, and that the eye-tracking data processing for the heatmaps could be made more precise by separating the color levels that represent the fixation duration of the respondents' focus areas. The e-learning system could be optimized regarding navigation bar and drop-down menu usage for accessing the links for Site News, My Course, and My Profile, since both navigation elements were shown to be frequently used in the respondents' behavior.
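The non-parametric analysis described above can be reproduced in outline with the sketch below (synthetic data, not the study's): Shapiro-Wilk tests of normality per variable and Spearman rank correlations of the usability variables against the Task variable.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 47  # respondents remaining after outlier removal (assumed for illustration)
data = {
    "EaseOfUse":    rng.normal(5.5, 1.0, n),
    "EaseOfLearn":  rng.normal(5.8, 0.9, n),
    "Satisfaction": rng.normal(5.2, 1.1, n),
    "Usefulness":   rng.normal(5.6, 1.0, n),
    "Task":         rng.normal(4.9, 1.2, n),
}

for name, values in data.items():
    w, p = stats.shapiro(values)
    print(f"{name}: Shapiro-Wilk p = {p:.3f} -> {'normal' if p >= 0.05 else 'non-normal'}")

for name in ("EaseOfUse", "EaseOfLearn", "Satisfaction", "Usefulness"):
    rho, p = stats.spearmanr(data[name], data["Task"])
    print(f"{name} vs Task: rho = {rho:.3f}, p = {p:.3f}")
```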
3,080
2018-08-01T00:00:00.000
[ "Computer Science" ]
Cauchy Problems for Evolutionary Pseudodifferential Equations over p-Adic Field. Proof. Let φ ∈ D(T); then ∫_{K_p} |φ^∧(ξ)|^2 dξ < ∞ and ∫_{K_p} ⟨ξ⟩^{2α} |φ^∧(ξ)|^2 dξ < ∞, (15), with ⟨ξ⟩^α ≤ (1 + ⟨ξ⟩^{2α})/2; thus ∫_{K_p} ⟨ξ⟩^α |φ^∧(ξ)|^2 dξ ≤ ∫_{K_p} ((1 + ⟨ξ⟩^{2α})/2) |φ^∧(ξ)|^2 dξ < ∞. Introduction In recent years p-adic analysis has received a lot of attention due to its applications in mathematical physics; see, for example, [1][2][3][4][5][6][7][8][9] and references therein. The definition of the pseudodifferential operator is very important in the theory of PDE on the p-adic field. In the 1960s, Gibbs defined the logical derivative over the dyadic field. Then, Vladimirov et al. [8] generalized the logical derivative over the p-adic field, and the resulting operator is referred to as the Vladimirov pseudodifferential operator. Chuong et al. have done a lot of work on PDE over the p-adic field using the Vladimirov operator; see, for example, [9][10][11][12]. However, as a kind of operation, the Vladimirov pseudodifferential operator is not closed in the test function space D(Q_p). This makes the definition of the Vladimirov operator difficult to apply to the distribution space D'(Q_p). In 1992, Su [13] redefined the derivative and integral operators over the p-adic field. The definition makes the operator closed in D(Q_p), and it can be extended to the dual space D'(Q_p). In 2011, Su [14] applied the differential operator to study two-dimensional wave equations with fractal boundaries. Preliminaries We will use the notations and results from Taibleson's book [16]. Let Q_p be the p-adic field, where p is a prime number. It is a nondiscrete, locally compact, totally disconnected and complete topological field endowed with a non-Archimedean norm |·|_p satisfying |x + y|_p ≤ max(|x|_p, |y|_p) for x, y ∈ Q_p, so that it is also ultrametric. Define D as the ring of integers in Q_p. Every x ∈ Q_p has a unique expansion in powers of p starting from some γ ∈ Z, with |x|_p = p^{-γ}. For each γ ∈ Z, one fixes the corresponding balls and spheres in Q_p; dx denotes the Haar measure on Q_p, normalized in the usual way, and the locally constant, compactly supported functions φ(x) are called test functions. For the test function space D, we give the following topology: for φ ∈ D(Q_p), there exist unique integers (k, l) such that the function is constant on the cosets of the corresponding ball and has support in a ball; a sequence of test functions converges to 0 if these integers can be chosen uniformly and the functions tend to 0 uniformly on Q_p. Then D is a complete topological linear space. Denote by D' = D'(Q_p) the distribution space of the test function space D; it is a complete topological linear space under the dual topology. Let χ(x) be a fixed nontrivial character of Q_p which is trivial on D; for the p-adic field, χ can be constructed from the base value [17]. For φ ∈ D(Q_p), its Fourier transform φ^∧ and inverse Fourier transform φ^∨ are defined with respect to χ in the standard way. In 1992, Su [13] gave definitions of the derivative for the p-adic local fields Q_p, including derivatives of fractional and real orders: if the defining limit exists at x ∈ Q_p, where χ(x) is a fixed nontrivial character of Q_p, then it is called a pointwise derivative of the given order at x. Note that the domain of the operator in the definition can be extended to the space D'(Q_p), where D'(Q_p) denotes the set of all functionals (distributions) on D(Q_p). Let D(T) be the domain of the operator T, defined accordingly. We have the following. Lemma 3 (see [17]).
The operator is a positive definite self-adjoint operator on its domain; the orthonormal basis of L^2(Q_p) consisting of eigenfunctions of the operator is defined as follows, where Φ_0(x) is the characteristic function of the unit ball. Main Results We will solve the following pseudodifferential equation over the p-adic field by using the orthonormal basis constructed in Lemma 3. First, we consider the case of the homogeneous equation. Then one has a formal solution. Proof. Consider the following. Step 1. We will write a single summation sign instead of the full multi-index sum in the following proof. To determine the coefficients, we assume that the first initial datum can be expanded as a lacunary series over the basis functions; imposing the first initial condition, we obtain the corresponding coefficients. In the same way, for the second initial datum, we obtain the second family of coefficients. Then the exact solution of the equation follows. Step 2. We will prove that the solution obtained in Step 1 satisfies the conditions in Theorem 4. (i) The series of the formal solution converges uniformly; with the stated assumptions on the initial data, the series converges uniformly in L^2(Q_p). Next, we will consider the case of the nonhomogeneous equation. Proof. Consider the following. Step 1. Similarly to the proof of Theorem 4, we expand the source term as a lacunary series and we obtain (45). It is clear that the exact solution of the equation follows. Step 2. It will be proved that the solution satisfies the conditions of Theorem 5. Conclusion In this work, a class of evolutionary pseudodifferential equations of the second order in t over the p-adic field Q_p was investigated, where the spatial operator is a p-adic pseudodifferential operator defined by Su Weiyi. The exact solution to the equation was obtained and the uniform convergence of the series of the formal solution was established.
1,302
2014-04-27T00:00:00.000
[ "Mathematics" ]
Extraction of optical solitons in birefringent fibers for the Biswas-Arshed equation via the extended trial equation method. Abstract: This article obtains optical solitons of the Biswas-Arshed equation for birefringent fibers with higher order dispersions and in the absence of four-wave mixing terms, in a medium with Kerr type nonlinearity. Optical dark, singular and bright soliton solutions are articulated by applying an imaginative integration technique, the extended trial equation scheme. Various additional traveling wave solutions are produced with this integration technique, which include rational solutions, Jacobi elliptic function solutions and periodic singular solutions. From the mathematical analysis, some constraints are recognized that ensure the existence of solitons. Governing equation The coupled system obtained from (1) in birefringent fibers without 4WM reads [46] ip_t + a p_xx . . . Here b and a are the coefficients of spatio-temporal dispersion (STD) and group velocity dispersion respectively, while d and c are the coefficients of third order STD and third order dispersion respectively, for l = 1, 2. Next, γ and λ stand for self-steepening terms, while the nonlinear dispersions are confirmed by β, θ, α and µ. Here Q(ζ) is the amplitude, ν gives the soliton velocity, ϵ is the phase constant, and κ and ω are respectively the frequency and wave number of the soliton. Inserting Eqs. (3)-(5) into (2) and splitting into imaginary and real parts, we attain (6) and (7), where l* = 3 − l and l = 1, 2. The balancing principle suggests that Q_{l*} = Q_l. Therefore, from Eq. (6) we obtain the soliton velocity along with the constraint conditions; comparing the two values of ν in (9) leads to further constraints. Now, from Eq. (7), we obtain, for l = 1, 2, Eq. (12), which will now be scrutinized by the extended trial equation method. Application of extended trial equation method In this subsection, we employ the extended trial equation technique [7,8] to Eq. (12) for constructing the exact solutions of the system (2). Case-1. The solution of Eq. (12) can be expressed as in (13), where the δ_i are unknown constants to be determined such that δ_ϱ ≠ 0, and where η_0, . . . , η_σ and χ_0, . . . , χ_ρ are arbitrary constants to be identified such that η_σ ≠ 0 and χ_ρ ≠ 0. Eq. (14) can be transformed into an integral form as in (15). The balancing process reveals a relation between σ, ρ and ϱ; choosing particular values of σ, ϱ and ρ in (16), we arrive at (17). Inserting Eq. (17) along with Eq. (14) into (12) and evaluating the resultant system of equations, we attain the parameter values (18). Substituting the values of the parameters from (18) into (14) and using Eq. (15), we obtain the corresponding expressions. As a consequence, the following exact solutions can now be written for the coupled system (2), where the υ_j are the roots of Λ(Ψ) = 0. By a suitable choice of the constants, the solutions given by (23)-(32) can be transformed into plane wave solutions, singular soliton solutions and bright soliton solutions; moreover, with another choice of the constants, the Jacobi elliptic function solutions (33) and (34) reduce further. Remark 1. When the modulus m → 1, singular soliton solutions emerge. Remark 2. When the modulus m → 0, periodic singular solutions emerge. Case-2. Eq. (12) can be rewritten as Eq. (50) through a transformation of Q in terms of a power of P. Therefore, the solution of Eq. (50) can be expressed as in (52), where the δ_i are unknown constants to be determined such that δ_ϱ ≠ 0. The balancing process again relates σ, ρ and ϱ; choosing values of σ, ρ and ϱ in Eq. (52), we arrive at (53). Inserting Eq. (53) along with Eq. (14) into Eq.
(50) and evaluating the resultant system of equations, we attain the parameter values (54). Substituting all the values from (54) into (14) and using Eq. (15), we obtain the corresponding expressions. As a consequence, the following exact solutions can now be written for the coupled system (2). Conclusion The work expounded in this article successfully addresses optical solitons of the Biswas-Arshed equation with Kerr-law nonlinearity in birefringent fibers with higher order dispersions and in the absence of four-wave mixing terms by the application of the extended trial equation technique. With this integration scheme, we have recovered dark, bright and singular optical solitons along with other traveling wave solutions, comprising rational solutions, periodic singular solutions and Jacobi elliptic function solutions, in the presence of some constraints. It is concluded that our derived results for the Biswas-Arshed equation in birefringent fibers are new and have not been stated earlier. The outcomes of this paper are interesting and provide a stimulus to the readership working on optical solitons. Later, this equation will be studied with the addition of four-wave mixing terms with the aid of appropriate integration schemes. These results will be presented as soon as possible. Funding information: The authors state no funding involved. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission. Conflict of interest: The authors state no conflict of interest.
1,216.6
2021-01-01T00:00:00.000
[ "Physics", "Engineering" ]
INFLUENCE OF NiO NANO-FLAKES DISPERSION ON THE VISCOSITY OF LUBRICATING OIL Protecting the interacting surfaces of mechanical systems against friction and wear has a wide range of industrial applications. Viscosity is the most important property of any lubricant, and it governs viscous drag in hydrodynamically lubricated mechanical systems. The dispersion stability of the NiO-nanolubricants is achieved by the ultrasonication technique. A detailed study of the viscosity of NiO nano-flakes dispersed in SN500 lubricant with weight fractions of 0.25-1.5% was performed at temperatures between 40 and 90 °C. The results show that increasing the weight fraction of NiO nano-flakes resulted in a consistent viscosity increment. Further, the measured viscosity is compared with different concentration- and temperature-dependent theoretical models. On the basis of the experimental viscosity data, a theoretical correlation is recommended to predict the viscosity of NiO-nanolubricants with less than a 5% margin of deviation. INTRODUCTION Protecting the interacting surfaces of mechanical systems against friction and wear has a wide range of industrial applications. Lubricants are designed to provide a protective film that reduces the friction and wear of interacting surfaces in relative motion, protects them against corrosion and reduces the consumption of energy. The requirements of good liquid lubricants are a low foaming tendency, resistance to aging and oxidation, good load-carrying capacity and favorable viscosity-temperature behavior [1]. An enhanced viscosity of liquid lubricants will increase the ability of the oil film to stick for a longer period of time, providing superior protection against friction and wear [2]. The further application of nanotechnology provides a new way to explore this topic in detail. Researchers have proven that nanoparticle dispersions in water or liquid lubricants not only have the advantage of superior thermo-physical properties, but also provide enhanced tribological properties due to a decrease in friction coefficient and superior anti-wear behavior [3]. Nanoparticles have an increased relative surface area and quantum effects, which significantly alter the physical, optical, electrical, chemical, magnetic and mechanical behavior of liquid lubricants when they are homogeneously dispersed [4]. Many authors have utilized nanotechnology to enhance the viscosity of liquid lubricants [5][6][7][8][9][10][11][12][13]. Generally, viscosity decreases with increasing temperature for all types of liquid lubricants, nanofluids and nanolubricants. Nanolubricants are colloidal dispersions of nanometer-sized materials (1-100 nm) such as nanoparticles, nanotubes, nanorods, nanofibers, nanosheets and nanowires in conventional liquid lubricants [14]. Nanolubricants were initially developed to decrease engine wear through the "ball bearing" effect [15]. The type of nanoparticle, its shape and size, and the amount of surfactant used affect the thermo-physical properties of any nanolubricant [16]. Ijam et al. [17] experimentally investigated the thermophysical properties of graphene oxide nano-sheets suspended in deionized water/ethylene glycol (DW/EG) with weight fractions of 0.01-0.10% and temperatures of 25-45 °C. The results show that the viscosity of the 0.10 wt.% DW/EG nanofluid is enhanced by 35% due to the dispersion of graphene oxide nano-sheets compared to the base fluid at 20 °C, and it is reduced by 48% with increasing temperature in the range of 20-60 °C [17]. Attari et al.
[18] studied the effects of temperature and mass fraction of NiO, TiO2, ZnO, Fe2O3 and WO3 nanoparticles on the viscosity of nanolubricants, and their experimental results show that an increment in temperature decreases the viscosity of the nanolubricants [18]. On the contrary, Chen et al. [19] reported that the relative viscosity is independent of temperature due to insignificant Brownian diffusion in comparison to convection in high shear flows [19]. For the same nanoparticle aspect ratio and size, the effect of non-spherical particles on the viscosity was greater than that of spherical nanoparticles [20]. In this study, the viscosity of NiO-nanolubricants was experimentally examined with weight fractions of 0.25-1.5% and temperatures between 40 and 90 °C in the absence of surfactants. The NiO nano-flakes were prepared by the sol-gel technique. The chemical constituents, size and morphology were evaluated by EDS, XRD and SEM. Different weight fractions of NiO-nanolubricants were prepared by dispersing NiO nano-flakes into SN500 lubricant. The dispersion stability of the NiO-nanolubricants is achieved by the ultrasonication technique. The viscosity of the SN500 lubricant and of the different concentrations of NiO-nanolubricants is estimated using a Brookfield LVDVE digital viscometer. The measured viscosity of the NiO-nanolubricants is compared with the values predicted by the existing concentration- and temperature-dependent theoretical models. Preparation and characterization of NiO flakes The NiO flakes were synthesized through the sol-gel method, in which sodium hydroxide is mixed with nickel acetate tetrahydrate at room temperature [21][22]. The solution is agitated continuously for 2 hours and then dried in a furnace at 500 °C for 2 hours. The EDS spectrum, XRD spectrum and SEM morphology of the resulting dark green powder are shown in Figure 1. The EDS spectrum of the sol-gel by-product is presented in Figure 1(a), which confirms the presence of Ni and O atoms in the lattice without any impurities. The XRD pattern of the NiO particles shown in Figure 1(b) exhibits distinct diffraction peaks, which are in accordance with the standard JCPDS No. 65-5745 [23][24][25][26][27][28][29][30][31][32]. The diffraction peaks are indexed to the (111), (200), (220), (311), (222) and (400) planes, with a lattice constant of 4.1788 Å. This confirms that the prepared NiO particles have a face-centered cubic (FCC) structure, and no secondary-phase peaks are observed. Further, the crystallite size of the NiO particles is estimated from the dominant peak using the Scherrer formula [33]; the calculated value is 72.97 nm. The nano-crystalline morphology of NiO is studied by SEM (Figures 1(c) and 1(d)). The particles have a rather flat surface texture, and the NiO particles appear as flakes stacked one above the other due to their high surface energy. Preparation of NiO-nanolubricants The NiO nano-crystalline flakes of size 72.97 nm were dispersed into SN500 lubricant by digital probe sonication at room temperature. The preparation steps of the NiO-nanolubricants with different additive concentrations are as follows: 250 mL of SN500 lubricant is taken in a 300 mL beaker; the required weight percentage of NiO nano-crystalline flakes of size 72.97 nm is dispersed into the SN500 lubricant at room temperature; and the mixture is continuously sonicated at 20 kHz for 20 min with the digital probe ultrasonicator.
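The Scherrer estimate mentioned above can be sketched as follows; the FWHM and position of the dominant reflection are not listed in the text, so the peak values below are hypothetical placeholders that, with Cu K-alpha radiation and shape factor K = 0.9, merely illustrate how a crystallite size of the order of ~73 nm is obtained.

```python
import math

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_angstrom=1.5406, K=0.9):
    """Scherrer formula D = K * lambda / (beta * cos(theta))."""
    beta = math.radians(fwhm_deg)              # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle
    d_angstrom = K * wavelength_angstrom / (beta * math.cos(theta))
    return d_angstrom / 10.0                   # angstrom -> nm

if __name__ == "__main__":
    # hypothetical dominant NiO peak near 2-theta = 43.3 deg with FWHM = 0.12 deg
    print(round(scherrer_size_nm(0.12, 43.3), 1), "nm")
```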
Viscosity measurements The viscosity of the SN500 lubricant and of the different concentrations of NiO-nanolubricants is estimated using a Brookfield LVDVE digital viscometer (USA). It is a rotating-type viscometer in which the torque required to turn the fluid medium is calibrated to the viscosity of the liquid at a known spindle speed. RESULTS AND DISCUSSION The viscosity of the SN500 lubricant and of the NiO-nanolubricants is investigated in the temperature range of 40 °C to 90 °C and for weight fractions of 0.25-1.5%. The viscosity of the NiO-nanolubricants is examined for the different weight fractions as shown in Figure 2. It is found from the graph that the viscosity is enhanced almost linearly with increasing weight fraction of NiO nano-flakes. This enhancement is due to the large surface area of the NiO nano-flakes, which increases the resistance to the movement of the molecules of the SN500 lubricant and elevates the internal shear stress during fluid movement. This resistance is greater at higher weight fractions of NiO nano-flakes in SN500 lubricant. Hence, the viscosity increases with increasing weight fraction for the same size of NiO nano-flakes. Further, the increase in the concentration of NiO nano-flakes in the SN500 lubricant leads to more interaction between them in the unit volume of the NiO-nanolubricants. The NiO nano-flakes have a large surface area, which strongly affects the suspension viscosity as the solid-phase volume fraction increases due to the dispersion of solid additives. At low nano-flake concentration, the interaction between the dispersed nano-flakes is weak and the probability of interaction and contact among the dispersed additives is low, as there are enough gaps to overcome the large surface energy. An increase in the concentration enhances the probability of interaction between them, and surface area plays a vital role in this. The interaction leads to agglomeration, which may comprise lamellar structures of oil molecules and additive material as well as agglomerated NiO nano-flakes. It increases the flow resistance and the possibility of the NiO nano-flakes settling down. The percentage of viscosity enhancement of the NiO-nanolubricants is estimated by the following formula: The percentage of viscosity decrement of the SN500 lubricant and of the different concentrations of NiO-nanolubricants is estimated by the following formula: Further, the enhancement in viscosity of SN500 lubricant due to the dispersion of nanomaterials has been reported by other authors. For example, Maheswaran et al. [34] studied the viscosity of a garnet-SN500 lubricant oil nanofluid with volume fractions of 0.25-0.75% and a temperature range from 30 °C to 75 °C. They reported a dramatic enhancement of the dynamic viscosity at 75 °C, from 16.40 cP to 32.93 cP, due to the dispersion of 0.75 wt% of garnet nanoparticles [34]. The relationship between the additive concentration and the operating temperature for the different additive concentrations of NiO-nanolubricants is shown in Figure 3. The increment in temperature causes the weakening of intermolecular adhesion forces, affects particle migration, decreases the internal shear stress and diminishes the average intermolecular forces of the SN500 lubricant and of the different concentrations of NiO-nanolubricants. Also, the forces between the molecules deteriorate as the temperature increases.
The weakening of the intermolecular adhesion forces enhances the probability of interaction between the NiO nano-flakes in the SN500 lubricant, where an increase in concentration results in more interaction between them due to the surface energy and the Brownian motion of the homogeneously dispersed nano-flakes. Hence, the viscosity of the NiO-nanolubricants and their dispersion stability over time and over the measured temperature range are significant parameters for any heat-transfer and lubrication application. The increase in nanoparticle concentration causes an attraction between the NiO nano-flakes which, with time, results in clustering and thus inhomogeneity in the prepared NiO-nanolubricants at high temperatures and high nanoparticle concentrations. At low temperature and low nano-flake concentration, the strong intermolecular forces resist additive-additive interaction, which reduces the possibility of agglomeration and sedimentation. Therefore, the viscosity of the SN500 lubricant and of all concentrations of NiO-nanolubricants decreases with increasing operating temperature. The relationship between the additive concentration, the operating temperature and the viscosity enhancement for the different additive concentrations of NiO-nanolubricants is shown in the corresponding figure. Further, the experimentally measured viscosities of the NiO-nanolubricants are compared with the theoretical equations proposed by other investigators. From a theoretical perspective, a thorough understanding of the viscosity of nanolubricants is a challenging task, as they do not behave precisely like ordinary two-phase fluids. Einstein [35] proposed a theoretical correlation to approximate the viscosity of a fluid containing a low concentration of spherical particles. Hatschek [36] proposed a related concentration-dependent correlation. Ijam et al. [17] developed a theoretical correlation, with less than 2% standard deviation, to estimate the concentration-dependent viscosity of graphene oxide nanosheet-DW/EG nanofluids based on fitting to the experimental data. Wang et al. [39] proposed a theoretical correlation to estimate the concentration-dependent viscosity of nanofluids. Pak and Cho experimentally investigated the viscosity of Al2O3 and TiO2 nanofluids and summarized a particle size- and concentration-dependent viscosity equation [40]. Tseng and Lin developed an expression to predict the viscosity of TiO2 nanofluids based on experimental data fitting [41]. The experimentally obtained temperature-dependent viscosity of the NiO-nanolubricants has also been compared with the theoretical equations proposed by other investigators. Nguyen et al. proposed a theoretical correlation to estimate the viscosity of nanofluids by considering the influence of nanoparticle concentration and operating temperature [44]. Yiamsawas et al. proposed a correlation to compute the viscosity of the EG-water mixture [45]. Zawawi et al. reported a correlation to determine the concentration- and temperature-dependent viscosity of nanolubricants with ±5% standard deviation [46]. Esfe et al. investigated the rheological behavior of ZnO-MWCNT/10W40 engine oil nanofluids at temperatures between 5 and 55 °C and proposed a correlation to forecast the dynamic viscosity of nanolubricants in terms of nanoparticle concentration, temperature and dynamic viscosity [47]. The margin of deviation of the predicted viscosity values is determined by the following equation.
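The correlations and the deviation measure referred to above are not reproduced in the text; the sketch below therefore only illustrates the kind of comparison being made, using two classical concentration-dependent models that are well known in this form (Einstein and Batchelor, the latter also cited in the discussion), a conventional percentage margin-of-deviation measure, and the viscosity enhancement and temperature-decrement percentages in their usual definitions. All numerical values are illustrative only.

```python
def viscosity_enhancement_pct(mu_nanolubricant, mu_base_oil):
    """Relative increase due to the dispersed NiO nano-flakes, in percent."""
    return (mu_nanolubricant - mu_base_oil) / mu_base_oil * 100.0

def viscosity_decrement_pct(mu_at_40C, mu_at_T):
    """Relative decrease with temperature, referred to the 40 C value, in percent."""
    return (mu_at_40C - mu_at_T) / mu_at_40C * 100.0

def einstein(mu_base, phi):
    """Einstein model for very dilute spherical suspensions: mu = mu_base*(1 + 2.5*phi)."""
    return mu_base * (1.0 + 2.5 * phi)

def batchelor(mu_base, phi):
    """Batchelor model, adding the pair-interaction term 6.2*phi^2."""
    return mu_base * (1.0 + 2.5 * phi + 6.2 * phi ** 2)

def margin_of_deviation_pct(mu_measured, mu_predicted):
    """Conventional percentage deviation of a prediction from the measurement."""
    return abs(mu_measured - mu_predicted) / mu_measured * 100.0

if __name__ == "__main__":
    mu_base, phi, mu_measured = 100.0, 0.01, 112.0  # hypothetical cP, volume fraction, cP
    print("enhancement:", round(viscosity_enhancement_pct(mu_measured, mu_base), 1), "%")
    print("decrement 40->90 C:", round(viscosity_decrement_pct(112.0, 38.0), 1), "%")
    for name, model in (("Einstein", einstein), ("Batchelor", batchelor)):
        pred = model(mu_base, phi)
        print(name, round(pred, 2), "cP, MOD =",
              round(margin_of_deviation_pct(mu_measured, pred), 1), "%")
```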
It is clear from Figure 5 that the above theoretical correlations fail to provide an accurate estimation of the measured viscosity of the NiO-nanolubricants, owing to differences in measurement techniques, surface chemistry, morphology, shear rate, etc.; especially when the solid phase fraction exceeds 0.5 wt%, the margin of deviation becomes prominent. The viscosity values estimated by the theoretical correlations of Einstein [35], Hatschek [36], Batchelor [37], Ijam et al. [17], Wang et al. [39], Pak and Cho [40], Tseng and Lin [41], Roscoe et al. [42] and Redhwan et al. [43] show an anomalous hike. These models determine the viscosity based on the weight fraction of the dispersed nano-flakes while ignoring other parameters such as particle size, operating temperature, particle shape, and density; even the effect of Brownian motion is not taken into consideration. Hence, it is necessary to build a theoretical model that accounts for the other factors affecting the viscosity. Temperature-dependent theoretical correlations are reported by Nguyen et al. [38], Yiamsawas et al. [45] and Zawawi et al. [46]; in these correlations the effect of nanoparticle weight fraction is not considered. Nguyen et al. [38] reported a theoretical correlation to estimate the viscosity of CuO nanofluids. The viscosity predicted by this equation decreases with increasing weight fraction of NiO nano-flakes. At 40 °C, the margin of deviation of the viscosity values predicted by Zawawi et al. [46] is 4.7%, and it increases with increasing weight fraction. The margin of deviation of the viscosity values predicted by Esfe et al. [47] varies from 4.38% to 9.96% as the weight fraction of the NiO-nanolubricants changes from 0.25 wt% to 1.50 wt% at 40 °C, whereas it varies from 5.5% to 2.8% over the same weight-fraction range at 50 °C. Hence, the theoretical correlation of Esfe et al. [47] can be used to predict the viscosity of NiO-nanolubricants up to 0.50 wt% of NiO nano-flakes and up to 50 °C. According to the comparison, the margin of deviation of the viscosity values predicted by Zawawi et al. [46] is less than 5% when the weight fraction of the NiO-nanolubricants varies from 0.25 wt% to 1 wt% in the measured temperature range. Hence, this model was used for estimating the viscosity of the NiO-nanolubricants because of its better agreement with the experimental data. CONCLUSIONS In this study, the factors that affect viscosity were discussed, including temperature and the size, shape, and aspect ratio of the dispersed particles. The NiO nano-flakes were prepared by the sol-gel technique. Their chemical constituents, size, and morphology were evaluated by EDS, XRD, and SEM. Different weight fractions of NiO-nanolubricants were prepared by dispersing NiO nano-flakes into SN500 lubricant. The viscosity results showed that increasing the weight fraction of NiO nano-flakes resulted in a significant viscosity enhancement, and the measured viscosity was further compared with different theoretical models. The theoretical correlation of Zawawi et al. [46] can be used to predict the viscosity of NiO-nanolubricants up to 0.25 wt% of NiO nano-flakes and up to 40 °C. The theoretical correlation of Esfe et al. 
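The paper's margin-of-deviation equation is not reproduced in this text; the sketch below assumes the common definition of percentage deviation of a predicted viscosity from the measured value, which is consistent with how the quoted 4.7% and 4.38-9.96% figures are described.

    def margin_of_deviation_percent(mu_predicted, mu_measured):
        # Percentage deviation of a correlation's prediction from the measurement
        # (assumed definition; the paper's exact equation is not shown here).
        return abs(mu_predicted - mu_measured) / mu_measured * 100.0

    # Illustrative values only (cP):
    print(margin_of_deviation_percent(mu_predicted=47.1, mu_measured=45.0))  # ~4.7 %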
[47] can be used to predict the viscosity of NiO-nanolubricants up to 0.50 wt% of NiO nano-flakes and up to 50 °C, whereas the theoretical correlation of Zawawi et al. [46] can be used to predict the viscosity with a margin of deviation of less than 5%.
3,564.4
2020-04-24T00:00:00.000
[ "Materials Science", "Engineering" ]
Solution Behavior Near Very Rough Walls Under Axial Symmetry: An Exact Solution for Anisotropic Rigid/Plastic Material : Rigid plastic material models are suitable for modeling metal forming processes at large strains where elastic effects are negligible. A distinguished feature of many models of this class is that the velocity field is describable by non-differentiable functions in the vicinity of certain friction surfaces. Such solution behavior causes difficulty with numerical solutions. On the other hand, it is useful for describing some material behavior near the friction surfaces. The exact asymptotic representation of singular solution behavior near the friction surface depends on constitutive equations and certain conditions at the friction surface. The present paper focuses on a particular boundary value problem for anisotropic material obeying Hill’s quadratic yield criterion under axial symmetry. This boundary value problem represents the deformation mode that appears in the vicinity of frictional interfaces in a class of problems. In this respect, the applied aspect of the boundary value problem is not essential, but the exact mathematical analysis can occur without relaxing the original system of equations and boundary conditions. We show that some strain rate and spin components follow an inverse square rule near the friction surface. An essential difference from the available analysis under plane strain conditions is that the system of equations is not hyperbolic. Introduction Most metallic materials are plastically anisotropic. The most common type of anisotropy is orthotropy. This type of anisotropy is induced, for example, by flat rolling of sheets. Rolled sheets are usually subject to subsequent sheet metal forming operations. Some of these operations induce intensive plastic deformation near frictional interfaces. An example of such an operation is the hole-flanging process [1]. Standard experimental methods are not adequate for determining material properties in regions of intensive plastic deformation [2]. In turn, new experimental methods require non-standard theoretical approaches. A prominent feature of several rigid plastic models is that some components of the strain rate tensor tend to infinity (or negative infinity) in the vicinity of certain frictional interfaces. This mathematical property of the strain rate tensor is in qualitative agreement with the distribution of strain rates that can induce a thin layer of intensive plastic deformation near the frictional interface. However, the precise asymptotic representation of solutions near frictional interfaces depends on the constitutive equations. Therefore, the corresponding asymptotic analysis should be carried out for each set of constitutive equations of interest. The first complete investigation of the behavior of singular rigid perfectly plastic solutions near frictional interfaces was done in [3]. It was shown that the quadratic invariant of the strain rate tensor is inversely proportional to the square root of the normal distance to the friction surface. A particular case of plane strain conditions was considered separately, as it reveals some features that are different from the general case. In contrast to the general case, the plane strain equations are hyperbolic. The general analysis is valid under plane strain conditions if the frictional interface is an envelope of characteristics and is not valid if it is a regular characteristic. 
The extension of the main result derived in [3] to anisotropic plasticity has been restricted to plane strain conditions [4]. As in isotropic plasticity, the equations are hyperbolic, and the derivation in [4] is based on the general slip-line theory developed in [5]. The present paper adopts Hill's quadratic yield criterion [6]. The corresponding system of equations, comprising the yield criterion, its associated flow rule, and the equilibrium equations, is not hyperbolic under axial symmetry. Therefore, the methodology used in [4] is not applicable in this case. In addition to the quadratic invariant of the strain rate tensor, the spin tensor is important for anisotropic materials [7]. Under plane strain and axial symmetry conditions, only one component of this tensor does not vanish. This component's behavior near frictional interfaces has been studied in [8], where it was shown that this component approaches infinity (or negative infinity) in the vicinity of the friction interface. The double slip and rotation model proposed in [9] was considered in that work in conjunction with the Mohr-Coulomb yield criterion. Therefore, the equations are hyperbolic under both plane strain and axial symmetry conditions. The surface near which the spin component is singular is an envelope of characteristics. The present paper provides an exact solution for an axisymmetric boundary value problem representing the mode of deformation in the vicinity of frictional interfaces with high friction stresses. The system of equations is not hyperbolic. The original boundary conditions include two frictional interfaces and assume that sticking occurs at each of these interfaces. It is then shown that this boundary value problem may have no solution. On the other hand, there is no physical reason for the non-existence of the solution to the boundary value problem. Therefore, the only possibility of bringing the mathematical result and physical insight into line is to assume that sliding occurs at one of the frictional interfaces. Doing so results in singular behavior of the quadratic invariant of the strain rate tensor and of the only non-vanishing spin component near the frictional interface where sliding occurs. Material Model The material is supposed to be rigid plastic (i.e., the elastic portion of the strain tensor is neglected). The boundary value problem to be solved is axisymmetric. Let θ be the azimuthal coordinate of a cylindrical coordinate system (r, θ, z) whose z-axis coincides with the axis of symmetry of the boundary value problem. Under axial symmetry, Hill's quadratic yield criterion for orthotropic materials reduces to the form given in [6]. Here σ_θθ is the circumferential stress and one of the principal stresses. In addition, σ_xx, σ_yy and σ_xy are the stress tensor components referred to the principal axes of anisotropy. The associated flow rule, given by the equations in (2), relates the strain rate components to these stresses through a plastic multiplier. Here ξ_θθ is the circumferential strain rate and one of the principal strain rates. Furthermore, ξ_xx, ξ_yy and ξ_xy are the strain rate tensor components referred to the axes X and Y; λ is a non-negative multiplier. The equations in (2) result in the equation of incompressibility, ξ_xx + ξ_yy + ξ_θθ = 0. Boundary Value Problem Consider an infinite hollow cylinder of inner radius a and outer radius b (Figure 1). The axis of the cylinder coincides with the z-axis of the cylindrical coordinate system. The orientation of the X-axis relative to the r-axis is ϕ. It is assumed that the radial stress is constant at the outer radius of the cylinder. 
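The reduced axisymmetric form of the yield criterion and flow rule introduced in the Material Model above is not reproduced in this text. For orientation, a sketch of the standard Hill (1948) orthotropic criterion written for the meridian-plane components used above (the out-of-plane shears vanish under axial symmetry), together with the associated flow rule and the incompressibility condition that follows from it, is given below; the anisotropy coefficients F, G, H, N and this particular arrangement are assumptions here, not quoted from the paper's Equation (1).

\[
F\,(\sigma_{yy}-\sigma_{\theta\theta})^{2}+G\,(\sigma_{\theta\theta}-\sigma_{xx})^{2}+H\,(\sigma_{xx}-\sigma_{yy})^{2}+2N\,\sigma_{xy}^{2}=1,
\qquad
\xi_{ij}=\lambda\,\frac{\partial f}{\partial \sigma_{ij}},
\qquad
\xi_{xx}+\xi_{yy}+\xi_{\theta\theta}=0,
\]

where f denotes the left-hand side of the criterion and λ ≥ 0 is the plastic multiplier.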
However, since the material is pressure-independent, the value of this constant does not change anything but the magnitude of the hydrostatic stress, which is not essential for the present paper's objective. Therefore, it is assumed that σ_rr = 0 for r = b, where σ_rr is the radial stress. The axial and shear stresses in the cylindrical coordinate system are denoted as σ_zz and σ_rz, respectively. The radial velocity u is prescribed at the inner radius of the cylinder: u = U for r = a. The axial velocity v is prescribed at both the inner and outer radii, which are friction surfaces. With no loss of generality, it is possible to assume that v vanishes at the inner radius of the cylinder. Then, v = 0 (6) for r = a and v = V > 0 (7) for r = b. The conditions (6) and (7) imply the regime of sticking at both friction surfaces. The stress and velocity are independent of both θ and z. Therefore, the equilibrium equations in the cylindrical coordinate system reduce to ∂σ_rr/∂r + (σ_rr − σ_θθ)/r = 0 and ∂σ_rz/∂r + σ_rz/r = 0. The components of the strain rate tensor in this coordinate system are ξ_rr = ∂u/∂r, ξ_θθ = u/r, ξ_zz = 0 and ξ_rz = (1/2) ∂v/∂r. The transformation equations for the stress components in a meridian plane express σ_xx, σ_yy and σ_xy in terms of σ_rr, σ_zz and σ_rz; the inverse transformation is obtained in the standard way. General Solution The second equation in (8) can be immediately integrated to give σ_rz proportional to 1/r, where k is a constant of integration. The direction of the axial velocity at r = b demands that k is positive. Equation (3) is equivalent to an alternative form. It is convenient to introduce auxiliary quantities, and it is evident that they satisfy simple identities. It follows from the third equation in (9) and from (13) that a further relation between the strain rate components holds. One can eliminate the strain rate components in this equation by means of (2). Then, using (16), the last equation in (11), together with (14) and (16), can be combined; it follows from (2) and (16), using (9) and (15), and then from (9), (15) and (23), that the governing system reduces to relations in a single variable. One can solve equations (17) and the subsequent relations by integration; here, μ is a dummy variable of integration. Using (11), the corresponding stress components follow. It remains to determine k involved in the solution (14). The solution (27) and the boundary condition (7) lead to (30); since the integrand involves k, (30) is the equation for determining this parameter. A difficulty is that this equation may have no solution. Analysis of the General Solution The value of ϕ varies in the range starting from ϕ = 0. Special Case ϕ = 0 The physical sense of this special case is that the principal axes of anisotropy coincide with the coordinate lines of the cylindrical coordinate system. Equations (25) and (26) simplify accordingly. Substituting (34) into (37) at ϕ = 0 gives the corresponding expressions, respectively. The integral in (38) is evaluated in terms of elementary functions. As a result, applying l'Hospital's rule to the right-hand side of the resulting equation, one can find the limiting solution. This solution corresponds to the expansion of the cylinder without shearing. It is now assumed that the ratio V/U increases from V/U = 0. Differentiating the right-hand side of (39) with respect to k gives an expression that it is convenient to rearrange. At ϕ = 0, the axial velocity is determined from (27). It follows from (9), (12), and (15) that an expression for the velocity gradient is obtained. Using (48), (49), and (55), it is possible to express the right-hand side of this equation as a function of r. This solution, together with the boundary condition (7), again provides the equation for determining the remaining constant. The general structure of equations (21) and (65) is similar, and it is straightforward to determine the distribution of the radial stress from (29) using (65) and (67). Conclusions An axisymmetric boundary value problem for material obeying Hill's orthotropic, quadratic yield criterion has been formulated and then solved in closed form. The problem and its solution are not feasible for experimental verification. 
This research's primary objective is to reveal some qualitative mathematical features of the solution that are independent of the specific boundary value problem. The boundary value problem involves two frictional interfaces. Of particular interest is the behavior of the solution near these interfaces. The original formulation of the boundary value problem requires that the regime of sticking occurs on each interface. However, this regime is not compatible with the other boundary conditions at certain values of the input parameters. In this case, the solution exists only if the regime of sliding is permitted at one of the frictional interfaces. It is worthy of note here that the friction law usually controls this change of friction regimes. However, in the case under consideration, the material model and the other boundary conditions do. The parameter k introduced in (14) plays a central role in this behavior. It follows from (24) and (58) that the derivative ∂v/∂r is inversely proportional to the square root of the distance to the friction surface. Therefore, the shear strain rate and spin components in the cylindrical coordinate system follow the inverse square root rule shown in (47). This rule is valid if the normal strain rate in the direction orthogonal to the friction surface does not vanish at the surface (i.e., ξ_rr ≠ 0 at r = a in the boundary value problem solved). The singularity above causes difficulties with numerical solutions [11,12]. Efficient numerical methods, such as the extended finite element method [13], should account for the asymptotic representation (47). Numerous experimental results confirm that material properties generated by material forming processes in the vicinity of frictional interfaces are very different from those in the bulk (for example, [14,15]). The present paper's main result shows that the material model considered predicts high gradients of the shear strain rate and spin components near frictional interfaces, which may be connected to high gradients of material properties near such interfaces. Conflicts of Interest: The authors declare no conflict of interest.
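As a compact restatement of the asymptotic behavior discussed in the conclusions (and described in the abstract as an inverse dependence on the square root of the normal distance), a sketch of the assumed asymptotic representation near the friction surface r = a is given below; s denotes the normal distance to that surface and C, C_ω are constants independent of s, introduced here for illustration rather than quoted from the paper's Equation (47).

\[
\xi_{rz} = \frac{C}{\sqrt{s}} + o\!\left(s^{-1/2}\right), \qquad \omega_{rz} = \frac{C_{\omega}}{\sqrt{s}} + o\!\left(s^{-1/2}\right), \qquad s \to 0,
\]

so that the shear strain rate and the only non-vanishing spin component both diverge like s^{-1/2}; numerical schemes resolving the near-interface layer would need to account for this behavior.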
2,798.6
2021-01-24T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Directional Migration of Recirculating Lymphocytes through Lymph Nodes via Random Walks Naive T lymphocytes exhibit extensive antigen-independent recirculation between blood and lymph nodes, where they may encounter dendritic cells carrying cognate antigen. We examine how long different T cells may spend in an individual lymph node by examining data from long term cannulation of blood and efferent lymphatics of a single lymph node in the sheep. We determine empirically the distribution of transit times of migrating T cells by applying the Least Absolute Shrinkage & Selection Operator () or regularised to fit experimental data describing the proportion of labelled infused cells in blood and efferent lymphatics over time. The optimal inferred solution reveals a distribution with high variance and strong skew. The mode transit time is typically between 10 and 20 hours, but a significant number of cells spend more than 70 hours before exiting. We complement the empirical machine learning based approach by modelling lymphocyte passage through the lymph node . On the basis of previous two photon analysis of lymphocyte movement, we optimised distributions which describe the transit times (first passage times) of discrete one dimensional and continuous (Brownian) three dimensional random walks with drift. The optimal fit is obtained when drift is small, i.e. the ratio of probabilities of migrating forward and backward within the node is close to one. These distributions are qualitatively similar to the inferred empirical distribution, with high variance and strong skew. In contrast, an optimised normal distribution of transit times (symmetrical around mean) fitted the data poorly. The results demonstrate that the rapid recirculation of lymphocytes observed at a macro level is compatible with predominantly randomised movement within lymph nodes, and significant probabilities of long transit times. We discuss how this pattern of migration may contribute to facilitating interactions between low frequency T cells and antigen presenting cells carrying cognate antigen. Introduction The anatomy of the immune system is unusual in that a number of its cellular components are motile. Antigen independent recirculation is a characteristic feature of vertebrate, and especially mammalian, adaptive immune systems, allowing optimum opportunities for cells to interact. In particular, recirculation of the lymphocyte pool is necessary so that rare, antigen-specific T lymphocytes have the best chance to encounter a dendritic cell (DC) carrying their cognate antigen. Mackay et al. [1] describe two major pathways of recirculation in sheep, depending on the function and phenotype of the T lymphocyte. The vast majority of cells arrive at the LN via high endothelial venules (HEVs) and depart via the efferent lymphatics. Unless the T lymphocyte encounters an antigen specific for its receptor, the T lymphocyte will exit the LN via the efferent lymph vessels which drain into the principal lymphatic vessel, the thoracic duct, and continue their recirculation by re-entering blood flow. An alternative route exists, namely departing blood at peripheral vascular beds, especially at sites of inflammation and reaching the LN via the afferent lymphatics. Most lymphocytes in afferent lymphatics are of effector phenotype, whereas naive (or central memory) predominate in efferent lymphatics. 
Antigen-presenting cells such as DCs carry antigen from surrounding tissue via afferent lymphatic vessels to the LN, where the antigen is presented to T cells. Since infection is often local, DC migration together with T cell recirculation maximises the chance that a naive T lymphocyte will locate its cognate antigen. Within the past decade, it has become possible to observe individual labelled lymphocytes moving within lymph nodes using two photon microscopy. In contrast to the directional flow of cells seen at a macro level, T cell movement within the lymph node appears to be predominantly random [2,3] with no clear evidence of directionality. However, the measurements of individual lymphocytes obtained by microscopy are typically only over short times and distances, since the cells migrate rapidly outside the field of view. A robust study of lymphocyte migration therefore needs to bridge the microscopic and macroscopic view points. A number of previous studies have extended the timescale of the microscopy observations using insilico modelling. Even prior to the introduction of two photon technology for tracking lymphocyte movement, Stekel [4,5] used differential equations to model three compartments of T cell recirculation (blood, spleen and the lymphatic compartment), and used this to derive a time course for the appearance of T lymphocytes in thoracic duct blood. Stekel linked the macroscopic properties of his model to the microanatomy of the lymph node, by proposing that transit times through lymph nodes are determined by adhesive interactions with dendritic cells which slow T cell migration [5]. Beltman et al. [3] combined twophoton microscopy imaging of the in vivo LN with a cellular-Potts based model of T lymphocyte migration to determine the effects of the topology of a LN on the migratory characteristics of T lymphocytes. In bestowing the insilico T lymphocytes with a preferred direction of motion which corresponds to their recent displacement, T lymphocytes exhibited`persistent motion on a short timescale, random motion on a long timescale, and large velocity fluctuations with apparent periodic pausing'. Bogle and Dunbar [6] also investigate a 3D lattice-based model to simulate T cell behaviour in the paracortex at realistic cell densities, successfully reproducing a cell motility coefficient that matches estimates made in previous studies. More recently, two studies [7,8] have specifically investigated random walk migration models in the context of two photon data on lymphocyte movement through lymph nodes. The studies listed above have mostly focused on obtaining long term predictions of lymphocyte migration times derived from models based on microscopic observations. In this study we adopt a complementary approach, to infer the distribution of lymphocyte transit times directly from long term observations of bulk migration through a single lymph node. Rather than impose an a priori model, we use machine learning approaches to infer this distribution from experimental data obtained by cannulating individual lymph nodes and collecting lymph over long periods. We use observations from a sheep model, since single lymph node cannulation, on which this approach is based, cannot be carried out in the rodent models explored in the previous studies. A brief analysis of this data has been published previously [9]. 
We further demonstrate that the inferred distribution of transit times inferred from the data is compatible with predicted distributions generated from a discrete Markov chain random walk model of migration, and the inverse Gaussian distribution which describes Brownian motion first passage times [10]. The results of our analysis provide an estimate both of mean transit time, and of the distribution of transit times observed, and suggest that random movement of lymphocytes within a lymph node is sufficient to account for bulk flow even in larger immune systems such as that of the sheep. Estimating the Distribution of Transit Times in Blood The data analysed in this study were collected from sheep in which both blood and the efferent lymphatic from prescapular (15 data sets) or popliteal (2 data sets) lymph nodes were cannulated (see Materials and Methods) for a minimum of 100 hours. An initial sample (at least 10 9 ) of cells in the efferent lymph (and hence primarily drawn from the recirculating naive lymphocyte population as shown in several previous publications e.g. [1]) was labelled with CFSE, and then infused back into the circulation via the carotid artery. The cells from efferent lymph were almost entirely composed of lymphocytes, of which around 70% were T lymphocytes (data not shown) [11]. After infusion of labelled lymphocytes back into the circulation, blood and lymph samples were then collected at various time intervals, and the proportion of labelled cells was analysed by flow cytometry. A representative flow cytometry histogram of CFSE expression in lymphatic lymphocytes is shown in fig. 1a and a representative timecourse (raw data) from one experiment (sheep R705) is shown in fig. 1b and 1c. Samples were collected less frequently at later time points, when the rate of change in cell numbers was smaller. We first estimated the distribution of times that the labelled cells spent in blood. Under the assumption that lymphocytes leave blood at a rate proportional to their concentration, we fitted an exponential model of the form N(t)~N 0 e {Et to each individual for the initial eight hours post infusion, where N is the percentage of lymphocytes in blood at time t hours, with N 0 and E to be fitted. We thought it reasonable to fit the first eight hours post infusion as only a small proportion of total labelled cells will have exited the lymphatics and returned to the circulation before this time (see fig. 1b, and [12,13]). The estimated mean lifetime of lymphocytes in blood in each sheep before trans-endothelial migration is given in Table 1. Calculating the Distribution of Naive T lymphocyte Migration Times in Individual Ovine LNs The objective of our analysis was to learn the probability distribution of times which individual cells spend within the lymph node. The underlying assumption on which the analysis is based is that the number of cells which leave the lymph node at time t is the sum of cells entering the lymph node at time (t{t), multiplied by the proportion of cells p which spend time (t) in the lymph node, summed over all the possible t. The set of p values then defines the probability distribution of times which individual cells spend within the lymph node. The number of labelled cells entering the lymph node at time t was assumed to be proportional to the proportion of labelled cells in blood at time t, while the number leaving at time t was assumed to be proportional to the proportion of labelled cells in lymph. 
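The exponential blood-decay model described above, N(t) = N_0 e^{-Et} fitted over the first eight hours, can be sketched in a few lines of Python; the time points and percentages below are hypothetical stand-ins for one sheep, not data from the study, and the mean lifetime is taken as 1/E for an exponential decay.

    import numpy as np
    from scipy.optimize import curve_fit

    def blood_decay(t, N0, E):
        # Percentage of labelled cells in blood: N(t) = N0 * exp(-E * t).
        return N0 * np.exp(-E * t)

    # Hypothetical measurements (hours, % labelled cells):
    t_obs = np.array([0.1, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0])
    n_obs = np.array([2.1, 1.8, 1.5, 1.1, 0.6, 0.35, 0.2])

    (N0_hat, E_hat), _ = curve_fit(blood_decay, t_obs, n_obs, p0=(2.0, 0.3))
    print("estimated mean lifetime in blood (hr):", 1.0 / E_hat)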
Since the frequency of sampling was less at later time points, missing data points were estimated using either linear or cubic spline interpolation (example shown in fig. 1d and 1e). Both blood and lymph time profiles were relatively smooth at these late time points, making the data less sensitive to alternative interpolation methods. The probability distributions obtained using either interpolation method were similar (see fig. S1), and only the results obtained using linear interpolation are shown below. The set of probabilities (p) was then inferred using LASSO as described in Material and Methods. This approach was preferred intially, since it made no a priori assumptions about the shape of the distribution to be inferred. At the same time, it allowed the total distribution to be constrained to sum to 1, thus creating a bona fide probability distribution. A representative example of the distribution of p obtained is given in fig. 2a, along with the predicted and observed efflux in the same individual ( fig. 2b). A similar analysis was carried out on data from seventeen cannulations, and the fit of the distributions we inferred from these cannulations is shown in Table 2. The distributions typically showed expected migration times in the range of 17 to 45 hours (Table 2), with a median expected migration time of 30.2 hours. This data is in broad agreement with previous estimates, giving confidence to the subsequent analysis. A noticeable feature of all the distributions was that they were strongly right-skewed, with characteristically long tails, and associating significant probabilities with migration times in excess of 50 hours. Many distributions indicated the possible existence of multiple peaks in migration times (as indicated by arrows in fig. 2a). The existence of multiple peaks is also evident in fig. 2c, where for each two-hour interval, the probability for migration to occur in that interval derived from each cannulation was used to create 95% confidence intervals for the piecewise means. Generalising the Distributions of Naive T lymphocyte Migration Times in the Ovine LN In order to obtain a more generalised solution to our analysis, we concatenated data sets and smoothed the output using S{LASSO as detailed in Materials and Methods. We chose three data sets at random and implement the S{LASSO for 0ƒmƒ5000 to determine the optimal m on this concatenated training set. The random sampling of data sets, concatenation and S-LASSO analysis were repeated 50 times (random sampling with replacement). The same analysis was then performed by randomly selecting and concatenating nine data sets for the training set. The distributions with the best predictive power in each case are given in fig. 3a and fig. 3b. Fig. 3c and fig. 3d show 95% confidence intervals for the piecewise probabilities obtained from the optimal distributions from each of the fifty samples. The size of the training set and the optimal value of the smoothing parameter m are inherently linked. An increase in training set size will usually result in a reduction in the optimal value of m, as an increase in training set size will reduce the risk of over-fitting, and hence reduce the level of smoothing necessary. Based on randomly concatenating a training set of three data sets, the optimal distribution overall was given by an optimal value of m~77, and a corresponding average sum of squared error (SSE) per data set in the test set of 0.61. The average value of m over the optimal 50 distributions was 52.62. 
Forming a concatenated training set of nine data sets, we found the the optimal value of Table 1. Mean lifetime of labeled T cells in blood. Sheep Mean Lifetime in Blood (hrs) m~28, and the corresponding average SSE per data set in the test set was 0.66. The average value of m over the optimal 50 distributions was 24.5. Importantly, both analyses retain the long tail of higher transit times observed in most of the individual profiles, and also the presence of multiple peaks. Thus, these features are likely to represent true features of the underlying system, rather than computational artifacts arising from over fitting. To provide further statistical support for the existence of a long tailed distribution, we fitted, via LASSO, distributions where the last n parameters of the distribution were constrained to zero, thereby incrementally restricting a distribution with longer tails. As we increased n, we found the total SSE on a test set of 8 randomly chosen datasets increased, and increased at a faster rate than by constraining a random set of n parameters (data not shown), indicating that the long tails observed in the original distributions were a key characteristic. An additional possible confounder was the possibility that the distribution seen arose from the combination of two or more populations of cells with very different migration properties (although previous studies have clearly demonstrated that almost all T lymphocytes in efferent lymph are naive, rather than memory phenotype [1]). Therefore we additionally stained for CD4 expression in a sample (n = 4) of sheep, and calculated probability distributions for the CD4 CFSE+ cells specifically ( fig. 3e). The distribution showed similar characteristics to the overall unfractionated population. The Relationship between the Empirical Probability Distributions and Random Walk Models of T Lymphocyte Migration The distribution of transit times obtained above requires inference of a whole set of parameters p i , i[f2,4,::,100g. We wished to see whether the observed data could be fitted as well by using a priori distributions with smaller number of parameters. In light of the previous work exploring random walk models (e.g. [7,8]), we first examined whether a simplified Markov chain random walk model of migration would capture the characteristic shape of transit times inferred above. We initially used a discrete one dimensional model, which captured movement along the dimension between lymphocyte entry at the HEVs and exit via the medulla into the efferent lymphatic. This model is similar to the one dimenional model analysed by Stekel [5]. However, migration perpendicular to the dimension of flow was approximately captured by allowing a probability that the cell stays on the same vertex at each time point ( fig. 4a). Since this parameter is adapted to best fit the data, we hypothesised that our 1-dimensional model would capture some of the influence of the other dimensions and of the geometry of the lymph node, although we recognise that this will introduce some errors compared to a full three dimensional migration model (see below). We calculated the probability distribution of times taken to travel from the first to the last vertex (first passage times). The model is governed by four parameters, as shown in fig. 4a, and in Materials&Methods. We initially optimised over (b,c,dt,n) for 0ƒb,cƒ0:9 to 1 decimal place, 20ƒnƒ200 vertices and 1ƒdtƒ10 minutes. The optimal parameter set obtained from this analysis gave (b,c,dt,n)~(0:4,0:2,4,20). 
We then fixed dt and n to these optimal values, and optimised b and c to 2 decimal places. Each pair of these parameter values gives a corresponding distribution of migration times, and predictions based on this distribution give a corresponding SSE over the test set. The distribution that gave the lowest SSE over the test set is shown in fig. 4b. A heat map showing log(a/b) vs. c and the corresponding SSE is given in fig. 5, where areas of red represent a high SSE (bad fit) and areas of blue represent a low SSE (good fit). Fig. 5 shows that minimal SSE over a range of c values occurs when log(a/b) ≈ 0 (i.e. a ≈ b). In other words, optimal fit to the experimental data occurs when the probabilities of forward and backward movement are approximately equal: a random walk without drift. The net flow of lymphocytes through this model is therefore driven not by directional migration, but only by the fact that entry and exit from the lymph node are unidirectional. The predicted efflux data, based on the optimal probability distribution shown in fig. 4b, are compared to actual efflux data for one representative experiment (fig. 6). The random walk model captures the qualitative features of LN efflux following i.v. introduction of labeled lymphocytes into a sheep. The sharp rise in the percentage of labeled lymphocytes during the first six to ten hours post-infusion is followed by a peak in lymphocyte detection in efferent lymphatics at around 30 hours. This peak is then followed by a gradual decline to an equilibrium. The one-dimensional random walk can be considered a discretized special case of Brownian motion; recognising, however, that movement in the lymph node is in three dimensions, we chose to explore the inverse Gaussian distribution, which describes the probability distribution of first passage times for Brownian motion with drift (eqn. 1) [10,14]. Optimising the parameters of this distribution by minimising the SSE of predicted versus observed efflux in a training set of data, and then evaluating on a test set (see Methods), we noted that the SSE decreased as the drift parameter of the distribution was decreased, with optimal fit being obtained at values of low positive drift (n = 1). The random walk models above were compared to a directed migration model in which T cells entering at time t move through the lymph node together directionally, with normally distributed times. We therefore predict LN efflux by optimising the parameters μ and σ of a normal distribution of transit times over the same training set and compare SSE over the same test set. Fig. 6 shows the predictions of lymph output based on optimised parameters for a discrete step random walk, the inverse Gaussian distribution and the normal distribution. The discrete one-dimensional random walk and the inverse Gaussian distribution give very similar predictions, and follow the shape of the data well. LN efflux inferred from normally distributed T cell migration times was a poorer predictor, showing a delay in reaching peak levels, and also a larger concentration of cells appearing at this peak. Table 3 shows a summary of the performance of the four models discussed above, by comparing the SSE (predicted versus experimental efflux) obtained on the eight data sets in the test set. Each model is optimised for parameters learnt from the training set. The normally distributed model of migration times shows the worst fit to the data overall. 
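The discrete first-passage-time distribution described above can be approximated by direct Monte Carlo simulation; the sketch below reads the garbled optimal parameter set in the text as (b, c, dt, n) = (0.4, 0.2, 4 min, 20 vertices) and assumes the parameterisation that a cell moves forward with probability a = 1 − b − c, backward with probability b, and stays put with probability c at each step, with no backward step possible from the first vertex. These conventions are assumptions about fig. 4a, not quotations from it.

    import numpy as np

    def first_passage_times(b, c, dt, n, n_cells=20000, seed=0):
        # Monte Carlo first passage times (hours) from vertex 1 to vertex n
        # of a 1D random walk with forward prob a = 1 - b - c, backward prob b,
        # and staying prob c per time step of dt minutes (assumed scheme).
        rng = np.random.default_rng(seed)
        a = 1.0 - b - c
        times = np.empty(n_cells)
        for i in range(n_cells):
            pos, steps = 1, 0
            while pos < n:
                r = rng.random()
                if r < a:
                    pos += 1
                elif r < a + b and pos > 1:
                    pos -= 1
                steps += 1
            times[i] = steps * dt / 60.0  # minutes -> hours
        return times

    t = first_passage_times(b=0.4, c=0.2, dt=4, n=20)
    print("median / mean transit (hr):", np.median(t), t.mean())

The resulting distribution is right-skewed with a long tail, qualitatively matching the LASSO-derived distributions; scipy.stats.invgauss could be used in the same way to evaluate the continuous inverse Gaussian alternative.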
The multimodal LASSO derived distribution, the one dimensional Markov chain model and the inverse Gaussian distribution all behave in a rather similar fashion, with no one model providing consistently the best estimate. Further data, perhaps at higher temporal resolution, will be required to distinguish these alternative models. By changing one entry in the stochastic matrix that defines the random walk stochastic matrix, we can model the steady state distribution of migrating lymphocytes within the lymph node during periods without antigenic challenge. During such periods, the number of lymphocytes leaving the LN is equal to the number entering it, since lymph node size remains constant. The stochastic matrix that defines the system becomes The optimal parameter sets determined above (i.e. lowest RSS on test) were tested in the modified stochastic matrix under stationary conditions (i.e. steady state where entry and exit balance). Lymphocyte movement governed by this optimal parameter set (see fig. 5) predicted an accumulation of cells nearer the HEVs (vertex 1 in the model) in the lymph node and a gradual decline in cell concentration closer to the medulla region of the LN (node n) ( fig. 7). This prediction is in agreement with experimental data [15], where T lymphocytes are observed to congregate near HEVs, which is thought to maximise chance encounters with cognate DCs. Those parameter combinations which poorly predict LN efflux (dashed line with crosses and dashed line with squares) predict far greater accumulation near HEVs, to such an extent that T cells are barely present in many other cross-sectional regions. Discussion In this study we present the results of machine learning and model based analysis of lymphocyte migration data obtained from long term cannulation of sheep lymphatics. The major conclusion of the study is that the rate of migration of T cells through an individual lymph node is very heterogeneous, and is consistent with models in which individual T cells within the node move randomly from the point of entry to the point of exit. In the first part of the paper we derive a probability distribution of transit times directly from bulk migration data, using constrained least squares LASSO. The modal transit times were observed to lie between 10 and 20 hours, while expected (mean) transit times ranged from 24 to 44 hours ( Table 2). These estimates broadly agree with those published in the literature. Early studies in sheep, cannulating the efferent lymphatic of a single node [16] but using radiolabelling rather than fluorescent label, observed maximum radioactivity in the efferent lymphatics between 27 to 36 hours, and a subsequent rapid fall in radioactivity. A more recent study [17], using fluorescently labelled lymphocytes showed a much slower fall in the percentage of labelled cells, closer to that observed in the present study. The loss of radiolabel from cells may have contributed to an artificial shortening of detection times seen in the earlier studies. In rodents, where cannulation of individual lymph nodes has not been possible, most studies have measured the overall migration from blood through the lymphatic system by cannulating the thoracic duct. Ford & Simmonds [18] studied recirculation in the rat, estimating modal LN migration times to be between 16 and 18 hours, and Sprent [19] studied the mouse, with maximum percentage of cells in efferent lymphatics between 24 and 30 hours (varying in different mice). 
It is difficult to really compare the quantitative estimates of modal transit times between these studies, since passage to the thoracic duct may include serial passage though lymph node chains, and relatively long passage times though the draining lymphatic network. Nevertheless, there seems to be no evidence of any clear relationship between transit times and size (e.g. sheep and mice lymph nodes differ by more than three orders of magnitude). Although at first sight surprising, the observation may be understood in terms of the equation under the assumption of steady state when input and output to the node are equal, where E(t) is the mean transit time, N total is the total number of cells in the LN and N in is the number of cells entering the LN per hour. Thus in larger species, a faster inflow (greater blood flow to individual lymph nodes, and hence a greater area of HEV, for example) may offset the larger volume of lymph node. The most striking outcome of the analysis of transit times using LASSO is the high variance, breadth and skewness of the inferred probability distributions. The estimated probabilities of transit times less than 2-4 hours is low, presumably reflecting the minimum time required for a lymphocyte to leave blood within a HEV, move through the lymph node and reach the efferent lymphatic. A recent study suggested that in the mouse this distance may be quite short, because T cells exit via medullary sinuses which extend deep into the T cell zone [8]. Unfortunately, similar detailed lymph node anatomy is not available in the sheep. However, the distributions (both individual, or derived by pooling data or smoothing the distributions), consistently demonstrate a long tail of transit times, with significant numbers of cells taking 70 to 100 hours to leave the lymph node. Thus, as noted by Stekel [4,5] in his analysis of rat migration data, significant 'mixing' or retardation of lymphocytes occurs during passage though the node. The cellular mechanisms which might give rise to the observed distribution are discussed in more detail below. An unexplained feature of the transit time distributions 'learned' from the data by the constrained regression analysis (LASSO) is the existence of several secondary probability modes. Possible explanations for the features could include heterogeneous entry/ exit points (e.g. HEVs at different distances from the efferent lymphatics, and hence different path lengths for migration), or the presence of heterogenous populations of cells. The vast majority of T cells within the efferent lymphatics are naive cells [20] and gating on CD4 cells did not change the overall pattern of the distribution observed. However, we cannot completely rule out the possibility of further heterogeneity within the naive CD4 T cell population and further experiments will be required to distinguish these possibilities. Multimodality emerged as a robust feature of the LASSO learning algorithm, persisting in analysis of concatenated data sets, and after computational smoothing. They are unlikely to represent artifacts introduced by interpolation since different interpolation methods give rise to similar distributions. However, the multimodal distributions do not in fact provide consistently better estimates of efflux data than some of the unimodal models considered. The biological significance of this finding will therefore require further analysis. 
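The steady-state relation quoted above, mean transit time = N_total / N_in, can be illustrated with purely hypothetical numbers (none of these values are measurements from the paper):

    # Steady state: cells entering per hour equal cells leaving per hour.
    N_total = 2.0e9   # hypothetical recirculating cells resident in one node
    N_in = 8.0e7      # hypothetical cells entering the node per hour
    mean_transit_hr = N_total / N_in
    print(mean_transit_hr)  # 25 hours, the same order as the reported mean transit times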
Several previous studies have used insilico modeling to predict lymphocyte migration times across lymph nodes, based on hypotheses of lymphocyte migration behaviour. An early example [4,5] which predated the advent of two photon imaging of lymphocytes in vivo proposed that lymphocytes travelling through the lymph node were slowed by competitive reversible binding to limited sites on dendritic cells via adhesion molecule interactions. Although this model accurately predicted the results of cannulation experiments in normal and lymphopoenic rats, more recent microscopic studies suggest that interactions between naive T cells and dendritic cells are very short lived [21]. Instead, the two photon data (e.g. [2,3]) suggest that T cell migration in the lymph node is made up of a series of steps at high velocity (in the order of 10-30 mm/minute for 2-5 minutes) followed by an abrupt change of direction and another period of high velocity movement in another direction. Although T cells may show some directional behaviour in the immediate vicinity of an antigen-bearing DC [2,22], the overall observed migration of individual T cells is largely non-directional [7,23]. This conclusion has been supported by insilico studies demonstrating that random walk models of T cell movement, coupled with information on lymph node architecture, successfully predict movement of murine lymphocytes through lymph nodes [7,8]. In the light of this previous modelling literature, and of the two photon microscopy data, we tested some simplified random walk models of lymphocyte migration using the same set of sheep migration data. Following the studies of Stekel [5], we initially used a one dimensional discrete MC model (shown in fig. 4a), although we recognise the model fails to capture all the complexity of the three dimensional architecture of real lymph nodes. A computationally straightforward three dimensional equivalent is provided by the analysis of first passage times for Brownian motion with drift, which is described by the inverse Gaussian function [10,14]. Interestingly, in both one dimension and three dimensions, the best fit was obtained when drift is close to zero i.e. when the relative probability of forward and backward movement is close to one. Under these conditions, the qualitative shape of the passage time distribution was similar to that inferred directly from the data, showing high variance and strong skew towards long transit times. In contrast, optimised a priori distributions with more symmetric features (random walk with strong drift, or directional migration represented by a normal distribution) were much less effective at predicting the observed migration data. Although the insilico models we investigated clearly remain an over simplification, since they do not capture the details of lymph node architecture, random migration with minimal drift or direction seems to be a consistent feature emerging from the data. The recirculation of naive T cells is now widely believed to function in order to expedite the interaction between a rare antigen-specific T cell and a dendritic cell presenting its cognate antigen. Optimisation of this search strategy is therefore likely to play an important part in determining migration behaviour. Since the distribution of T cell clones through the circulation is random, a rapid and unidirectional passage of T cells across a population of dendritic cells might appear to be the optimal way of selecting antigen specific clones. 
Nevertheless, a number of factors contribute towards favouring a slower passage though lymph nodes. Each time T cells leave a lymph node they spend a significant time in circulation, with a median expected time of 7.3 hours (Table 2). Generally, an increased ratio of time spent within lymph nodes to time spent recirculating in blood favours the likelihood of an antigen specific interaction. However, the rate of antigen specific DC/T cell interactions is determined by the number of T cells sampled by a DC per unit time. Since a dendritic cell can interact with multiple T cells, and given a fixed distribution of sampling times [2], optimal search efficiency requires a fixed ratio of T cells to DC within a node. If one assumes that the number of dendritic cells within a node is determined a priori by the size of the area drained by the afferent lymphatic system to that node, the optimal number of T cells required at any one time to optimally service all the dendritic cells of the node will be similarly fixed. Since the mean transit time is related to the total number of recirculating cells within the lymph node (see eqn. 4), the long mean transit time observed may reflect the overall kinetics required to ensure there are sufficient T cells within the node to service the resident DC population. Interestingly, this interaction may also explain the fall in lymph node output following antigen stimulation, since antigen stimulation is accompanied by a rapid increase in numbers of lymph node DC, requiring a corresponding increase in transiting T cells within the node. This model, in which dendritic cells dictate T cell numbers, and hence T cell migration kinetics, is closely related to the model of T cell migration explored by Stekel [5], although as discussed above we do not have to posit any long term dendritic cell/T cell interaction. In conclusion, we extend previous computational analysis of lymphocyte migration to a sheep experimental model, where it was possible to follow migration through a single lymph node over several weeks. The distributions demonstrate a very heterogeneous range of lymphocyte transit times compatible with random walk models of migration within the node, in which overall bulk transit of lymphocytes is achieved by directional input and output only. The study thus adds to the emerging consensus that migration of T lymphocytes though the lymph node is predominantly random, even in larger animals such as the sheep where distances of migration are significantly larger than in the mouse. Random migration ensures good mixing of transiting lymphocytes with resident antigen presenting cells, and hence may facilitate rare antigen specific encounters. Experimental Methods for Data Collection The experimental procedures are published in more detail as part of a PhD thesis [24]. The experimental protocol was approved by The Australian National University's Animal Ethics Committee. The experiments were carried out at the John Curtin School of Medical Research, Australian National University, Canberra, Australia. After completion of experiments, sheep were euthanized by an intravenous bolus injection of phenobarbitone solution, according to the Australian national University Animal Ethics guidelines. Cannulation procedures. and five years were kept in metabolic cages. The cages allowed animals to lie and stand freely, and provided free access to water and food. 
Animals were kept caged for at least 3 days prior to surgery to allow them to accustom themselves to the experimental laboratory environments. Sheep were anaesthestised (thiopentone sodium, 4 mg per kg animal weight) and a size 9.0 cuffed endotracheal tube was introduced into the trachea of the sheep to maintain its airway during surgery. A gas mixture of 1-3% halothane and 100% oxygen was administrated via the endotracheal tube from a Boyle's anaesthetic apparatus. Under full anaesthesia, wool covering the operative area was removed using an animal clipper with a fine comb blade. In each animal, data was collected from two locations, namely the external jugular vein and either the prescapular or popliteal lymh node. The popliteal lymph node, was cannulated using the operation described by Hall and Morris [25]. The operation to cannulate the prescapular lymph node has been described by Pederson and Morris [26]. The desired efferent lymphatic vessel was located and the direction of lymph flow was identified. If more than one lymphatic vessel was found, the largest one was chosen to be cannulated while the others were tied off. The efferent vessel was carefully dissected to remove all surrounding fat and tissues and ligated with 3 metric silk at the distal end of the efferent vessel as far from the node as possible. A second, loose, ligature was placed around the vessel about 1-2 cm below the first ligature to provide sufficient length for cannulation. The cannula was slowly inserted into the lymphatic vessel until its tip passed through the point of the second ligature. Then the second ligature was firmly tied. Additional ligatures were sometimes required to maintain the alignment of the cannula and ensure there were no obstructions. At the final step, the flow of lymph was checked to ensure that it was running freely before the wound was closed. Benzyl penicillin was applied to the wound. The skin was closed with Michel wound clips without any suture to Merino ewes aged between three muscles or aponeurosis. The external cannula was secured to the skin by sutures with 1 metric silk. Efferent lymph was allowed to drain freely under internal pressure within the lymphatic vessel. The technique of percutaneous vascular catheterisation follows the procedure described by Seldinger [27]. It allows catheter entry into an area without an incision as a needle is used to introduce the catheter. Thus it minimises injuries to surrounding tissues. A 14gauge needle was introduced through the wall of the external jugular vein, followed by the advanced portion of a wire guide. The needle was withdrawn while the wire guide was in place. A 16-gauge catheter was introduced into the vessel with a twist motion along the wire guide. The wire guide was removed and the catheter was secured to the skin of the sheep by multiple sutures with 1 metric silk. Normal saline solution was used to flush the catheter and blood was drawn to ensure that there was a free flow within the catheter. General anaesthesia was normally terminated at the time of skin closure. Sheep were allowed to recover from the influence of general anaesthesia in the operating theatre. Then they were moved to metabolic cages for full recovery. The endotracheal tubes were left in place during the recovery period to prevent any possible airway aspiration and removed after post-operative evaluation. 
To prevent clot formation in draining lymph, a sterile solution of Ethylene Diamine Tetra Acetic acid, at a concentration of 100 mg/ml was continuously infused by a peristaltic pump and mixed with draining efferent lymph via a three way glass connector to achieve a final concentration in the lymph of approximately 2 mg/ml. A sterile mixture of 0.9% normal saline solution and heparin at a final concentration of 50 units/ml was continuously infused by a peristaltic pump at a rate of approximately 10 ml/hr through the indwelling intravenous catheter to prevent blood clot formation at the tip of the catheter. Lymph collection and labelling. nimals were allowed to recover for a few days after the operation before starting an experiment. Overnight lymph (i.e. about 12-18 hours) from the cannulated efferent lymph vessel was collected in polypropylene bottles, standing in a container partly filled with ice. Lymphocytes were labelled with fluorescent dye under sterile conditions. Lymphocytes (10 9 minimum) were washed with Phosphate Buffered Saline (PBS) three times. The final concentration of the lymphocyte suspension was adjusted to approximately 5|10 7 cells/ml. 5-(and-6)-carboxyfluorescein diacetate, succinimidyl ester, (CFSE), was mixed with the lymphocyte suspension at a concentration of 3-5 per 5|10 7 lymphocytes. and incubated in a warm bath at 37uC for 15 minutes with occasional mixing. An equal volume of cell-free lymph was added to stop the reaction. Lymphocytes were washed 3 times and resuspended in PBS. If cell viability or staining (as measured by flow cytometry) was less than 90% the experiemnt was terminated. CFSE labelled lymphocytes (generally about 1-2610 9 cells) were infused back into donor animals via the indwelling intravenous catheter as a bolus within a minute followed by 10-20 ml of normal saline to flush any lymphocytes adherent to the tubing. For each blood sample, approximately 3-5 ml of venous blood was manually drawn from the indwelling venous catheter into a 5 ml sterile plastic syringe containing 0.5 ml of 2% EDTA. Heparinised normal saline was used to flush the remaining blood in the tubing after sampling. The frequency of blood sampling was greater during the first few hours and decreased after that. In most experiments, blood sampling was undertaken at 2, 5, 10, 20, 30 minutes, and 1, 2, 3, 4, 6, 12 and 24 hours after infusion, and thereafter every 24 hours until the end of the experiment (i.e. about 7 to 12 days depending on the conditions of each individual animal and experiment. The efferent lymph (well mixed with 2% EDTA) was allowed to drain freely and collected into 15 ml plastic containers using a fraction collector programmed to collect the lymph every 2 to 8 hours. A period of about 15-30 minutes was required for collection of each sample of efferent lymph, depending on the lymph flow rates in individual animals at the time of experiment. The volume of draining efferent lymph was recorded and the lymphocyte concentration was counted using a haemocytometer every 12-24 hours to determine the total number of lymphocytes drained from the efferent lymph during that period. Lymphocytes from lymph samples were washed three times in PBS with 2% Bovine Serum Albumin, 2 mg/ml EDTA and 0.1% Azide (PBS/BSA/EDTA/Az) and analysed by flow cytometry. Blood samples were treated with a mixture of 0.83% ammonium chloride and 0.17 M Tris buffer (9:1) to remove red blood cells. In some experiments, CD4 expression was measured before subsequent fixation. 
The cells were labelled using standard indirect immunofluorescence protocols. The monoclonal anti-CD4 antibody SBU-T1 (25.91), obtained from the Centre for Animal Biotechnology (School of Veterinary Science, The University of Melbourne, Victoria, Australia), was used as the first layer and visualised using a phycoerythrin-labelled anti-mouse immunoglobulin (Silenus Laboratories, Australia) as the second layer. All flow cytometry analysis was done on a FACScan flow cytometer. Samples were gated on forward and side scatter to exclude debris and cell clumps. Approximately 20,000-50,000 cells were analysed for each sample. All flow cytometry studies were undertaken within a week of the cells being fixed. Propidium iodide at a final concentration of 10 mg/ml was used to stain lymphocytes before and after labelling with CFSE to determine their viability. Least squares regression. Blood was sampled more frequently during the first twenty-four hours of the experiment and at twenty-four hour intervals thereafter, so we estimated labelled lymphocyte counts between the later, more widely spaced collection points. We used both linear and cubic spline interpolation (as shown in Fig. 1), and both methods gave similar probability distributions. We assume that T cells enter the lymph node at a rate proportional to their frequency in blood, and that individual T cells then travel through the lymph node, exiting after a transit time τ. We model the different transit times τ as a discrete series of 2-hour intervals (i.e. 0, 2, 4, 6, …). The proportion of T cells that spend time τ within the lymph node before exiting is denoted by p_τ. Our objective is to obtain a best estimate of the distribution p. The proportion of labelled cells in the blood at time t, the input, is denoted by b_t. The proportion of labelled cells exiting in the efferent lymphatic at time t, the output, is denoted by l_t. l_t will then be proportional to a weighted sum of b_0, b_2, …, b_t, with the weights given by the proportions of cells with transit times τ = t, …, τ = 2, τ = 0 (denoted by the vector p). Thus, the percentage of labelled lymphocytes in efferent lymph can be expressed concisely in matrix notation as:
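The matrix expression itself appears to have been lost in extraction. A plausible reconstruction, assuming the discrete convolution structure described above (the published form may differ in normalisation), writes the output as a lower-triangular matrix of interpolated blood values acting on the transit-time distribution:

$$
l_t \;\propto\; \sum_{\tau = 0,\,2,\,\ldots}^{t} b_{t-\tau}\, p_{\tau}
\qquad\Longleftrightarrow\qquad
\begin{pmatrix} l_0 \\ l_2 \\ l_4 \\ \vdots \end{pmatrix}
\;\propto\;
\begin{pmatrix}
b_0 & 0 & 0 & \cdots \\
b_2 & b_0 & 0 & \cdots \\
b_4 & b_2 & b_0 & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}
\begin{pmatrix} p_0 \\ p_2 \\ p_4 \\ \vdots \end{pmatrix},
$$

with p then estimated by constrained least squares, as the subsection title indicates.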
DeepBhvTracking: A Novel Behavior Tracking Method for Laboratory Animals Based on Deep Learning Behavioral measurement and evaluation are broadly used to understand brain functions in neuroscience, especially for investigations of movement disorders, social deficits, and mental diseases. Numerous commercial software and open-source programs have been developed for tracking the movement of laboratory animals, allowing animal behavior to be analyzed digitally. In vivo optical imaging and electrophysiological recording in freely behaving animals are now widely used to understand neural functions in circuits. However, it is always a challenge to accurately track the movement of an animal under certain complex conditions due to uneven environment illumination, variations in animal models, and interference from recording devices and experimenters. To overcome these challenges, we have developed a strategy to track the movement of an animal by combining a deep learning technique, the You Only Look Once (YOLO) algorithm, with a background subtraction algorithm, a method we label DeepBhvTracking. In our method, we first train the detector using manually labeled images and a pretrained deep-learning neural network combined with YOLO, then generate bounding boxes of the targets using the trained detector, and finally track the center of the targets by calculating their centroid in the bounding box using background subtraction. Using DeepBhvTracking, the movement of animals can be tracked accurately in complex environments and can be used in different behavior paradigms and for different animal models. Therefore, DeepBhvTracking can be broadly used in studies of neuroscience, medicine, and machine learning algorithms. INTRODUCTION Behavior measurement and evaluation is one of the key methods to understand brain functions in neuroscience, especially with respect to movement and social behaviors. Different behavior paradigms (e.g., treadmill, open field, y maze, water maze, elevated plus maze, and three-chambered maze) have been developed and used to evaluate the movement, anxiety, social behavior, disease development, sleep disorder, the effect of medication, etc., of an animal (Feng et al., 2020;Yu et al., 2020). More importantly, with technical developments in electrophysiological recording, optical imaging, and optogenetics manipulating in freely behaving animals, we can study brain function in neural microcircuits. To understand the behavior of an animal systematically, it is essential to accurately and quickly quantify the movement of the animal (e.g., direction, speed, distance, and range of motion). However, due to the complexity of laboratory conditions and interference from the camera or other experimental devices used in behavioral recording, it is a significant challenge to track animal locomotion efficiently and precisely. 
Given the importance of movement tracking of laboratory animals, numerous open-source programs and commercial systems have been developed for recording and analyzing animal behavior, such as Limelight (Actimetrics, USA) (Jimenez et al., 2018;Ishii et al., 2019;Takemoto and Song, 2019), ANY-maze (Stoelting Co, USA) (Morin and Studholme, 2011;Rodrigues et al., 2019;Feng et al., 2020;Scarsi et al., 2020), EthoVision XT (Noldus, The Netherlands) (Noldus et al., 2001;Yu et al., 2020), TopScan (Clever Sys Inc., USA) (Grech et al., 2019;Griffiths et al., 2019), Super-maze (Shanghai Xinruan Information Technology Co., China) (Hao et al., 2012;Qiao et al., 2017), and others (Samson et al., 2015;Gulyás et al., 2016;Bello-Arroyo et al., 2018;Hewitt et al., 2018). In previous studies, motion tracking in videos captured by a video camera has been the most common and lowest-cost approach for tracking multiple parameters. Most tracking methods are based on background subtraction algorithms and were developed for rodents. With such an algorithm, an accurate route map can be calculated and drawn based on the contour of the target in a high-definition video image. However, these algorithms are subject to breakdown as both experimental paradigms and laboratory conditions become more complex. In addition, modern technical methods such as electrophysiological recording, optical imaging, and optical stimulation are now widely used with freely behaving animals. High background noise then becomes a significant problem, making it difficult to quantify the movement of an animal. The use of background subtraction algorithms alone often cannot effectively separate the target from the high background noise. To overcome these challenges, many alternative methods, such as microwave Doppler radar (Giansanti et al., 2005) and RFID technology (Lewejohann et al., 2009;Catarinucci et al., 2014), have been proposed for tracking animal motion. However, those systems tend to involve additional devices attached to the head of the animal, which may be unstable or restrict the flexibility of the animal's movement. Also, those systems are expensive and difficult for the user to modify because of their high integration and low flexibility. Recently, researchers in the field of computer vision have advanced a number of algorithms to process image data, including some novel solutions for the detection of moving animals and humans. Some machine learning algorithms have shown high precision in object detection, such as deformable parts models (Felzenszwalb et al., 2010;Unger et al., 2017), R-CNN (Girshick et al., 2014), and deep neural networks (Geuther et al., 2019;Yoon et al., 2019). Using those algorithms, several toolboxes were developed for precisely calculating the postures of laboratory animals during movements, such as DeepLabCut (Mathis et al., 2018;Nath et al., 2019), LEAP (Pereira et al., 2019), DeepPoseKit (Graving et al., 2019), TRex (Walter and Couzin, 2021), and DANNCE (Dunn et al., 2021;Karashchuk et al., 2021), which greatly simplify and speed up the analysis of multiple behaviors. Although these algorithms may be used to track the gross movement of an animal, they are time-consuming and insufficiently accurate for our purposes because the exact centers cannot be obtained in the process of creating a training dataset. YOLO is a new generation of deep learning algorithm based on convolutional neural networks (CNN) for object detection (Redmon et al., 2016;Redmon and Farhadi, 2017).
Compared with R-CNN, a previous detection algorithm that selects a region of interest (ROI) for possible targets and then identifies targets by classification, YOLO transforms the detection process to a regression problem, predicting the coordinates of the bounding box of the target and classifying the probabilities (p-value) of the target directly from the full image through a single network, making it easier to optimize for better performance. Using YOLO, we can predict both the species and locations of experimental animal subjects in a video. However, YOLO only provides the position of an area around the animal (bounding box) instead of the actual position of the animal. The range or position of the bounding box may change abruptly between two sequential frames of the video, even with subtle animal movement. Therefore, it is also difficult to accurately track the position of an animal using only YOLO. Considering the advantages and disadvantages of previous algorithms, we postulated that if YOLO and the background subtraction method were combined, the animal motion could be tracked more accurately and efficiently. In this study, we propose a laboratory animal behavior tracking method named "DeepBhvTracking" based on both a deep learning algorithm (YOLO) and a background subtraction algorithm. To successfully track animal motion, we first obtain the approximate location of the experimental animal by drawing a bounding box using YOLO, and then, we measure the position of an animal based on the background subtraction algorithm. With our method, movement can be tracked under complex conditions accurately and quickly. All codes with respect to DeepBhvTracking are open-source; the scripts can be customized, and different experimental animal detection models can be easily trained. Overall, DeepBhvTracking is a widely applicable and high-powered behavior tracking method for laboratory animals. Materials All experimental procedures were approved by the Animal Use and Care Committee of Zhejiang University following the National Institutes of Health (NIH) guidelines. Adult wild-type C57BL/6 mice (n = 6) were used for most experiments. For movement comparison, mice with PRRT2 (n = 6) and FMR1 (n = 6) mutations were used. Our tracking method was also tested in adult common marmosets (n = 3). In this study, all test videos were taken using a standard webcam (1,080 p for origin videos). For fast processing speed, original videos were read and resized to the resolution of 360 * 640 * 3 pixels for further analysis. Model Training 1. Image labeling: Because YOLO is a supervised algorithm, manually labeled images are required for model training (Redmon et al., 2016). To provide training data using a wide range of animal behaviors, images with RGB format were extracted randomly from videos, and a rectangular region around the animal was marked in each image using the Image Label App in the computer vision toolbox of MATLAB. The details of using the Image Label App are available online at https://www.mathworks.com/help/vision/ ref/imagelabeler-app.html. After testing under laboratory conditions, we found that around 300 labeled images were usually required for accurate detection under each condition. 2. Image preparation: To test prediction accuracy, the dataset of labeled images was randomly divided into three sets: training (70%), validation (10%), and test (20%). Labeled images in the training set were transformed and resized, including color distortion and information dropping for broad adaptability. 
The original validation and test sets were retained to evaluate the accuracy of the model. 3. Detector training: In this task, a pretrained deep neural network was transferred and combined with the YOLO algorithm for target detection. For YOLO, the images were normalized to the same resolution for feature extraction. Previous studies indicated that the normalized image size and differences among the pretrained networks have a significant impact on the accuracy and speed of training and tracking. To test this possibility, different pretrained networks including resnet18, mobilenetv2, and resnet50 were tested for each of the following normalized image sizes: 224 * 224 * 3, 320 * 320 * 3, 416 * 416 * 3, and 512 * 512 * 3. In this task, the detector was trained by mini-batch gradient descent (batch size is 16 frames), and the parameters of the network were updated after several iterations via back-propagation. The number of total epochs is 20 and the learning rate is 0.0001. Detailed principles and algorithm derivation follow from previous studies (Redmon et al., 2016;Redmon and Farhadi, 2017). After training, the detector was used for tracking evaluations. Video Tracking The position of an animal was tracked by combining the deep learning algorithm YOLO with a background subtraction algorithm. Our strategy was to define the bounding box of the target using YOLO and then to obtain the centroid of the target by background subtraction inside the bounding box. Background subtraction tracked moving animals through a pixel-by-pixel comparison of the present image with a background image, as described in detail by others (Barnich and Van Droogenbroeck, 2011). First, to avoid interference from objects outside the maze, we manually defined the tracking area and set areas outside the tracking area to the background color. Second, the detector trained by YOLO was used to track the position of the animal with a bounding box. In many cases, multiple boxes were detected in one image. In this case, the bounding box with the highest p-value was chosen for future use. Next, the bounding box was enlarged 1.5 times to completely cover the whole animal. Finally, a traditional background subtraction method was used to obtain the contour of the animal in the bounding box, and the centroid of the animal was calculated based on the contour. Laboratory Animal Tested by DeepBhvTracking To evaluate the effectiveness of our tracking method-DeepBhvTracking, black or white mice and marmosets were tested in different behavior paradigms: open field, elevated plus maze, L maze, inverted V-shape maze, and treadmill. We also compared the performance of four tracking methods (background subtraction, YOLO detection, DeepLabCut, and DeepBhvTracking) in three classical behavior paradigms with different noise levels: open field, L maze, and three-chambered maze. Open field is a high signal-to-noise ratio scenario without the interference of wires or operation of an experimenter; in contrast, L maze involves interference from electric wires and hands of the experimenter because of behavior training and calcium imaging. Three-chambered maze is a low signal-tonoise ratio scenario due to the similar color between mice and background. To avoid bias from the training dataset on the performance of different algorithms, 300 images extracted from six videos based on a K-means algorithm were used to train the models of the three deep learning methods: YOLO detection, DeepLabCut, and DeepBhvTracking. 
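The per-frame tracking step described in the Video Tracking paragraph above can be sketched roughly as follows. This is a simplified illustration in Python/OpenCV rather than the authors' MATLAB implementation; `detect_with_yolo` stands in for whatever trained detector is used and is assumed to return boxes with confidence scores.

```python
import cv2
import numpy as np

def track_frame(frame, background, detect_with_yolo, enlarge=1.5, diff_thresh=30):
    """Return the animal centroid: YOLO bounding box + background subtraction.

    `detect_with_yolo(frame)` is assumed to return a list of
    (x, y, w, h, score) boxes; `background` is an animal-free reference image.
    """
    boxes = detect_with_yolo(frame)
    if not boxes:
        return None
    # Keep the detection with the highest confidence score.
    x, y, w, h, _ = max(boxes, key=lambda b: b[4])

    # Enlarge the box (e.g. 1.5x) so it fully covers the animal.
    cx, cy = x + w / 2, y + h / 2
    w, h = w * enlarge, h * enlarge
    x0, y0 = max(int(cx - w / 2), 0), max(int(cy - h / 2), 0)
    x1 = min(int(cx + w / 2), frame.shape[1])
    y1 = min(int(cy + h / 2), frame.shape[0])

    # Background subtraction restricted to the enlarged box.
    roi = cv2.absdiff(cv2.cvtColor(frame[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY),
                      cv2.cvtColor(background[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(roi, diff_thresh, 255, cv2.THRESH_BINARY)

    # Centroid of the foreground pixels via image moments.
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return (cx, cy)  # fall back to the box centre if nothing is segmented
    return (x0 + m["m10"] / m["m00"], y0 + m["m01"] / m["m00"])
```

Restricting the background subtraction to the enlarged box is what keeps wires, hands, and other out-of-box clutter from contaminating the centroid estimate.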
In this study, we also compared movement differences among movement-deficit mice (PRRT2, FMR1) and wild-type mice. The movement of each animal was recorded for 8 min and tracked by DeepBhvTracking. Comparison With Other Methods To test tracking efficiency, we compared the training time, tracking speed, error to ground truth, and movement speed of animals detected by the different methods. To compare training speed, both DeepLabCut and DeepBhvTracking models were trained using the same dataset, pretrained neural network (resnet50), and parameters (image number: 300; batch size: 8; iterations: 2000). Tracking speed reflected the video processing speed (frames/second, fps). Error to ground truth was used to estimate the distance between the real location and the estimated location of the target. Statistical Analysis Error bars in all figures represent mean ± SEM, and the number (n) of samples employed is indicated in the legends. All data were analyzed by ANOVA followed by LSD test for multiple comparisons, * indicates p < 0.05, * * indicates p < 0.01, * * * indicates p < 0.001. Training and Tracking Optimization In this task, the position of an animal was tracked by combining pretrained neural networks with the YOLO algorithm and background subtraction (Figure 1). To accurately track the position of different kinds of animals, a well-trained detector was required. We found that the training time was longer, and the tracking accuracy decreased if we trained the model on different kinds of animals together (data not shown). Additionally, we found that the tracking accuracy was highly correlated with the color of the animals and uncorrelated with behavior paradigms. So, we trained the detectors separately for different kinds of animals, namely, white mice, black mice, and marmosets (Supplementary Table 1). Initially, the tracking accuracy was low due to the training dataset that was randomly chosen; those images did not accurately represent the complex postures of the animals. To address this, we added a feedback method to merge undetected images in the training dataset for a better detector. After several iterations, an improved detector was achieved (Figure 1A). We used 1,991 labeled images from different behavior assays for detector training in black mice, 1,458 images in white mice, and 400 images in marmosets (Supplementary Table 1). During tracking, we found that the deep learning algorithms only provided a bounding box around the target, where the tracking center is the center of the bounding box instead of that of the animal. Moreover, occasionally multiple bounding boxes were obtained or the bounding box did not completely cover the animal, causing the location of the tracking center to change abruptly, resulting in a discontinuous motion trace after analysis. To overcome these limitations, we first detected the bounding box of the target by YOLO at a low threshold. Then, we enlarged the bounding box to completely cover the animal. Third, the contour of the animal was calculated based on background subtraction in the region of the enlarged bounding box (Figures 1B left,C). Last, the centroid of the animal was determined from the center of the contour (Figures 1B right,C). Three pretrained deep neural networks were evaluated with different image sizes. We found the training time increased with increasing image size (Figure 2A). Of the three neural networks, resnet50 took the longest time at all image sizes (Figure 2A) during detector training (two-way ANOVA, F = 360.75, p < 0.001). 
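For the method comparison described above, the two headline metrics can be computed along the following lines. This is a sketch under the assumption that error to ground truth is the mean Euclidean distance to manually annotated centres, which the text does not spell out; function and variable names are illustrative.

```python
import time
import numpy as np

def error_to_ground_truth(estimated_xy, ground_truth_xy):
    """Mean Euclidean distance (pixels) between estimated and annotated centres."""
    estimated_xy = np.asarray(estimated_xy, dtype=float)
    ground_truth_xy = np.asarray(ground_truth_xy, dtype=float)
    return float(np.mean(np.linalg.norm(estimated_xy - ground_truth_xy, axis=1)))

def tracking_speed(track_fn, frames):
    """Processing throughput in frames per second for a tracking function."""
    start = time.perf_counter()
    for frame in frames:
        track_fn(frame)
    return len(frames) / (time.perf_counter() - start)
```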
In the detection step, tracking speed (two-way ANOVA, F = 92.93, p < 0.001) and accuracy (two-way ANOVA, F = 197.00, p < 0.001) were highly correlated with image size (Figure 2B, see legend). Compared with the other networks, resnet50 showed higher precision, and resnet18 showed faster processing speed (Figure 2B). Considering the tradeoff between speed and precision, we selected resnet50 at a resolution of 480 * 480 * 3 pixels (one of the preset parameters used during detector training) as the pretrained deep neural network for constructing our detector. However, resnet18 at 512 * 512 * 3 pixels was also a potentially useful network for simple scenarios (e.g., mice in an open-field maze) because it had a high detection speed at relatively high precision. The number of training images used in those three detectors is shown in Supplementary Table 1. [Figure 2 caption fragment: because of over-fitting, resnet50 could not be trained at an image size of 512 pixels, so 480 × 480 images were used instead; detection precision increased for all networks with increasing image size, and resnet50 at 480 pixels was used for all conditions in our training model.] Smooth Movement Map Tracked by DeepBhvTracking In vivo imaging and electrophysiological recording in freely behaving animals are widely used to understand the neural mechanisms of a particular behavior. Inherent in these techniques are human interference and recording wires that may be captured by the video camera during motion tracking. To address these issues, we compared the tracking accuracy and speed of four tracking methods in three tasks with different kinds of environmental noise (Figure 3). In these tasks, the open field is a simple task without other observable interference; the L maze involves wire and hand interference due to the cable used for freely moving calcium imaging; and the three-chambered maze involves a white mouse in a brightly lit room, resulting in low contrast between target and environment (Figure 3). To test training efficiency, we first compared the training time and tracking speed of DeepLabCut and DeepBhvTracking using the same dataset, pretrained neural network (resnet50), and parameters (image number: 300; batch size: 8; iterations: 2000). DeepLabCut showed slower speed than YOLO during both the training stage (Figure 3I, two-way ANOVA followed by Bonferroni's test, p = 0.029) and the tracking stage, compared with the other deep-learning-based algorithms (Figure 3J, two-way ANOVA, p < 0.001), in the same computing environment. First, in high background noise conditions, we found that obvious tracking errors were obtained with the background subtraction method alone and that more than 1/10 of the frames in one video required manual tracking (Figures 3A,E). Using this method alone, the target mouse could be marked outside of the tracking area in some frames due to erroneous calculation of the center based on detected artifacts (Figure 3A). Second, we found that the bounding box around the position of an animal can be easily captured using YOLO, with better performance (Figures 3B,F). However, the center of the bounding box does not accurately reflect the position of the animal, as shown by obviously abnormal motion in the trace (Figures 3B,F, red arrows). Clearly, there are multiple rectangles in the tracking trace, which arise from the rapid reorientation of the bounding box.
DeepLabCut tracked the center of mice directly and performed well in the three-chambered task (Figure 3G), but there are multiple incorrect frames detected in L-maze which arise from the periodic detection of the hand of the experimenter ( Figure 3C). Also, this method cannot exclude systematic error introduced during training dataset preparation (human-defined centroid of the animal). Using DeepBhvTracking, the movement trace is smooth and most accurately represents the actual movement of the animal (Figures 3D,H). Statistically, compared with DeepLabCut, we found that errors to ground truth, which was used to estimate the distance between the real location and the estimated location of the target, decreased both in the open field ( Figure 3K, one-way ANOVA, F = 23.93, p < 0.001; Figure 3L, one-way ANOVA, F = 7.886, p = 0.001) and L-maze ( Figure 3K, L maze, one-way ANOVA, F = 10.70, p < 0.001) conditions. Movement speed was lowest in the open field and L maze (Figure 3L, p < 0.01) compared with the other three methods. While the YOLO detection algorithm could avoid the interference of wire and hand, the trajectory was not smooth enough because the center of the bounding box does not represent the center of animals (Figures 3B,F,K). DeepLabCut and DeepBhvTracking have similar performances in the three-chambered maze ( Figure 3L, LSD test, p = 0.871). However, errors to ground truth tracked by DeepLabCut were higher than DeepBhvTracking (LSD test, p = 0.040) in the L maze; this may be due to the inaccurate center position of the animal during training dataset construction. In summary, DeepBhvTracking can provide a relatively precise tracking result with fast processing speed in a variety of paradigms (Figure 3). Widely Applicable Tracking in Different Paradigms and Animal Models by DeepBhvTracking To check the applicability and flexibility of DeepBhvTracking in different paradigms, black C57BL/6 mice were tested in different environments including open field (Supplementary Figure 1A), L maze (Figure 3C), treadmill (Figure 4A), elevated plus maze (Figure 4B), and inverted V-shape maze ( Figure 4D). We obtained smooth movement traces for all conditions (Figures 3C, 4A,B,D; Supplementary Figure 1A). It is worth noting that DeepBhvTracking achieved good performance even in a low target-to-background contrast such as black mice on treadmill ( Figure 4A) or white mice in a white environment ( Figure 4C). Moreover, in the treadmill assay, animals run only in a restricted area because it will be punished by an electric shock if it falls behind the treadmill. It is usually very difficult to calculate the movement speed of the animal when performing neuronal decoding. Our DeepBhvTracking method overcomes this challenge and achieves a smooth movement trace in treadmill conditions. White mice were trained separately and were tracked in a three-chambered box ( Figure 4C) and open field ( Figure 4F). Accurate movement tracking was achieved for both conditions. In addition, the movement of marmosets in a 1 m 3 home cage was tested and a clear movement map was achieved by DeepBhvTracking. Finally, two animals were tracked by labeling each animal with a different color sticker during video recording ( Figure 4F blue and orange trace), which indicated our method may also be adapted to social behavior analysis. Hence, DeepBhvTracking is easy and feasible to use with different animal models and different behavior paradigms. 
DeepBhvTracking Can Be Used to Test Movement and Emotion Deficits The open-field test is one of the most widely used paradigms for assessing locomotor activity and anxiety in rodents. To further test the effectiveness of our tracking method, we performed open-field tests in C57BL/6 wild-type mice and in two widely used movement-deficit mutants: PRRT2 (Chen et al., 2011) and FMR1 (Baba and Uitti, 2005) (Supplementary Figure 1). For each animal, we first drew the movement trace obtained by DeepBhvTracking (Supplementary Figure 1A) and manually checked that no frames were lost. Then, spatial pseudo heat maps of movement time (Supplementary Figure 1B) and speed (Supplementary Figure 1C) were calculated. We found that animals stayed longer at the corners than in the central area and ran faster in the middle of the open field (Supplementary Figures 1B,E). Moreover, both mutants ran faster in the open field than the wild-type animals (Supplementary Figure 1D, one-way ANOVA, F = 4.356, p = 0.025; Supplementary Figure 1F, two-way ANOVA, F = 16.15, p < 0.001). The open field can also be used to assess the anxiety level of an animal based on the time spent in the corners or the center. Based on our tracking method, PRRT2 mutant mice stayed in the corners for a shorter time than wild-type animals (Supplementary Figure 1E, two-way ANOVA, F = 18.11, p < 0.001). These results may indicate a low anxiety level for PRRT2 mutants under our open-field conditions. Further experiments should be performed to confirm this result. DISCUSSION Accurate behavioral measurement and evaluation are key steps in pharmacology, neuroscience, and psychological studies. However, commercially available software and open-source programs have many limitations, especially when experiments are performed under complex environmental conditions. To overcome these difficulties, we designed DeepBhvTracking to track the position of an animal by combining deep learning using the YOLO algorithm with background subtraction, implemented in the widely used software MATLAB. By incorporating the YOLO detection algorithm, detection effectiveness is improved by generating a bounding box around the tracked animal (Figures 3B,D,F,H). Simultaneously, background subtraction was applied within the bounding box to acquire an exact location of the animal, which corrects for the slight position deviation inherent in YOLO alone (Figure 3). We have previously used several commercial software packages (Limelight, ANY-maze) and open-source programs. Although they have superior GUIs, their accuracy is insufficient in dim light and complex environments, and they are confounded by interruptions of the recording process, for example by a human hand. Time-consuming manual tracking is required under these circumstances. In addition, such software can only be used under certain predefined conditions and is exceedingly difficult to modify for new environments. Recently, several algorithms based on deep learning, such as DeepLabCut (Mathis et al., 2018;Nath et al., 2019), LEAP (Pereira et al., 2019), and DeepPoseKit (Graving et al., 2019), have been developed to estimate the posture of an animal during movement. Undoubtedly, those methods can obtain detailed movement information about the targets and have been broadly used in multiple studies (Dooley et al., 2020;Huang et al., 2021).
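The spatial dwell-time and speed maps mentioned above can be obtained by binning the tracked centroid trace. The following is a minimal sketch, not the authors' MATLAB code; the frame rate, arena size, and bin count are assumed values.

```python
import numpy as np

def occupancy_and_speed_maps(xy, fps=30.0, arena_size=(400, 400), bins=20):
    """Bin a tracked centroid trace (N x 2, pixels) into spatial maps.

    Returns dwell time (seconds) and mean speed (pixels/s) per spatial bin.
    """
    xy = np.asarray(xy, dtype=float)
    # Frame-to-frame speed, assigned to the frame where each step starts.
    step = np.linalg.norm(np.diff(xy, axis=0), axis=1) * fps
    speed = np.append(step, step[-1])

    edges_x = np.linspace(0, arena_size[0], bins + 1)
    edges_y = np.linspace(0, arena_size[1], bins + 1)

    # Dwell time: frame counts per bin divided by the frame rate.
    counts, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=[edges_x, edges_y])
    dwell_time = counts / fps

    # Mean speed per bin: speed-weighted counts divided by plain counts.
    speed_sum, _, _ = np.histogram2d(xy[:, 0], xy[:, 1],
                                     bins=[edges_x, edges_y], weights=speed)
    with np.errstate(divide="ignore", invalid="ignore"):
        mean_speed = np.where(counts > 0, speed_sum / counts, np.nan)
    return dwell_time, mean_speed
```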
However, these methods have intrinsic limitations for accurate estimation of the centroid of irregular animal targets during training dataset preparation (human-defined center of an animal). [Figure 4 | Movement tracking in diverse paradigms using DeepBhvTracking; panel (A) shows a black mouse in the treadmill task, a low-contrast condition in which the animal runs only in a restricted area because it has been trained not to fall behind the treadmill.] Also, tracking speed is very slow with those methods (Figure 3J). Although DeepBhvTracking is also a supervised algorithm, we used background subtraction to correct the systematic error of training dataset preparation. So, DeepBhvTracking is stable, more accurate (Figure 3), less susceptible to background noise, and suitable for different kinds of animals and behavior paradigms (Figure 4). Furthermore, using a feedback training strategy, one can easily improve the detector by adding more labeled images. In addition, DeepBhvTracking also takes advantage of a background subtraction algorithm that defines the centroid of an animal more precisely. With these improvements, we can further increase the tracking accuracy and effectiveness. Finally, DeepBhvTracking is capable of tracking two animals in one video (Figure 4F) if the animals are marked with different colors; this indicates that the method is also feasible for the study of social behavior. As a new method, DeepBhvTracking is well-suited to detect multiple types of animals in different scenarios (Figure 4), and it is straightforward to train or optimize the detector according to individual needs. Based on the position tracked by DeepBhvTracking, the movement distance, the elapsed time, and the speed of the animal can be calculated easily (Supplementary Figure 1). It is worth noting that DeepBhvTracking can only track the whole-body centroid of an animal; there is no information regarding head direction or body parts. However, the contour of the animal is retained, which makes it possible to define finer details, such as the head or tail. In addition, the body parts of animals could be tracked by labeling them with different colors. For example, if we label the nose of a mouse with a red mark and its tail with a green mark, the location of the nose or tail could be tracked by DeepBhvTracking as long as the detector was previously trained to recognize the red and green marks separately. CONCLUSION We have designed a strategy to track the centroid of an animal combining deep learning with the YOLO algorithm and background subtraction, a tool we call DeepBhvTracking. With this improved method, the motion of laboratory animals can be tracked accurately in a variety of different behavioral paradigms. This in turn offers the potential to speed up many studies in neuroscience, medicine, and related fields. DATA AVAILABILITY STATEMENT The data and code that support the findings of this paper are available at GitHub (https://github.com/SunGL001/DeepBhvTracking). ETHICS STATEMENT The animal study was reviewed and approved by the Animal Experimentation Ethics Committee of Zhejiang University. AUTHOR CONTRIBUTIONS GS, CL, and RC performed the behavioral recording and finished the data analysis. GS and XL wrote the code.
RC, CY, HS, and KS carefully read and edited the manuscript. XL designed the experiments and approved the draft. XL and GS wrote the manuscript. All authors contributed to the article and approved the submitted version. FUNDING This work was supported by the Natural Science Foundation of China (32071097, 31871056, 61703365, and 91732302), the National Key R&D Program of China (2018YFC1005003), and Fundamental Research Funds for the Central Universities (2019XZZX001-01-20 and 2018QN81008). This work was also supported by the MOE Frontier Science Center for Brain Science & Brain-Machine Integration, Zhejiang University. ACKNOWLEDGMENTS We thank Xiangyao Li and Zhiying Wu for generously providing the PRRT2 and FMR1 mice. We thank Junqing Chen for helping during the code writing process.
Reorienting the Debate on Biological Individuality: Politics and Practices Biological individuality is without a doubt a key concept in philosophy of biology. Questions around the individuality of organisms, species, and biological systems can be traced throughout the philosophy of biology since the discipline's inception, not to mention the sustained attention they have received in biology and philosophy more broadly. It's high time the topic got its own Cambridge Element. McConwell's Biological Individuality falls short of an authoritative overview of the debate on biological individuality. However, it sends a welcome message to new and seasoned scholars to reorient the debate towards practically and politically relevant themes. A Call to Change Course Over the last two decades, a relatively well-defined debate on biological individuality has arisen within philosophy of biology. This debate is broadly concerned with questions related to what makes some biological entity an individual (Kaiser and Trappes 2021). Philosophers discuss concepts and definitions of individuality in relation to evolutionary biology but also disciplines like immunology, developmental biology, microbiology, and ecology. They develop theories regarding the evolution of new levels of individuality, such as multicellularity or complex forms of cooperation. And they debate whether there are many best ways of carving up the living world into individuals, and how this pluralism might map onto disciplinary divides or domains of life. Alison McConwell's Biological Individuality (2023) is best read as an impulse to this debate, a call to reorient philosophical investigations away from quibbling over problem cases and definitions and towards considering the epistemic, ethical and political implications of concepts of biological individuality. In doing so, McConwell builds explicitly on the practice turn that the debate on biological individuality has undergone in the last decade (45-7). The practice turn, McConwell rightly insists, implies considering the usefulness of concepts of individuality for biologists in many different disciplines, the role of ideology and political imaginaries in conceptualising individuals in the living world, and the ethical implications of assigning entities the status of individuality. Philosophers working on biological individuality, in other words, need to get their hands dirty; philosophy of biological individuality needs to get practical and political. In sending this message, McConwell hopes to advise students and junior scholars, as well as more advanced scholars already invested in the topic. She writes candidly about her motivation: "As a graduate student, I found the topic very complicated and difficult. The sections of the Element are written in a way that draws from what I wish I would have known and where I hope to see work go in the future." (3) McConwell's engaging style (unfortunately somewhat hampered by poor copy editing and by several incongruous engagements with an anonymous reviewer) helps lend the text a pedagogical feel, as does the use of figures and tables. Noteworthy too are the pointers McConwell gives throughout regarding directions for future research (e.g., 25, 42, 55); these are very valuable in a thoroughly explored area like biological individuality. The advice to engage with practising biologists using qualitative empirical methods (54) is also on trend, though unfortunately not coupled with references for methodological guidance (see, e.g., Wagenknecht et al.
2015;Nersessian and MacLeod 2022;Hangel and ChoGlueck 2023). Biological Individuality works as a platform for guiding the debate away from a "cottage industry" (51) and towards more productive terrain.Its major failing is in its treatment of the literature.There is of course no way a slim Elements volume could cover all important aspects of such a massive and multifaceted debate as the one on biological individuality.I myself have received push-back from players in the debate for supposed failures in treating the literature.I certainly don't want to pay that on.But the Elements series is explicitly intended to provide authoritative introductions characterised by "balanced, comprehensive coverage of multiple perspectives" (Cambridge Elements in Philosophy of Biology).The treatment in Biological Individuality of both historical and contemporary literature on individuality is neither comprehensive nor balanced-as McConwell herself explicitly acknowledges (1).Readers looking for a systematic introduction to the topic (for teaching, say, or for a quick entry into the field) should look elsewhere (e.g., Pradeu 2016;Lidgard and Nyhart 2017a;Wilson and Barker 2019). Politics Biological Individuality is valuable for the prominence it grants to ethical and political considerations.McConwell takes care to bring these issues in repeatedly throughout the text, rather than being relegated to a separate section or an afterthought.For instance, readers get not only an excellent overview of David Hull's seminal work on the individuality of species, but also insights into the political implications of treating species as individuals.As McConwell writes, "Throughout history, many people were dehumanized as deviants from humanity.In response, Hull's view implies one is human insofar as they are part of the human lineage, rather than satisfying some necessary (set of) features that all and only humans have."(15) This is an insightful observation that is easy to neglect in favour of purely theoretical considerations. Similarly progressive politics can be found throughout the history of thought on biological individuality (Nyhart and Lidgard 2021).At the same time, conceptualisations of biological individuality have been informed by and used to support the ideology and practices of eugenics.McConwell astutely identifies this tension in the political meaning of biological individuality, directing philosophers of biology to bear in mind the "dark side" of biological individuality (78). 
It is unfortunate that this important reminder comes out of an overly lengthy and somewhat convoluted presentation of Julian Huxley's views on biological individuality and his links to eugenics.Greater contextualisation and balance would have helped to avoid the impression that Huxley was the only major figure in early 20th century biology thinking about individuality, and to clarify that many biologists at the time applied their theories to social and political issues.Particularly important here is the existing work on the history of individuality and organism concepts and their relation to theoretical and political movements such as reductionism, holism, vitalism, and mechanicism (e.g., Cheung 2010; Wolfe 2010; Lidgard and Nyhart 2017b; Baedke 2019a), not to mention the large amount of scholarship on the history of eugenics in the life sciences.Some philosophers of biology might be uncomfortable with the idea that they ought to consider the political implications of their work.Surely arguments for democracy or eugenics based on theories of biological individuality are best left in the past?Yet McConwell argues that current work on individuality is not politically neutral.Philosophers of biology should therefore face up to the political implications of their work. McConwell cites arguments that current philosophy of biology still tends to operate with a colonial logic, which denies truth gluts or true contradictions; this is evident in the (contested) assumption that something either is or is not an individual (Sinclair 2020).Moreover, McConwell suggests that a colonial objectification of nature is evidenced in work about the individuality of ecological systems (48 − 9).In particular, she points out that a clearly delineated status of individuality, separate from human managers, is often seen as necessary for the recognition and protection of ecological systems.This "separates the manager (i.e., humans) as external entities imposing their will often for use and exploitation of the land and by that action objectifies nature."(49) There is also a provocative discussion of how the individuation of traits or characters relies on positivist standards; what is left unclear is whether these positivist standards are also colonial, as well as exactly how trait individuation and individuality relate.More generally, greater precision regarding the "pillars of modernism-a complex of enlightenment, colonial, and positivist ideals" (51) would have helped provide a stronger starting point for those philosophers embarking on the project of reassessing the politics of their work. 
One important locus for thinking about the ethical and political dimensions of theorising about individuality is feminist philosophy of science, science and technology studies (STS) and biopolitics.These areas are often overlooked in mainstream philosophy of biology, making it all the more valuable that McConwell explicitly recognises them as sites for "considering individuality as the complex juncture of bio-social spaces."(57) McConwell cites two of Donna Haraway's most well-known texts, "A Cyborg Manifesto" (Haraway 1991) and The Companion Species Manifesto (Haraway 2003), using these texts to point out how individuals actively construct their own boundaries in collaboration with other organisms and technology.A few more indications of relevant works in feminist philosophy of science and STS would have been helpful.For instance, there is a rich literature in feminist theory that addresses bodily boundaries, interdependencies, and transcorporeality (reviewed in, e.g., Alaimo and Hekman 2008;Hird 2009).Biopolitics is also a fruitful resource for thinking about the political dimensions of definitions and practices of reproduction, life, and death-all implicated in the concept of individuality (Esposito 2008;Mills 2018). Scholarship on individuality outside philosophy of biology and analytic metaphysics is in fact vast; other relevant areas include philosophy and sociology coming from French, German, and Italian traditions (e.g., Simondon 1992;Gayon 1998;Beck 2002;Honneth 2004;Hengehold 2017).Many of these areas of research focus on social and political aspects of individuality, such as how to understand individuality while also recognising interdependence and vulnerability, or how social organisation and economic structures can lead to greater individuality amongst members of society.There is much to explore in these fields that could augment the reorientation that McConwell calls for in the philosophy of biological individuality. Practices Few philosophers seek to defend a single definition of biological individuality for all contexts and purposes.Instead, a consensus has emerged in the debate on biological individuality around pluralism (Pradeu 2016).A common version of this pluralism has it that there are different concepts of individuality for different disciplines in biology, perhaps even forming different kinds of biological individuality: evolutionary individuality and physiological individuality, for instance.Others hold that there are many valid concepts of individuality corresponding to different epistemic practices, and especially to different ways of individuating entities in the living world. 
For her part, McConwell distinguishes between organismality and individuality, and then between several types of individuality: evolutionary, immunological, metabolic, ecological, and developmental (33-36). The latter "domain-driven" (32) set of distinctions falls mostly in line with other overviews of biological individuality. One of the difficulties of disciplinary or domain-driven pluralism in scientific concepts is how to make sense of interdisciplinary communication and collaboration. Although this challenge is recognised with respect to other scientific concepts (Haueis 2021), it often receives little attention in discussions of individuality. Refreshingly, McConwell does briefly touch on the complexities introduced by vague, ambiguous, and changing disciplinary boundaries (42-43). This will hopefully provide some impetus for research on the connections between individuality concepts across scientific domains. In contrast to domain-driven pluralism, there is no current consensus about the distinction between organisms and individuals (Prieto 2023). McConwell associates organismality with historical figures, etymology, and the tradition of organicism (4-8). This association tends, perhaps unintentionally, to relegate this concept to biology's past, which doesn't do justice to the recent resurgence of the organism across the life sciences (Nicholson 2014; Baedke 2019b; Fábregas-Tejeda and Martín-Villuendas 2023). In addition, the book's separate presentation of organismality and individuality risks creating the misleading impression that research into autopoiesis, autonomy, and agency does not belong to the debate on biological individuality. As with interdisciplinary connections, there is room for further analysis of how concepts of individuality and organismality interact in different scientific contexts. McConwell treats evolutionary individuality in particular detail, covering early discussions about species as individuals, as well as more recent work on units of selection and major transitions in evolution. In the process, she introduces further sorts of pluralism. On the one hand, in addition to domain-driven pluralism there can be conceptual pluralism within domains (37). Examples of the latter include recognising both functional and material concepts of evolutionary individuality, or units of evolution (species) as well as units of selection. On the other hand, McConwell introduces a notion of diachronic pluralism, in which new types of individuality emerge (and disappear) over time, especially through evolution (41). The evolutionary focus is in line with McConwell's own research trajectory and reflects broader tendencies in philosophy of biology. It does however result in a picture that is skewed towards theoretical philosophy of biology, with less attention devoted to the often more practice-oriented and socially relevant discussions of physiological, developmental, ecological, and behavioural individuality (Bueno et al. 2018). For example, McConwell does introduce immunological, ecological, and metabolic individuality, including practically important and normatively charged issues such as cancer, organ transplants, ecological conservation, and personalised medicine. Yet these topics receive a scanty five pages (26-31), in contrast to the detailed and diagram-rich 21 pages devoted to topics in evolutionary individuality (8-25; 39-42).
Looking beyond evolutionary individuality, we find an already robust tradition of research into practically and socially relevant aspects of individuality.For instance, McConwell cites holobiont individuality-the question of whether we are multispecies individuals including our microbiomes-as an example of a socially relevant topic that philosophers of biology should address (30; 57 − 8).Fortunately, many philosophers are already debating holobiont individuality in light of its potential theoretical, practical, and ethical consequences (e.g., Chiu and Eberl 2016;Skillings 2016;Kirby 2017;Şencan 2019;Suárez and Stencel 2020;Formosinho et al. 2022).Similarly, McConwell's proposal to study individuality in synthetic biology and biotechnology could be connected to the vast body of existing work in bioethics about identity in relation to cloning, genetic modification, and gene editing (e.g., Hauskeller 2004;Ankeny and Bray 2018;Douglas and Devolder 2022).Another politically and ethically relevant topic that McConwell fails to mention is the individuality of pregnant organisms and foetuses; again, this is a topic that has received recent attention, for instance in relation to immunological and metabolic criteria of individuality (Kingma 2020;Meincke 2021;Morgan 2022).The project of reorienting the debate on biological individuality can and should build on such existing positive examples. One of the live questions for practice-based philosophy of science is how to understand the relationship between scientific practice and concepts or ontology (Feest and Steinle 2012).McConwell distinguishes several different practice-based approaches to biological individuality.For instance, some philosophers analyse scientific practices, especially the practices through which scientists individuate organisms or other biological systems, to identify implicit concepts of individuality at work.Others develop individuality concepts with a view to their practical usefulness, for instance for the purposes of counting units of selection.Less clearly practice-based is the study of puzzle cases from biology-biological systems that do not fit our intuitions about individuality, such as huge clonal meadows of sea grass or lichens with their tight symbiotic associations between fungi and algae.McConwell apparently lumps the study of puzzle cases under the practice turn (47), and later argues that such puzzle-driven discourse should be replaced by greater engagement with practising biologists.On the other hand, given that the biologists we interact with may themselves be puzzling over problem cases, puzzle-driven discourse could be here to stay. The discussion of practice-based conceptual analysis concentrates on the explanatory and practical uses of concepts, skirting around the matter of metaphysics.Perhaps for this reason, important recent work on individuality in relation to process ontology and personal identity go unmentioned (see, e.g., entries in Nicholson and Dupré 2018;Meincke and Dupré 2021).This area of research includes substantial discussions of how to understand the project of practice-based metaphysics of science (see also Bausman et al. 2023).It also makes clear that the debate about biological individuality needn't restrict itself to epistemology.Biological individuality can act as a starting point for thinking about some of the big issues in contemporary philosophy of science and metaphysics, including pluralism, pragmatism, and perspectival realism. 
The distinction between epistemic and non-epistemic values has long been subject to criticism, revision and complexification (Longino 1996). Rather than separating politics and practice, the social and the scientific, we need a framework that recognises their enmeshment. As McConwell argues, this is sure to reinvigorate the debate on biological individuality and connect it to a much wider network of thought on individuality and the life sciences.
The Oxidative Half-reaction of Old Yellow Enzyme: THE ROLE OF TYROSINE 196* Tyrosine 196 in Old Yellow Enzyme (OYE) was mutated to phenylalanine, and the resulting mutant enzyme was characterized to evaluate the mechanistic role of the residue. The mutation has little effect on ligand binding and the reductive half-reaction, but causes a dramatic slowing, by nearly 6 orders of magnitude, of the oxidative half-reaction with 2-cyclohexenone. Observation of the oxidative half-reaction with a series of substrates allows us to propose a model describing the mechanism of the oxidative half-reaction. In addition, the curtailed reactivity with enones allows characterization of the manner in which reduced enzyme primes the substrate for the redox reaction, by observation of the Michaelis complex with reduced enzyme bound to substrate. Old Yellow Enzyme (OYE; EC 1.6.99.1), revealed by Theorell (1) as the first protein requiring the use of an FMN prosthetic group, opened a new chapter in the history of enzymology by unveiling the significance of small organic molecules in catalysis. Despite the enzyme's antiquity, the physiological function of OYE has persisted as an unsolved question. OYE, originally isolated from brewer's bottom yeast (2), has been shown to be a heterogeneous mixture consisting of several isoforms (3). The enzyme exists as a dimer of 49-kDa subunits containing one non-covalently bound FMN per subunit (4). The gene for the isoform OYE1 from Saccharomyces carlsbergensis has been cloned (5), providing the opportunity to apply molecular biology techniques to probe enzyme function. In addition, a crystal structure of OYE1 has been solved at a resolution of 2.0 Å, revealing several residues likely involved in catalysis and ligand binding (6-8).
The structure supports the long-established fact that NADPH serves as the physiological reductant for the enzyme and provides insight into the nature of potential oxidants. Although molecular oxygen serves as a substrate for the oxidative half-reaction, the slow rate and biochemical non-productivity of such a reaction make it unlikely to be the physiological oxidant. Work in this laboratory has revealed that quinones and, more recently, many α,β-unsaturated aldehydes and ketones serve as efficient substrates for the oxidation of OYE, in which the olefinic bond is reduced (3,9). OYE is characterized by its ability to form deeply colored long-wavelength charge-transfer complexes on binding a wide variety of phenolic compounds (4,10,11). These phenolic ligands, as well as the substrates for the oxidative and reductive half-reactions, share a common binding site (12) on the si-face of the flavin. The charge-transfer absorption has been attributed to stacking interactions, observed in the crystal structure at the flavin-ligand interface, between the π systems of oxidized FMN and the representative ligand p-hydroxybenzaldehyde (7,11). When similar long-wavelength absorption was observed upon equilibration with a variety of cyclic enones, most notably with 2-cyclohexenone, these were originally attributed to analogous charge-transfer complexes (3,13). Further investigation revealed a novel dismutation reaction occurring whereby oxidation of the cyclic enone led to aromatization via a stereospecific trans dehydrogenation (14). Several α,β-unsaturated aldehydes and ketones were shown to be efficient substrates for the turnover reaction with NADPH (Fig. 1). Studies with R-[²H]NADPH and D₂O have established that the reduction of cinnamaldehyde, a representative α,β-unsaturated aldehyde, proceeds via stereospecific transfer of hydride to the carbon β to the carbonyl and uptake of a solvent-derived proton at the α position (14). Analysis of the crystal structure reveals that Tyr-196 is ideally suited to serve as an active-site acid in the reduction of an enone, as it is primed to deliver a proton to the α position (8). Studies with the Escherichia coli MurB enzyme, which catalyzes the NADPH-dependent reduction of the vinylic bond of enoyl-pyruvyl-UDP-GlcNAc in cell wall biosynthesis, have revealed a catalytically significant serine residue homologous to Tyr-196 in OYE (15). In an effort to characterize the mechanism of reduction of the enone, a tyrosine to phenylalanine (Y196F) mutation of OYE1 was constructed. The reduction of the substrate double bond is demonstrated to consist of two phases, hydride transfer and proton uptake, which are coupled to varying degrees depending on the identity of the substrate. Kinetic characterization of the mutant with a variety of substrates reveals the role of Tyr-196 in the protonation reaction. In addition, Y196F OYE1 is used to probe the manner in which the enzyme primes the substrate for the oxidative half-reaction in the Michaelis complex. The Old Yellow Enzyme demonstrates that it has much new information to reveal regarding the nature of catalysis of redox reactions. EXPERIMENTAL PROCEDURES Mutagenesis for Y196F OYE1-OYE1 cloned in the pET expression system as described previously (5) was used as the template in a novel method for polymerase chain reaction-based mutagenesis (16).
By introducing a silent mutation that creates a restriction site, two overlapping polymerase chain reaction fragments containing the desired mutation are fused, and the mutant fragment is subcloned into the wild-type vector. The mutation is confirmed by screening digestions for incorporation of the CfoI restriction site encoded by the silent mutation in the mutagenic primer and by automated fluorescent sequencing carried out by the University of Michigan Biomedical Research Core Facility. E. coli strain BL21-DE3 harboring the Y196F OYE1-pET plasmid was used to express mutant OYE1 by induction with 400 μM IPTG 8 h after 1% inoculation of 10 L of LB/Amp. The cells were harvested 8 h after induction. Purification of the mutant enzyme was carried out as described previously (17,18). Unless otherwise noted, all studies with Y196F OYE1 were conducted at pH 7.0 and 25 °C in the presence of 0.1 M potassium Pi. Ligand Binding Studies-Titrations of OYE with various ligands were recorded by following absorbance changes with a Varian Cary 3 UV-visible spectrophotometer. Titrations were conducted by addition of small volumes of concentrated ligand to oxidized enzyme in 1-ml quartz cuvettes. Dissociation constants were determined by using the extent of spectral perturbation to assess the relative amounts of ligand-bound and free enzyme. Turnover Assay-Turnover assays were conducted with NADPH as the substrate for the reductive half-reaction and various substrates for the oxidative half-reaction. The standard assay was conducted in 1 ml with 100 μM NADPH, 1 mM substrate for the oxidative half-reaction, and 10 nM enzyme. Aerobic turnover reactions were conducted at air saturation (256 μM) of O2. Turnover numbers, reported as moles of pyridine nucleotide oxidized per molecule of FMN per minute, were determined by following the rate of change in absorbance at 340 nm as a function of [NADPH] and extrapolating to a theoretical maximum for [NADPH] by fitting a double-reciprocal plot. The reaction product was extracted into dichloromethane and concentrated. The products of the turnover reaction were analyzed by GC/MS using instrumentation described previously (14). Stopped Flow Spectrophotometric Studies-The reductive half-reaction with NADPH and the oxidative half-reaction with various substrates were studied using a Kinetic Instruments stopped flow spectrophotometer with tungsten lamp, monochromator, and photomultiplier for absorbance detection as described previously (19). The system was made anaerobic by incubation overnight with an anaerobic solution of 3,4-dihydroxybenzoate and protocatechuate-3,4-dioxygenase. All buffer and substrate solutions were made anaerobic by bubbling argon through the solution for at least 15 min before use. For the oxidative half-reaction with oxygen, solutions of various oxygen concentrations were made up by bubbling 0.1 M potassium Pi solutions with gas of the desired oxygen concentration. Enzyme solutions were made anaerobic in a tonometer by alternately applying a vacuum and purging with argon. For studies of the oxidative half-reaction, reduced enzyme was generated by addition of an NADPH-generating system containing 4 mM glucose 6-phosphate, 40 units of glucose-6-phosphate dehydrogenase, and 0.03 M NADP+, from a sidearm after the tonometer had been made anaerobic.
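As an illustration of the double-reciprocal extrapolation used to obtain turnover numbers from the assay described above, the following minimal Python sketch fits 1/rate against 1/[NADPH]. The rates and NADPH concentrations are invented placeholders, not the paper's data; only the standard NADPH extinction coefficient at 340 nm (6220 M⁻¹ cm⁻¹) and the 10 nM enzyme concentration follow the assay description.

```python
# Sketch of a double-reciprocal (Lineweaver-Burk style) extrapolation of turnover rates.
import numpy as np

eps_340 = 6220.0           # M^-1 cm^-1, standard extinction coefficient of NADPH at 340 nm
path_length = 1.0          # cm, 1-ml cuvette

# Hypothetical initial slopes dA340/dt (per minute) at several [NADPH] (M) -- placeholders
nadph = np.array([10e-6, 20e-6, 40e-6, 80e-6, 160e-6])
dA_dt = np.array([0.021, 0.036, 0.055, 0.075, 0.090])

v = dA_dt / (eps_340 * path_length)          # M of NADPH oxidized per minute

# Linear fit of 1/v against 1/[NADPH]; the intercept gives 1/Vmax
slope, intercept = np.polyfit(1.0 / nadph, 1.0 / v, 1)
v_max = 1.0 / intercept                       # M min^-1 at extrapolated saturating [NADPH]
K_m = slope * v_max                           # apparent K_m for NADPH

enzyme_conc = 10e-9                           # M FMN (10 nM enzyme, as in the assay)
turnover_number = v_max / enzyme_conc         # mol NADPH oxidized per mol FMN per minute
print(f"kcat ~ {turnover_number:.0f} min^-1, apparent Km ~ {K_m * 1e6:.1f} uM")
```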
Analysis was conducted by fitting data to exponential equations using the Marquardt algorithm (20) with Program A, developed by C.-J. Chiu, R. Chung, J. Diverno and D. P. Ballou at the University of Michigan. Reoxidation of Enzyme by 2-Cyclohexenone-The slow oxidative half-reaction with 2-cyclohexenone was studied with the Varian Cary UV-visible spectrophotometer by taking enzyme in an anaerobic cuvette with slightly less than a stoichiometric equivalent of NADPH in one sidearm and varying concentrations of 2-cyclohexenone in the second sidearm. After making the cuvette anaerobic, the enzyme was first reduced by addition of the contents of the NADPH sidearm. After a stable reduced enzyme spectrum appeared, reoxidation was subsequently followed for up to 6 h after addition of 2-cyclohexenone. The rate of reoxidation was determined by fitting a single exponential to the appearance of absorbance at 462 nm marking the regeneration of oxidized flavin. Diode Array Observation of the Michaelis Complex-The complex of 4-dimethylaminocinnamaldehyde bound to Y196F OYE1 was observed using a High-Tech Scientific SF-61 stopped flow spectrofluorimeter apparatus with a diode array spectrophotometer. RESULTS Generation of Y196F OYE1-Purification of Y196F OYE1 was conducted by affinity chromatography with N-(4-hydroxybenzoyl)aminohexyl-agarose (18). The successful isolation of Y196F, noted by a single band on SDS-polyacrylamide gel electrophoresis, suggests that the ligand binding interaction with the p-hydroxybenzaldehyde analog remains tight in the mutant enzyme. The expression and purification system generated Y196F OYE1 in a yield of 26 mg/liter of culture. The spectrum of Y196F has a wavelength maximum identical to that of the wild type enzyme at 462 nm (Fig. 2) and an extinction, determined as described previously (21), of ε462 = 10,700 M⁻¹ cm⁻¹. Ligand Binding-Through site-directed mutagenesis, we seek to specifically isolate and alter the residue Tyr-196 while leaving other aspects of the system unchanged. The concern regarding undesired structural rearrangements was allayed through several lines of evidence. The interaction of Y196F with a series of phenolic ligands provides ample data regarding the nature of the FMN and the conformation of the active site. Ligand binding, as measured by K_d, is seen to be either unaffected, as with p-chlorophenol, or moderately enhanced, as with p-fluorophenol. Upon binding to these phenolic compounds, the long wavelength absorption of the bound complex has been attributed to formation of a charge-transfer complex (11). The perturbation of the oxidized enzyme spectrum upon addition of ligand looks much like that of the wild type enzyme (Fig. 2). However, with Y196F OYE1, across the series of bound ligands, the λmax of the charge-transfer complex is seen to be modestly blue-shifted (Table I). In earlier studies, by replacement of FMN with a series of artificial flavins, the energy of the long-wavelength transitions was shown to correlate with the oxidation-reduction potential of the flavin. A plot of the reported redox potential of the free artificial flavins and the location of the charge-transfer maxima of the p-chlorophenol bound to enzyme demonstrates a linear relationship, which correlates with a modest reduction in flavin potential for Y196F of approximately 10-15 mV (11,22).
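The reoxidation analysis described above amounts to fitting the rise of A462 to a single exponential. A minimal sketch of such a fit is given below, using SciPy in place of Program A and a synthetic trace in place of the actual data; the time points, amplitudes, and noise level are illustrative assumptions.

```python
# Illustrative single-exponential fit of an A462 reoxidation trace.
import numpy as np
from scipy.optimize import curve_fit

def single_exponential(t, a_inf, amplitude, k_obs):
    """Rising exponential: A(t) = A_inf - amplitude * exp(-k_obs * t)."""
    return a_inf - amplitude * np.exp(-k_obs * t)

rng = np.random.default_rng(1)
t = np.linspace(0, 280, 50)                      # minutes
a462 = 0.10 - 0.07 * np.exp(-0.014 * t)          # synthetic trace (placeholder values)
a462 = a462 + rng.normal(0, 0.001, t.size)       # a little measurement noise

popt, pcov = curve_fit(single_exponential, t, a462, p0=[0.1, 0.05, 0.01])
a_inf, amp, k_obs = popt
k_err = np.sqrt(np.diag(pcov))[2]
print(f"k_obs = {k_obs:.3e} +/- {k_err:.1e} min^-1")
```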
Turnover-Initial efforts to characterize the kinetic parameters of the reaction of Y196F were attempted through analysis of the aerobic steady state turnover assays with NADPH. Assays of the wild type enzyme, conducted by following oxidation of NADPH at saturating conditions and air saturation of oxygen, typically give turnover rates that accelerate from 51 min⁻¹ to over 220 min⁻¹ upon the addition of 1 mM 2-cyclohexenone (3). When a similar assay was conducted with Y196F, the rate of turnover decreased from an initial rate of 60 min⁻¹ with NADPH/O2 to a rate of 2 min⁻¹ upon the addition of 1 mM 2-cyclohexenone. The products of the turnover reaction were analyzed by GC/MS and showed no detectable cyclohexanone. In the turnover scheme, apparently 2-cyclohexenone is no longer the oxidant of choice for Y196F. When the turnover was observed anaerobically with 2-cyclohexenone as the only substrate for the oxidative half-reaction, we observed a nearly complete loss of catalysis. These results also signal a significant departure from data obtained with residues proposed to be involved in ligand binding (21). In the case of mutations of the His-191 and Asn-194 residues, aerobic turnover was unchanged upon addition of the enone substrate because the mutant enzyme fails to bind the enone well and O2 reactivity is unaffected. For Y196F, inhibition of the normal turnover was observed in the presence of the substrate. Several possibilities exist to explain such a marked reduction in turnover in the presence of 2-cyclohexenone. To examine the full kinetics of the turnover and to determine the effect of Y196F on catalysis, we studied the individual half-reactions. Reductive Half-reaction-Given our hypothesis regarding the role of Tyr-196 as an active-site acid, reductive half-reaction kinetics are expected to be unaffected in Y196F, as transfer of the 4-pro-R hydridic hydrogen of NADPH to oxidized flavin proceeds without a mechanistic need for a proton donor or acceptor. As demonstrated in Scheme 1, the reductive half-reaction, examined by stopped-flow, shows no appreciable changes in the observable mechanistic steps. Typically, after a dead-time binding event, charge-transfer complex formation between oxidized enzyme and reduced pyridine nucleotide, followed by flavin reduction and product release, is observed (17). The rates for the binding event and charge-transfer formation remain largely unchanged as determined from the fitting of absorbance data at varied concentrations of NADPH. The charge-transfer intermediate, the Y196F OYE1:NADPH complex, is apparent from the transitory appearance of long-wavelength absorption in the course of the reaction. As with the wild type enzyme, k_red shows minimal dependence on NADPH concentration for Y196F, and the reduction rate constant of 5.4 s⁻¹ does not differ significantly from the reported wild type value of 5.1 s⁻¹ (21). Oxidative Half-reaction with 2-Cyclohexenone-Given that the kinetics of the reductive half-reaction were not responsible for the dramatic change in rates with the aerobic turnover in the presence of 2-cyclohexenone, we turned our attention toward the oxidative half-reaction. The oxidative half-reaction with 2-cyclohexenone proved to be amenable to study by monitoring the absorbance at 462 nm, corresponding to the slow appearance of oxidized enzyme, after mixing various concentrations of substrate with reduced enzyme in an anaerobic cuvette. The reduced enzyme shows minimal absorbance in the region of oxidized enzyme.
Upon mixing with 2-cyclohexenone, there is an immediate increase in absorbance with a maximum at 455 nm (Fig. 3). The initial burst in absorbance is followed by the very slow generation of a final spectrum, which is that of oxidized OYE bound to 2-cyclohexenone. The spectrum initially formed upon mixing seems deceptively like the oxidized enzyme spectrum. However, a closer examination reveals that the sharp flavin peak is replaced instead by a broader absorbance spectrum reminiscent of the charge-transfer complexes seen with oxidized enzyme and phenolic compounds. The spectrum initially observed after mixing is tentatively assigned to the formation of a complex between reduced enzyme and 2-cyclohexenone. The rate of oxidation was measured by monitoring the disappearance of the reduced enzyme complex and the appearance of the spectrum of oxidized enzyme bound to substrate. Wild type enzyme in the presence of 2-cyclohexenone typically undergoes a dismutation reaction whereby oxidized enzyme oxidizes 2-cyclohexenone to phenol, which subsequently forms a long-wavelength absorbing charge-transfer complex (14). The lack of appearance of such a long-wavelength species, even after following the oxidation over a period of many hours, suggests that the dismutation reaction occurs at an undetectable rate with Y196F OYE1. This is consistent with the concept that oxidation of the 2-cyclohexenone would require Tyr-196 to act as an active site base and deprotonate the non-vinylic α-carbon to the carbonyl. The oxidation of enzyme-bound reduced flavin was examined at a series of concentrations of substrate to assess the complete kinetics of the oxidative half-reaction. At all concentrations, the oxidation is seen to fit a single exponential, which represents the slow oxidation rate. The k_obs includes the typically rapid rate of product dissociation and binding of oxidized enzyme to excess substrate. Ligand binding and dissociation typically occur on the millisecond time scale and thus do not affect the observed rate. A double-reciprocal plot of the data gives an oxidation rate constant of (1.4 ± 0.2) × 10⁻² min⁻¹, which represents a nearly 6 orders of magnitude decrease relative to the oxidation rate constant of wild type enzyme with 2-cyclohexenone (21). The individual data points show significant scatter, which allows only an approximate estimate of the enzyme affinity for substrate, in the low micromolar range. Oxidative Half-reaction with O2-The reaction with molecular oxygen as a substrate for the oxidative half-reaction was studied by rapid reaction methods, and the results are presented in Fig. 4. The reaction is second order with respect to O2 concentration. The k_ox value obtained for O2 is double that of the wild type enzyme (Table II). Given the slowing in oxidation with 2-cyclohexenone, the increased rate with molecular oxygen suggests that oxidation by molecular oxygen proceeds without the mechanistic need for a proton donor, forming the stable peroxide anion. The result is also consistent with the concept that the flavin potential is modestly lowered, as indicated previously by the shifted maxima of the charge-transfer complexes with phenolic ligands. The slow rate of oxidation by 2-cyclohexenone with Y196F suggests a model by which to view the turnover data, which demonstrated the curtailing of aerobic NADPH oxidase activity in the presence of the enone (Fig. 4).
In such a model, on the time scale of the reaction with oxygen, catalysis of oxidation by 2-cyclohexenone may be ignored due to its reduced k_ox. The reversible and rapid binding of the enone to the enzyme, however, may influence the oxidative half-reaction with molecular oxygen by reducing the effective concentration of reduced enzyme and making it unavailable for the second-order reaction with oxygen. To assess the validity of this model, the oxidative half-reaction with oxygen was examined in the presence of varying concentrations of 2-cyclohexenone. The reaction no longer fits a single exponential. An initial lag phase exists, which can be attributed to the reduced enzyme complex with cyclohexenone previously observed (Fig. 3). The second phase of the reaction is fit to a single exponential, and the direct plot of the k_obs values demonstrates the decrease in second-order rate constant with increasing concentration of 2-cyclohexenone, as predicted by the model. The ratio of the rates of the inhibited to the uninhibited reaction was taken as a measure of free enzyme (1 − α) in solution at a given concentration of 2-cyclohexenone. A direct plot of the bound enzyme (α) versus varying concentration of 2-cyclohexenone produces a good hyperbolic fit and a K_d of 13 μM (inset, Fig. 4). The existence of a charge-transfer complex between reduced enzyme and 2-cyclohexenone is supported by the presence of the initial phase in the reoxidation by molecular oxygen in the presence of the enone. As a test of the accuracy of the model of inhibition and the value for the K_d of 2-cyclohexenone generated by the model, we observed the rapid reaction mixing of reduced enzyme with low concentrations of 2-cyclohexenone. With saturating enone (500 μM) and no molecular O2 present, there is a dead-time increase in absorbance which remains unchanged with time. This gives a measure for a fully bound reduced enzyme complex with an extinction of 2700 M⁻¹ cm⁻¹. Mixing with an enone concentration slightly greater than half the enzyme concentration plus the K_d causes an immediate dead-time increase in the absorbance to a value two-thirds of the way between the reduced and fully bound spectra. This suggests that the determined K_d is reasonably accurate and that the dead-time absorbance increase can be confidently assigned to 2-cyclohexenone bound to the reduced enzyme. [Figure legend (panels A and B): In panel A, the series of spectra give a representative data set for the double-tip experiment. 9 μM oxidized Y196F in an anaerobic cuvette at the start of the experiment is given by the spectrum with closed circles. After tipping in an equivalent of NADPH, the stable reduced enzyme spectrum represented by closed squares was generated. After tipping in 200 μM 2-cyclohexenone, the initial spectrum recorded (5 s) is given by the closed triangles. Successive spectra were recorded following the appearance of oxidized enzyme at 1, 9, 29, 50, 80, 130, 180, and 280 min. The final spectrum is the same as that with oxidized enzyme bound to excess 2-cyclohexenone. In the inset, the A462 is plotted as a function of time and fit to a single exponential. Such experiments were conducted at a series of concentrations ranging from 75 to 200 μM. The derived rate constant was taken as k_obs, and the results of these experiments were used to generate the double-reciprocal plot shown in panel B.]
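The K_d extracted from the oxygen-inhibition experiment can be illustrated with a small fitting sketch: the bound fraction α = 1 − k_obs(inhibited)/k_obs(uninhibited) is fit to a binding hyperbola. The concentrations and rate ratios below are invented placeholders chosen only to be consistent with a K_d near 13 μM; they are not the paper's data.

```python
# Sketch of extracting K_d from the inhibition of the oxygen half-reaction by 2-cyclohexenone.
import numpy as np
from scipy.optimize import curve_fit

def binding_hyperbola(s, k_d):
    """Fraction of reduced enzyme bound at free ligand concentration s."""
    return s / (k_d + s)

enone = np.array([2, 5, 10, 20, 50, 100, 200]) * 1e-6            # M 2-cyclohexenone (placeholders)
rate_ratio = np.array([0.87, 0.73, 0.57, 0.40, 0.21, 0.12, 0.06])  # k_obs(inhibited)/k_obs(uninhibited)
alpha = 1.0 - rate_ratio                                           # bound fraction

(k_d,), pcov = curve_fit(binding_hyperbola, enone, alpha, p0=[1e-5])
print(f"K_d ~ {k_d * 1e6:.1f} uM")
```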
We may speculate that a complex similar to that detected with Y196F forms with wild type enzyme, but passes as an undetectable dead-time binding event due to the rapid rate of the subsequent reoxidation. Oxidative Half-reaction with Various Substrates-Given that kinetic evidence suggests that Tyr-196 acts as a necessary active site acid for the reduction of 2-cyclohexenone, the effect of the residue upon other substrates was examined by observation of their oxidative half-reactions (Table II). Methyl vinyl ketone was selected as a representative straight chain ketone, cinnamaldehyde as a representative extensively conjugated aldehyde, and 1-nitrocyclohexene as a member of the class of newly discovered unsaturated nitro substrates for the oxidative half-reaction.² The rates were measured by following the reoxidation of the enzyme by monitoring the absorbance at 460 nm with a stopped-flow spectrophotometer. The oxidative half-reaction with methyl vinyl ketone occurs largely in the dead time with wild type enzyme, giving rate constants that are difficult to measure reliably. For the half-reactions with cinnamaldehyde and methyl vinyl ketone, the phase corresponding to the largest change in absorbance was assigned to the oxidation of flavin and used to determine the k_ox and K_d through a double-reciprocal plot. For the half-reactions with 1-nitrocyclohexene and oxygen, the reaction appears second order with respect to substrate concentration up to the highest concentrations used (800 μM 1-nitrocyclohexene and 620 μM O2). Across the series of substrates for the enzyme, reduced Y196F binds all substrates slightly more weakly than wild type enzyme, with the exception of 2-cyclohexenone. The rate of olefinic bond reduction varies significantly across the series of compounds. In all cases with organic substrates, the oxidative half-reaction is impaired in the mutant Y196F. Interaction of Cinnamaldehyde Derivatives with Reduced Enzyme-The tight binding of cinnamaldehyde to reduced enzyme without the rapid reduction of the double bond offered a unique opportunity to study the Michaelis complex for the oxidative half-reaction. The cinnamaldehyde analog 4-N,N-dimethylaminocinnamaldehyde (DMACNA) introduces an electron-donating dimethylamino group to the cinnamaldehyde chromophore, giving free substrate an absorbance maximum at 398 nm. When reduced Y196F OYE1, generated by the use of an NADPH-generating system, is mixed with DMACNA in the stopped flow diode array spectrophotometer, the initial spectra produced demonstrate a maximum at 428 nm (Fig. 5). This red shift may be attributed to enzyme-induced changes in the chromophore upon binding. In addition, as a stoichiometric equivalent bound to enzyme gives a large spectral shift with one clear peak rather than two distinct overlapping curves, it is evident that substrate binds to reduced Y196F quite tightly. The reduction of the olefinic bond, which is evidenced by the disappearance of the chromophore due to the disruption of conjugation, occurs quite slowly, as turnover of a single equivalent of the substrate proceeds over the course of more than 10 h when followed with a Varian/Cary spectrophotometer. The product of the turnover reaction was analyzed by GC/MS and shown to be 4-N,N-dimethylaminohydrocinnamaldehyde. The effects of systematic variation of the chromophore on the kinetics of the oxidative half-reaction with cinnamaldehyde and wild type enzyme also revealed much about the nature of substrate binding.
A series of cinnamaldehyde derivatives with variable electron-donating and -withdrawing abilities was used as substrates in a rapid reaction study of the oxidative half-reaction. The appearance of absorbance at 460 nm of oxidized enzyme fits well to a single exponential, and clear saturation of the observed rate constant with increasing substrate concentration demonstrates that the mechanism proceeds through a binding equilibrium followed by a step in which the cinnamaldehyde is reduced. The data for the binding step (Table III) suggest a distinct trend. The electron-donating substituents, 4-dimethylamino and 4-methoxy, are seen to lead to significantly tighter binding to the reduced enzyme, while the electron-withdrawing 4-nitro substituent leads to weaker binding as compared with the unsubstituted cinnamaldehyde. The above behavior is consistent with electron-donating substituents leading to tighter binding of substrates by increasing electron density at the carbonyl oxygen, which is expected to be hydrogen-bonded to His-191 and Asn-194 (14). Given the precedent of tight binding of numerous phenolic compounds to the enzyme and the need to draw electrons through the carbonyl bond of cinnamaldehyde to prime the substrate to be a Michael-type acceptor, such an enolate-like intermediate is the likely form that substrate takes upon binding to the reduced enzyme. [Footnote 2: Y. Meah and V. Massey, manuscript in preparation.] [FIG. 5 legend: The interaction of 4-N,N-dimethylaminocinnamaldehyde with reduced enzyme. 18 μM free DMACNA is given by the closed circles, with λmax at 398 nm. 22 μM reduced enzyme was made anaerobic with an NADPH-generating system; reduced enzyme is represented by the closed squares. With the diode array spectrophotometer, the spectrum 10 ms after mixing 18 μM DMACNA with reduced enzyme is given by the closed triangles and has a λmax at 428 nm. The concentrations given are those after mixing.] No trend is apparent in the k_ox values with varying σp (23). Observing the chemical reaction of reduction as two steps, one involving hydride transfer to the Cβ of the olefinic bond and one involving proton uptake at Cα, allows for the reduction to be viewed through a push-pull electron model. Pulling electrons through the carbonyl primes Cβ for hydride transfer, while pushing them back through the carbonyl bond promotes uptake of a proton from Tyr-196 at Cα. In this sense, electron-withdrawing and -donating substituents will differentially affect these two aspects of the overall reaction in opposing manners, thus convoluting the overall picture of the oxidative half-reaction by showing no apparent trend with σp. DISCUSSION The Crystal Structure and the Role of Tyr-196-An examination of the active site of OYE1 suggests a place for Tyr-196 in the mechanism of the oxidative half-reaction with reduced flavin and α,β-unsaturated compounds. The structure of the oxidized enzyme with p-hydroxybenzaldehyde in the active site (7) provides a system by which to model the location of the substrate upon binding to the enzyme (Fig. 6). The carbonyl bond of the α,β-unsaturated substrate, with hydrogen bonds between the oxygen and His-191 and Asn-194, would be positioned in an analogous way to the phenolate oxygen of p-hydroxybenzaldehyde. The C2 carbon, analogous to the Cβ of the enone substrate (the site of hydride transfer), is located 3.5 Å from N5 of the flavin.
Tyr-196 is situated in the plane above the substrate, with a phenolate oxygen to C3 distance of 3.4 Å reflecting the proximity of the residue to the Cα of the enone. The relative orientations of the N5 hydride donor and the Tyr-196 proton donor are in agreement with the trans addition stereochemistry observed in the product (14). Further examination of the crystal structure reveals that the residue Asn-251 has an R-group amidic proton situated at 2.7 Å from the tyrosine oxygen, perhaps serving to increase the acidity of Tyr-196 from the normal pKa of 10.1. Tyr-196 is also well conserved across the family of related proteins. In best-fit alignment, a Tyr residue is situated in an analogous position in all of the isozymes of OYE, N-ethylmaleimide reductase (24), pentaerythritol tetranitrate reductase (25), glycerol trinitrate reductase (26), estrogen-binding protein (27), and trimethylamine dehydrogenase (28). In the nitrate reductases, this suggests the interesting mechanistic possibility that hydride transfer to the carbon bonded to a nitrate is followed by a breaking of the carbon to nitrate oxygen bond, which is likely facilitated by the corresponding Tyr residue. In morphinone reductase, a member of the OYE family of enzymes, a conservatively substituted residue, Cys-191, with its more acidic sulfhydryl side chain, aligns with Tyr-196. Isolating the Kinetic Effects of Y196F-To explore the hypothesis that Tyr-196 serves as an active site acid in reduction of unsaturated ketones, aldehydes, and nitro compounds, a detailed investigation of the effects of mutating Tyr-196 to Phe on the catalytic cycle of OYE was conducted. While aerobic turnover with only molecular oxygen as the substrate for the oxidative half-reaction remains unaffected, the aerobic turnover reaction in the presence of 2-cyclohexenone proved to be dramatically slowed. The cumulative ligand binding data suggest that the active site and the distribution of electrons in the FMN are perturbed in only a minor way by the Y196F mutation. Thus, structural changes should not significantly alter the thermodynamics of the reaction catalyzed by OYE in undesired ways. The reductive half-reaction for Y196F is not perturbed; in fact, there is a minor increase in k_red. Our characterization of the oxidative half-reaction with molecular oxygen gives a second order rate constant double the wild type value. 2-Cyclohexenone is shown to inhibit oxidation by molecular oxygen by forming a spectrally distinct complex with the reduced enzyme, a complex that is unreactive toward molecular oxygen and that itself undergoes oxidation at a rate slowed by almost 6 orders of magnitude. The almost complete loss of catalysis with Y196F speaks to the significance of Tyr-196 in the mechanism of reduction of 2-cyclohexenone. Decoupling the Oxidative Half-reaction-A fuller mechanistic picture of the oxidative half-reaction was given by observing the reaction with various substrates. The character of the substrate significantly affected the extent to which the reaction was slowed. The examples of 1-nitrocyclohexene and 2-cyclohexenone illustrate two extreme situations, which may be integrated into a single model to explain the differential results with wild type enzyme and Y196F and elucidate the nature of the oxidative half-reaction. While with 2-cyclohexenone enzyme oxidation is slowed by nearly 6 orders of magnitude, with 1-nitrocyclohexene the rate constant only falls to 0.85 times the wild type value.
The reduction of the α,β-unsaturated bond requires both hydride transfer from the flavin to the Cβ and proton uptake at the Cα from Tyr-196. Whether these components of the reaction occur in a stepwise or concerted fashion has remained an open question. The differential reoxidation rate constants across the series of compounds examined suggest a model for the oxidative half-reaction (Fig. 7). With Y196F, since no proton donor is oriented toward Cα and the enzyme is only equipped to perform hydride transfer, catalysis of the two components of the half-reaction is likely decoupled with all substrates. In the case of 1-nitrocyclohexene, where the reaction is decoupled in the wild type enzyme (as evidenced by the biphasic generation of product), in Y196F the intermediate aci-nitro compound is rapidly generated (1A), while decay of the aci-nitro intermediate to its tautomeric product proceeds at the uncatalyzed rate (1B). For 2-cyclohexenone, the fact that hydride transfer is not observed to occur with Y196F suggests that the intermediate enolate anion is less thermodynamically accessible. Thus, Y196F remains tightly bound to 2-cyclohexenone while oxidation of the enzyme is slowed by 6 orders of magnitude. By comparison, this suggests that wild type enzyme may be able to catalyze the reaction through a different mechanism than that observed with 1-nitrocyclohexene. The results can be explained by noting that the presence of Tyr-196, activated for proton transfer to Cα, either allows for a concerted transfer of hydride and proton or stabilizes the transition state for the transfer of hydride (2). The model describing these two reactions may be applied to other substrates. The extent of the reduction in rate with Y196F versus the wild type enzyme would be expected to depend upon the thermodynamic accessibility of the intermediate species resulting from hydride transfer decoupled from proton uptake. Applied to cinnamaldehyde, the extensive conjugation in the substrate is expected to make the intermediate more thermodynamically accessible than for 2-cyclohexenone, as the energy difference between the π and π* orbitals is reduced by conjugation. The greatest effect on catalysis is shown with 2-cyclohexenone and methyl vinyl ketone, both of which lack the conjugated double bonds present in cinnamaldehyde and the resonance-stabilized intermediate available to 1-nitrocyclohexene. Characterization of the Michaelis Complex of the Oxidative Half-reaction-Our analysis of the altered kinetics of Y196F allows for insight into the chemical step(s) of the oxidative half-reaction but fails to provide a description of the interaction of substrate with reduced enzyme upon binding. The slow catalysis of the oxidative half-reaction, however, offered the unique opportunity to examine the manner in which enzyme primes substrate for the oxidative half-reaction. The Michaelis complex with reduced enzyme bound to substrate was examined by the use of DMACNA, which contains a chromophore amenable to study. Upon binding to reduced enzyme, a 30-nm bathochromic shift is observed in the spectrum of the substrate, with little change in extinction. Several precedents exist for such an observation. In liver alcohol dehydrogenase, a ternary complex formed between enzyme, NADH, and DMACNA results in a 66-nm red shift and a marked increase in extinction, which is due to coordination of the carbonyl oxygen of the substrate with the active site Zn²⁺ (30).
In enoyl-CoA hydratase, the presence of a strong polarizing electric field is used to explain the 90-nm red shift and marked increase in extinction upon binding of 4-N,N-dimethylaminocinnamoyl-CoA to enzyme (31). In the case of OYE, the lack of appropriately oriented oppositely charged residues in the active site makes such a polarizing electric field unlikely. When DMACNA was examined in a series of solvents of varying polarity, the λmax of the free substrate was correlated with solvent polarity (32). The trend is consistent with a narrowing of the gap between the π and π* orbitals in a more polar environment, causing a lower energy transition (33) due to the greater asymmetry of the π* orbital. A 30-nm shift upon binding to enzyme can be explained by noting that the active site provides a polar pocket, with the His-191 and Asn-194 residues able to polarize the carbonyl and stabilize the enolate resonance form of DMACNA. In addition, the importance of this hydrogen bonding for substrate binding and priming for catalysis is revealed with the wild type enzyme by the trend toward tighter binding of 4-substituted cinnamaldehyde compounds having electron-donating substituents as compared with those having electron-withdrawing substituents. The polarization of substrate upon binding, as observed in Y196F where catalysis is slowed, is likely an integral part of the means of activating the substrate for its reduction. Speculations on a Physiological Role for OYE-The model for the mechanism of the oxidative half-reaction also allows for speculation regarding the nature of the physiological oxidant. Under the assumption that the enzyme is designed to meet the needs of the natural substrate or substrates, a substrate with a less thermodynamically accessible intermediate would justify the presence of Tyr-196. Substrates that can stabilize the product of hydride transfer, as with the resonance stabilization of cinnamaldehyde or the aci-nitro intermediate of 1-nitrocyclohexene, would be less likely to require nature to engineer OYE as it has. Several straight chain and cyclic enones, which lack such resonance stabilization, have proven to be among the best substrates for the oxidative half-reaction. These include many of the possible breakdown products of lipid peroxidation, which can have toxic effects on cells (34). Studies of the effect on colony forming efficiency in S. cerevisiae in the presence of various aldehydes have suggested that the effectiveness of inhibition of cell proliferation decreased in the order: 2,4-alkadienals > 4-hydroxyalkenals > 2-alkenals >> alkanals (35). In other species, inhibition of cell proliferation is seen to be absent or low with alkanals (36). OYE, by catalyzing the reduction of the α,β-olefinic unit, thus could serve in a defensive role against these substrates by reducing their toxicity. Bakers' yeast, established to be effective in antioxidant defense, has several known defense systems including glutathione, superoxide dismutase, catalase, and peroxidase (37). Enzymes involved in such detoxification pathways are typically marked by wide species differences, several isoforms of differing kinetics, broad specificity, and high catalytic efficiency (38). OYE, with several isoforms marked by their broad substrate specificity and efficiency, is a prime candidate for involvement in such detoxification pathways.
8,918
1998-12-04T00:00:00.000
[ "Chemistry" ]
Costly Advertising and the Evolution of Cooperation In this paper, I investigate the co-evolution of fast and slow strategy spread and game strategies in populations of spatially distributed agents engaged in a one-off evolutionary dilemma game. Agents are characterized by a pair of traits, a game strategy (cooperate or defect) and a binary 'advertising' strategy (advertise or don't advertise). Advertising, which comes at a cost, allows investment into faster propagation of the agents' traits to adjacent individuals. Importantly, game strategy and advertising strategy are subject to the same evolutionary mechanism. Via analytical reasoning and numerical simulations I demonstrate that a range of advertising costs exists such that the prevalence of cooperation is significantly enhanced through co-evolution. Linking costly replication to the success of cooperators exposes a novel co-evolutionary mechanism that might contribute towards a better understanding of the origins of cooperation-supporting heterogeneity in agent populations. Introduction Cooperative behaviour - acting for the benefit of the group even if not in the immediate interest of the individual - is common in life. Examples are found in many simple and more complex organisms, ranging from bacteria in biofilms [1] up to humans and human society (see e.g. [2]). Explaining the emergence and sustainability of cooperation has attracted considerable interest over the last decades. Previous approaches typically use the framework of evolutionary game theory [3] that describes the spread of strategies in populations of individuals engaged in prototypical dilemma situations. Reproductive success is determined by payoffs which depend on an individual's strategy and on the strategies of its interaction partners. One of the most often studied dilemmas in this context is the prisoner's dilemma. Two individuals are simultaneously faced with a choice between two options, "C" (for cooperate) and "D" (for defect). Mutual cooperation is rewarded with a payoff of R. Defectors playing against cooperators receive the temptation to defect T, while cooperators are paid the "sucker's" payoff S in these interactions. Last, mutual defection results in a payoff P, the punishment for mutual defection, for both players. Payoffs are ranked T > R > P > S with T < 2R, such that irrespective of an opponent's choice an individual is best off playing D. Hence the Nash equilibrium is (D,D) with a group payoff of 2P, which is inferior to the social optimum of 2R that could be achieved by playing (C,C). How then can cooperative behaviour be explained? An approach that has found much attention in recent years is to consider evolutionary games in structured populations [4]. In this way, for instance, a spatially distributed population can support cooperation [5]. The basic cooperation-supporting mechanism is network reciprocity (see [6] for a classification of cooperation-supporting mechanisms), i.e. strategies assort in space such that clusters of cooperators can shield cooperation against the invasion of defection. The basic findings of [5] for the prisoner's dilemma have been extended in many ways, e.g. by considering the effects of asynchronicity [7], various forms of noise [8][9][10][11], and payoff structures other than the prisoner's dilemma game [12,13]. More recently, the evolution of cooperation on complex networks [14][15][16] has also become a major field of study.
The prevailing finding is that heterogeneity in network structure can strongly enhance the support for cooperation. This effect is due to the role of hub nodes as cooperation leaders [15,17]. Hub nodes have many more opportunities to play the game (and hence generate payoff) than average nodes. Consequently, they tend to impose their strategies on adjacent nodes. Then, if a hub node was a defector, it would quickly undermine its position by surrounding itself with defectors, whereas it would reinforce its position when following a cooperate strategy. This basic effect of heterogeneity on cooperation can also be observed on regular or spatial networks when other heterogeneity in agent properties is introduced. Examples of this type of model are models of learning and teaching [18][19][20] or models that consider differential abilities of agents to evaluate [21] or generate payoff [22][23][24]. Whether assuming a scale-free network topology as in [15], a distribution of learning and teaching abilities as in [18], or quenched stochasticity in payoff structures as in [22], all the above studies presuppose a fixed heterogeneous system structure. Such an assumption can be reasonable if (e.g. environmental) processes unrelated to the evolutionary dilemma game shape system structure. However, it remains an important question to investigate how system structure and strategies can co-evolve to create the dynamic patterns that allow for cooperation to survive (see [25] for a review on that topic). Most prominently this question has been addressed in the context of adaptive networks and cooperation [17,[26][27][28]. Other studies have investigated mechanisms for the coevolution of teaching or learning abilities and cooperation [29][30][31]. This work typically relies either on reasonable ad-hoc rules [30,31] or on a dynamics of system structure that is similar to Hebbian learning [32]: i.e. what is successful remains (as in the case of co-evolutionary network models [17,[26][27][28]) or becomes stronger (as e.g. in the context of [29]). Whereas such a combined dynamics can explain the coevolution of strategies and cooperation-supporting structure, the dynamics of system structure is not subject to an evolutionary dynamics itself. Previous models like [17,[26][27][28][29][30][31] might thus be suitable in the context of human learning (or for any types of more sophisticated agents), but would be problematic in the context of very simple biological organisms. To explain this point in more detail, consider a model of teaching and learning developed in [18]. The study investigates a setup in which two types of agents (e.g. teachers and learners) are subject to an evolutionary prisoner's dilemma on a 2d spatial grid. Agent types are assumed to be fixed and cooperation is supported by the implied heterogeneity in strategy adaptation speed. Now consider a scenario in which agents' teaching/learning abilities (or 'advertising' abilities as I will call them subsequently) are passed on together with their game strategies. What would be observed is that the teaching trait will spread in the population, thus reducing heterogeneity and hence removing the support for cooperation. This raises the question: Can evolutionary processes that govern the dynamics of both strategy and teaching trait give rise to the necessary heterogeneity to support cooperation? In this paper I will follow previous studies as [33,34] that modelled individual trait selection in the prisoner's dilemma game to address this topic. 
I demonstrate that the answer to this question is yes, but only provided that the advertising trait is costly and the costs of advertising are within a certain range. In the following, I consider 'advertising' as an agent-specific ability to enhance the agent's chances of strategy propagation. This may be understood in a social context as an individual's persuasiveness (or effort to persuade others to imitate it). In a biological context, 'advertising' may be linked to an investment into replication that comes at a cost, or may be interpreted as a form of signalling that enhances an individual's chance of being imitated. However, note a crucial difference to models of cheap-talk, green-beard-type signalling and the evolution of cooperation [35][36][37]. The present model does not allow sophisticated agents that have the ability to play strategies that discriminate between game partners' signals. Rather, signals promote the replication of strategies when present. The organization of the paper is as follows. The Methods section starts with a description of the model and the typical setup of simulation experiments. The Results section presents analytical results for the well-mixed case and then proceeds with a numerical analysis of the evolutionary game in space. The generality of results and the wider context are analyzed at the end of the results section and in the final section of the paper. Methods I consider a set of N agents distributed on a 2d spatial L × L square lattice with periodic boundary conditions. Adjacency relationships are defined by von Neumann neighbourhoods. Agents are engaged in the one-off prisoner's dilemma, play pure strategies, either s = 0 (for defect) or s = 1 (for cooperate), and receive payoffs depending on game outcomes. Following a large part of the literature, I parametrize the prisoner's dilemma via a payoff matrix (Eq. (1)) in which a single parameter 0 ≤ r ≤ 1 classifies the dilemma strength. In addition to game strategies, agents have a trait σ that determines their 'advertising' strategy. Hence, four strategy combinations are possible: advertising cooperators (CA), non-advertising cooperators (C), advertising defectors (DA), and non-advertising defectors (D). An agent with σ = 1 advertises, such that it has a chance enhanced by an amount b > 0 of being selected as a reference agent for strategy updating. Agents with σ = 0 still have a chance of being selected as a reference, but this chance is smaller than that of an advertiser. Advertising is costly, and an agent that advertises will have a cost a > 0 deducted from its payoff before strategy updating. More specifically, the following algorithm is implemented for the evolution of game strategy s and advertising trait σ: (1) In a typical experiment the system is seeded with a random allocation of 50% cooperators and 50% defectors. Both cooperators and defectors are equally likely to advertise or not. In some cases, in particular for large benefit of advertising, otherwise stable phases cannot evolve out of randomly allocated initial conditions. In such cases I initialize simulations by a correlated arrangement of the four strategies such that like types cluster. Long term solutions do not depend on specifics of these initial arrangements. (2) A focus agent i is chosen at random. Amongst the focus agent's neighbours, a reference agent j is chosen probabilistically such that agents with the advertising trait have an enhanced chance of being selected by the focus agent i, i.e. with probability p_{i→j} = A_ij (1 + b σ_j) / (k_i + b Σ_l A_il σ_l) (Eq. (2)),
where A_ij is the adjacency matrix of the contact network and k_i = Σ_j A_ij is the degree of node i. (3) Game payoffs p_i^(0) and p_j^(0) of both the focus and the reference agent are calculated from games they play against all of their respective neighbours. Following this, advertising costs are deducted if applicable, i.e. p_i = p_i^(0) − a σ_i, and analogously for j (Eq. (3)). (4) In a next step, strategies are updated according to Fermi pairwise updating. Accordingly, with probability w = 1/{1 + exp[(p_i − p_j)/k]} (Eq. (4)), agent i copies the strategy traits (both game and advertising strategies) from agent j. The parameter k in Eq. (4) gives the intensity of noise in strategy updating and is set to k = 0.1 for all of the following simulation experiments. (5) Steps 2, 3, and 4 are iterated until a quasistationary state is reached, and then equilibrium concentrations of the four strategy combinations (advertising cooperators n_ca, non-advertising cooperators n_c, advertising defectors n_da, and non-advertising defectors n_d) are determined as averages over a sufficient number of further iterations. In all following experiments, system sizes from N = 100 × 100 up to 800 × 800 have been considered, and system sizes were increased as required close to transition points. The above model setup differs slightly from the setup of the original model of learning and teaching developed in [18], in which learning or teaching abilities are included as an agent-specific prefactor in the strategy transmission rates in Eq. (4). The purpose of the present setup is to make the role of advertising more explicit in the way agents select reference partners. Very similar results to the ones presented below can be obtained for models in which advertising is directly included as a prefactor to Eq. (4) and the probabilistic reference partner selection step is omitted. The Advertising Game To understand the effect of advertising in combination with an evolutionary dilemma game, it is instructive to gain insights into the incentives for advertising in a well-mixed population. Let us consider a population composed of two types of agents, 'advertisers' (concentration n_0) and non-advertisers (concentration n_1 = 1 − n_0). Payoff differences are then determined by the cost of advertising a, and I assume that strategy concentrations in the population evolve according to Eqs. (2) and (4). In the limit of large systems, the evolution of concentrations is governed by a rate equation of the form dn_0/dt = n_1 T_{1→0} − n_0 T_{0→1} (Eq. (5)), where T_{0→1} and T_{1→0} (Eqs. (6) and (7)) are the transition probabilities that an advertiser changes its strategy to non-advertising or vice versa. The first factors in Eqs. (6) and (7) give the probability of selecting an agent of the other type as a reference, and the second factors give the probabilities of adopting the other type's strategy once that type has been selected as a reference. Combining Eqs. (5), (6), and (7), straightforward manipulation gives a criterion for a critical cost of advertising a_c, a_c = k ln(b + 1), (8) such that advertising is not viable in a well-mixed population if a > a_c and dominates the entire population if a < a_c. A viscous population structure does not give any additional benefit for either advertisers or non-advertisers and only slows down the diffusion of strategies. Accordingly, one would expect the criterion (8) to hold on spatial lattices as well. This has been confirmed by simulation experiments (data not shown). Advertising and Cooperation in Well-mixed Populations Consider the coevolution of advertising trait σ and game strategy s in a well-mixed population. In this setup two regimes must be distinguished.
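Before distinguishing these regimes, the following minimal Python sketch illustrates one way the spatial model described in the Methods could be implemented. It is not the author's original code: the reference-selection weights 1 + b·σ, the standard Fermi copy probability, and the weak-dilemma payoff parametrization (R = 1, P = 0, T = 1 + r, S = −r) are assumptions that merely match the verbal description and are consistent with the critical-cost result of Eq. (8).

```python
# Minimal sketch of the co-evolutionary advertising model on an L x L lattice.
# Assumed (not verbatim from the paper): payoff matrix R=1, P=0, T=1+r, S=-r;
# reference selection with weight 1 + b*sigma; Fermi rule 1/(1 + exp((p_i - p_j)/k)).
import numpy as np

rng = np.random.default_rng(0)
L, r, a, b, k = 100, 0.02, 0.15, 3.0, 0.1

s = rng.integers(0, 2, size=(L, L))        # game strategy: 1 = cooperate, 0 = defect
sigma = rng.integers(0, 2, size=(L, L))    # advertising trait: 1 = advertise

def neighbours(x, y):
    """Von Neumann neighbourhood with periodic boundaries."""
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

def payoff(x, y):
    """Payoff of agent (x, y) against its four neighbours, minus the advertising cost."""
    me, total = s[x, y], 0.0
    for nx, ny in neighbours(x, y):
        other = s[nx, ny]
        if me and other:       total += 1.0        # R: mutual cooperation
        elif me and not other: total += -r         # S: sucker's payoff
        elif not me and other: total += 1.0 + r    # T: temptation
        # mutual defection contributes P = 0
    return total - a * sigma[x, y]

def monte_carlo_step():
    """Pick a focus agent, pick a reference with advertising bias, copy both traits (Fermi)."""
    x, y = rng.integers(0, L, size=2)
    nbrs = neighbours(x, y)
    weights = np.array([1.0 + b * sigma[nx, ny] for nx, ny in nbrs])
    jx, jy = nbrs[rng.choice(len(nbrs), p=weights / weights.sum())]
    p_i, p_j = payoff(x, y), payoff(jx, jy)
    if rng.random() < 1.0 / (1.0 + np.exp((p_i - p_j) / k)):
        s[x, y], sigma[x, y] = s[jx, jy], sigma[jx, jy]

for _ in range(50 * L * L):                # short illustrative run; real runs need far more sweeps
    monte_carlo_step()
print("fraction of advertising cooperators:", np.mean((s == 1) & (sigma == 1)))
```

In such a sketch, choosing the cost a between k ln(b + 1) and the upper threshold discussed below is where co-evolution would be expected to favour advertising cooperators over advertising defectors.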
For cheap advertising (a < a_c), it can easily be shown that, irrespective of the composition of the population, net transition rates from any species in the population to advertising defectors are positive. Hence, n_da = 1 is a stable equilibrium point and cooperation is not sustainable. For expensive advertising (a > a_c), net transition rates from any species to non-advertising defectors are positive, and hence n_d = 1 is a stable fixed point. Similar arguments show that for a = a_c any composition of advertising and non-advertising defectors with n_d + n_da = 1 is a stable equilibrium point. Unsurprisingly, one concludes that advertising cannot facilitate stable cooperation in well-mixed populations. Results in Structured Populations The above arguments change in structured populations, in which network reciprocity favours positive assortment of strategies. When network reciprocity is present, it is advantageous for cooperators to surround themselves with the same strategy. Hence there is a benefit to a cooperator from investing in 'advertising' that surpasses the threshold cost a_c obtained from the cost-benefit analysis of the advertising game in the absence of a dilemma situation. Clearly, this benefit to cooperators of surrounding themselves with like types is also limited. One thus also expects an upper threshold cost of advertising a_c^(2) such that advertising is no longer viable for cooperators for a > a_c^(2). The situation is different for defectors. Defectors don't gain from surrounding themselves with like types. Hence, the threshold for the viability of advertising for defectors is the same as for the advertising game alone, i.e. given by Eq. (8). Both arguments let one surmise that there must be a range of advertising costs a_c < a < a_c^(2) such that advertising is profitable for cooperators but not for defectors. Spatial arrangements of strategies add a further dimension to the problem. Consider the range of costs a_c < a < a_c^(2) in which advertising is viable for cooperators, but not for defectors. Defectors close to cooperators can achieve very high payoffs, but payoffs of defectors surrounded by other defectors are poor. Similar to results on volunteering [38,39], this leads to a cyclical dominance between strategies that allows for coexistence in spatial settings. In particular, in 'tough' games with r > r_c^(0), advertising defectors can outcompete advertising cooperators. However, the cost of advertising can disadvantage advertising defectors in direct competition with non-advertising defectors. Further, advertising cooperators may be able to invade groups of non-advertising defectors when the cost of advertising is not too large, thus creating a cyclical dominance. For large r > r_c^(0), non-advertising cooperators never play a role: they are always outcompeted by all other strategies and are not expected to be found in the population. In the following, I verify and extend the above arguments by numerical simulation experiments. The panels in Fig. 1 show data for the dependence of the concentrations of the four strategies in the population on the dilemma strength r for b = 3 and k = 0.1. For better visualization, the panels of Fig. 2 illustrate some typical snapshots of arrangements of cooperators and defectors for interesting parameter regions and give support for the above arguments about the cyclical dominance of advertising and non-advertising defectors and advertising cooperators. Fig. 1 explores various regimes of the advertising cost parameter a.
Panel (a) characterizes the regime a = 0.1 < a_c, in which advertising is viable for both cooperators and defectors. As a result, only advertising cooperators and defectors survive, and the diagram reproduces the known phase diagram with a very low extinction threshold for cooperation at r_c = 0.02112(1) [39]. Similarly, panel (d) gives data for a = 1 > a_c^(2), a scenario in which advertising is not viable for either strategy, and again the known phase diagram is reproduced. The intermediate regime a_c < a < a_c^(2) is of more interest, cf. panels (b) and (c). Three observations stand out: (i) Coexistence of all four strategies appears to be possible only in a small interval of dilemma strengths below the known extinction threshold r_c = 0.02112(1). In particular for larger advertising cost, advertising cooperators only become viable once the dilemma strength exceeds some threshold. Likewise, in the absence of advertising cooperators for low r, advertising defectors cannot exist and only become viable at the same threshold dilemma strength at which advertising cooperators appear. (ii) Increasing the dilemma strength benefits advertising defectors, provided non-advertising cooperators are still in the population. Once these have died out (for very low dilemma strength), increasing r further increases the rates at which advertising defectors invade advertising cooperators and slows down the rate at which advertising cooperators invade non-advertising defectors. The result is a slow decrease in the numbers of advertising cooperators and a strong increase in concentrations of non-advertising defectors, which together also effect a strong decrease in the numbers of advertising defectors. Hence, further increases in r first drive advertising defectors into extinction, resulting in a co-existence regime of advertising cooperators and non-advertising defectors, cf. the phase diagram in the a-r plane in Fig. 3. The extinction of advertising defectors may allow for a recovery in the population of cooperators, but further increases of r gradually reduce survival chances for advertising cooperators. As argued above, the intermediate interval a_c < a < a_c^(2) supports coexistence of the strategies CA, DA, and D. As one would expect from previous work on cyclical dominance between three strategies [38,39], wave-like patterns and oscillations are found in this coexistence regime. Some simulation results that demonstrate this are illustrated in Fig. 4. Panel (a) visualizes simulation data for the dependence of the concentration of advertising cooperators and the maxima and minima of oscillations on the cost of advertising. Towards the lower end of cost parameters, amplitudes of oscillations are very large. Amplitudes decline when the cost of advertising is increased and advertising defectors decrease in numbers. Eventually, the extinction threshold of advertising defectors at a = 0.200(1) marks the end of the parameter regime in which oscillations can occur. Panel (b) shows some typical time series of all three species in the regime that supports oscillations. It should, however, be noted that because invasions at various places will increasingly compensate each other when the system size is increased, the amplitude of these oscillations is a declining function of system size. This contrasts with spatially synchronized patterns that have been observed, e.g., in [40]. (iii) Cyclical dominance and the suppression of advertising defectors at large dilemma strengths replace the competition between cooperators and defectors by a competition between advertising cooperators and non-advertising defectors.
The mechanism results in a considerable extension of the range of dilemma strengths for which cooperation can survive. This range is particularly large for small costs of advertising just at the threshold cost given by Eq. (8) and decreases linearly with costs, cf. also the phase diagram for b = 3 and k = 0.1 in Fig. 3. The full phase diagram in the a-r plane given in Fig. 3 shows that regimes exist in which various combinations of the four strategies can co-exist. Several of the transitions between such regimes are discontinuous. One example is the transition DA → CA+DA+D at the sharp cutoff point defined by Eq. (8). Another example is the transition CA+DA → C+DA, which is the result of an indirect territorial battle of CA and C, similar to what has been described in the context of adaptive rewarding for public goods games [41,42]. In the case presented here, the outcome of the competition of CA and C to invade DA defines the transition point. Similarly, the transitions C+DA → CA+C+DA+D and CA+C+DA+D → C+D have also been found to be discontinuous. It is of interest how the size of the cooperation-supporting region depends on the two parameters, cost of advertising a and benefit of advertising b, that characterize the coevolutionary advertising game. The panels of Fig. 5 illustrate simulation results addressing this question. In panel (a), maximum and minimum costs required for cooperation to survive are given. One notes that the lower bound agrees very well with the logarithmic dependence predicted by Eq. (8). In contrast, the upper boundary has a step-like dependence on the benefit of advertising. Whereas a cooperation-supporting range of costs exists for every value of b, the size of the region increases markedly at b ≈ 2. At the same threshold there is also a jump in the range of dilemma strengths at which cooperation can survive, see panel (b). This indicates that cooperation benefits when advertising becomes more effective. However, in finite systems this is not necessarily the case. In particular, for large b in cost regions in which advertising defectors can survive, the amplitudes of the oscillations in cooperator and defector populations can become very large, such that advertising cooperators often go extinct when absorbing boundaries are hit. Robustness How sensitive to changes in model structure is the cooperation-supporting effect of advertising? In this section I will briefly comment on a number of important factors. First, I note that noise, i.e. even a small but non-vanishing chance of passing on a strategy that is less successful than the strategy of the focus agent, is essential for the mechanism to operate. In fact, for k = 0 the transmission probabilities in Eq. (4) become step-like, strategy propagation becomes deterministic, and advertising is never viable. On the other hand, when k > 0 advertising can always support cooperation. Simulations indicate that the support for cooperation is maximized the smaller the amount of noise. In the light of the importance of noise for advertising to be successful, it remains an interesting question whether an equivalent of the underlying mechanism can be found for strategy replication mechanisms other than Fermi pairwise updating, like e.g. the replicator dynamics. A second comment is in order about the binary nature of strategies in the present model. One might wonder whether cooperation could be supported if the advertising trait were a continuous variable.
A second comment is in order about the binary nature of strategies in the present model. One might wonder if cooperation could still be supported if the advertising trait were a continuous variable. Preliminary simulations indicate that this is indeed the case, but the viable amounts of advertising for advertising defectors and cooperators would differ. Detailed results for this model will be reported elsewhere. Third, the outcomes of evolutionary simulations can depend critically on the updating scheme, e.g. whether one chooses synchronous or asynchronous updating [7]. The difference between the two schemes tends to be less important for probabilistic models like the one discussed in this paper. Experiments with a parallel updating scheme show that the principal results are robust with regard to the choice of updating scheme. In fact, synchronous updating appears to enlarge the regime in which cooperation is supported. A fourth point worth noting is that the support for cooperation from advertising is not only due to a cyclical dominance mechanism between three strategies. A prototypical phase diagram for not too large benefit of advertising, like the one of Fig. 3, will always contain a large region in which the usual competition between cooperators and defectors is replaced by a competition between advertising cooperators and non-advertising defectors. In this regime cooperators can maximally exploit the investment into surrounding themselves with fellow cooperators, a strategy that is not viable for defectors. One might wonder whether the investment into advertising would not obliterate the benefits of cooperation. A quick back-of-the-envelope calculation shows that this is typically not the case. For an estimate of the social benefits of advertising, compare the social payoffs of a pair of advertising cooperators and a pair of non-advertising defectors. In the first case a group payoff of 2(1-a) is achieved, and in the second case one has a payoff of zero. Hence advertising is overall beneficial if it can lend support to cooperation for a < 1. A comparison with Eq. (8) shows that this imposes a limit on b, and one obtains the rough estimate b < e^(1/k) - 1 for advertising to be socially beneficial. Fig. 5 illustrates that the threshold value of b ≈ 2 lies well below this estimate, and thus the viable region in the a-b parameter space is rather large and becomes larger the smaller the amount of noise k in strategy propagation. In contrast to the above, in the regime a < a_c^(1) in which advertising is beneficial for both cooperators and defectors, advertising is not socially beneficial and may assume the character of an arms race between cooperators and defectors. A further point worth mentioning is the importance of the mechanism by which advertising and game strategies are inherited. The present model assumes that both strategy components are passed on at the same time and that there is no separation of timescales between the spread of the game and the advertising strategies. This is in fact a crucial assumption, and some exploratory simulation experiments show that the support for cooperation is reduced markedly if this condition is relaxed. Last, it is worth remarking that the success of advertising for cooperators is not limited to the two-player version of the prisoner's dilemma game. Preliminary results show that a very similar mechanism can also operate in the public goods game.
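The back-of-the-envelope estimate above can be made concrete with a few lines of arithmetic. The sketch below evaluates the bound b < e^(1/k) - 1 for several noise levels; it assumes the logarithmic cost threshold of Eq. (8) discussed in the text and is meant only as a numerical illustration, not as a reproduction of the paper's derivation.

```python
import math

def social_benefit_bound(k):
    """Rough upper bound on the advertising benefit b below which advertising
    is socially beneficial, following the back-of-the-envelope estimate in
    the text (b < e^(1/k) - 1)."""
    return math.exp(1.0 / k) - 1.0

# A pair of advertising cooperators earns 2*(1 - a) > 0 for a < 1, while a
# pair of non-advertising defectors earns 0, so the bound grows rapidly as
# the noise k in strategy propagation is reduced.
for k in (0.5, 0.2, 0.1):
    print(f"k = {k}: advertising is socially beneficial up to roughly b = {social_benefit_bound(k):.1f}")
```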
Discussion In this paper I have discussed 'advertising' as a mechanism by which agents can make a costly investment into faster strategy propagation in evolutionary dilemma games in space. Importantly, both strategy components, advertising and game strategy, are subject to the same evolutionary dynamics, and their interplay can co-create dynamic patterns of fast and slow strategy propagators that can sustain cooperation far beyond the regime in which cooperation is supported by network reciprocity in the standard spatial evolutionary game. An important requirement for this extension of the support for cooperation is an appropriate choice of advertising costs. Since cooperators gain support by surrounding themselves with like strategies, advertising can benefit cooperators more than defectors. Hence advertising needs to be costly enough to be unviable for defectors, but should remain below a second threshold above which it no longer pays off for cooperators either. A careful analysis of the costs and benefits of advertising reveals that such a regime always exists, provided that strategy propagation is subject to noise which occasionally allows inferior strategies to be copied. I have argued that the support for cooperation by advertising is robust and that advertising can be socially beneficial for the group, provided that the benefits of advertising are not too large. As a more general aside, advertising essentially introduces a second game on top of the original dilemma situation. The essence of this second game is competition for an accelerated rate of strategy propagation. Considered as a standalone game, advertising could be interpreted as a defect strategy, because it leads to inferior group payoffs compared to non-advertising. However, in the combined game in structured populations a linkage disequilibrium develops: cooperation in the prisoner's dilemma game associates with the defect strategy in the advertising game, whereas defection in the prisoner's dilemma associates with the cooperate strategy in the advertising game. Interestingly, payoffs achieved in the combined game can be larger than in the standalone prisoner's dilemma game. The present model serves two purposes. First, it addresses a gap in the current literature by demonstrating that a trait marking heterogeneity in strategy spread (i.e. the advertising strategy) and cooperation can co-evolve even when both are subject to the same evolutionary dynamics. Second, by pointing out that costly replication can support cooperation, it may point to a more general mechanism and to some interesting directions for future work.
6,540.8
2013-07-08T00:00:00.000
[ "Economics" ]
An Approach of the Maximum Curvature Measurement of Dynamic Umbilicals Using OFDR Technology in Fatigue Tests Prototype fatigue tests, which simulate the in-place working conditions of dynamic umbilicals, are usually conducted to verify fatigue life. The fatigue failure hot spot is located on the top segment of the umbilical. The umbilical reaches its maximum curvature at the hot spot. The hot spot position of the umbilical is always inside or near the bend stiffener. The radial gap between the umbilical and the bend stiffener is very small, making it difficult to place traditional sensors in the gap. In this work, a prototype umbilical tension-bending test is conducted, and optic frequency domain reflectometer (OFDR) technology is utilized to measure the distributed strain of the umbilical inside the bend stiffener. The curvature of the umbilical is obtained to locate the fatigue hot spot in order to verify the feasibility of this approach. The test results show that the umbilical curvature can be measured well with the use of OFDR technology. The influence of tension and swing angle on the position of the maximum curvature is studied. The method proposed in this article provides a valuable approach for curvature monitoring in dynamic umbilical fatigue tests. INTRODUCTION Dynamic umbilicals play an important role in offshore oil and gas development. They are used to connect the upper floater and the subsea Christmas tree system, supplying necessary control, energy, and chemicals. The fatigue life of the umbilicals should be predicted to ensure dynamic application. The prototype fatigue test of umbilicals is the most reliable and accepted method to verify the predicted fatigue life, and it must be conducted before the in-field application. The key to the prototype fatigue test is to simulate the behavior of the umbilical during its in-place working condition accurately. Proper measurement methods should be applied to monitor the umbilical behavior and performance during the fatigue test. The in-place working condition of the umbilical is shown in Figure 1. The top segment of the umbilical is believed to be a critical area in fatigue analysis and testing, and this segment is what the prototype fatigue test simulates. The fatigue failure hot spot is in this area, as the structure in this area sustains the highest tension load and the maximum alternating curvature (He et al., 2020a). The umbilical behavior in this area is difficult to predict due to the complex contact and interaction with the bend stiffener. The bend stiffener is a polymeric, conically shaped structure with nonlinear mechanical characteristics. The polyurethane material of the bend stiffener is viscoelastic. The shape of the bend stiffener and the cross-section of the umbilical are shown in Figure 2. The radial gap between the bend stiffener and the umbilical is illustrated in Figure 2A. The size of the gap is normally 5-15 mm. The gap leads to different deformation and curvature of the bend stiffener and the umbilical. Figure 2B shows the cross-section of an umbilical; the components are unbonded and free to slip in the umbilical, which leads to its nonlinear bending stiffness. Several studies have been conducted to obtain the curvature and bending behavior of umbilicals in this area. Tang et al. (2015) developed an analytical model considering the material and geometrical nonlinearity of the bend stiffener to calculate the curvature of the top segment of the flexible riser. Vaz et al.
(2007) and Caire (Caire et al., 2016; Caire and Vaz, 2017) proposed analytical formulations for bend stiffeners considering nonlinear viscoelasticity. Ruanet et al. (2017) conducted a dynamic analysis of a flexible riser with a bend stiffener. However, the above studies mainly focused on obtaining the behavior of the bend stiffener instead of the umbilical. Determining the location of the maximum curvature is challenging using analytical methods. Different measurement methods have been utilized to obtain the curvature of the umbilical in fatigue tests. He et al. (2020b) applied an image-based technique using the optical target tracking method to measure and monitor the curvature distribution of a bend stiffener in full-scale bending-tension tests. Leroy and Estrier (2001) used parallel strain gauges to obtain axial and transverse strain variations in tension armors and calculate the bending behavior of a flexible riser. However, the curvature of the umbilical is different from that of the bend stiffener. The location of the maximum curvature differs between the two structures, which leads to different fatigue analysis results. The curvature of the umbilical inside the bend stiffener cannot be measured directly by traditional measurement methods. The radial gap between the bend stiffener and the umbilical is small, making it difficult to place a sensor. It is necessary to develop a method to obtain the curvature of the umbilical and locate the hot spot in the fatigue test for a more accurate fatigue life analysis. Optical fiber sensing technology has been applied in many domains for its advantages in parameter measurement in field and laboratory tests. Fiber-optic sensors are used to transmit and sense signals such as strain, temperature, and pressure. Compared with other sensors, the fiber-optic sensor has a small size, which makes strain measurement or monitoring in small spaces possible. The sensor can be multiplexed along the length of a fiber for more measurement points. Optical fiber sensing technology also has the advantages of high accuracy, low loss, and immunity to electromagnetic interference. Different kinds of fiber-optic sensing technology have been developed, such as fiber Bragg grating (FBG) sensors (Li et al., 2011; Gautam et al., 2016; Matveenko et al., 2018) and distributed sensing dominated by the Brillouin optical time domain reflectometer (BOTDR) (Moffat et al., 2015; Wu et al., 2015) and by Brillouin optical time domain analysis (BOTDA) (Inaudi and Glisic, 2010). Ren et al. (2014) and Jia et al. (2019) proposed methods to detect and localize pipeline leakage using FBG sensors. Feng et al. (2019) applied BOTDA technology to measure the strain of a geotechnical structure with less than 4% accumulated error. Gao et al. (2018) applied OFDR technology to measure the strain of a cast-in-place concrete (PCC) pile by installing the optical fiber onto the surface of the PCC pile. Ren et al. (2018) applied the OFDR technique to monitor pipeline corrosion and leakage. With the advantage of the small size and continuous sensing characteristics of OFDR technology, a direct measurement of umbilical curvature may be achieved. This article presents a curvature distribution measurement of dynamic umbilicals using high-accuracy OFDR-based strain sensing technology. A prototype fatigue test of an umbilical was conducted with the fiber-optic sensors installed on the outer sheath. The strain distribution on the tested umbilical inside the bend stiffener was measured.
The maximum bending curvature spot of the umbilical was found and located. The influence of tensile load and swing angle on the position of the maximum curvature is studied. PROTOTYPE FATIGUE TEST DESCRIPTION The test method, test rig, test specimen, and test setup are described in this section. Test Method and Test Rig The goal of the fatigue test is to simulate the true behavior of the umbilical and bend stiffener system at the top segment of the layout. The curvature of the umbilical is a critical parameter to evaluate the accuracy of the simulation. The real working-condition behavior of the umbilical is simplified as a combination of tension and in-plane bending. A constant tensile load and alternating bending moments should be applied to the umbilical in the fatigue test. The principle of the fatigue test is to guarantee the similarity of strain where the maximum curvature is reached, which leads to two critical control parameters of the test: the location and the value of the maximum curvature. The maximum curvature is always located in the area where the umbilical is inside of or near the bend stiffener. The curvature of the umbilical inside the stiffener is difficult to calculate using a theoretical analysis model due to the complex contact behavior and the nonlinear characteristics of the bend stiffener and the umbilical. It is necessary to obtain the curvature of the umbilical for an accurate fatigue test. As the curvature is the response of the umbilical, trial tests should be conducted to obtain the loading parameters. To guarantee the accuracy of the prototype fatigue tests, the following items should be taken into consideration and met. The length of the test specimen should be designed so that the curvature at the end of the tension actuator reaches zero. The length of the swing head from the installation end to the rotation center should be designed appropriately. Under the condition that the fatigue response of the umbilical remains unchanged, the test frequency should be increased appropriately to shorten the test duration. A prototype tension-bending fatigue test of a system involving an umbilical and a bend stiffener was conducted on the fatigue test rig at Dalian University of Technology (DUT). A schematic picture of the test rig is shown in Figure 3. The test rig simulates the real in-place working conditions of the umbilical by applying a constant tensile load via hydraulic actuators. The reciprocating bending moment is applied by a swing head linked to a rotation center. The test rig is adjustable, with a test length of umbilicals or flexible risers up to 20 m. The swing head of the test rig is also adjustable from 1 to 3.5 m. The test rig can meet most of the testing requirements for dynamic umbilicals, cables, and flexible risers. The maximum tension capacity of the test rig is 500 kN. The maximum bending capacity is 150 kNm with a swing angle within ±15°. The tension and bending actuators are free to rotate in the bending plane via a hinge joint, which leads to a more accurate simulation. The test frequency of the test rig can reach up to 0.1 Hz. Test Specimen and Test Setup A dynamic umbilical was applied in this test. The key parameters of the umbilical are listed in Table 1. A bend stiffener was assembled on the umbilical. The geometric shape of the bend stiffener is illustrated in Figure 2A. The geometric parameters of the bend stiffener are listed in Table 2. A global Cartesian coordinate system (X, Y) is adopted to describe the measurement location on the umbilical, as shown in Figure 2A.
The origin point (0, 0) is at the top of the bend stiffener. The distance from the origin point to the swing center of the swing head is 2.5 m. The length of the test sample is 16 m. The length of the bend stiffener is 2.92 m. The constant tensile load applied in the test is 100 kN. The swing angle applied in the test is from 0° to 8°, measured by a tilt angle sensor placed on the swing head. Each test was repeated three times to verify the accuracy of the results. The average environmental temperature and humidity during the test are 23°C and 70%, respectively. Fatigue failure of the umbilical typically occurs instantaneously. The behavior of the umbilical should therefore be measured during the fatigue test to infer and analyze the failure. Load cells and tilt sensors are applied to monitor the loads applied to the umbilical. The curvature distribution, change of the tension stiffness, temperature, pressure of the tubes, and continuity of the photoelectric signal should be monitored during the test. Four groups of tests were conducted, as listed in Table 3. Methods of Curvature Measurement Different measurement methods were considered to obtain the curvature distribution and the location of the maximum curvature. Direct measurement of the shape and deformation of the structure is believed to be the most accurate way to obtain the curvature. Linear Variable Displacement Transformers (LVDT) or potentiometers are standard sensors for distance measurement, but a large number of sensors would be needed to map the distribution of the curvature. Advanced sensors such as laser displacement sensors and optical target tracking technology can also be employed for curvature sensing. However, this approach can only obtain the external shape of the bend stiffener, whose curvature is different from that of the umbilical, as stated before. Considering the contact between the umbilical and the bend stiffener, a matrix piezoelectric sensor can be applied to measure the pressure distribution at their interface. The curvature can then be obtained by analyzing the relationship between pressure and curvature through a numerical model. However, the contact behavior can be altered by the presence of the sensor, and the accuracy of this method needs to be verified. The strain of the outer sheath can reflect the curvature of the umbilical when a simple load is applied. The traditional way of strain sensing is the strain gauge. A strain gauge can only be bonded to the surface of the umbilical and provides single-point measurements, so measuring the curvature distribution would require a large number of sensors. Moreover, the space between the umbilical and the bend stiffener is limited, which makes it difficult to lead out the conducting wires. The fiber-optic sensor is small and can sense at multiplexed points along the fiber, and this sensing technology requires no conducting wires. These advantages make the fiber-optic sensor a potential method for curvature measurement. The comparison of different methods of curvature measurement for the umbilical is listed in Table 4. Principle of Distributed OFDR Technology Distributed OFDR technology was developed by Eickhoff in 1981 (Eickhoff and Ulrich, 1981). The principle of OFDR is shown in Figure 4. The laser source emits continuous, tunable light. The light passes through an optical fiber coupler and is divided into two parts. One part of the emitted light is sent to the fiber-optic sensor, where Rayleigh backscattering is produced as the signal light.
Light frequency and spectrum change when the strain or temperature changes on the sensor. The other part of the emitted light is reflected back and passes through polarization controllers; it is used as the reference beam. The reflected signal light and the reference beam are coupled by the optical fiber coupler and sent to the detector. The light spectrum changes with the strain or temperature change. The spectral difference between the reflected Rayleigh backscattering light and the reference beam can be measured, compared, and analyzed in the detector. Meanwhile, the optical frequency of the Rayleigh backscattering light at different positions is different due to the tunable laser source, so the light frequency can be detected and analyzed by the detector. A relationship between the spectral shift, strain, and temperature can then be established, in which ε and ΔT are the variations of strain and temperature, K_ε and K_T are the sensitivity coefficients of strain and temperature, Δλ is the change of the resonance wavelength, Δν is the spectral (frequency) shift, and λ and ν are the mean wavelength and frequency. Based on the above principles, the strain and temperature at different positions of the fiber-optic sensor can be obtained. Layout of Fiber-Optic Sensor The layout and protection of the fiber-optic sensor are very important. The umbilical and the bend stiffener may experience large contact and friction forces during the fatigue test. The fiber-optic sensor is fragile and has low resistance to shear stress; it may break off if it is simply glued on the surface of the umbilical. Several attempts were made to find a method to protect the fiber-optic sensor. The layout of the fiber-optic sensor and system is shown in Figure 5. The main steps are introduced as follows. Take off the bend stiffener and reserve enough measurement length for the fiber-optic sensor. The reserved length should be longer than the length of the bend stiffener, so that the end of the fiber-optic sensor extends beyond the bend stiffener. Then, draw a line along the outer sheath of the umbilical; this is the location of the sensor. Cut a U-shaped notch along the line with a depth of 2-3 mm. The whole fiber-optic sensor should be placed inside the notch, as shown in Figure 5A. Clean the notch and lay the fiber-optic sensor inside it. The fiber-optic sensor should have enough reserved length on both ends for the follow-up connection with the measurement instrument. Fix the fiber-optic sensor at one end of the notch with 502 glue or a similar adhesive with a short curing time. After the sensor is fixed at one end, gently tension the fiber-optic sensor at the other side of the notch with a small weight. Fix several points along the sensor using 502 glue. Pour epoxy resin into the notch to cover the fiber-optic sensor. The fiber-optic sensor is ready after the epoxy resin has cured for 24 h. Put back the bend stiffener after all the sensors are prepared. The advantage of this method is that the fiber-optic sensor is protected while remaining sensitive to the strain change of the umbilical. Moreover, either end of the fiber-optic sensor can be connected to the measurement instrument if the other end is broken. The sensing fiber used in the test is a standard single-mode optical fiber coated with Hytrel® 6356 material. The diameter of the optical fiber is 0.9 mm. A LUNA ODiSI 6000 was used as the measurement instrument in this test.
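For illustration only, the following sketch converts a measured spectral shift into strain under the commonly used linear relation Δλ/λ = K_ε·ε + K_T·ΔT implied by the description above. The functional form, sign conventions, and coefficient values (typical literature values for standard single-mode fiber) are assumptions made here for the example; they are not taken from the cited test.

```python
def shift_to_strain(d_lambda, mean_lambda, delta_T=0.0,
                    K_eps=0.78, K_T=6.45e-6):
    """Convert a Rayleigh-backscatter wavelength shift into strain.

    Assumes the linear relation d_lambda / lambda = K_eps * strain + K_T * dT.
    K_eps (dimensionless) and K_T (per deg C) are placeholder values typical
    for standard single-mode fiber, not calibration constants from the test.
    Returns strain as a dimensionless number (multiply by 1e6 for microstrain).
    """
    return (d_lambda / mean_lambda - K_T * delta_T) / K_eps

# Example: a 1.2 pm shift at a 1550 nm mean wavelength, constant temperature.
print(shift_to_strain(1.2e-12, 1550e-9) * 1e6, "microstrain")
```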
The schematic of the curvature measurement system using fiber-optic sensors in the fatigue test is shown in Figure 5B. Three fiber-optic sensors were installed on the umbilical. The angles between the fiber-optic sensors and the neutral surface are 0°, 45°, and 90°. The length of each fiber-optic sensor is approximately 6 m. Each point on the fiber-optic sensor can be seen as a sensing point. The gauge pitch on the fiber-optic sensor was set to 2.6 mm with a 25 Hz sampling frequency. The position of the start point of the measurement segment can be set by pressing the gauge pitch before the test. Curvature Algorithm The strain on the outer sheath of the umbilical can be measured through the fiber-optic sensor. As the umbilical sustains a combination of tension and bending, the measured strain decomposes as ε = ε_tension + ε_bending. The curvature of the umbilical is caused by the bending strain and can be expressed in terms of y, the distance from the fiber-optic sensor to the neutral surface, θ, the angle between the fiber-optic sensor and the neutral surface, and d, the outer diameter of the umbilical. Fiber-optic sensor No. 1 lies on the neutral surface; under the combined load of tension and bending, the longitudinal stress and strain caused by bending are zero at this location, so the strain measured by sensor No. 1 represents the pure tension strain. The average curvature is calculated in this test from ε_1, ε_2, and ε_3, the strains measured by fiber-optic sensors No. 1, No. 2, and No. 3.
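As a minimal sketch of the curvature algorithm just described, the following code estimates the bending curvature from the three strain readings, assuming that a sensor at angle θ from the neutral surface sits a distance y = (d/2)·sin θ from it, that ε_bending = κ·y, and that the two bending-sensitive estimates are simply averaged. The exact formulas used in the article are not reproduced in the text, so this is an interpretation under those stated assumptions.

```python
import math

def curvature_from_strains(eps1, eps2, eps3, d):
    """Estimate the bending curvature (1/m) of the umbilical from the strains
    of the three outer-sheath sensors at 0, 45 and 90 degrees from the
    neutral surface.

    Assumptions (a sketch, not the article's exact equations): the bending
    strain at a sensor a distance y = (d/2)*sin(theta) from the neutral
    surface is eps_bending = kappa * y, and sensor No. 1 (theta = 0) measures
    the pure tension strain, which is subtracted from the other readings.
    Strains are dimensionless; d is the outer diameter in meters.
    """
    r = d / 2.0
    kappa_90 = (eps3 - eps1) / r                                   # theta = 90 deg, y = r
    kappa_45 = (eps2 - eps1) / (r * math.sin(math.radians(45.0)))  # theta = 45 deg
    return 0.5 * (kappa_45 + kappa_90)                             # average of the two estimates

# Example with illustrative values (150 mm outer diameter, ~0.01 1/m curvature
# superposed on 1000 microstrain of tension):
print(curvature_from_strains(1000e-6, 1530e-6, 1750e-6, 0.15))
```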
Pure Tensile Test and Static Trial Test To observe the performance of the fiber-optic sensors applied on the umbilical, a trial pure tension test was conducted. In this test, the swing head of the test rig was in the horizontal position, and there was no contact between the umbilical and the bend stiffener. A pure tensile load of 0-90 kN was applied to the umbilical. The test results are shown in Figure 6. The strain of the three sensors was recorded to verify whether the sensors were fixed well onto the umbilical; the tension strain should be stable along the umbilical. Figure 6A shows the strain along the umbilical of the No. 3 fiber-optic sensor under different pure tensile loads. As shown in Figure 6A, the strain along the arc length of the umbilical tends to be stable and varies with only a small amplitude. The strain at each point increases linearly with the increasing tensile load. There is a drop in strain at the location of 1.7 m along the arc length, caused by a loose bond of the sensor at this point. However, as the tension strain is subtracted during the test, this fluctuation of the tension strain does not affect the test results. The results of the trial test prove that the fiber-optic sensors are well fixed on the umbilical and that the measured strain follows the theoretical tendency. Figure 6B shows the tension strain of the three different sensors at the same arc length of 2 m. The error between the sensors decreases with increasing tensile force; the maximum error is 6.1% at the load of 90 kN. A follow-up static trial test 2 was conducted to compare the curvature obtained by two different measurement methods. A tensile load of 100 kN with an 8° swing angle was applied in this test. The deformed shape of the bend stiffener was measured using a laser displacement sensor, as stated in the Methods of Curvature Measurement section. The curvature of the umbilical is then calculated from the measured shape information of the bend stiffener. The maximum curvature and its location obtained by the two measurement methods are compared in Table 5. The maximum curvature obtained by the two methods is almost the same. However, there is still a difference between the locations of the maximum curvature given by the two methods. The spacing of the measuring points differs between the laser displacement sensor and the fiber-optic sensor, and the maximum curvature and its location can be measured more accurately by applying more densely spaced measuring points with OFDR. As the fiber-optic sensor is placed directly on the umbilical, it is believed that the data of the OFDR method are more reliable. The Influence of the Swing Angle on the Curvature Test 3 was conducted with a constant tensile load of 100 kN and swing angles up to 8°. The test results are shown in Figure 7. Figure 7A shows the curvature of the umbilical at different swing angles. The curvature shows an increasing trend in the starting area of the umbilical, reaches its maximum in the area between 0.16 and 0.27 m along the arc length, and then shows a decreasing trend. If the bend stiffener were not applied with the umbilical, the maximum curvature would occur at the umbilical end; under the influence of the bend stiffener, the location of the maximum curvature moves towards the middle segment of the umbilical. The rate of decrease of the curvature increases after 0.45 m along the arc length, which corresponds to the length of the root of the bend stiffener, where the outer diameter of the bend stiffener begins to decrease. The value and location of the maximum curvature are shown in Figure 7B. The maximum curvature increases linearly with increasing swing angle. The location of the maximum curvature moves towards the end side with increasing swing angle. The test results show that although the location of the maximum curvature of the umbilical is not at the same place as the swing angle changes, the area in which the maximum curvature lies is well determined. Rigorous monitoring of the umbilical should be conducted in this area, such as a denser distribution of sampling points and damage monitoring of the umbilical. The Influence of the Tensile Load on the Curvature Test 4 was conducted with a constant swing angle of 8° and varying tensile load. The test results are shown in Figure 8. Figure 8A shows the curvature of the umbilical under different tensile loads and the same swing angle of 8°. The maximum curvature and its location are shown in Figure 8B. The results show that the maximum curvature increases linearly with increasing tensile load. The location where the maximum curvature occurs differs with the tensile load and moves towards smaller arc length with increasing tensile load. With the increase of the tensile load from 40 to 80 kN, the location of the maximum curvature moves from 0.27 to 0.15 m along the arc length. When the tensile load is larger than 80 kN, the location stays stable with further increasing tensile load. CONCLUSION In conclusion, a new measurement method based on OFDR technology was developed for monitoring the maximum curvature in dynamic umbilical fatigue tests. A tension-bending test was conducted to verify the feasibility of this method.
The experimental results of this study lead to the following conclusions: (i) The distributed OFDR technology was proved to be feasible for measuring the curvature of the umbilical inside a bend stiffener in a prototype fatigue test. The maximum curvature and its location can be measured successfully by utilizing this method. (ii) The curvature of the umbilical measured by the fiber-optic sensor is different from that obtained by direct displacement measurement outside the bend stiffener. (iii) The maximum curvature of the umbilical increases linearly with increasing tensile load and swing angle. (iv) The location of the maximum curvature falls within a specific area rather than at a fixed point as the tensile load and swing angle change. The location of the maximum curvature moves towards the end side of the umbilical with increasing tensile load and swing angle. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
5,882.8
2021-07-30T00:00:00.000
[ "Engineering", "Materials Science" ]
Identifying Bacteria Species on Microscopic Polyculture Images Using Deep Learning Preliminary microbiological diagnosis usually relies on microscopic examination and, due to the routine culture and bacteriological examination, lasts up to 11 days. Hence, many deep learning methods based on microscopic images were recently introduced to replace the time-consuming bacteriological examination. They shorten the diagnosis by 1–2 days but still require iterative culture to obtain monoculture samples. In this work, we present a feasibility study for further shortening the diagnosis time by analyzing polyculture images. It is possible with multi-MIL, a novel multi-label classification method based on multiple instance learning. To evaluate our approach, we introduce a dataset containing microscopic images for all combinations of four considered bacteria species. We obtain ROC AUC above 0.9, proving the feasibility of the method and opening the path for future experiments with a larger number of species. The standard bacterial diagnostics procedure [1], presented in the upper part of Fig. 1, starts with collecting various types of test materials, such as swabs, scraps of skin lesions, urine, blood, or cerebrospinal fluid. Then, the clinical material is directly cultured on special media under specific temperature conditions (usually for 1-2 days, blood and cerebrospinal fluid samples require prior cultivation in automated closed systems for additional 1-5 days). Often bacteria colonies are too close to each other and it is not feasible to obtain monoculture colonies after the first inoculation on the culture medium. To obtain samples with single species, they need to be separated in an iterative process (1-2 days). The initial identification of bacteria is based on microscopic observation, which takes into account the growth rate, type, shape, and color. Such analysis allows only approximate identification due to species similarity, in consequence, a bacteriological examination is required. It is a set of pre-laboratory and laboratory procedures aimed at identifying microorganisms and determining their drug sensitivity. Diagnostic diagrams in bacteriology consist of the following laboratory procedures: 1) microscopic examination of the direct preparation; 2) inoculating the material on an appropriate medium to obtain pure bacterial cultures; 3) morphological macro and microscopic observations of the obtained cultures; 4) testing the physiological properties of pure cultures; 5) immunological research; and 6) determination of sensitivity to antimicrobial substances, including drugs. Conventional bacteriological examination may take up to 11 days. Due to the long time required for the standard process of species identification and its high costs [2], it is beneficial to use methods that do not rely on conventional methods. Existing solutions can automatically distinguish between bacteria and fungi species [3], [4] or even bacteria clones [5] using microscopic images and deep learning methods. However, it is only possible for monoculture images, which requires multiple culture iterations. In this work, we present a feasibility study for a further acceleration of diagnosis by reducing the number of culture iterations. For this purpose, we introduce a multi-label classification method based on multiple instance learning [6] to address a shortage of GPU memory when training the model on high resolution images. Firstly, we split each image into Fig. 1. 
Standard microbiological diagnosis requires iterative species division, culture, and biochemical tests, extending the diagnosis process up to 11 days. While the existing methods for automatic species identification do not require biochemical tests, they still need iterative species division and culture, as they require monoculture images. In contrast, our method works on polyculture images. Hence, diagnosis shortens to 2-7 days. We first split each image into smaller patches to which we assign the image labels. Because each patch is associated with a label, we can train a patch classifier. The output of its penultimate layer serves to generate a patch representation. Finally, we aggregate the representations of all patches belonging to the analyzed image and pass the cumulative representation to the classifier. To evaluate our approach, we introduce a dataset containing microscopic images for all combinations of four considered bacteria species. Moreover, we provide results for different variants of our method and compare them with existing state-of-the-art approaches. Additionally, we perform extensive ablation studies: on a set of species of high resemblance, as well as on how image magnification and the amount of training data influence the performance. Our contributions can be summarized as follows: (i) shortening the time of bacteria identification with methods classifying polyculture images; (ii) introducing multi-MIL, a multi-label classification method based on multiple instance learning, with increased interpretability compared to existing methods; (iii) providing a methodology for creating controlled datasets of polyculture images of exactly known species, similar to real-life images. II. RELATED WORKS For various medical purposes, e.g. an epidemiological investigation [5] and an infection diagnosis [3], the classification of microbiological organisms, especially bacteria, is essential. Traditional methods of microorganism identification and classification are expensive and labour-intensive [7]. Therefore, researchers have been developing machine learning techniques to improve or even automate the recognition of non-living infectious agents (e.g. viruses [8]) and microorganisms such as algae [9], bacteria [10], fungi [11], and protozoa [12]. However, according to our best knowledge, existing methods focus on identifying a single microbe per microscopy image. Identification of microbes can be described in the context of types of imaging, taxonomy, and computational methods such as deep learning. However, due to the variety of approaches, we decided to present the related works chronologically, emphasizing computer vision methods and bacteria species identification. One of the first works [13] clustered dinoflagellate cysts with self-organized maps (SOMs) on microscope-mounted camera images. Later, in [14] an artificial neural network was trained using contour invariant moments and morphological features extracted from microscopy images to identify wastewater bacteria. Just a year later, a probabilistic neural network [15] was used to classify five microorganisms, stained with fluorescent dyes and captured with a light microscope. The authors used nine morphological features to describe microbes in images of single bacteria extracted from the original microscopy image. At the same time, in [16], decision trees were used to identify Mycobacterium tuberculosis from ZN-stained sputum smear images.
Hiremath and Bannigidad of [17] exploited information about cocci bacteria geometry and extracted morphological features, such as sphericality, to train a 3σ method, kNN, and ANN classifiers. In successive years, researchers explored the classification of micro-organisms using methods such as random forest for classification of tuberculosis bacteria [18], minimal sequential optimization for an algae image classification [19], genetic programming for representing an image and optimum-path forest classifier [20]. The last work examined bright-field microscopy images of the 15 most common species of protozoan cysts, helminth eggs, and larvae with fecal impurities. Priya and Srinivasan of [21] also delve into tuberculosis research by extraction of fifteen Fourier descriptors passed to a multi-layer perceptron with activations classified via support vector machines. Meanwhile, five species of Staphylococcus bacteria were identified in hyperspectral microscopic images [22], and their classification was conducted with SVM and Partial Least Square Discriminant Analysis (PLS-DA). Then, [3] classified bacteria colony using deep learning approach. Similarly, in [23] authors used textures features extracted from CNNs to identify gut bacteria in larval zebrafish using 3D light-sheet fluorescence microscopy images. Lakshmi and Sivakumar of [24] compared a multitude of methods, i.e. kNN, SVM, RF, ANN, and CNN which achieved the highest accuracy. In [25], a system for environmental microorganism classification on microscopic images was presented, and the authors used Conditional Random Fields (CRF) and Deep Convolutional Neural Networks (DCNN). In recent years, CNN with Raman spectroscopy was used in [26], which used the database of thirty yeast and bacterial isolates of five species. Arredondo-Santoyo et al. of [27] investigated standard features, expert features, and features extracted using deep neural networks. This approach involved various machine learning algorithms,i.e. logistic regression, KNN, SVM, and random forest for classification. They also present the problem of dye decolorization in fungal strains. Fungus classification was also focus of [4] with the use of Fisher Vector and Random Forest on features extracted from AlexNet neural network [28]. Seven food-borne pathogens, captured with hyperspectral imaging, were classified using pixel features and a classifier based on SVM, competitive adaptive weighted sampling, and particle swarm optimization [29]. A novel approach based on coherent time-lapse images was used in [30] to detect live bacteria, even mixes of two species. In the latest research, [5] used attention-based multiple instance learning pooling to classify clones of Klebsiella pneumoniae as well as persistence homology to obtain explanations of the model and description of each clone. Yu et al. of [31] created a hierarchical classification model for taxonomy purposes with PCA, LDA, and random forest, using gold nanoparticles measurements. Then, transfer learning was used in [32] with ResNet-18 [33] to detect longitudinal bacterial fission and in [34] with atrous convolution. Finally, [35] used various convolutional architectures to generate image representation which were then concatenated and classified with xgboost [36]. More detailed insights about microbe classification can be found in reviews [37], [38], [39]. 
According to our best knowledge, none of the aforementioned works considers a dataset of mixed bacteria species captured on microscopy images as an alternative to iterative species division and culture in microbiological diagnosis. Therefore, in this work, we describe the results of a feasibility study on this problem. L. plantarum belongs to the genus Lactobacillus, whose members are called Lactic Acid Bacteria (LAB) and are facultatively anaerobic or strictly anaerobic rods. These microorganisms are a component of the microbiota of the mouth, vagina, stomach, intestines, and genitourinary tract, especially in breastfed infants. They are also found in water, sewage, plants, food products, the human body, and warm-blooded animals. The LAB bacteria are most commonly isolated from urine specimens and blood cultures due to transient bacteremia, endocarditis, or opportunistic septicemia. Lactobacillus strains are also very widely used as probiotics [43]. S. aureus is the best-known, highly virulent member of the genus Staphylococcus, whose members are important pathogens in humans, causing a wide spectrum of life-threatening systemic diseases, including infections of the bones, skin, soft tissue, and urinary tract, as well as opportunistic infections; they also cause sepsis and septic shock [44]. E. coli is an important member of the family Enterobacteriaceae and the most common aerobic, Gram-negative rod in the gastrointestinal tract. This bacterium is associated with various diseases, including gastroenteritis and extraintestinal infections such as urinary tract infections, meningitis, sepsis, and hemorrhagic colitis. Moreover, the presence of E. coli in the human intestine is an important indicator of fecal contamination of water, food, and medicines [45]. N. gonorrhoeae is the etiological factor of gonorrhea, one of the most widespread sexually transmitted diseases; these bacteria are strictly human pathogens [46]. Pure cultures of E. coli (strain ATCC 25922) were grown overnight at 37 °C on MacConkey agar (MAC agar, Merck, Germany), L. plantarum (strain ATCC 14431) was isolated from MRS medium (De Man, Rogosa and Sharpe agar, Oxoid, UK), N. gonorrhoeae was selected from Thayer-Martin medium (T-M medium, Graso, Poland), and S. aureus was cultivated on Columbia CNA Agar with 5% Sheep Blood (CNA agar, Becton Dickinson, Germany). Then, samples of each bacterium were isolated from single bacterial colonies using a 1 μl calibrated loop (Bionovo, Poland) and mixed on the surface of a basic microscope slide in a drop of saline. Bacteria species were mixed in all possible combinations to create samples containing up to 4 different species. An additional replicate was made for each mix, resulting in two microscopic preparations on two different slides. After fixing the slides over the flame of a hot burner for 10 seconds, they were Gram-stained using a commercially available kit (Merck, Poland) according to the manufacturer's instructions [41], [47]. Finally, microscopic images of the samples were taken from 10 different locations per slide. The resolution of the obtained images was 4912 × 3684 pixels. Images were taken using an Olympus BX63 microscope with a 100× super-apochromatic objective under oil immersion. The photographic documentation was then produced with a Hamamatsu ORC camera and CellSense software (Olympus). IV. METHODS To identify bacteria species, we develop a pipeline (see Fig. 3) which, for a given image, returns labels y_c ∈ {0, 1} for c = 1, ..., C, corresponding to each of the C bacteria species.
The pipeline starts with image preprocessing and the extraction of patches X = {x_1, ..., x_n}. Then, it generates representations of the patches {h_1, ..., h_n} using a representation network f without its last layer (denoted f_−1). The patch representations are then aggregated into an image representation h using various types of pooling p. Finally, a multi-label classifier g produces C predictions. A. Preprocessing Images and Extracting Patches First, we decrease each image size by two in each dimension (magnification: 1/4x) and divide the images into patches of resolution 250 × 250 pixels using a sliding window mechanism with stride 125. This introduces some redundancy of information but allows us to include each bacterial cell. Some patches may not contain any material, or the bacteria may overlap so much that it is impossible to classify them. This is because bacterial cells are characterized by low density: they refract and absorb light poorly, which makes them difficult to distinguish from the background, and therefore they are clearly visible in the microscope only after staining. The microscopic preparation is prepared on a degreased, cooled glass slide by applying and spreading (smearing) drops of the bacterial suspension using a loop. Although there are loops with a strictly defined mesh diameter, e.g. 1 μl or 10 μl (so-called calibrated loops), even with a calibrated loop the random pattern of cells obtained on the slide while making a smear cannot be controlled in any way. To reduce the influence of such uncontrolled patterns, we calculate the standard deviation σ_p of the pixel intensities and remove patches with σ_p ∈ [2, 15]. The interval for the standard deviation σ_p was obtained experimentally using a training dataset so as to maximize the number of patches with clearly visible bacterial cells. Finally, following good practice [48] and previous research [4], [5], [49], we normalize the remaining patches by subtracting the mean and dividing by the standard deviation; both values are again derived from the training patches. On average, we obtained 160 patches per image, resulting in a total of 47,149 patches from 293 images. Detailed information about each experiment is presented in Table II. B. Generating Patches' Representations To derive a meaningful patch representation, we use a transfer learning technique. We pretrain a ResNet-18 [33] neural network on ImageNet [50]. Then, we replace the last layer of the pretrained network with four neurons corresponding to the four bacteria species and finetune the model (denoted f) on the previously extracted patches. The resulting model without the final layer (denoted f_−1) is used to generate the patch representations h_i.
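A minimal sketch of the patch extraction and filtering step described in Section IV-A above is given below. The sliding-window size, stride, and the σ_p rejection interval follow the text; the function name and the exact array handling are illustrative assumptions.

```python
import numpy as np

def extract_patches(image, patch=250, stride=125, sigma_reject=(2.0, 15.0)):
    """Cut an image into overlapping patches and drop unusable ones.

    Sliding window of 250x250 pixels with stride 125, as in the text; patches
    whose pixel-intensity standard deviation falls inside the experimentally
    chosen rejection interval are discarded. Normalization with the training
    mean and standard deviation is assumed to happen elsewhere.
    """
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            p = image[y:y + patch, x:x + patch]
            if sigma_reject[0] <= p.std() <= sigma_reject[1]:
                continue  # empty background or unclassifiable material
            patches.append(p)
    return patches
```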
C. Aggregation and Classification Here, we first recall the multiple instance learning definition and then provide its specific implementations, including instance- and embedding-based methods, recurrent neural networks, attention-based methods, and our novel multi-label Multiple Instance Learning (multi-MIL). a) Definition: A typical supervised problem assumes that a single input x corresponds to a single output y of the model. However, in Multiple Instance Learning (MIL) [51], each input is represented by a bag of instances X = {x_i}, i = 1, ..., n, of variable size n, which also corresponds to a single output y. Moreover, under the standard MIL assumption the label y ∈ {0, 1} is binary and each instance has a hidden binary label y_i ∈ {0, 1} (unavailable during training), where y = 1 if at least one y_i = 1. However, this assumption does not fit the multi-label classification of bacteria species with C binary outputs. b) Instance and embedding-based methods: The simplest MIL approaches, called instance-based methods, aggregate the predictions made for the individual bag instances, typically with a maximum or mean operation, whereas embedding-based methods first pool the instance embeddings and classify the aggregated bag embedding. c) Recurrent networks: The embeddings {h_1, ..., h_n} can also be considered as a sequence [52] and passed to a Recurrent Neural Network (RNN) that jointly aggregates and classifies bags of various sizes. We employ this strategy with LSTM [53] and GRU [54] models to extend the number of baseline approaches. d) Attention-based MIL: Embedding-based methods are imperfect because they apply pooling operations to all embeddings without considering the importance of particular instances. As a result, a classifier can obtain irrelevant features. Hence, weighted average poolings based on the attention mechanism were introduced: Attention-based Multiple Instance Learning Pooling (AbMILP) [6] and Loss-based Attention (LA) [55]. In the case of AbMILP, the pooling p is a weighted average of the instance embeddings, where the weight a_i of each instance is computed from h_i with trainable parameters w and V. Notice that the sum of all weights within the bag equals 1; hence, the model works for bags of various sizes. In comparison, the LA model simplifies the computation of the weights, using a single trainable parameter w. Moreover, w is reused as the parameter of the classifier g to model the hidden labels of the instances and increase interpretability. This is possible thanks to the simplified computation of a_i and the identical dimensions of h and h_i. Both AbMILP and LA return an aggregated bag representation, which is passed to the classifier g to obtain a prediction. e) Multi-MIL: When there is a single output label, LA can be used to directly link the weights of the instances with their influence on a prediction. However, LA cannot be directly used in a multi-label setup. At the same time, AbMILP can be used, but the correspondence between weights and influence is difficult to observe. Therefore, we introduce a multi-label version of these models, called multi-AbMILP and multi-LA. For this purpose, we provide a separate weighted average pooling and classifier for each class. Hence, there are four different pairs of poolings and classifiers for the four considered bacteria species. As a result, we obtain a direct correspondence between weights and influence, which improves the interpretability of the methods.
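To make the multi-MIL aggregation concrete, the sketch below implements a per-class attention pooling followed by a per-class binary classifier in PyTorch, using the standard (non-gated) AbMILP weighting a_i ∝ exp(w^T tanh(V h_i)). Layer sizes, module names, and the use of ResNet-18's 512-dimensional features are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class MultiAbMILP(nn.Module):
    """Multi-label attention pooling: one attention head and one binary
    classifier per bacteria species, so every class gets its own patch weights."""

    def __init__(self, dim=512, hidden=128, n_classes=4):
        super().__init__()
        self.V = nn.ModuleList([nn.Linear(dim, hidden, bias=False) for _ in range(n_classes)])
        self.w = nn.ModuleList([nn.Linear(hidden, 1, bias=False) for _ in range(n_classes)])
        self.cls = nn.ModuleList([nn.Linear(dim, 1) for _ in range(n_classes)])

    def forward(self, H):                                    # H: (n_patches, dim)
        logits = []
        for V, w, cls in zip(self.V, self.w, self.cls):
            a = torch.softmax(w(torch.tanh(V(H))), dim=0)    # (n_patches, 1), weights sum to 1
            bag = (a * H).sum(dim=0)                         # class-specific bag embedding
            logits.append(cls(bag))
        return torch.cat(logits)                             # one logit per species

# Usage: H would be the f_-1 representations of one image's patches (bag size varies).
model = MultiAbMILP()
print(model(torch.randn(160, 512)).shape)                    # torch.Size([4])
```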
V. EXPERIMENTAL SETUP We repeat all experiments five times. Each time, for each mix, we randomly assign one of its two slides to the training set and the second one to the testing set. This way, we eliminate a possible environmental bias. Moreover, all models are trained in three scenarios: (i) poly-poly: f, p, and g are trained on both monoculture and polyculture images; (ii) mono-poly: f is trained only on monoculture images, but p and g are trained on both types of images; (iii) mono-mono: f, p, and g are trained only on monoculture images. In all scenarios, the models are always tested using all images. We decided to use three training scenarios to estimate the importance of species combinations for the model training. This is essential because the number of combinations grows exponentially with the number of species. From this perspective, the poly-poly scenario obtains the highest accuracy but requires polyculture images, while the mono-mono scenario obtains the lowest accuracy but requires only monoculture images. That is why we also test the intermediate scenario, in which the representation network f is trained on monoculture images while the pooling p and the classifier g use both monoculture and polyculture images. Because p and g have far fewer parameters than f, this scenario can be used in future research to limit the number of polyculture images. To finetune the representation network, we use a batch size of 64 and an initial learning rate of 10^−4, which is decreased by a factor of 10 every 1000 iterations. These hyperparameters were obtained from preliminary experiments over a range of learning rates. For the classification network, we performed a hyperparameter search using grid search over the learning rate in the range [0.000005, 0.001] and the weight decay in the range [0.00001, 0.05]. We use a standard number of three attention heads [6] and a batch size of 1 due to the variability of the bag length. We perform all the experiments on a workstation with four 12 GB GPUs and 64 GB of RAM. On average, it takes 10 hours to train the representation network and 2 hours to generate the patch representations for the classification step. Training the pooling and the classifier lasts up to 4 hours. Both networks were implemented using PyTorch and the Adam optimizer [56] with parameters β_1 = 0.9 and β_2 = 0.999. Table I presents the overall accuracy and ROC AUC in the three considered scenarios (described in Section V) for ten different methods. In bold, we mark the best method and the methods that are not significantly worse. We obtain them by comparing the best method to all others using the Wilcoxon signed-rank test; results are significantly different if the p-value is smaller than 0.05. A. Polyculture Images in All Training Steps (Poly-Poly) To estimate an upper bound on performance, we train the models in the first scenario with both monoculture and polyculture images. Almost all of the methods obtain a ROC AUC over 0.9. Nevertheless, the highest ROC AUC is obtained with the embedding-based methods, AbMILP, and multi-AbMILP (0.961, 0.944, and 0.972, respectively). This is expected because the distributions of the training and testing sets are similar and the information about polyculture images is propagated throughout the entire pipeline. However, this solution does not scale to a growing number of recognized bacteria species because it is impractical to create polyculture images of all possible mixes. B. Polyculture Images in the Pooling and the Classifier (Mono-Poly) In the second scenario, the representation network is trained only on monoculture images, but the pooling and classifier are also trained on polyculture images. In this case, the CNN and instance-based methods work the same as in the mono-mono scenario. One can observe that the results for the multi-MIL methods are better than in the poly-poly scenario. Both multi-AbMILP and multi-LA give a ROC AUC over 0.95. This indicates that multi-MIL methods do not require polyculture images when training the representation network. Therefore, they should behave satisfactorily when the number of recognized species grows. C. Only Monoculture Images in All Training Steps (Mono-Mono) In the third scenario, all steps are trained only with monoculture images containing single bacteria species. We observe a big decrease in accuracy for this scenario across all methods. This indicates that the polyculture information is crucial when training the pooling and classifier, because a model trained only on single-species images becomes confused when it sees a polyculture image.
However, polyculture images are not necessary to train the representation network. Moreover, ROC AUC of attention-based methods AbMILP, multi-AbMILP, and multi-LA, is relatively high, again confirming their relevance. Figure 4 presents the most important patches, i.e. patches with the largest weights in a pooling method. AbMILP model focuses on images with mixed bacteria species, while multi-AbMILP prefers images focused on one species. A similar trend is observed in Fig. 5, where we additionally present the least important patches, i.e. patches with the smallest weight in a pooling method. The figure shows that the AbMILP does not capture the nature of each species, while multi-AbMILP focuses on the most important patches with characteristic features of a given species. For example, in NG, we observe that the least important are patches with purple rods, while the most important ones are round and pink, which corresponds to the nature of NG that is a Gram-negative (pink) cocci (round). Therefore, we conclude that current MIL approaches cannot explain the results for each task, like the AbMILP model, which always weighs the patches similarly, no matter which species it predicts. In contrast, the multi-MIL models provide individual prediction interpretations for each task, making them more interpretable. VII. ABLATION In this section, we provide additional results on bacteria species identification using polyculture images. Firstly, we study how the deep learning models perform on bacteria species that are much more similar. Then, we analyze how the image magnification influences the model effectiveness, as well as how many training examples are required to obtain a meaningful model. A. SA+SH+SAP In this experiment, we check the performance of deep learning algorithms on polyculture bacteria images of species of a high resemblance. We use Gram-stained images of bacteria from the Staphylococcus group. They cause a wide spectrum of life-threatening systemic diseases and can be found on the skin, in the nostrils, urinary tract, and female reproductive tract. Those species can be commonly found in the human population and even 30% of humans can carry Staphylococcus aureus. In our subjective opinion, they are very similar to each other and the difference (mostly in the cell size) is barely perceptible in microscopic slides by the human eye. We are studying the following 3 species: Staphylococcus haemolyticus (SH), Staphylococcus saprophyticus (SAP), and Staphylococcus aureus (SA). Examples of those species are presented in Fig. 6. It is worth noting that even though those species are very similar and present a challenge to a deep learning model, our database contains only 3 of them which makes the classification problem slightly easier. In Table III, we present the results on the datasets consisting of similar bacteria species. One can observe that the multi-MIL approach once again surpasses all the other methods, especially in a poly-poly scenario. Also, we observe that the accuracy of the models is poor in the mono-mono scenario. This is strictly related to the high resemblance of the staphylococci species to each other and the overfitting of the model. Indicating, that it is important to use polyculture images in the training phase to obtain a meaningful model. B. Image Magnification In this ablation study, we test 3 magnification of patches. 
We followed the same procedure for patch generation as in Section IV-A (1/4x) and additionally introduced images at the original size (1x) and at a size decreased by a factor of 4 in each dimension (1/16x). Fig. 7 shows that using 1/4x magnification results in the best performance in almost all cases. C. Percent of Training Data We trained models with 10%, 50%, and 100% of the training data to study how many images are needed for satisfactory results. Testing was performed on the entire test set in each case. Fig. 8 shows that the majority of methods perform best when trained on 100% of the data, but CNN-based methods can also be used with only 50% of the training data. VIII. CONCLUSION This work introduces multi-label classification methods based on multiple instance learning to identify bacteria species in polyculture images. Our method takes advantage of the fact that multiple instance learning methods automatically assign interpretable weights to instances. Moreover, it introduces a mechanism that allows for multi-label classification without a decrease in the aforementioned interpretability. Experiments conducted on the specially created database of bacteria mixes resulted in high ROC AUC values of up to 0.972, which supports the success of this feasibility study. In the future, we plan to expand the database to new bacteria species and other microorganisms, thus creating a tool for fast and reliable microbiological diagnosis and, in consequence, faster treatment. Additionally, we plan to analyze how different imaging techniques for capturing the bacteria species, such as novel microscopes that operate at nanoscale resolution, influence the performance of artificial intelligence methods. However, those novel solutions are at an early adoption stage, and it is challenging to create a substantial dataset for deep learning methods.
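The patch-importance maps discussed in Section VI (Figs. 4 and 5) come from the attention weights produced by the pooling step. The sketch below shows a generic gated-free attention MIL pooling layer in PyTorch that returns per-patch weights alongside the bag embedding; it follows the commonly used AbMILP formulation but is not the authors' exact implementation, and all layer sizes are illustrative assumptions.

```python
import torch
from torch import nn

class AttentionMILPooling(nn.Module):
    """Generic attention-based MIL pooling (AbMILP-style sketch).

    Takes a bag of patch embeddings of shape (num_patches, embed_dim) and
    returns a single bag embedding plus the attention weight of each patch.
    The patches with the largest weights correspond to the "most important"
    patches referred to in the text.
    """

    def __init__(self, embed_dim=512, attn_dim=128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(embed_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )

    def forward(self, patch_embeddings):
        scores = self.attention(patch_embeddings)            # (num_patches, 1)
        weights = torch.softmax(scores, dim=0)               # weights sum to 1 over the bag
        bag_embedding = (weights * patch_embeddings).sum(0)  # weighted average of patches
        return bag_embedding, weights.squeeze(-1)

# Example: rank the patches of one sample by importance.
pooling = AttentionMILPooling(embed_dim=512)
bag = torch.randn(40, 512)                  # 40 patch embeddings from a representation network
embedding, weights = pooling(bag)
most_important = weights.argsort(descending=True)[:5]
```

A multi-task variant in the spirit of multi-AbMILP would hold one such attention branch per species, which is what yields the per-task interpretations described in the text.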
6,237
2022-09-26T00:00:00.000
[ "Computer Science" ]
N-Methyl-d-glucamine–Calix[4]resorcinarene Conjugates: Self-Assembly and Biological Properties Deep insight into the toxicity of supramolecular systems based on macrocycles is of fundamental interest because of their importance in biomedical applications. What seems to be most interesting in this perspective is the development of macrocyclic compounds with biocompatible fragments. Here, calix[4]resorcinarene derivatives containing N-methyl-d-glucamine moieties at the upper rim and different chemical groups at the lower rim were synthesized and investigated. These macrocycles showed a tendency to self-aggregate in aqueous solution, and their self-assembly abilities depend on the structure of the lower rim. The in vitro cytotoxic and antimicrobial activity of the calix[4]resorcinarenes revealed the relationship of the biological properties with the ability to aggregate. Compared to macrocycles with methyl groups on the lower rim, calix[4]resorcinarenes with sulfonate groups appear to possess very similar antibacterial properties, but over six times lower hemolytic activity. In a way, this is the first example that reveals the dependence of the observed hemolytic and antibacterial activity on the lipophilicity of the calix[4]arene structure. Introduction Calixarenes are one of the important and promising classes of macrocyclic host molecules in supramolecular chemistry. Thanks to their rigid structure, the presence of an inner cavity, and the wide possibilities for modification of the upper and lower rims, calixarenes have attracted considerable attention for various applications ranging from biochemistry to catalysis [1][2][3][4][5][6]. Both rims of these macrocycles can be modified for the development of advanced functional molecules [7]. The addition of specific functional groups to the molecular structure influences the calixarene properties and, consequently, can provide new possibilities, among which are aggregation properties, aqueous solubility, biocompatibility, stimuli-responsiveness, biological activity, etc. [8][9][10]. Supramolecular aggregates of calixarenes formed by non-covalent interactions include micelles, vesicles, films and other nanostructures, which are used in different biological and industrial processes [1]. The non-toxicity of calixarenes allows the use of systems based on these macrocycles for drug delivery and to improve the bioavailability of drugs [3,11,12]. N-Methyl-d-glucamine (meglumine, MG) consists of a secondary amine fragment and sorbitol and has structural features related to glycosides. Its most interesting feature is the ability to form supramolecular adducts with lipophilic organic compounds in water [13]. This property is very useful in the pharmaceutical industry for increasing the solubility of drugs and their stabilization in water solutions [13,14]. With meglumine derivatives as the amine component in Mannich reactions, potential pharmaceutical compounds have been attained [15]. The choice of N-methyl-d-glucamine as the N-Mannich base is due to the important circumstance of increasing system stability. For example, substitution by N-methyl-d-glucamine increases the stability of cyclic peptide dimers and nanotubes [16]. The successful use of N-methyl-d-glucamine to obtain water-soluble macrocycles [17], namely porphyrin derivatives capable of localizing in cancer tissues [18], is also known. The high potential of chlorin e6 noncovalently complexed with N-methyl-d-glucamine groups for photodynamic treatment was also shown [19].
In view of the above, it can be assumed that the introduction of the N-methyl-d-glucamine moiety into a macrocycle makes it possible to synergistically improve the benefits of both the calixarene platform and the glucamine fragment from the viewpoint of biomedical applications. Importantly, the N-methyl-d-glucamine group has never been used for modification of calix[4]resorcinarenes. Considering the biocompatible nature of the meglumine group, we decided to modify the upper rim of the calix[4]resorcinarene scaffold with this group and investigate the aggregation behavior of the macrocycles obtained depending on the lower rim structure. This paper presents data on the aggregation and biological properties of new calix[4]resorcinarenes containing N-methyl-d-glucamine groups on the upper rim and different chemical groups (sulfonate and methyl) on the lower rim (Scheme 1). The revealed self-assembling and biological characteristics form the basis for the creation of nanoparticles with particular physicochemical properties suitable for medical applications. Scheme 1. Synthesis of GCR-1 and GCR-2. Synthesis and Characterization of N-Methyl-d-glucamine-Based Calix[4]resorcinarenes Synthesis of the N-methyl-d-glucamine-based calix[4]resorcinarenes (GCRs) was carried out in two steps as illustrated in Scheme 1, and the details are disclosed in the Materials and Methods section.
At the first step, the synthesis of calix[4]resorcinarenes with sulfonate and methyl groups on the lower rim was carried out based on published methods [20,21]. Modification of their upper rim by N-methyl-d-glucamine fragments was performed by a Mannich reaction in the ortho-position of the resorcinarene cores, namely by three-component condensation of the parent calix[4]resorcinarene, a secondary amine (N-methyl-d-glucamine), and formaldehyde (Scheme 1). N-Methyl-d-glucamine is mixed with paraformaldehyde and the mixture is heated until complete dissolution of the reagents; then a suspension of the calix[4]resorcinarene in ethanol is slowly added while heating. In the case of GCR-1, a water-methanol suspension is used. The reaction mixture was heated to 80 °C for 24 h. The raw products were filtered, purified by dialysis and recrystallized from ethanol. The structures of the synthesized compounds were confirmed by 1H-NMR, 13C-NMR and IR (Figures S1-S3). Conductometric Measurements Firstly, the self-assembly of GCR-1 was studied by conductometric measurements (Figure 1). The specific conductivity increases proportionately with the GCR-1 concentration in water solution. An inflection in the plot appears at 14 mM, and above this critical aggregation concentration (CAC) the rate of increase of the conductivity is reduced because the aggregates transfer charge less efficiently than monomers. Remarkably, modification of the calix[4]resorcinarene by the N-methyl-d-glucamine fragment facilitates self-aggregation of the macrocycle in water (Figure 1). The CAC value for the calix[4]resorcinarene without meglumine units at the upper rim (CR-1) is 20 mM, which is greater than that for the meglumine-modified calix[4]resorcinarene. The driving forces of the formation of intermolecular aggregates are probably hydrogen bonds between hydroxyl groups on the GCR-1 upper rim, so the presence of additional OH groups in the meglumine fragment reinforces this noncovalent interaction. A stable pH value, observed over a wide range of concentrations, verifies that the N-methyl-d-glucamine fragment is an effective buffer component (Figure S4).
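As a side note on how a CAC can be read off such data, the following is a small, hypothetical NumPy sketch that estimates the break point of a conductivity-versus-concentration curve by fitting two straight lines and taking their intersection. The concentrations and conductivities below are made-up illustrative values, not the measured data, and this is not the authors' analysis procedure.

```python
import numpy as np

def best_cac(concentrations, conductivities):
    """Scan all possible break points of a conductivity-vs-concentration curve
    and return the intersection of the two linear fits with the lowest total
    squared residual (a simple piecewise-linear estimate of the CAC)."""
    c, k = np.asarray(concentrations, float), np.asarray(conductivities, float)
    best = None
    for i in range(2, len(c) - 2):
        m1, b1 = np.polyfit(c[:i], k[:i], 1)   # pre-CAC branch
        m2, b2 = np.polyfit(c[i:], k[i:], 1)   # post-CAC branch
        resid = np.sum((k[:i] - (m1 * c[:i] + b1)) ** 2) + np.sum((k[i:] - (m2 * c[i:] + b2)) ** 2)
        if best is None or resid < best[0]:
            best = (resid, (b2 - b1) / (m1 - m2))
    return best[1]

# Illustrative (synthetic) data only:
conc = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 25, 30]        # mM
kappa = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 3.7, 3.9, 4.1, 4.6, 5.1]
print(round(best_cac(conc, kappa), 1))  # break point near 14 mM for this synthetic curve
```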
Since the hydrophobicity of GCR-2 is higher than that of GCR-1, due to the presence of methyl groups instead of ionic sulfonate sites, water solutions of GCR-2 are unstable and eventually precipitate. As a result, the aggregation of this calix[4]resorcinarene was studied in water solutions in the presence of a two-fold excess of N-methyl-d-glucamine (MG). Conductometric methods also confirm the self-assembly capability of GCR-2 in this aqueous medium. The conductometric CAC value for GCR-2 of 7.9 mM is much lower than the CAC of GCR-1, which is related to its lower water solubility due to the absence of ionic groups on the lower rim. Investigation of Air-Water Interfacial Tension In the next step of the study of GCR self-assembly, the tensiometry method was used. The surface tension decreases with increasing GCR-1 concentration, but above the CAC (15 mM) the slope of the dependence changes (Figure 2). This reduction of the liquid-air interfacial tension indicates that GCR-1 has surface activity and a propensity for aggregation. The concentration of 15 mM, at which the inflection of the surface tension dependence occurs, corresponds to the CAC and is in good agreement with the conductometric value. The tensiometric CAC value of GCR-2 (3.3 mM) is slightly different from the conductometric value (7.9 mM, Table 1). The tensiometry method is well known to register changes in the mutual arrangement of the molecules at the water-air interface, whereas conductometry measures the number and mobility of ions in the bulk solution. The CAC values obtained for GCR-2 are lower than those for GCR-1. The improved aggregation ability of GCR-2 in comparison with GCR-1 may be due to the higher hydrophobicity of GCR-2.
The presence of methyl groups on the lower rim close to the aromatic rings reduces the hydrophilic-lipophilic balance. This contributes to the formation of aggregates by means of van der Waals interactions, in which intermolecular hydrogen bonds and π-stacking between the aromatic rings of neighboring GCR-2 molecules are possible. UV-Spectroscopic Measurements The UV absorption spectrum of a GCR-1 aqueous solution has an absorption maximum at 296 nm that corresponds to an allowed π→π* transition (Figure 3a). Also, the spectrum has a shoulder in the 500 nm region, which is indicative of the occurrence of zwitterionic structures in GCR-1 with transfer of protons from OH groups to nitrogen. The same is observed in the case of GCR-2 (Figure 3a). Consequently, there is a chance that GCR-1 aggregates form not only through hydrogen bonds between hydroxyl groups but also through electrostatic interaction between the negatively charged sulfonate groups on the lower rim and the partially positively charged meglumine groups on the upper rim.
The change of the optical density at 500 nm with increasing GCR-1 concentration has a nonlinear character (Figure 3b). The linear dependence breaks after 10 mM, which correlates well with the CAC obtained by conductometry and tensiometry, and this violation of the Lambert-Beer law is due to the formation of GCR-1 aggregates in solution. The method of solubilization of the hydrophobic dye Sudan I was used to determine the morphology of the formed aggregates. The appearance of an absorption band characteristic of a lipophilic substance in an aqueous solution of a given component indicates the formation of a hydrophobic core in the aggregates of that component, as in surfactant micelles, which solubilizes the hydrophobic substrate. It was shown that increasing the GCR-1 concentration does not lead to the appearance of a Sudan I absorption band in the 500 nm region (Figure 3b), i.e., solubilization of the hydrophobic dye did not rise commensurately. This may indicate that the GCR-1 aggregates have no hydrophobic cavity that could dissolve Sudan I. It also suggests that these aggregates have a stacked structure, formed either by a head-to-head pathway due to hydrogen bonds between hydroxyl groups or by electrostatic head-to-tail interaction between the negatively charged sulfonate groups of the lower rim and the positively charged upper rim. The dependence of the GCR-2 absorbance at 500 nm on its concentration also loses linearity above 7.2 mM (Figure 3b). This is because the dissolved GCR-2 molecules aggregate, which is confirmed by conductometry and tensiometry. The correlation coefficients r2 calculated for both calixarenes over a wide concentration range show that GCR-2 has a greater ability to aggregate than GCR-1. GCR-2 solutions do not solubilize the hydrophobic Sudan I (Figure 3b), showing that the GCR-2 aggregates, unlike typical surfactant micelles, do not form a hydrophobic core. Probably, the short methyl groups fail to form a nonpolar core capable of incorporating hydrophobic guests. Thus, the cooperative effect of hydrogen bonds and π-stacking between GCR-2 molecules contributes to the formation of compact aggregates incapable of solubilizing hydrophobic molecules. This is in line with our previous finding that the difference between the solubilization behavior of typical micelles and calix[4]resorcinols can be used as a tool for the control of the binding/release behavior of lipophilic loads [22,23]. Self-Assembly Morphology The formation of aggregates with increasing GCR-1 concentration was confirmed by the DLS method. In aqueous 1 mM GCR-1 solutions, the formation of small particles with a hydrodynamic diameter of 2.9 nm was observed (Figure 4a). This size is much closer to that of a single molecule than to that of an aggregate. However, a bimodal distribution with peaks at 120 nm and 504 nm occurs for the solution with 50 mM GCR-1. This could be attributed to the formation of different stacked structures by head-to-head or head-to-tail interaction between GCR-1 molecules. For aqueous solutions of an aminocalix[4]resorcinarene with sulfonate groups on the lower rim, head-to-tail aggregation was also revealed previously as a result of the electrostatic interaction between the oppositely charged rims [24]. The zeta-potential of the GCR-1 aggregates obtained by electrophoretic light scattering in solution was −56.8 mV, which corresponds to sufficiently stable systems. The negative value is caused by dissociation of the sulfonate groups on the GCR-1 lower rim in aqueous medium.
To confirm the morphology and dimensions of the GCR-1 aggregates, a TEM image was recorded (Figure 5a), which showed that the GCR-1 molecules assemble into poorly defined large nanoparticles. The probable reason for such high polydispersity is multiple supramolecular interactions, such as Coulombic interactions between oppositely charged rims through head-to-tail self-assembly, hydrogen bonding between hydroxyl groups by head-to-head joining, and π-π stacking between aromatic units of neighboring macrocycles (Figure 6a). The DLS method showed that an increase of the GCR-2 concentration in solution did not lead to a significant increase in particle size (Figure 4b). In a mixed 1 mM GCR-2 + 2 mM MG system, particles with a hydrodynamic diameter of 2 nm were formed, which correlates well with the size of single macrocycle molecules. In a 15 mM GCR-2 + 30 mM MG mixed system, particles with a 3.8 nm diameter were formed. Such a size approximately corresponds to double the length of a GCR-2 molecule. This indicates the formation of spherical micelles by virtue of lateral packing of the aromatic walls through head-to-head contact due to π-stacking and hydrogen bonds between neighboring GCR-2 molecules, as depicted in Figure 6b. Such an aggregation mode is similar to the self-assembly behavior described for other calixarenes [25]. The zeta-potential of the obtained particles is −17.4 mV, which is attributable primarily to the hydroxyl groups. The formation of small spherical particles was further confirmed by TEM images (Figure 5b). In comparison with the nanostructure of the GCR-1 aggregates, smaller nanostructures with a diameter of ca. 5 ± 2 nm were formed in the aqueous solution mixture of GCR-2, which is in good accordance with the DLS measurements.
Macrocycle Aggregation Comparison Various physicochemical techniques were used to study the self-assembling properties of GCR-1 and GCR-2. However, due to the different solubility of the macrocycles, they were not investigated under the same conditions, since GCR-2 was studied in the presence of MG. In order to verify the difference in the aggregation behavior of these macrocycles, we conducted additional experiments in a 50% H2O-50% DMSO solution, which ensured approximately the same limiting solubility of both GCR-1 and GCR-2. First, the conductometric dependencies on macrocycle concentration were obtained (Figure S5). However, for both calixarenes a linear increase in conductivity with increasing calixarene concentration was observed. Interestingly, in this case, the specific conductivity value for GCR-1 is higher than that of GCR-2, which is probably related to the lack of MG molecules in the GCR-2 solution. A similar pattern with a linear dependence is observed for the change in absorption at 500 nm for both macrocycles (Figure S6). The linear dependences of the specific conductivity and optical density on the concentration of the macrocycles probably indicate that the morphology of the aggregates formed in this concentration range is unchanged. Thus, it is likely that the aqueous-organic medium contributes to the formation of aggregates at low concentrations of the calixarenes.
NMR diffusion spectroscopy is a powerful tool to reveal the aggregation behaviour of supramolecular systems in solution. Therefore, self-diffusion coefficients (Ds) were measured for GCR-1 and GCR-2 at various concentrations (Figure 7). It turned out that for GCR-2 the Ds values change only slightly over a wide range of concentrations, whereas for GCR-1 the Ds value decreases with increasing concentration. In general, the Ds values for both macrocycles are similar, and the Ds-C dependencies do not reflect the sharp phase transition revealed by conductometry and UV spectroscopy. The relations between the surface tension and the concentration of the macrocycle solutions are illustrated in Figure 8. Although the surface tension values are lower than in the case of aqueous solutions, the form of the γ-C dependences is approximately the same as in Figure 2. A decrease in the surface tension of GCR-1 is observed with an increase in its concentration, with the formation of two plateaus. The first inflection point was determined after a concentration of 10 mM, which can be correlated with the CAC value in the aqueous environment. The second plateau is formed in a highly concentrated region, with the GCR-1 amount above 40 mM, which is likely due to either a change of aggregation modes or morphological reassembly of its aggregates. Hence, the different media can cause slightly different aggregation behavior of the water-soluble GCR-1. The turning point of the tensiometric curve for GCR-2 can be determined as 3.3 mM, the same as in water. Thus, the tensiometry method turned out to be sensitive to changes in the concentration of GCR-2 in a mixed aqueous-organic medium and, in general, reproduced the data obtained in aqueous solution.
Since measurements of the surface tension of the macrocycle solutions revealed changes at the water-air interface, DLS was performed to investigate the effect of macrocycle concentration on the morphology of the aggregates. Figure 9a shows the variation of the size of the GCR-1 aggregates with an increase of concentration from 5 mM to 50 mM. For the 5 mM solution the hydrodynamic diameter is about 150 nm, which increases to 190 nm with the GCR-1 concentration. The appearance of a bimodal distribution in concentrated solutions is caused by an increase in the polydispersity index due to the formation of disordered structures. The change in the hydrodynamic diameter in GCR-2 solution is insignificant (from 3.4 nm to 4.2 nm) (Figure 9b), which correlates with the DLS data obtained in aqueous solution in the presence of MG.
Generally, the aggregation behavior of GCR-1 and GCR-2 is not exactly identical in the aqueous and aqueous-organic media, and the inconsistency of the concentration dependences between aqueous and aqueous-organic solutions is due to several reasons. First, the different media cause different macrocycle aggregation behavior. Secondly, measurements of the self-assembly of the ionic GCR-1 in organic media are much less informative than in water due to the lower dissociation power of organic or water-organic media. Therefore, the number of ions in water will be different from the number of ions in an aqueous-organic medium, which will also be recorded differently by conductometry. Third, the contribution of cooperative interactions in the aqueous and aqueous-organic environments will also be different. Moreover, due to the weaker solvophobic effect of water-organic mixtures compared to water, the dependences of the conductivity of the solutions (as well as of other properties) on the amphiphile concentration will be much less pronounced, so that the breakpoints in the dependences can be very smooth, if present at all. Thus, a detailed study of aggregation in a water-organic environment requires a separate careful investigation, which is beyond the scope of the presented work. Evaluation of Toxicity and Biological Activity Different types of antimicrobial activity tests of the calix[4]resorcinarenes were conducted in vitro for this study. Table 2 contains data on the bacteriostatic, fungistatic, bactericidal and fungicidal action against bacteria and fungi. The analysis of the data suggests that both calix[4]resorcinarenes selectively kill the Gram-positive bacterium S. aureus 209P. At the same time, GCR-2 has higher activity against B. cereus 8035. The antimicrobial activity of the macrocycles appears at concentrations ranging from 0.13 to 1 mM. In general, the calix[4]resorcinarenes were less toxic toward the cells of the test microorganisms, and their activity is due to the single molecule and not to the aggregates. Minimal bactericidal and fungicidal concentrations (MBC, MFC), mM: GCR-1 1.00 ± 0.06, >1, >1; GCR-2 1.00 ± 0.08, >1, >0.5. The evaluation of the cytotoxic effect of the calix[4]resorcinarenes on human erythrocytes (hemolytic efficiency) has shown that these macrocycles exhibit toxic properties only at high concentration (Table 3). At low concentration, the GCRs show low hemolytic efficiency, which correlates with the literature data [3,26]. Comparing both macrocycles at identical concentrations, it is worth noting that, despite its ionic nature, the sulfonated calix[4]resorcinarene is less toxic than the macrocycle containing methyl groups. Probably, such a difference in hemolytic activity is related to the aggregation ability of the macrocycles.
As also shown in Table 3, the compound concentrations inducing a 50% inhibition of cell growth (IC50 on the Chang liver cell line) were more than 0.1 mM for each studied macrocycle. The liver cells were chosen as they have a well-established structure of a rather uniform type. The cytotoxic effects, as well as the antimicrobial properties, were also related to the single molecule and not to the aggregates. Thus, the results indicate a low toxicity of the macrocycles in low-concentration aqueous solutions and recommend them for further application. General Information N-Methyl-d-glucamine-modified calix[4]resorcinarenes were synthesized in two steps starting from calix[4]resorcinarenes with sulfonate and methyl groups on the lower rim, obtained by published methods [20,21]. N-Methyl-d-glucamine (99%, Acros Organics, Fair Lawn, NJ, USA) and 1-phenylazo-2-naphthol (Sudan I, Acros Organics) were used as received. Sample solutions were prepared in deionized water (18.2 MΩ) obtained from a Direct-Q 5 UV water purification system (Millipore, Molsheim, France). The pH was measured with a HI 2110 pH meter (Hanna Instruments, Woonsocket, RI, USA) calibrated using buffers according to the manufacturer's instructions. Tensiometry Surface tension measurements were conducted on a Du Noüy tensiometer K6 equipped with a platinum ring (KRÜSS, Hamburg, Germany). The tensiometer was calibrated against Milli-Q deionized water. The platinum ring was thoroughly cleaned and dried before each measurement. The measurements were done in such a way that the vertically hung ring was dipped into the liquid to measure its surface tension; it was then carefully pulled out of the solution. Each measurement was repeated until three consistent values (within ±0.5 mN·m−1) were obtained. Conductometry Electrical conductivity measurements were carried out using an InoLab Cond 720 precision conductivity meter with a graphite electrode having a cell constant of 0.475 cm−1 ± 1.5%. Specific conductivity values were measured at least three times for each concentration. Values varying from each other by not more than 2% were taken into account. All samples were studied at 25 ± 0.1 °C. Hydrophobic Dye Solubilization Hydrophobic dye solubilization was carried out by adding an excess of Sudan I to the solutions. These solutions were allowed to equilibrate for about 48 h at room temperature. They were then filtered, and the Sudan I absorbance was measured at 486 nm using a Specord 250 Plus spectrophotometer (Analytic Jena, Jena, Germany) with a 1 mm optical path length quartz cell. The dye absorbance was obtained by subtracting the contribution of the macrocycle from the overall spectrum. Each absorbance spectrum was obtained three times, and the absorbance intensities were within 2-3%. Dynamic Light Scattering The hydrodynamic diameters of the self-assemblies were obtained by dynamic light scattering on a Zetasizer Nano instrument (Malvern Instruments, Malvern, Worcestershire, UK). The source of the laser radiation was a He-Ne gas laser with a power of 4 mW and a wavelength of 632.8 nm. For zeta potential measurements, a Zetasizer Nano-ZS (Malvern) with laser Doppler velocimetry and phase analysis light scattering was used. The temperature of the scattering cell was controlled at 25 °C. Measurements were repeated at least five times. All scattering data were processed using the Malvern Zetasizer software (version 5.00).
Transmission Electron Microscopy TEM images were recorded on an HT7700 TEM instrument (Hitachi, Tokyo, Japan) operated at 110 kV accelerating voltage. The samples of 20 mM solutions were ultrasonicated in water for 10 min and then dispersed on a 300 mesh carbon-coated copper grid. NMR Diffusion Spectroscopy The Fourier transform pulsed-gradient spin-echo (FT-PGSE) experiments were performed using the BPP-STE-LED (bipolar pulse pair-stimulated echo-longitudinal eddy current delay) sequence. Data were acquired with a 150.0 ms diffusion delay, a bipolar gradient pulse duration from 3.0 to 4.0 ms (depending on the system under investigation), a 1.1 ms spoil gradient pulse and a 5.0 ms eddy current delay. The bipolar pulse gradient strength was varied incrementally from 0.01 to 0.32 T/m in 16 steps. The diffusion experiments were performed at least three times, and only the data for which the correlation coefficient of the natural logarithm of the normalized signal attenuation, ln(I/I0), as a function of the gradient parameter b = γ²δ²g²(Δ − δ/3) (γ is the gyromagnetic ratio, g is the pulsed gradient strength, Δ is the time separation between the pulsed gradients, and δ is the duration of the pulse) was higher than 0.999 were included. The temperature was set to 30 °C with a 535 L/h airflow rate in order to minimize convection effects. Experimental data were processed with the Bruker Xwinnmr software package (version 3.5). The diffusion constants were calculated by exponential fitting of the data belonging to individual columns of the pseudo-2D matrix. Single components were assumed for the fitting routine. All separated peaks were analyzed and the average values were taken. Cell Viability Evaluation The viability of human hepatocyte cells (Chang liver cell line from the N. F. Gamaleya Research Center of Epidemiology and Microbiology) exposed to the macrocycles was determined by means of the multifunctional system Cytell Cell Imaging (GE Healthcare, Issaquah, WA, USA) using the Cell Viability BioApp and Automated Imaging BioApp applications. The cells were dispersed on a 96-well plate at a concentration of 200,000 cells/mL and cultivated in a CO2 incubator at 37 °C. The culture medium was sampled after 24 h, and 150 µL of the studied dispersions was added to each well. The experiments were repeated three times. Intact cells cultivated simultaneously with the studied ones served as a reference. The fraction of grown cells was expressed in % vs. the reference cells. The degree of cell growth inhibition under the influence of the testing agent was calculated by the equation Inhibition (%) = ((Control − Exp)/Control) × 100, where Exp is the quantity of uninhibited cells in the studied sample and Control is the quantity of uninhibited cells in the control sample. Then the IC50 (the concentration which caused 50% cell growth inhibition) was determined from the curve of cell culture growth versus macrocycle concentration. The experiments were repeated three times and the results are presented as the mean ± standard deviation. Hemolytic Activity The hemolytic activity of the compounds was estimated by comparing the optical density of a solution containing the compound being tested and the blood with the optical density of the blood upon 100% hemolysis. A 10% suspension of human erythrocytes was used as the object of investigation; an erythrocyte mass with heparin was washed three times with physiologic saline (0.9% NaCl) solution, centrifuged for 10 min at 800× g, and resuspended in the physiologic saline (0.9% NaCl) solution to a concentration of 10%.
The concentrations of the compounds that corresponded to the MIC for the bacterial test strains were prepared in physiologic saline (0.9% NaCl) solution (supplemented with 5% DMSO), and 4.5 mL of a compound at the corresponding dilution was added to 0.5 mL of a 10% suspension of erythrocytes. Samples were incubated for 1 h at 37 °C and centrifuged for 10 min at 2000× g. The release of hemoglobin was monitored by measuring the optical density of the supernatant on an AP-101 digital photoelectrocolorimeter (Apel, Kawaguchi, Japan) at 540 nm. Simultaneously, control samples were prepared: for zero hemolysis (blank), 0.5 mL of a 10% suspension of erythrocytes was added to physiologic saline (0.9% NaCl) solution; for 100% hemolysis, 0.5 mL of a 10% suspension of erythrocytes was added to 4.5 mL of distilled water. The experiments were repeated three times and the data are presented as the mean ± standard deviation. Conclusions Calix[4]resorcinarenes were derivatized for the first time with N-methyl-d-glucamine moieties at the upper rim to give GCR-1 and GCR-2. The difference in the structure of the lower rim of the studied macrocycles determines their different types of aggregation. The GCR-1 molecules form various large aggregates of the head-to-head and head-to-tail types due to multicenter supramolecular interactions. The presence of methyl groups on the lower rim adjacent to the aromatic rings in the GCR-2 structure promotes self-aggregation with the formation of small spherical particles due to cooperative intermolecular hydrogen bonds between hydroxyl groups and π-stacking between the aromatic rings of adjacent molecules. This aggregation behavior of GCR-1 and GCR-2 is identical in both aqueous and aqueous-organic media. Evaluation of the hemolytic activity showed that the hemolytic effect of the macrocycles decreases with decreasing concentration, and the degree of hemolysis for GCR-2 is greater than that for GCR-1. The antimicrobial activity of the calix[4]resorcinarenes appears at concentrations from 0.13 to 1 mM. In general, these macrocycles are non-toxic, which will allow them to be used in biomedical applications.
9,913.4
2019-05-01T00:00:00.000
[ "Chemistry", "Medicine" ]
A New Aspect of the TrkB Signaling Pathway in Neural Plasticity In the central nervous system (CNS), the expression of molecules is strictly regulated during development. Control of the spatiotemporal expression of molecules is a mechanism not only to construct the functional neuronal network but also to adjust the network in response to new information from outside the individual, i.e., through learning and memory. Among the functional molecules in the CNS, one of the best-studied groups is the neurotrophins, which are nerve growth factor (NGF)-related gene family molecules. The neurotrophins include NGF, brain-derived neurotrophic factor (BDNF), neurotrophin 3 (NT-3), and NT-4/5 in the mammal. Among the neurotrophins and their receptors, BDNF and tropomyosin-related kinase B (TrkB) are enriched in the CNS. In the CNS, the BDNF-TrkB signaling pathway fulfills a wide variety of functions throughout life, such as cell survival, migration, outgrowth of axons and dendrites, synaptogenesis, synaptic transmission, and remodeling of synapses. Although the same ligand and receptor, BDNF and TrkB, act in these various developmental events, we do not yet understand what kind of mechanism provokes the functional multiplicity of the BDNF-TrkB signaling pathway. In this review, we discuss the mechanism that elicits the variety of functions performed by the BDNF-TrkB signaling pathway in the CNS as a tool of pharmacological therapy. INTRODUCTION The neurotrophins are the nerve growth factor (NGF)-related gene family molecules, including NGF, brain-derived neurotrophic factor (BDNF), neurotrophin-3 (NT-3), and NT-4/5. In the central nervous system (CNS), neurotrophins are expressed from the early embryonic stage to the adult stage and regulate a wide variety of functions, such as cell migration, outgrowth of neurites, synaptogenesis, cell survival and death, neuronal transmission, and synaptic plasticity [24,59,66,83,106,115,125]. These physiological functions of neurotrophins are induced through their specific receptors expressed on target cells. The neurotrophin receptors are categorized into two groups based on their binding affinities for neurotrophins [10,18]. One is the high-affinity tropomyosin-related kinase (Trk) receptor family, which includes TrkA, TrkB, and TrkC. NGF specifically recognizes TrkA, both BDNF and NT-4/5 are ligands for TrkB, and NT-3 binds to all Trks, although TrkC mediates the primary biological functions of NT-3. The other is the low-affinity p75 neurotrophin receptor, which is a member of the tumor necrosis factor (TNF) receptor family. This receptor can bind to all neurotrophins and enhance or suppress Trk signaling through the interaction between Trk and p75 [15], and it transduces its own signals that regulate cell apoptosis or survival [28,110]. How do neurotrophins elicit their various functions? One way is by combining signal transducers [115]. Trks and p75 have many associated proteins that are the starting points of their signaling cascades [59,106,110,115]. These adaptors uniformly exist from early stages to adulthood and can transmit the signals of other growth factors, neurotransmitters, and hormones [31,80].
The associated proteins of all Trk receptors closely resemble each other [59,106,115], so differences in adaptor protein combinations can explain neither the characteristic function of each neurotrophin nor the developmental changes of neurotrophin functions. Another possible mechanism by which neurotrophins elicit their functions is the alternative splicing of the neurotrophin receptors. Generally, alternative splicing makes it possible to produce functionally distinct proteins that participate in diverse cellular processes, including differentiation and development [45,120]. Among the Trk and p75 receptors, there are some alternatively spliced forms [10,110]. Recent studies have revealed that splice variants of Trk receptors function as dominant negative forms [32,42,47,70,92], or that they have distinct functions via their original signaling pathways [11,93,96,98,109]. In this review, we focus on the TrkB receptor, whose splice variants have been well studied, and discuss a new aspect of TrkB signaling for neural functions. STRUCTURES OF TRKB ISOFORMS Among the neurotrophins and their Trk receptors, BDNF and TrkB are enriched in the CNS [66], and they play a pivotal role in neural plasticity during development and in adulthood [19]. TrkB is a single-pass transmembrane molecule. Alternative splicing of the TrkB pre-mRNA from its locus on DNA yields at least two isoforms (Fig. 1) [86]. One is the full-length form of TrkB, which has the tyrosine kinase domain in the cytosolic region and is designated TK+. The extracellular domain of TK+ possesses, from the N-terminus, three tandem leucine-rich repeats flanked by two distinct cysteine-rich domains and two immunoglobulin-like domains, which are required for ligand binding [10]. The other is the tyrosine kinase-lacking form, TK-, which comprises two isoforms, T1 and T2. These truncated isoforms contain the same extracellular domain, transmembrane domain, and initial 12 intracellular amino acids as TK+, but they have specific C-terminal sequences (11 and 9 amino acid residues, respectively) [10]. Interestingly, the C-terminal sequence of T1 is completely conserved in mammals, such as mice, rats, and humans [67,86,118], suggesting that this sequence is essential for this isoform's function. On the other hand, it remains unclear whether T2 is expressed in mice and humans, since the T2 sequence has been detected only in rats [67,74,86,118]. EXPRESSION OF BDNF AND TRKB ISOFORMS IN THE CNS BDNF is a secreted glycoprotein that is released from the pre- and postsynaptic terminals [3,36,37,51,71]. Importantly, the synthesis of BDNF is up-regulated in a neuronal activity-dependent manner. BDNF mRNA and protein are both detected in many CNS regions, such as the neocortex, amygdala, thalamus, hypothalamus, pituitary gland, and substantia nigra, suggesting an autocrine and paracrine mode of BDNF action in those regions [57,132,133]. On the other hand, the synthetic and functional sites of BDNF are sometimes different. For example, the striatum contains BDNF protein but does not express BDNF mRNA. A previous study showed the anterograde transport of BDNF from the cortex to the striatum [5]. Among the CNS regions, the hippocampal formation has been studied the most. In the rat, dense positive structures of BDNF mRNA were observed in all regions of the hippocampus [33,62], but no immunoreactivity was found in the granule cell bodies or CA1 regions [132]. However, the mossy fiber layer was densely immunopositive for BDNF.
One hypothesis is that BDNF mRNA is anterogradely transported to the axons and/or dendrites of granule cells and CA1 pyramidal neurons and locally translated into BDNF protein [19,64]. In contrast, both the mRNA and the protein of BDNF are expressed in all subregions of the monkey hippocampus [55,102,133]. In addition, the expression pattern of BDNF mRNA in the human hippocampus shows good similarity to that in the monkey hippocampus [129]. These differences in BDNF expression between rodents and primates may suggest different functions of BDNF in these species. Previously, many studies of TrkB distribution focused on TK+ [6,20,131], because it is difficult to detect the immunoreactivity of T1. Since the T1 C-terminus is identical among mammals, as described above, the production of an anti-T1 antibody is quite difficult. Recently, our group established immunohistochemistry for T1 by treatment with guanidine HCl (pH 11), which recovers the antigenicity of T1 [93-95,98,101]. The anti-T1 antibody recognizes the C-terminal 12 amino acid sequence that interacts with Rho GDI1, suggesting that Rho GDI1, the protein associated with T1, inhibits the interaction between T1 and the anti-T1 antibody. Treatment with guanidine may dissociate the binding between T1 and Rho GDI1 or denature Rho GDI1, and then the antigenicity of T1 would be recovered. As a result of previous immunohistochemical and in situ hybridization studies of TK+ and T1, researchers now know that both TK+ and T1 are widely distributed in all regions of the adult CNS, including the neocortex, cerebellum, hippocampus, amygdala, basal ganglia, septal region, thalamus, hypothalamus, midbrain, brainstem, and spinal cord [9,12,20,41,53,54,94,95,101,108,131]. On the other hand, western blot analysis with antibodies against TK+ and T1 has shown that the distributions of those molecules overlap considerably in almost all regions of the CNS in adulthood [2,41,70,94,95,99,100,101]. CELLULAR EXPRESSION OF BDNF AND TRKB ISOFORMS Regarding the expression of TK+ and T1 at the cellular level, experiments have clarified that the expression pattern of each is considerably different from that of the other. In the neocortex of the adult rat, TK+ is detected in pyramidal neurons and GABAergic interneurons, whereas the expression of T1 is observed not only in neurons but also in astrocytes [97,101]. Similar results were obtained at the TK+ and T1 mRNA level [9,12,40]. Northern blot analysis demonstrated that TK- is expressed in neurons, astrocytes, and oligodendrocytes, whereas the TK+ transcript is only detected in neurons [40]. These results suggest that the interaction of TK+ and T1 may occur in neurons. In glia, T1 is the major isoform among the TrkB subtypes and is involved in glial function. EXPRESSION CHANGES OF BDNF AND TRKB ISOFORMS DURING DEVELOPMENT The expression of BDNF is observed beginning in the mid-stage of development in the mammal [60,89,90,114]. For example, in the developing cerebral cortex of the macaque monkey, which has an embryonic period of 165 days, BDNF mRNA was not detected before the 110th embryonic day (E110d), and positive signals of BDNF were sparsely distributed in the neocortical layers by E121d [60]. Also, at the protein level, the BDNF content was at a low level at E120d and then gradually increased with the progress of development [89]. The level of BDNF protein in the monkey neocortex increased to 2-fold the adult level in the early postnatal period, around postnatal month 2 (P2m), and decreased thereafter [89].
This increase in BDNF mRNA has also been reported in the human neonatal temporal cortex [129]. This developmental change of BDNF expression was also found in the rat occipital cortex, in which BDNF mRNA remained at a low level until P10d, increased about 5-fold over the P10d value after the second postnatal week, and declined after P30d [114], indicating that this developmental change in BDNF expression is conserved among mammals. The developmental expression of TrkB isoforms exhibits a specific pattern [2,70,99]. TK+ is expressed in almost all regions of the CNS from the early developmental period, and its expression level is maintained into the adult stage. In contrast, T1 expression in the forebrain, such as the neocortex, hippocampus, amygdala, olfactory bulb, striatum, and septum, is at a very low level during the early and middle developmental stages and increases markedly at the late developmental stage, with a high level of T1 expression maintained until the adult stage. Interestingly, the inflection points of both BDNF and T1 expression during development coincide well with the period of elimination of excess axons and synaptogenesis [2,51,52,99]. DEVELOPMENTAL CHANGE OF TRKB DIMERIZATION AND FUNCTIONS OF BDNF-TRKB SIGNALING Neurotrophins exist in vivo as non-covalently linked homodimers, and the binding of a neurotrophin to its receptor induces receptor dimerization [10,18]. The dimerization of receptors induces autophosphorylation in the kinase domain of the cytosolic region of Trk receptors, followed by the activation of various signaling pathways, such as the Ras/MAP kinase, phospholipase C (PLC), and PI3 kinase pathways [24,59,66,106,115]. Thus, the dimerization of Trk receptors is very important as a starting point of intracellular signaling. Interestingly, the pattern of TrkB dimerization changes during development of the monkey neocortex [100]. In the early developmental stage of the monkey neocortex, at embryonic day 120 (E120), when T1 is not yet expressed, the TK+/TK+ homodimer is formed in a ligand-dependent manner [100], suggesting that the signaling pathway of TK+ mainly operates during this period. The adaptor proteins that interact with TK+ are then activated by the TrkB ligands (BDNF, NT-3, and NT-4/5). The main signaling pathways, including the PLC-γ1, Ras/MAP, and PI3K pathways, function in neuronal plasticity, neurite outgrowth and survival, and cell motility, respectively (Fig. 2) [59,106]. The formation of the TK+ homodimer is consistent with the finding that these phenomena occur actively during the early developmental period. In the early phase of development, the expression of BDNF is at a very low level, while the other TrkB ligand, NT-3, is expressed at a relatively higher level than at the adult stage [90]. Together with the fact that NT-3 can induce the dimerization of TrkB [100], the NT-3-TK+ signaling pathway might play an important role in the regulation of the cell cycle and migration [44,122]. TK+/TK+ and T1/T1 homodimers are formed at the newborn stage (NB) of the monkey neocortex [100]. The number of axons in the corpus callosum and the anterior commissure has been reported to reach a maximum at NB and to decrease to about 75% by P60 [51,52,75,76]. The increase in T1 expression correlates well with the period when commissural axons are eliminated and synaptogenesis occurs [51,52,75,76,105]. This result suggests that T1 might be involved in the elimination of axons. 
As possible mechanisms, the following may be considered: 1) the expression of T1 increases in neighbouring glial cells, and T1 in glial cells absorbs the excess BDNF, promoting axonal pruning; 2) in neurons, the increase of T1 inhibits the action of TK+ through a dominant-negative effect. In the monkey neocortex, the density of synapses increases after birth, reaches its highest level between postnatal months 2-4 in all cortical areas, and decreases to about half of the maximum level within several years after birth [51,52,105]. The expression of T1 increases remarkably after birth [2,70,99]. Interestingly, dendritic filopodia, which are known precursors of synaptic spines, are induced by overexpression of T1 in hippocampal neurons from postnatal rats [49]. This outgrowth of dendritic filopodia is not observed in TK+-overexpressing neurons. Thus, T1 by itself might be involved in synaptogenesis, although the mechanism is unclear. As described above, T1 participates in axon elimination, yet it also increases the number of dendritic filopodia. This apparent contradiction may be explained by differences in the intracellular localization of TK+ and T1, for example between dendrites and axons. In fact, in adult brains, T1 is concentrated at the presynaptic site [7,103], whereas TK+ is localized in both pre- and postsynaptic regions [7,103,112]. In the developing brain, the distributions of TK+ and T1 might change dynamically. At the adult stage, the TK-/TK- homodimer and the TK+/TK- heterodimer have been observed to form in the monkey cerebral cortex [100]. Surprisingly, the TK+ homodimer is not formed in adulthood. Although it would be very interesting to determine whether the TK+/TK- heterodimer can transduce intracellular signals, T1 may function as a dominant-negative receptor of TK+ in neurons. At the same time, T1 plays an important role in glial cells, which we discuss in the following section. Fig. (2). TrkB signaling pathways. In the neuron, shown in a dotted square, BDNF induces three TrkB dimers: the TK+ homodimer, the TK+-T1 heterodimer, and the T1 homodimer. The signaling cascade of the TK+ homodimer has been well studied. Activation of PLCγ results in the activation of PKC, which promotes synaptic plasticity. Activation of the Shc protein induces activation of the PI3K-Akt and Ras-MAP kinase signaling cascades, which regulate cell survival and differentiation, respectively. It is unclear whether the TK+-T1 heterodimer can transduce signals. Furthermore, T1 plays an important role in synaptic transmission, although the mechanism is not understood. Since the T1 homodimer has not yet been observed in neurons, further investigation is needed. In the astrocyte, which is shown in a gray square, T1 is the major isoform of the TrkB receptors. The binding of BDNF to T1 induces the T1 homodimer, which results in the release of Rho GDI1 and morphological changes of astrocytes. Moreover, T1 is involved in Ca2+ influx in astrocytes. On the other hand, TTIP (truncated TrkB-interacting protein) is a binding protein of T1, but it is not clear whether it transduces signals. SIGNALING PATHWAY OF T1 T1 had been hypothesized to be a dominant-negative form of TK+, because it lacks the tyrosine kinase domain, and to be involved in negative regulation of TK+-mediated functions, such as TK+ phosphorylation [70], calcium efflux [31], neurite outgrowth [42], cell survival activity [47], and BDNF-induced gene expression [92]. 
According to this hypothesis, T1 was postulated to form a homodimer or a heterodimer with TK+, which prevented TK+ signaling, or to limit the availability of BDNF to neurons by trapping excess BDNF [17]. In contrast, several reports provided evidence against the hypothesis that T1 is a dominant-negative form of TK+. For example, several researchers showed that the expression of T1 increases markedly during various important periods in the developing mammalian CNS, such as those of axonal remodeling and synaptogenesis [2,41,99,100]. The specific sequence of the intracellular domain of T1 is completely identical among mice, rats, and humans [67,86,118], suggesting that this sequence plays a unique role. In addition, T1 is capable of binding BDNF at the same level as TK+ does [17]. Taken together with the fact that T1 has been reported to mediate signal transduction (i.e., the release of acidic metabolites from cells) [11], these findings raised the possibility that T1 has its own signaling pathway. Recently, T1 has indeed been reported to possess a signaling pathway (Fig. 2) [96,98]. T1 directly binds Rho GDI1, a Rho guanine nucleotide dissociation inhibitor that can stabilize the inactive, GDP-bound form of Rho GTPase [98]. The Rho signaling pathway controls the remodeling of microfilaments, intermediate filaments, and microtubules [35,121]. In the BDNF-T1 signaling pathway, Rho GDI1 is released from T1 in a BDNF-dependent manner, which causes decreases in the activities of Rho-signaling molecules such as RhoA, Rho-associated kinase (ROCK), p21-activated kinase (PAK), and extracellular signal-regulated kinase (ERK) 1/2 [96]. Consequently, T1 alters the cell morphology of astrocytes in primary cultures and acute slices [93,98]. T1 has also been implicated in intracellular Ca2+ influx in astrocytes, via PLC-dependent IP3 production [109]. Since the PLC pathway plays an important role in neuronal plasticity [119], T1 in neurons might participate in this process. Another binding protein of T1 is truncated TrkB-interacting protein (TTIP), which was isolated from 15N neuroblastoma cells by coimmunoprecipitation with a GST fusion protein containing the intracellular juxtamembrane region of T1 (Fig. 2) [73]. TTIP has a molecular weight of 61 kDa. However, BDNF stimulation does not modulate the interaction between T1 and TTIP. It is also uncertain whether Rho GDI1 and TTIP bind directly to different motifs in the T1-specific region or compete for the same binding site. Further studies on TTIP are needed in the future. REGULATION OF CELL MORPHOLOGY BY TRKB ISOFORMS The BDNF-TrkB signaling pathway is heavily involved in the regulation of cell morphology. BDNF regulates the branching and extension of axons and dendrites during development, both in vitro and in vivo [4,25-27,58,63,79,81,82,84,113]. In addition, treatment with BDNF increases the number of synapses [1,4,23,113,116,117]. These experiments were performed using developing dissociated neurons, brain slices, and animals, suggesting that TK+ mainly functioned in neurons in these studies. In the P14 ferret neocortex, where the expression of TK+ is indeed several times that of T1 [2], BDNF administration increases the length and complexity of dendrites [82,84]. Interestingly, laminar specificity of the neurotrophin response is observed: neurons in layers 4 and 5 respond to BDNF, and neurons in layers 5 and 6 respond to NT-4. In these layers that are responsive to TrkB ligands, TK+-like immunoreactivity is intensely detected at P10-24. 
Thus, TK+ promotes axonal and dendritic growth during development. Studies of T1 with regard to cell morphology employed the strategy of T1 overexpression in cell line and slice cultures. In the N2a cell line, the transient overexpression of T1 led to a ligand-independent change of cell morphology, such as the growth of filopodia and processes [48]. The study demonstrated that deletion mutants lacking the T1 specific intracellular domain induce filopodia and processes, but the mutant lacking the extracellular domain failed to have this effect. In addition, p75 was not involved in this process. Thus, the authors suggested that the extracellular domain of T1 might function as a cell adhesion molecule. Another study in rat hippocampal primary cultures [49] showed that T1 induced the formation of dendritic filopodia, which occurred independent of ligand formation. In contrast, the interaction between T1 and p75 was essential for the induction of filopodia. This might have been due to material differences, such as the cell line [48] or primary cultured hippocampal neurons [49]. The study using P14 ferret neocortical slice culture showed that TK+ and T1 regulated distinct modes of dendritic growth [130]. The transfection of TK+ induced prominent outgrowth of short dendrites that extended from the cell body and the proximal region of the apical dendrites. In contrast, the transfection of T1 did not increase short dendrites near the soma, but it did elevate the arborization of distal dendrites. Providing exogenous ligands blocked the distal growth of dendrites in T1-transfected neurons. Furthermore, in proximal dendrites, the treatment with ligands decreased dendritic complexity compared with the control level. Most recently, in T1-deficient mice, morphological abnormalities in the length and complexity of neurons in the basolateral amygdala were described [21]. Considering that the expression of T1 increased at the stage of synaptogenesis, T1 might have fine-tuned the growth of dendrites, axons and synaptic structures, by the interaction with TK+. Further examination of the function of T1 in regulating neuronal morphology will be interesting. Most importantly, T1 plays a role on astrocyte functions. For example, T1 induced a rapid change of astrocytic morphology via Rho GTPase in primary astrocyte cultures [98] and in the rat neocortex layer I [93]. Additionally, T1 controlled calcium entry into astrocytes [109]. Since the release of BDNF is highly regulated by neuronal activity [50,71], these findings led us to the idea that BDNF release by neuronal activities induces morphological changes of astrocytes in the CNS. Recent studies have shed light on the interactions between neurons and glial cells [38,124,128]. In particular, researchers have demonstrated that calcium entry into astrocytes modulated synaptic transmission [14,39]. In addition, astrocytic endfeet enwrap synapses [127], i.e., those synapses referred to as tripartite synapses [8]. Furthermore, astrocytic processes surrounding active synapses have been described as altering their morphology in the brainstem [56], hypothalamus [77], cerebellum [61], hippocampus [13], and neocortex [93] of infant-to pubertal-stage rodents, suggesting that the tripartite synapse is a common structure in the CNS. In contrast, alterations of fine neuronal structures such as dendrites and spines in the neocortex of adult mice hardly occur under normal conditions [45,126]. 
These findings suggest that the morphological alteration of astrocytes may be essential for the maintenance and plasticity of synaptic transmission, as well as for transmitter clearance. Therefore, neuronal and glial structural modifications might be regulated by the interaction of TK+ and T1 in neurons and by T1 in astrocytes, respectively. TRKB ISOFORMS IN SYNAPTIC PLASTICITY BDNF-TrkB signaling affects morphological changes of neurons and glial cells and plays an important role in synaptic plasticity [65,72,85,87]. In light of this, we wanted to explore two important issues: 1) the activity-dependent expression and secretion of BDNF, and 2) the subcellular localization of TrkB subtypes, i.e., pre- or postsynaptic sites. Not only in vitro stimulation, such as the administration of drugs, but also physiological stimulation, such as exercise [91], visual input [22], and whisker stimulation [107], increases BDNF expression and secretion. It is unclear whether dendritic production of BDNF (i.e., BDNF mRNA targeting to dendrites) and concentration of BDNF protein in secretory vesicles occur at the active synapse [51,72]. Furthermore, it has not yet been clarified whether BDNF is released from vesicles at pre- or postsynaptic sites [3,36,37,50,71], like neuropeptide transmitters [111]. TrkB subtypes are widely distributed throughout the brain, as described in the previous section. However, considering that the signaling pathway of TK+ is distinct from that of T1, it is important to clarify the subcellular localization of TK+ and T1 in neurons. Subcellular fractionation of the rat brain showed that 1) both TrkB subtypes were concentrated in the synaptic membrane fraction [7,94,97,103], and 2) TK+ and T1 exhibited a differential subcellular distribution: TK+ was present in the presynaptic active zone and the postsynaptic density, while T1 was mainly distributed in the presynaptic active zone [7,103]. Interestingly, using cultured hippocampal neurons infected with a T1-expressing adenovirus vector, Schuman's group demonstrated that presynaptic, but not postsynaptic, expression of T1 inhibited BDNF enhancement of synaptic transmission, whereas activation of TrkB-associated signaling enhanced neurotransmitter release from presynaptic terminals [78]. Although pre- and postsynaptic modifications are involved in long-term potentiation, at least presynaptic T1 might play an important role in the regulation of initial synaptic potentiation between neurons. Since T1 inhibits the phosphorylation of TK+, the activation of BDNF-TK+ signaling may be required for BDNF-induced potentiation. On the other hand, Rho GTPases are involved in Ca2+-dependent neurotransmitter exocytosis via the regulation of actin filaments [30,88]. Thus, the BDNF-T1-Rho GDI1 signaling cascade may regulate neurotransmitter release by regulating Rho GTPase activity. PHARMACOLOGICAL USEFULNESS OF T1 AS A MOLECULAR SWITCH The T1 signaling cascade challenges the conventional view that T1 acts as a dominant-negative form of TK+. It is reasonable to assume that T1 could exert dual roles in an age-dependent manner and/or depending on its subcellular and cellular localization. In neurons, T1 could act as a dominant-negative form of TK+ through the formation of the TK+/TK- heterodimer in adulthood. In astrocytes, T1 could act as a negative regulator of the Rho signaling cascade. Thus, T1 may be a Janus-faced receptor of BDNF, acting as a "molecular switch." 
Also, the change in TrkB receptor dimerization may be one of the mechanisms generating the variety of biological functions of BDNF during development. If we can control the expression of each TrkB subtype in a given cell type, i.e., in a neuron- or astrocyte-specific manner, by drugs or gene-transfer treatment in the near future, this might be useful in the treatment of psychiatric and neurological diseases, including depression and suicide [34], schizophrenia [104], and neurodegenerative diseases [29]. These studies suggest that adequate regulation of T1 expression is essential for maintaining neuronal function. For example, a decrease in the expression of T1 is observed in the frontal cortex of suicide completers [34]. Interestingly, methylation in the trkB promoter regions is significantly reduced, which results in a decrease in T1 expression only, without a change in TK+ expression [34]. In a mouse model of schizophrenia, both mRNA and protein levels of T1 were significantly higher in the frontal cortex, but those of TK+ were not altered [104]. Furthermore, a study using a trisomic mouse model showed that a suitable expression level of T1 is important for the survival of neocortical and hippocampal neurons. Taken together, pharmaceutical preparations that regulate the proper expression of T1, such as T1 siRNA and cDNA [93,98] and a synthetic peptide of the T1-specific C-terminal sequence [98], will be valuable for the treatment of psychiatric and neurological disorders. In addition, the combined use of T1 siRNA or cDNA with cell type-specific promoters could be even more useful. T1 has been shown to be expressed in the neurogenic regions [43,123]. A recent study suggests that overexpression of T1 increases the proliferation of neural progenitor cells [123]. Interestingly, BDNF has anti-proliferative activity on the self-renewal of neural stem cells; however, it also functions as a differentiation factor for stem cells, and both effects are influenced by the expression of TK+. Thus, the ratio of TrkB subtype expression in stem cells is important in determining the balance between proliferation and differentiation. An in vivo study also showed that dopaminergic periglomerular interneurons in the olfactory bulb were decreased in number in TrkB KO mice. Moreover, calbindin-positive cells were slightly decreased compared with the control, suggesting that TrkB may play a selective role in regulating the proliferation and differentiation of specific interneuron subtypes [43]. Furthermore, as described in the sections above, TrkB subtypes influence neural plasticity via regulation of neuronal and glial morphology. Therefore, control of BDNF-TrkB signaling could help regenerate neurons and repair neuronal networks as a therapy following brain injury such as trauma or ischemia.
6,442.8
2009-11-30T00:00:00.000
[ "Biology", "Medicine" ]
Evolutionary pressures on microbial metabolic strategies in the chemostat Protein expression is shaped by evolutionary processes that tune microbial fitness. The limited biosynthetic capacity of a cell constrains protein expression and forces the cell to carefully manage its protein economy. In a chemostat, the physiology of the cell feeds back on the growth conditions, hindering intuitive understanding of how changes in protein concentration affect fitness. Here, we aim to provide a theoretical framework that addresses the selective pressures and optimal evolutionary-strategies in the chemostat. We show that the optimal enzyme levels are the result of a trade-off between the cost of their production and the benefit of their catalytic function. We also show that deviations from optimal enzyme levels are directly related to selection coefficients. The maximal fitness strategy for an organism in the chemostat is to express a well-defined metabolic subsystem known as an elementary flux mode. Using a coarse-grained, kinetic model of Saccharomyces cerevisiae’s metabolism and growth, we illustrate that the dynamics and outcome of evolution in a chemostat can be very counter-intuitive: Strictly-respiring and strictly-fermenting strains can evolve from a common ancestor. This work provides a theoretical framework that relates a kinetic, mechanistic view on metabolism with cellular physiology and evolutionary dynamics in the chemostat. The fitness benefit and cost of protein expression suggests the existence of an (environment-dependent) optimal expression level where the difference between the benefit -the biochemical activity -and cost -the resource consumption during expression -is maximized 11 . Indeed, in a laboratory-evolution experiment, Escherichia coli attained predicted, environment-dependent optimal expression levels within 500 generations 23 . The important role of protein costs on physiology is perhaps best exemplified by the covariation of the proteome of E. coli with growth conditions, where the expression levels of whole sectors of metabolism are tuned to the environmental requirements 24 . For instance, the ribosomal protein fraction increases linearly with growth rate in E. coli 10,25 ; likely because the ribosome concentration is precisely tuned to the prevailing conditions to prevent overexpression 26,27 . Another example is the switch from respiration to fermentation with increasing substrate availability, which is observed in many micro-organisms. Molenaar et al. hypothesized that this was the result of the relatively high enzyme-investment required for respiration 28 . Recently, Basan et al. tested this hypothesis for the overflow metabolism in E. coli 22 . Their results were in agreement with this hypothesis and ruled out a number of other ones, such as limitations in respiratory capacity or cytoplasmic membrane area. These insights raise the question: which enzymes should be expressed, and to what concentration, to achieve fitness maximization? We recently addressed these questions for cells growing in a constant environment 11,29 . We showed that an optimal metabolic strategy must be an elementary flux mode (EFM) 29 , which is a minimal, steady-state 'route' through a metabolic network. An EFM represents a 'pure' metabolic strategy, e.g. fermentation or respiration, but not respiro-fermentation. We also found that the optimal enzyme concentration is proportional to the influence it exerts on the flux, its flux control coefficient 11 (Also see refs 7, 30 and 31). 
We could relate this flux control coefficient to the selection coefficient and to the fitness costs and benefits of enzyme expression. Because a feedback occurs from the physiology of the cell to the environmental conditions in a chemostat, the existing theoretical understanding of evolution in a constant environment 11,29 cannot immediately be extrapolated to the chemostat. Our aim is therefore to formulate a theory of optimal metabolic strategies in a chemostat, expressed in terms of the kinetics and costs of metabolic enzymes. We will extend the theory from constant conditions to chemostat conditions, and show the theoretical possibilities for metabolic evolution in the chemostat. Furthermore, we develop a modeling strategy to perform evolutionary simulations of a 'self-replicator model' 28 of metabolism and growth of Saccharomyces cerevisiae in a chemostat (see ref. 32 for another approach to coarse-grained models, incorporating details of translation and ribosome competition, more explicit trade-offs and less metabolic detail). We use this model to illustrate how selection shapes the decision between respiratory and fermentative metabolism. We find that negative frequency-dependent selection causes the evolutionarily stable coexistence of a purely fermenting and a purely respiring strain, and that these strategies can evolve from a common ancestor that does both. Methods Model description. In this section we will briefly discuss how we simulate metabolism and growth of a coarse-grained self-replicator model of yeast in a chemostat environment. It is explicitly not our intention to provide an as-realistic-as-possible model of S. cerevisiae. Rather, we use a simplified model to focus on how evolutionary pressures relating to the protein economy affect the 'choice' between respiration and fermentation, specifically when taking the feedback of the cellular physiology on the conditions in the chemostat into account. For details and a mathematical description we refer to the Supplementary Information S2. A self-replicator model of yeast. The basic concept behind our model is that it is a self-contained representation of cellular growth. We model the cell as a self-replicator, with a focus on metabolism and protein synthesis. The ribosomes synthesize all proteins required for growth, including themselves. (DNA and RNA synthesis are not included in the model.) The model is coarse-grained, meaning that a large number of reactions are lumped into a single rate equation. The model consists of a glucose transporter, glycolysis, a fermentation pathway, a respiration pathway, and ribosomes (Fig. 1A). The transporter transports glucose into the cell. Next, glucose is metabolized by glycolysis, yielding two pyruvate and two ATP. Pyruvate and ATP are both required for protein synthesis by the ribosomes, but in a ratio of 2.4 ATP per pyruvate. As a consequence, more ATP needs to be generated to balance its production and consumption. This is done by the additional uptake of glucose, which is metabolized in two different ways: fermentation or respiration. In fermentation, the additional glucose is metabolized to pyruvate, after which it is discarded in the form of ethanol. Alternatively, in respiration, pyruvate is completely metabolized to CO2, which generates an additional 9 ATP per pyruvate. This means that our model has two EFMs, which we refer to as the respiratory and the fermentative strategy (Fig. 1B,C). 
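To make the yield difference between these two EFMs concrete, the stoichiometry quoted above (two pyruvate and two ATP per glucose from glycolysis, 2.4 ATP per pyruvate for protein synthesis, and 9 extra ATP per respired pyruvate) allows a back-of-envelope comparison. The sketch below is our own illustration, not part of the authors' model; in particular, crediting the extra glucose under fermentation with only its glycolytic ATP is our reading of the description above.

```python
# Back-of-envelope comparison of the two EFMs, using only the stoichiometry
# quoted in the text. The exact ATP bookkeeping is an illustrative assumption.

ATP_PER_GLC_GLYCOLYSIS = 2.0      # ATP per glucose through glycolysis
PYR_PER_GLC = 2.0                 # pyruvate per glucose through glycolysis
ATP_PER_PYR_BIOSYNTHESIS = 2.4    # ATP consumed per pyruvate built into protein
ATP_PER_PYR_RESPIRED = 9.0        # extra ATP per pyruvate fully oxidised

def glucose_per_biomass_glucose(strategy: str) -> float:
    """Glucose needed per glucose whose pyruvate ends up in biomass."""
    atp_deficit = PYR_PER_GLC * ATP_PER_PYR_BIOSYNTHESIS - ATP_PER_GLC_GLYCOLYSIS
    if strategy == "fermentation":
        # extra glucose only yields glycolytic ATP; its pyruvate is lost as ethanol
        atp_per_extra_glucose = ATP_PER_GLC_GLYCOLYSIS
    elif strategy == "respiration":
        # extra glucose yields glycolytic ATP plus full oxidation of its pyruvate
        atp_per_extra_glucose = ATP_PER_GLC_GLYCOLYSIS + PYR_PER_GLC * ATP_PER_PYR_RESPIRED
    else:
        raise ValueError(strategy)
    return 1.0 + atp_deficit / atp_per_extra_glucose

for s in ("fermentation", "respiration"):
    print(s, round(glucose_per_biomass_glucose(s), 2))
# fermentation 2.4, respiration 1.13
```

Under these assumptions the fermentative EFM needs roughly twice as much glucose per unit of biomass precursor as the respiratory EFM, which is the rate-yield trade-off underlying the results discussed below.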
A mixed strategy uses a combination of these EFMs and is characterized by the respiratory ratio, the flux towards respiration divided by the total flux towards respiration and fermentation. We refer to cells employing a pure respiratory or fermentative strategy as respirers and fermenters, respectively. Fermentation and respiration differ in their protein requirement. While respiration generates more ATP per glucose molecule, it is more costly than fermentation because it requires more protein per unit flux. Transport of glucose is modeled as facilitated diffusion, which ensures that product inhibition still affects the rate, even at high substrate concentrations. All other reactions are described by Michaelis-Menten rate equations with product inhibition, meaning that they become saturated at high substrate concentrations. Importantly, in our model extracellular ethanol inhibits the fermentative flux, but not the respiratory flux. This is based on experimental results that suggest a stronger inhibition of (partially) fermenting strains 33. Moreover, ethanol slows down the glycolytic flux in S. cerevisiae after a glucose pulse, a regime in which the yeast mainly ferments 34. The biosynthetic flux - the rate at which ribosomes synthesize the required proteins - sets the self-replication rate, which we interpret as the specific growth rate. The enzyme concentrations are set by the fraction of ribosomes dedicated to the synthesis of each enzyme, and by the dilution rate. The higher the concentration of an enzyme, the more cellular resources - ribosomes, precursors and ATP - are required to maintain that concentration. This makes overexpression costly. On the other hand, if the concentration of an enzyme is too low, it becomes a bottleneck and starts limiting the biosynthetic flux. In other words, maximizing the growth rate requires each enzyme to attain an optimal concentration. What these optimal concentrations are depends on the extracellular glucose and ethanol concentrations. The parameters of our model are based on the literature as much as possible. However, the kcat values of the pathways, i.e. the maximal turnover of a pathway per unit enzyme per unit time, were not available. We therefore estimated their relative values based on the number and size of the different enzymes in each pathway; pathways with many or large enzymes have a lower kcat. Subsequently, by multiplying all relative kcat values with the same factor, we fitted their absolute values such that the model had a realistic maximal growth rate. While we could make a realistic estimate for the other modules, the costs of respiration, such as the maintenance of mitochondria and damage due to reactive oxygen species, were difficult to assess. We fitted the cost of the respiration module to ensure a switch between optimal strategies from respiration to fermentation at an intermediate growth rate of 0.265 h−1, as has been observed in experiments. Chemostat model. In a chemostat, medium enters and leaves the vessel at a fixed flow rate (thereby keeping the volume constant), which determines the dilution rate D. The medium is composed such that one nutrient will become growth-rate limiting; in our model, this is glucose. The glucose concentration in the vessel settles at the value where the growth rate equals the dilution rate. An experimentalist can therefore set the growth rate by changing the dilution rate. 
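The feedback from growth to the residual substrate concentration described here can be illustrated with the standard textbook chemostat mass balances. The sketch below is ours, not the authors' self-replicator model; the growth law is plain Monod kinetics and all parameter values are invented for illustration.

```python
# Minimal textbook chemostat mass balances: biomass X and substrate S with
# Monod growth. Hypothetical parameters; illustrates that S settles where mu(S) = D.

mu_max, Ks, yield_xs = 0.4, 0.1, 0.5   # 1/h, g/L, gX/gS (made-up values)
D, S_in = 0.2, 10.0                    # dilution rate (1/h), feed glucose (g/L)

def mu(S):
    return mu_max * S / (Ks + S)       # Monod kinetics

X, S, dt = 0.01, S_in, 0.01
for _ in range(int(500 / dt)):         # integrate 500 h with explicit Euler
    dX = (mu(S) - D) * X               # growth minus washout
    dS = D * (S_in - S) - mu(S) * X / yield_xs   # inflow/outflow minus consumption
    X, S = X + dX * dt, S + dS * dt

print(round(S, 3), round(mu(S), 3))    # residual S ~ Ks*D/(mu_max - D) = 0.1, mu ~ D
```

The steady state settles at the substrate concentration where mu(S) = D, which is why the experimenter effectively chooses the growth rate by choosing the dilution rate.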
We modeled the concentrations of biomass, glucose and ethanol in the chemostat with differential equations describing in- and outflow, growth, nutrient uptake and ethanol production. We used the self-replicator model discussed above to calculate the growth rate of the cells as a function of the enzyme levels and of the glucose and ethanol concentrations in the chemostat. In this way, we can relate the kinetics of metabolism and growth to the dynamics of biomass density in the chemostat. Cells that ferment produce ethanol, which inhibits fermentation. Since more fermenting cells produce more ethanol, our model contains negative frequency-dependent selection. The steady-state biomass density and the glucose and ethanol concentrations in the chemostat are therefore all interdependent. Ultimately, they are set by the dilution rate, the glucose concentration in the feed and the kinetic properties of the organisms in the chemostat. Results Evolutionarily stable strategies in a chemostat must be elementary flux modes. In a chemostat, the steady-state growth rate of cells equals the dilution rate that is set by the experimenter. The growth rate of a cell depends on the concentrations of the limiting nutrient and inhibiting compounds, which are all variable in a chemostat. Evolution in a chemostat does therefore not simply select for growth-rate maximization at fixed nutrient levels, and it is not straightforward to define what an optimal strategy entails. Moreover, the optimal phenotype, carrying out the optimal strategy, should be evolutionarily stable, such that a mutant with a different strategy (or with different enzyme concentrations) is not able to invade. Since a mutant can only invade the chemostat by growing faster than the dilution rate, selection is still ultimately mediated through differences in growth rate. In a previous paper, we have presented a formal and extensive proof that, in a constant but otherwise arbitrary environment, the highest specific growth rate is always achieved by an EFM 29. An EFM is a metabolic subnetwork that can attain a steady state, carries a flux in a thermodynamically feasible direction, none of its enzymes can be removed without violating the steady-state requirement, and it has one independent flux (only one flux value needs to be known to determine all flux values at steady state). The intuition behind this proof is as follows: Suppose a cell has two parallel metabolic pathways (EFMs) to generate, for instance, ATP. The rate of ATP synthesis per amount of protein invested in these pathways will not be the same. In other words, the 'return on investment' differs between these pathways. This implies that the total return on investment for the cell can be enhanced by taking resources away from the pathway with the lower return on investment and instead investing them in the more productive pathway. A metabolic network typically has many EFMs, and which particular one is optimal depends on the environmental conditions and metabolic enzyme kinetics 29. 
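The corner-solution character of this 'return on investment' argument can be made explicit with a toy allocation problem. The sketch below is ours, not the authors' derivation, and the productivities are arbitrary numbers; it only illustrates that with a fixed protein budget and pathway rates linear in enzyme amount, the optimum puts the whole budget into a single pathway, i.e. a single EFM.

```python
# Toy illustration of the 'return on investment' argument: with a fixed protein
# budget split over two ATP-producing pathways of different specific productivity,
# total ATP production is maximised at a corner, i.e. by using only one pathway.

roi_a, roi_b = 3.0, 5.0          # ATP per hour per unit protein in pathway A / B (arbitrary)
e_total = 1.0                    # fixed protein budget

best_split, best_rate = None, -1.0
for i in range(11):              # scan allocations e_A from 0 to e_total
    e_a = e_total * i / 10
    rate = roi_a * e_a + roi_b * (e_total - e_a)
    if rate > best_rate:
        best_split, best_rate = e_a, rate

print(best_split, best_rate)     # 0.0 5.0 -> all protein goes to the better pathway B
```

Which pathway is "better" depends on the prevailing substrate and inhibitor concentrations, which is why the optimal EFM can change with the environment.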
Although a chemostat is designed to operate at steady state, it is actually not a constant environment in any evolutionary sense, as changes in microbial physiology affect, for instance, the concentration of substrates and inhibitors in the vessel. However, as selection in the chemostat is ultimately mediated through differences in growth rate, any optimal strategy in a chemostat must be an EFM. This can be shown by a 'Gedankenexperiment'. Suppose that the optimal strategy is not an EFM. When the vessel is in steady state, and hence the environment is constant, there must be an alternative strategy that can grow faster under the prevailing conditions, because under such constant conditions the maximal-growth-rate strategy is an EFM. A mutant employing this strategy will be able to invade; hence, a non-EFM strategy can be invaded and therefore cannot be an optimal strategy. We will illustrate some implications of this result with a coarse-grained whole-cell model of yeast in a chemostat environment (Fig. 1). (Figure 1: (A) Glucose is transported into the cell and is subsequently metabolized to pyruvate by glycolysis. Next, pyruvate can serve as a precursor for biomass formation by the ribosome, or it can be further metabolized through either fermentation or respiration, the latter yielding additional ATP. The model has two elementary flux modes, a fermentative mode (B) and a respiratory mode (C).) Our self-replicator model has two EFMs: a purely fermentative strategy and a purely respiratory strategy. The reasoning above implies that a respiro-fermentative strategy cannot be an optimal strategy, as this is not an EFM but a mixture of two. A respiro-fermentative strategy can always reach a higher growth rate by reallocating proteins from respiration to fermentation or vice versa, towards the mode that gives the highest return on investment under the current conditions. This result simplifies finding the potential evolutionary steady states in a chemostat, because it greatly limits the set of possible solutions. The relation between enzyme concentrations and selection coefficients. In this section, we study what the optimal enzyme concentrations within an EFM are and we quantify the fitness effects of deviations from optimality. Selection between two species or mutants in a chemostat is typically quantified by the selection coefficient S, which is the time derivative of the ratio of biomass densities 35. For convenience, it can be normalized by the dilution rate D. We are specifically interested in the effect of changes in enzyme levels. The selection coefficient S_m,r of a mutant m with an (infinitesimal) perturbation in the concentration of enzyme i compared to a resident r, e_i → e_i + δe_i, is given by Equation (1) (cf. Supplementary Information S1). This relation describes how changes in enzyme biochemistry and the topology of metabolic networks affect fitness in chemostats. In principle, one can find the optimal enzyme concentrations by solving C^{J_bm}_{e_i} = e_i/e_tot, i.e. requiring that the flux control coefficient of each enzyme on the biomass flux equals its fraction of the total enzyme pool, for all enzymes. However, in practice this is typically not the simplest way to do this. A change in the concentration of an enzyme affects the concentrations of the limiting nutrient (s, glucose in our model) and the metabolic product (p, ethanol). These dependencies turn out to be closely related to the selection coefficient, and are captured by Equation (2) (Supplementary Information S1). We predicted the rate at which a mutant with a 1% reduction in e_ferm takes over the chemostat and confirmed this prediction with a simulation of the chemostat dynamics (Fig. 2A). Equation (1) is only strictly true for infinitesimal changes in enzyme levels. Since larger changes might occur due to mutations or experimental modification, we also test whether the selection coefficient is a good approximation of the rate at which a mutant might take over the chemostat for larger changes in enzyme levels. 
For this, we predicted the rate at which a mutant with a 10% reduction in e_ferm would take over the chemostat and compared it with a simulation (Fig. 2B). (Figure 2: the gray dashed lines show the takeover rates predicted from Equation (1); the predicted and simulated curves are very close in (A) and reasonably close in (B).) The agreement between the estimate and the simulation is reasonable, indicating that Equation (1) can give a good approximation even in the case of significant changes in enzyme concentrations. The concentration of an enzyme is optimal when neither its increase nor its decrease enhances the selection coefficient (fitness). In other words, the selection coefficient of a mutant with an infinitesimal change in enzyme concentration must be zero. For an optimal phenotype, the following must therefore hold for each enzyme (Equation 3): C^{J_bm}_{e_i} = e_i/e_tot. This is the same condition for optimality as found by Berkhout et al. 11 for growth in mid-exponential batch conditions (at nutrient excess). This result was previously derived by Klipp and Heinrich 31, albeit in a different context. Furthermore, combining equations 2 and 3 shows that in the optimum the residual glucose concentration is minimized and the total biomass concentration is maximized (Supplementary Information S1). Equation 3 characterizes the optimal levels of protein expression by a fermenter and a respirer at a given dilution rate and glucose feed concentration. Figure 3 shows the selection coefficient and growth rate of a fermenter (A) and a respirer (B), as a function of the concentration of fermentation and respiration enzymes, respectively. (For details on how to find the optimal state in a chemostat, see Supplementary Information S2.4.) The growth rate has a maximum, equal to the dilution rate, when S_m,r = 0, indicating that indeed, when equation 3 holds, no mutant employing the same strategy (but with different enzyme levels) can invade. Invasion by alternative strategies shows evolutionary instability of single strategies. It might be tempting to conclude from the argumentation above that an evolutionarily stable situation must be a single pure strategy - in this case, a single genotype that either purely ferments or purely respires. However, for both the optimal fermenting and the optimal respiring strategy, the population can be invaded by a mutant employing the alternative strategy (Fig. 4A,B). These invaders will not take over completely; instead, a coexistence of two strains emerges. Because we start with strains that have optimal enzyme levels at the initial conditions (that is, the steady state with only the optimal respirer (Fig. 4A) or only the optimal fermenter (Fig. 4B)), the respirer (and fermenter) strains in Fig. 4A,B do not have exactly the same enzyme levels, and the biomass concentrations in the final state of coexistence differ. The origin of this mutual potential for invasion can be illustrated by fitness landscapes, defined as the dependency of the selection coefficient on the metabolic strategy. Figure 4C,D depict the selection coefficients of mutants in a resident population of respiring (4C) or fermenting (4D) cells. A phenotype with a positive selection coefficient can invade, but the mutant will affect the conditions in the chemostat, and therefore change the shape of the fitness landscape. For instance, when a fermenter invades, the concentration of ethanol in the vessel will increase, 
reducing the fitness advantage of the fermenting strategy until the ethanol concentration is such that the growth rate of the fermenters equals the dilution rate and a new steady state is reached. In conclusion, due to negative frequency-dependent selection, a coexistence between two phenotypes is possible. Whether, in our model, coexistence of a purely fermenting and a purely respiring strain is evolutionarily stable depends on the dilution rate and the glucose concentration in the feed. This dependency can be visualized by considering the growth rate of the (optimized) strains as a function of the glucose concentration, in the absence of ethanol (Fig. 5A). We interpret the substrate concentration at which the cells achieve half of their maximal growth rate as the Monod constant. Figure 5 illustrates the dependency of this constant and the maximal growth rate on the metabolic strategy. Respirers have a lower Monod constant, whereas fermenters attain a higher maximal growth rate. Coexistence is impossible at low growth rates (phase I), because even in the absence of ethanol, respiring strains require a lower glucose concentration to attain a particular growth rate. At high growth rates (phase III), coexistence cannot occur because respiring strains will wash out. Phase II - with growth rates below the μmax of the respiring strain, but where, in the absence of ethanol, the fermenting strains will outcompete them - has the potential for stable coexistence. Fermenting cells initially outgrow respiring cells, which will lead to the accumulation of ethanol, inhibiting the growth of fermentative cells. When the glucose concentration in the feed is low, resulting in a low biomass concentration, ethanol will not accumulate to levels high enough to substantially inhibit the growth of the fermenting cells, and a single fermenting strategy will be optimal. At higher feed concentrations ethanol will start to inhibit the fermenting cells and a stable coexistence will arise (Fig. 5B). The calculations of the dilution rates with stable coexistence can be found in Supplementary Information S2.4. Evolutionary dynamics: Diversification in the chemostat. We have shown that under some conditions any single strategy can be invaded by another strategy, which allows for diversification in a chemostat. To test this, we simulated the evolutionary dynamics in a chemostat, initialized with a resident employing a mixed, respiro-fermentative strategy. Our simulation methodology is similar to the method employed by Beardmore et al. 36, and technical details can be found in the Supplementary Information S2.5. We divided the population into discrete genotypes, ranging from purely fermentative through respiro-fermenting to purely respiratory, each having different fractions of transporter protein. We vary both metabolic strategy and nutrient transport because of the focus on nutrient transport in the literature discussing evolution and selection pressures in the chemostat. Each genotype is defined by its relative flux to ethanol and respiration and its relative transport level, but at each moment all enzyme levels are optimized for the prevailing conditions. 
The enzyme levels are allowed to take any value within the constraints of the genotype. This reflects the assumption that additional regulatory mechanisms regulate the other aspects of cellular resource allocation in an optimal manner. With these genotypes we can distinguish between selection on the metabolic mode and on the transport protein. The chance that one genotype mutates into another decreases exponentially with the distance between the genotypes. (Figure 4 caption: (A,B) At t = 0, a 'mutant' of the opposite metabolic strategy appears that settles in the chemostat, but does not take over completely. This indicates that (under these conditions) there is not a single optimal strategy in this chemostat. (C,D) The potential of opposing strategies to invade is clear from the fitness landscapes, which show the difference in growth rate, relative to the dilution rate, as a function of the relative flux through respiration. When a respirer (fermenter) is the resident, more fermenting (respiring) strategies have a positive selection coefficient, indicating that they have the potential to invade.) (Figure 5 caption: The optimal specific growth rate of a respiring and a fermenting strain as a function of glucose concentration, in the absence of ethanol. At each glucose concentration, the optimal enzyme levels are calculated. There is a potential for coexistence in a chemostat when the dilution rate is such that, in the absence of ethanol, the fermenting strains outgrow the respiring strains but the respirers will not wash out (Phase II). An additional requirement for coexistence is that the glucose concentration in the feed allows for enough accumulation of ethanol to inhibit the growth rate of fermenting cells to the extent that it equals the growth rate of respiring cells. Panel (B) shows combinations of glucose feed and dilution rate where coexistence can occur. We refer to the Supplementary Information S2.4 for details on how the borders between the phases (the dashed lines) can be calculated.) We start the simulation with a resident employing a mixed, respiro-fermentative strategy and a transport protein fraction of about 16%. This population gradually evolves into different strategies, which also express different levels of transporters (Fig. 6 and Supplementary Video File). First, increasingly more fermentative mutants take over the population. As fermenters accumulate, so will the ethanol concentration in the chemostat, giving respiring strains a growth advantage. After some time, respirers indeed increase in numbers. In the end, the initial respiro-fermenters are completely replaced and a coexistence between respirers and fermenters results. The fermenters express more transporters than the initial respiro-fermenters, while the respirers express less, illustrating the complex interplay between metabolic strategy, optimal enzyme concentrations and growth characteristics. Optimal enzyme allocation can also lead to a decrease in the average investment in transport protein in the population (Fig. 6B). There appear to be three periods in which the biomass composition changes, as can be seen in the time evolution of the extracellular conditions and the population changes (Fig. 6B,C) and in the complete simulation in the Supplementary Video File. Between day 4 and day 15, the fermenters appear and settle. Because these fermenters have a lower biomass yield, this leads to a reduction in the total biomass. 
An intricate interplay between the abundances of the different phenotypes and the accumulating ethanol concentration during and after this period causes the residual glucose concentration to increase. As a consequence, between days 15 and 50 a respiro-fermenter with reduced transporter expression emerges, which becomes progressively more respiratory until fermentation stops between days 100 and 1000. This coincides with a decrease in the average investment in transporters, indicating that selection in the chemostat does not always increase transporter levels (and neither does the affinity of the cell for the substrate always increase, see Supplementary Figure S3). The maintenance of several genotypes close to the optimal strategy in the final state - which might seem to contradict our arguments above - is the result of a mutation-selection balance. At lower mutation rates, the total biomass is distributed over fewer genotypes in the evolutionarily stable state (data not shown). This simulation illustrates the potential for diversification in a homogeneous environment by way of negative frequency-dependent selection. Despite the complex and unpredictable dynamics during evolution, the evolutionarily stable endpoint can be predicted. Discussion In this work we showed the theoretical possibilities for the evolution of metabolism in the chemostat; these are only single-EFM strategies, with a possibility for coexistence. We obtained a better understanding of how evolutionary pressures affect metabolic strategies and the concentrations of metabolic enzymes. We took both the benefits and the costs of protein expression into account and expressed the cellular growth rate in terms of the rate with which metabolism makes new biomass per unit biomass. In this manner, we could couple the extracellular dynamics of nutrients and inhibitors in the chemostat to the metabolic state and strategy of a growing cell. We found that selection acts in chemostat cultures in a similar fashion as it does under constant conditions 29: the optimal metabolic strategy is an EFM that maximizes the growth rate at the prevailing conditions. However, the difference is that in the chemostat a coexistence of 'pure' EFM strategies can evolve. This result is unique to environments where the strains influence their environment, and would not hold if the environment were constant (e.g. a microfluidics setup with low glucose concentrations where the products are directly washed away), because the negative feedback through the environment is essential for the coexistence. We derived theory that predicts the conditions for the optimal concentrations of enzymes, nutrients and inhibitors in the chemostat. We illustrated how species diversification can occur in a homogeneous and apparently (but not quite) constant environment, using a coarse-grained self-replicator model of S. cerevisiae. The fact that the fermentation product inhibits the fermenting strain effectively creates a new niche, as it allows the fermenter to grow as fast as the respirer, leading to coexistence. An alternative mechanism for niche creation could be a cross-feeding intermediate, which we did not consider in our modeling efforts for the sake of simplicity and clarity. In our evolutionary simulation an equilibrium was reached; evolutionary cycles were not observed. However, we cannot exclude that such cycles are possible. 
We show that the Monod constant - quantifying the affinity of a microorganism for the limiting substrate 37 - does not necessarily have to decrease during prolonged cultivation in the chemostat. The Monod constant typically depends on the expression of all proteins and is therefore not, as is often suggested, necessarily equal to the half-saturation constant of the transporter of the limiting substrate 12,28. Our results indicate that in the chemostat environment the coexistence of two or more different metabolic strategies can be an evolutionarily stable state. Classically, in game theory, a population of individuals employing a mixed strategy is equivalent to a coexistence of two populations each employing a pure strategy 38. A mixed metabolic strategy, however, is not equivalent to a coexistence in our case, because a mixed strategy is not an EFM. The difference arises from the fact that the pure EFMs occur in different organisms and therefore operate at different intracellular concentrations in the respiring and fermenting cells. For instance, a fermenting cell benefits from a high intracellular glucose concentration, because the affinity of fermentation for intracellular glucose is low. The respiratory metabolic mode has a high affinity for intracellular glucose, and respiring cells benefit from a low level of intracellular pyruvate, because it decreases the product inhibition on glycolysis. A cell employing a mixed strategy must compromise between these two, and as a consequence is less fit than the pure strategies. Even though mixed strategies are not optimal, the different pure strategies do not necessarily have to be genetically hardwired. The subpopulations can also arise from phenotypic plasticity, as long as each subpopulation exhibits a pure strategy. Coexistence of different phenotypes in the chemostat has often been observed 39-44, and several theoretical explanations have been suggested. Phenomenological models of microbial growth show multiple mechanisms that can lead to stable coexistence: cross-feeding 45; negative frequency-dependent selection through inhibition, either by a waste product 46 or by an antibiotic substance 47; or a combination of a rate-affinity and a rate-yield trade-off 36,48. These models do not take metabolism and protein costs explicitly into account and therefore have to postulate the fitness effects leading to coexistence. Pfeiffer and Bonhoeffer showed that under particular conditions coexistence of cross-feeders and (partial or complete) glucose degraders can be an evolutionarily stable state, due to inhibition by - and feeding on - a glycolytic 'waste' product 49. While they did take enzyme costs into account, they simply postulated them to be present and did not derive them. Furthermore, they required particular assumptions about the toxicity of metabolic intermediates, and modeled growth rate indirectly. Our analysis adds to these works in that we identify 'pure' metabolic strategies (i.e. EFMs) as the only evolutionarily stable strategies, we derive the effects of enzyme costs directly from a model that integrates metabolism and growth, and we show explicitly that coexistence can evolve from a single ancestor, regardless of its starting phenotype. In our model ethanol cannot be consumed. This can, however, be added to our modelling framework. This would open up the possibility of coexistence through cross-feeding, for which the conditions can be determined in a similar fashion as done in this article. 
A survey of this setting was done by Pfeiffer et al. 49, albeit not in a fully mechanistic model. The same analysis with the framework proposed in this article is expected to give qualitatively similar results. Several researchers have evolved S. cerevisiae and E. coli 39 in the chemostat (summarized in Gresham & Hong 50). These experiments have been conducted at low dilution rates, possibly for practical reasons, such as medium supply. These dilution rates are below the threshold for coexistence in our analysis, and, indeed, a shift from mixed to fully respiratory metabolism and an increased yield were observed in these evolution experiments 51-53. Prolonged evolution in a chemostat often leads to increased substrate transport capacity, which is associated with an increased limiting-substrate affinity of the cells, and hence their fitness 50. The most straightforward manner for a cell to enhance its transport capacity is, however, through increased expression of the transporter proteins, illustrating the importance for fitness of the adaptation of protein levels to the environment. The stability of species coexistence due to inhibitory compounds has been shown experimentally with a coexistence of S. cerevisiae and E. coli 44 strains. This study also indicated that coexistence can depend on the feed concentration, in agreement with our predictions. In principle those experiments can be repeated with two S. cerevisiae strains that differ in their respiratory ratio. These strains can be natural strains 33,54 or modified strains. For the coexistence to be stable, it is required that, at a certain dilution rate, the fermenting strain grows faster than the respiring strain and that the fermenting strain is inhibited strongly enough by ethanol (to make coexistence possible). When comparing theory with experimental results, it has to be kept in mind that microorganisms are likely not optimized for the chemostat environment, nor can we expect that cells are optimal after prolonged evolutionary experiments. In addition, selective pressures can act on processes other than the allocation of biosynthetic resources. For instance, limitations in membrane area could help explain the evolution of cell size and shape 28, and the catalytic properties of enzymes might also be subject to selection 55,56. An understanding of the intricate interplay of selective pressures on different aspects of cellular functioning, as well as an understanding of which pressures are dominant under particular conditions, first requires a proper understanding of how selection acts on each aspect individually.
7,961.8
2016-07-06T00:00:00.000
[ "Biology" ]
Hydrogenated Gold Clusters from Helium Nanodroplets: Cluster Ionization and Affinities for Protons and Hydrogen Molecules We report the mass spectrometric detection of hydrogenated gold clusters ionized by electron transfer and proton transfer. The cations appear after the pickup of hydrogen molecules and gold atoms by helium nanodroplets (HNDs) near zero K and subsequent exposure to electron impact. We focus on the size distributions of the gold cluster cations and their hydrogen content, the electron energy dependence of the ion yield, patterns of hydrogenated gold cluster cation stability, and the presence of “magic” clusters. Ab initio molecular orbital calculations were performed to provide insight into ionization energies and proton affinities of gold clusters as well as into molecular hydrogen affinities of the ionized and protonated gold cluster cations. Electronic supplementary material The online version of this article (10.1007/s13361-019-02235-1) contains supplementary material, which is available to authorized users. Introduction G old, precious in so many other ways, is at most only moderately effective as a catalyst, at least as a clean bulk metal, when compared to group VIII to X metals including platinum for example, its neighbor on the periodic table. As a hydrogenation catalyst, pure bulk gold has been found to have only a weak affinity for molecular hydrogen (unless dispersed and supported on a metal oxide) [1,2]. Also, there appears to be no direct evidence that molecular hydrogen chemisorbs by dissociation on bulk gold at room temperature and below. But gold behaves differently as very small clusters of atoms [3][4][5][6][7][8]. The chemical nature of small aggregates of gold has been studied extensively in recent decades [9][10][11] and has led to the development of, for example, gold-based catalysts [12,13]. More specifically, the history of studies on gold-hydrogen complexes goes back at least a century [14]. Since then, numerous studies of complexes of gold and hydrogen have been carried out [15][16][17]. Computations have shown that Au 2 and Au 3 bind one and even two molecules of hydrogen, the first with binding energies (D e ) of 0.55 and 0.71 eV, respectively [18]. However, the computations also predict the presence of a substantial energy barrier for the dissociation of adsorbed hydrogen, 1.10 and 0.59 eV, respectively. Other calculations using density functional theory, as well as infrared spectroscopy experiments in solid hydrogen, have characterized AuH, AuH 2 , (H 2 )AuH, and (H 2 )AuH 3 [19,20] and the decomposition of AuH 2 by the release of H 2 [20]. Sugawara et al. studied reactions of small gold cluster cations Au n + (n = 1-12) with molecular hydrogen in an FT-ICR mass spectrometer and did not observe any reaction products [21]. However, mixed cluster ions of the form Au n H x + (n < 8) are efficiently formed via laser ablation of a gold rod in an atmosphere of a hydrogen (5.3%)/ helium mixture. Pronounced intensity anomalies of these cations as a function of the number of attached hydrogen atoms, x, have been reported [21]. Here, we apply a very low temperature technique with which we can encourage both Au atoms to cluster and molecular hydrogen to adsorb on these clusters within a superfluid helium environment provided by helium droplets [22][23][24][25][26]. A beam of nanodroplets of helium is seeded with molecules of hydrogen and atoms of gold and these are allowed to interact before electron impact ionization of the droplets. 
In this way, clusters of gold and hydrogen are allowed to form and are then exposed to electron and proton transfer reactions that produce positive ions that ultimately are detected mass spectrometrically. The mass spectra provide the stoichiometry of the hydrogenated gold cluster cations as a function of cluster size and, indirectly, insight into the precursor neutral hydrogenated gold clusters. Furthermore, with molecular orbital calculations, we explore the energetics of gold clusters losing electrons or gaining protons as well as the structures and stabilities of the hydrogenated gold cluster cations that are observed to "magically" predominate in the mass spectra. Experimental The experimental apparatus is described in detail elsewhere [22,[27][28][29], but an overview of the processes involved can be found in Figure 1. He nanodroplets were produced via supersonic expansion of pre-cooled gaseous He (Messer, 99.9999% purity) under a pressure of 2.25 MPa through a 5-μm diameter nozzle cooled to 9.55 K. The mean size of the produced droplets is estimated to be 10 6 He atoms [30,31] and their velocity is approximately 260 m/s. The helium beam passed through a 0.8-mm diameter skimmer and entered a pickup region where hydrogen (Messer Austria GmbH, purity 99.999%) was introduced via a needle valve. Gold vapor was produced from solid gold heated with 118 W, which gives a temperature of at least 950°C, in an oven similar to the one reported by Feng et al. [26] that is located another 115 mm downstream. The vapor pressures in the two pickup cells are on the order of 10 −6 mbar. The doped droplets underwent ionization in a Nier-type ion source with electron kinetic energies of 85 eV for positive ion formation. The dopants were ionized through different processes depending on the polarity [32,33]. The ionized complexes were then driven through a set of Einzel lenses into the extraction region of a commercial reflectron time-of-flight mass spectrometer (Tofwerk AG, model HTOF), where spectra of the signal intensity versus mass per charge were obtained. The spectra were evaluated in the custom software IsotopeFit, with which overlaps were deconvoluted, background signals were subtracted, and mass peaks were fitted [34]. Theory We have investigated structures and properties of pure and hydrogenated gold clusters with ab initio calculations using second-order Møller-Plesset (MP2) perturbation theory. In order to find the most energetically preferred structure for each cluster size (each unique combination of Au and H atoms), we optimized the structures for many different starting geometries at the MP2/def2-SVP level. We then further optimized the most stable candidates at the higher MP2/def2-TZVP level. Core potentials (as defined in the respective basis sets) were utilized for the Au atoms to include relativistic corrections and to speed up the calculations. A vibrational frequency analysis was performed to ensure that proper minima were achieved in the structure optimizations and to calculate the zero-point energy corrections that are included in the presented energy values. The calculations were performed using the Gaussian 16 software [35]. Molecular ions containing up to 7 Au atoms were studied and, depending on the number of Au atoms, up to 11 H atoms. As the system size increases, so too does the number and complexity of stable isomers, as well as the computational resources required. This was the main limitation to the sizes of systems that we have studied.
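The energy bookkeeping behind this workflow is simple once the electronic energies and zero-point energies (ZPEs) have been extracted from the quantum-chemistry output. The sketch below assumes that extraction has already been done; the isomer labels and hartree values are invented placeholders, not results from the paper.

```python
HARTREE_TO_EV = 27.211386  # conversion factor for reporting relative energies in eV

# Hypothetical candidate isomers for one Au/H composition, each with its
# electronic energy and zero-point energy (both in hartree) taken from a
# completed optimization plus frequency calculation.
isomers = {
    "planar, bridging H":  {"e_elec": -408.41231, "zpe": 0.03102},
    "planar, terminal H2": {"e_elec": -408.40988, "zpe": 0.03075},
    "compact 3D":          {"e_elec": -408.40512, "zpe": 0.03120},
}

# Zero-point-corrected energy E0 = E_elec + ZPE for each candidate
for data in isomers.values():
    data["e0"] = data["e_elec"] + data["zpe"]

ranked = sorted(isomers.items(), key=lambda kv: kv[1]["e0"])
e0_min = ranked[0][1]["e0"]
print("lowest-E0 isomer:", ranked[0][0])
for name, data in ranked:
    print(f"  {name:20s} +{(data['e0'] - e0_min) * HARTREE_TO_EV:.3f} eV")
```

Only the lowest-E0 candidates from the cheaper level would then be re-optimized with the larger basis set, mirroring the two-step procedure described above.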
We have investigated a wide range of possible structures for our mixed clusters, including the structures for pure gold clusters from Schooss et al. [27] as initial guesses for the optimizations, and for certain cluster sizes, the preferred structure of the gold atoms changed depending on the number of hydrogens added. A comparison of the pure gold cluster structures and the structures for the magic numbers can be found in the SI. Results and Discussion Observation of Hydrogenated Gold Cluster Cations Figure 2 shows the intensity distribution obtained mass spectrometrically for the hydrogenated gold cluster cations Au n H x + seen with n up to 15. A clear oscillation in intensity is seen, with odd-numbered clusters generally being more intense than even-numbered clusters. With Au n + carrying the positive charge, we note that the odd-numbered clusters are even-electron systems while the even-numbered clusters are odd-electron systems. [Figure 2 caption: The coarse structure at the lower mass range is composed mainly of pure He clusters. He droplets with a size of about 10 6 atoms were generated with a nozzle temperature of 9.55 K and a stagnation pressure of 2.25 MPa. They were doped first with H 2 molecules and then with Au atoms. Pickup pressure was 1.18 × 10 −3 Pa for H 2 . The metal oven was heated with 118 W. The electron energy for the ionization process was 85 eV. The inset is a close-up on part of the Au 3 + cluster series where the added hydrogens can easily be distinguished. We also see some residual water molecules binding to the gold clusters.] Ion Yields Au doping of HNDs leads to the formation of pure clusters of Au atoms, Au n , and when hydrogen is present as well, the formation of hydrogenated clusters Au n (H 2 ) m . The strongly bonded H 2 molecules are not expected to dissociate in the presence of the gold clusters at the low temperature of the HNDs; the reaction of molecular hydrogen with gold atoms to produce AuH is known to be endothermic by more than 1 eV [19]. When the HNDs are exposed to electron impact ionization and He + ions are formed, both Au n and Au n (H 2 ) m clusters can become ionized by electron transfer to these He + ions (IE[He] = 24.59 eV [36]). We have previously reported the formation of positive clusters of gold atoms under similar conditions in the absence of molecular hydrogen [37]. The excess energy of the electron transfer heats up the charged clusters and can promote their fragmentation. Some cluster ions with a longer lifetime will demonstrate enhanced stability as a consequence of more efficient quenching by the ultra-cold helium matrix. Other precursors of cluster ionization include metastable helium atoms He* and He* − ; ionization can also proceed by proton transfer from He n H x + or H x + , derived from He + reactions with H 2 [22], leading to Au n H(H 2 ) m + cations. Figure 3 presents the influence of the energy of the electrons impacting the HNDs, doped with Au and H 2 , on the relative ion yield of protonated cluster ions Au n H + with n = 3, 6, 9, and 12 and of hydrogenated gold cluster ions Au n H 4 + with n = 3, 6, 9, and 12. Both populations exhibit an onset around 20 eV, near the 19.8 eV required to form He* in its lowest lying excited state, and there seem to be no remarkable differences in the shapes of the ion profiles. Interestingly, this differs somewhat from the behavior of the ion efficiency curves for pure gold cluster cations, where the position of the maxima shifts towards lower electron energies with increasing cluster size [29].
The reason for this difference is not entirely clear, but could be due to a difference in mean droplet size or to the presence of H 2 in the droplets. Computed Ionization Energies and Proton Affinities of Gold Clusters Because of the excess energy available in both the electron and proton transfer reactions, Au n H(H 2 ) m + formation can be accompanied by the dissociation of the cluster ions through H 2 elimination with, as we shall see, the ultimate preferred formation of "magic" hydrogenated clusters of special stability. The excess energy in the ionization of Au n (H 2 ) m can be considerable because of the high recombination energy of, for example, He + (24.59 eV). Similarly, the very low proton binding energy of the proton donors, e.g., HeH + or H 3 + with PA(He) = 1.82 eV [38] and PA(H 2 ) = 4.39 eV [39], leads to high excess energies in the proton transfer to the gold clusters in secondary HeH + / H 3 + + Au n → HeH/H 3 + Au n H + reactions that may drive the annealing of the final mixed cluster products. Ionization energies of small pure gold clusters and their variation with size have been reported previously in the literature, but little appears to be known about the proton affinities and their variation with size. The variation of the ionization energy of Au n for n = 1-22, as known in 2003, has been graphed by Sugawara et al. [21]. A striking even/odd oscillation with cluster size n, with even-n clusters relatively more predominant, is clearly evident, and the authors remark on how this oscillation matches that observed in the binding energy D e of Au n + -Au. The results of our calculations of IE are summarized in Table 1 and plotted in Figure 4. There is agreement as regards both magnitudes and even/odd oscillations in IE. Also included in Table 1 and Figure 4 are our computed values for the proton affinities of the gold clusters. Of note is the sharp increase in PA for clusters with n > 1. Figure 5 provides panels that show the distribution in hydrogenation observed in our experiments for Au n H x + cluster sizes from n = 1 to 8 and x from 1 up to 20. These distributions exhibit oscillations and the presence of intense "magic" numbers that shift to higher hydrogenation for clusters with up to 5 gold atoms. Oscillations appear to be more pronounced for odd-numbered gold clusters and at lower degrees of hydrogenation. They are still present for clusters with 6 to 8 gold atoms, but strong magic numbers are less pronounced in relative intensity. Another striking feature is the shift from the very pronounced magic numbers that can be seen for n ≤ 5 to the richer intensity distributions for n ≥ 6. For example, there is a sharp drop in ion yield after Au 6 H 9 + , but also rather high intensities of ions with fewer H atoms. This could be an indication of a transition from 2D to 3D structures as the cluster sizes increase, leading to more possible isomers being available, each contributing with their own different magic combinations of Au and H atoms. Observed Profiles of Hydrogenation for Individual Gold Cluster Sizes We note the following hydrogenation features for specific gold cluster ions: n = 1: Odd-numbered AuH x + are more intense than their even-numbered neighbors, with the notable exception of AuH 4 + , which clearly exhibits special stability. n = 2: Au 2 H 5 + clearly predominates, and Au 2 H 6 + also has a relatively high intensity compared to all other, less remarkable Au 2 H x + cluster sizes.
n = 3: The early even-numbered Au 3 H x + ions are observed to increase in intensity from x = 0 to 2 to 4 to 6. Also for the odd-numbered Au 3 H x + ions, an increase can be observed from x = 1 to 3 to 5 to 7, with odd x Au 3 H x + ions being less intense than the preceding even x ions. The Au 3 H 6 + ion is the most intense overall. n = 4: This time, the early Au 4 H x + are observed to increase in intensity from x = 0 to 7, with a local minimum at x = 2 and even-numbered ions being less intense than the preceding odd x ions. The Au 4 H 7 + ion is the most intense overall. Au 4 H x + cluster sizes with x = 9 to 20 are of low intensity but exhibit a clear odd-even oscillation, with minima at even numbers of H atoms x. n = 5: As for n = 3, the early even-numbered Au 5 H x + ions are observed to increase in intensity from x = 2 to 4 to 6 to 8, with odd x Au 5 H x + ions being less intense than neighboring even x ions. Au 5 H 8 + and Au 5 H 7 + are the most intense even and odd x ions, respectively, with the former being the most intense overall. n = 6: After the initial appearance of the protonated cluster Au 6 H + , oscillations are seen with local maxima of the ion yield of Au 6 H x + at x = 5 and 8, with Au 6 H 8 + being slightly more predominant. A sharp drop-off in ion intensity is seen after Au 6 H 9 + . Curiously, a "rogue" cluster ion Au 6 H 17 + shows a small maximum beyond Au 6 H 11 + . n = 7: Four strong oscillations are seen early on for the even x cluster ions Au 7 H x + with x = 2, 4, 6, and 8, with x = 6 predominating. The ion yields for cluster ions Au 7 H x + with x > 9 exhibit no odd-even oscillation. n = 8: A strong protonated cluster peak Au 8 H + is followed by strong adduct peaks with one and two H 2 molecules. Note from Table 1 and Figure 4 that the calculations indicate that Au 8 has the highest proton affinity (7.76 eV) of the systems studied here. Figure 5 also includes the FT-ICR data of Sugawara et al. [21] obtained in experiments with the laser ablation of gold in a H 2 (5.3%)/He mixture (gray bars). Hydride gold cluster distributions are observed that are sometimes similar to but more often distinctly different from ours. The extent of hydrogenation is generally seen to be much smaller, but the presence of magic number intensities for Au 2 H 5 + and Au 3 H 6 + coincides with ours. Magic numbers in the FT-ICR spectra are also observed otherwise, but generally shifted to lower hydrogenation. These differences may well be due to the higher temperature of the FT-ICR experiments, the direct formation of Au n + cluster ions by laser ablation, and a significant presence of H atoms in the Au n + cluster ion formation region. [Figure 6 caption: All the structures shown are planar with regard to the positions of the Au atoms, except Au 7 H 6 + , which agrees with the nonplanar Au 7 + structure. Comparisons between the structures in Figure 6 and the bare Au n + structures can be found in the SI.] Computed Structures of Hydrogenated Gold Cluster Cations For each combination of Au and H, several structures were optimized to find the one with the lowest potential energy. In the cases of the "magical" structures, alternative isomers found have at least 0.1 eV higher energy than the proposed minima. The calculations suggest that H 2 molecules bond directly to Au atoms of the gold cluster "skeleton" and that the extra H atom in the even-numbered gold clusters (n = 2, 4, and 6) simply bridges two Au atoms.
Computed Energies of Hydrogenated Gold Cluster Cations The odd-even oscillations seen in the data shown in Figure 5 correspond to cluster cations with added intact H 2 molecules in the presence or absence of an H atom. In our calculations, we explored the H 2 affinities (ΔE 0 ) of the gold cluster cations for molecular hydrogen. The results are summarized in Table 2 and graphed in Figure 7. Hydrogenation with H 2 molecules was seen to be limited with larger cluster cations exhibiting a greater capacity for hydrogenation but weaker bonding of individual hydrogen molecules. Up to two H 2 molecules bind strongly to Au + and Au 2 H + with energies of 0.8 to 1.1 eV. Au 3 + and Au 4 H + have a significant affinity for up to three molecules of H 2 , the first two with about 0.7 eV and the third somewhat lower still by 0.2 and 0.3 eV, respectively. The H 2 affinities of Au 5 + , Au 6 H + , and Au 7 + are the lowest, below 0.68 eV, but the trends suggest that the Bmagic^numbers seen in the experiments correspond to gold clusters that are saturated with a first layer of relatively strongly bound H 2 units. Conclusions Our experiments have shown that H 2 molecules readily attach to gold clusters with up to at least 8 gold atoms in a He environment near zero K. These hydrogenated clusters are readily ionized in the presence of electron acceptors such as He + or proton donors such as HeH + and some H 2 elimination may ensue due to the high excess energy of these processes. There was no evidence for the dissociation of adsorbed H 2 molecules; there was no indication of H elimination that might result from dissociation. The hydrogenated gold cluster ion distributions exhibit Bmagic^features that appear to reflect special stabilities for certain numbers of H 2 adsorbed molecules. Our calculations have indicated that the number, including the Bmagic^number of H 2 adsorbed molecules, is determined by the structure of the underlying (most often flat) Au cluster skeleton and the number of Au atoms exposed on the periphery. The computed H 2 affinities of the cation clusters are as high as 1.1 eV, but weaken with increasing cluster size. H atoms appear to bridge two Au atoms in hydrogenated clusters with an even number of Au atoms.
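The excess energies invoked above, and their relation to the computed H 2 affinities, reduce to simple differences once the relevant ionization energies and proton affinities are known. The sketch below uses the values quoted in the text (the recombination energy of He+, PA of He and H2, and the computed PA of Au8) together with a hypothetical gold-cluster ionization energy of 8 eV; it is meant only to show the order of magnitude of the energy released in each channel.

```python
RE_HE_PLUS = 24.59   # eV, recombination energy of He+ (the ionization energy of He)
PA_HE      = 1.82    # eV, proton affinity of He (proton binding in HeH+)
PA_H2      = 4.39    # eV, proton affinity of H2 (proton binding in H3+)
PA_AU8     = 7.76    # eV, computed proton affinity of Au8 quoted in the text

def excess_electron_transfer(ie_cluster):
    """Energy released when a doped cluster hands an electron to He+."""
    return RE_HE_PLUS - ie_cluster

def excess_proton_transfer(pa_acceptor, pa_donor):
    """Energy released when a proton hops from HeH+ or H3+ onto the gold cluster."""
    return pa_acceptor - pa_donor

# Hypothetical gold-cluster ionization energy of 8 eV, roughly the right range
print(f"electron transfer from Au_n(H2)m to He+: ~{excess_electron_transfer(8.0):.1f} eV excess")
print(f"proton transfer from H3+ to Au8:         ~{excess_proton_transfer(PA_AU8, PA_H2):.1f} eV excess")
print(f"proton transfer from HeH+ to Au8:        ~{excess_proton_transfer(PA_AU8, PA_HE):.1f} eV excess")
```

Either channel deposits far more energy than the computed H 2 binding energies of roughly 0.2 to 1.1 eV, which is consistent with the picture above of sequential H 2 loss down to the saturated, strongly bound "magic" compositions.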
4,879
2019-06-05T00:00:00.000
[ "Chemistry", "Physics" ]
Current Applications of Genetic Risk Scores to Cardiovascular Outcomes and Subclinical Phenotypes Genetic risk scores are a useful tool for examining the cumulative predictive ability of genetic variation on cardiovascular disease. Important considerations for creating genetic risk scores include the choice of genetic variants, weighting, and comparability across ethnicities. Genetic risk scores that use information from genome-wide meta-analyses can successfully predict cardiovascular outcomes and subclinical phenotypes, yet there is limited clinical utility of these scores beyond traditional cardiovascular risk factors in many populations. Novel uses of genetic risk scores include evaluating the genetic contribution of specific intermediate traits or risk factors to cardiovascular disease, risk prediction in high-risk populations, gene-by-environment interaction studies, and Mendelian randomization studies. Though questions remain about the ultimate clinical utility of the genetic risk score, further investigation in high-risk populations and new ways to combine genetic risk scores with traditional risk factors may prove to be fruitful. Introduction Multi-cohort genome-wide association studies (GWAS) have now identified hundreds of genetic variants that are credibly associated with cardiovascular outcomes, subclinical cardiovascular phenotypes, and risk factors for cardiovascular disease (CVD). However, the individual genetic variants, or single nucleotide polymorphisms (SNPs), that have been identified typically explain a very small fraction of the variation in complex traits and thus have limited predictive capacity for disease risk [1]. Aggregating information about multiple SNPs, each with small effects, into a single genetic risk score (GRS) has become a useful tool for examining the cumulative predictive ability of genetic variation at known loci on cardiovascular disease outcomes and related phenotypes [2]. Here, we outline key aspects of creating GRSs, discuss their quantification and evaluation, and provide a brief summary of the predictive ability of GRSs for cardiovascular outcomes and subclinical CVD phenotypes. Emerging uses of GRSs will then be discussed, including (1) prediction in clinical and high-risk populations, (2) GRS-by-environment interaction studies, and (3) Mendelian randomization studies. Cardiovascular outcomes discussed include coronary heart/artery disease (CHD/CAD), myocardial infarction (MI), ischemic stroke (IS), hypertension (HTN), and a composite CVD phenotype that includes both heart disease and stroke (also conceptualized as Btotal cardiovascular diseases^ [3]). Subclinical This article is part of the Topical Collection on Cardiovascular Disease phenotypes include artery calcification and intimal-medial thickness (IMT). Creating Genetic Risk Scores Fundamentally, the creation of a GRS involves summarizing information across multiple SNPs. The most common method sums the number of risk-conferring alleles that an individual has (0, 1, or 2) across all loci. A statistically analogous coding scheme is to assign the heterozygous state (i.e., Aa) a value of 0, the non-risk homozygous state −1, and the risk homozygous state 1. If an individual is missing a small proportion of genotype data needed to construct the GRS (such as 1 or 2 SNPs), imputation to the most common genotype category is commonly used. 
An alternative to imputation is to only include SNPs that have complete genotype data in the GRS, followed by GRS rescaling to be consistent with GRSs created using all SNPs. SNP Selection To create a GRS, one must first select the genetic variants to be included in the risk score. Although earlier studies creating GRSs for cardiovascular traits included SNPs from biologically plausible candidate gene association studies [4,5], most current GRSs are constructed using SNPs found to be associated with traits through GWAS. The standard in the field has been to use SNPs that reached genome-wide significance (p<5×10 −8 ) in large, consortium-based, multi-study GWAS meta-analyses. Typically, these consortium-based meta-analyses also include a replication phase, which further enhances the robustness of the findings. While selecting SNPs from large meta-analyses is considered the gold standard, less preferable strategies for SNP selection may be used if metaanalyses have not yet been conducted for the trait of interest in populations that are demographically and/or ethnically similar to the population under study (for example, SNPs may be selected from biologically plausible candidate genes or from a single-study GWAS). The vast majority of GRSs in the literature include only common variants (SNPs with minor allele frequency (MAF)>5 %), because meta-analyses tend not to have sufficient power to detect effects from rarer variants. A disadvantage of including only the most highly significant and replicated SNPs is that there may be many other SNPs with true effects that do not reach the stringent genome-wide significance levels. Recent work evaluating the relationship between coronary artery calcification (CAC) and GRSs constructed from CHD/MI-associated SNPs at various meta-analysis p value thresholds showed that trait variation for CAC is maximally explained by including thousands of SNPs that are at least marginally associated with CAD/MI (p<0.2) in the GRS [6•]. On the other hand, research on type 2 diabetes suggests that GRSs constructed from increasing numbers of SNPs may not substantively improve risk prediction [7]. With these limited, sometimes conflicting results, more work is needed to verify whether inclusion of additional marginally significant SNPs in GRSs is the best approach for cardiovascular traits. It has also been suggested that relying solely on genomewide significant SNPs from the largest, most current metaanalysis may not be the best approach to SNP selection. New algorithms that integrate information from multiple sources into GRS SNP selection are beginning to appear. For example, Belsky and colleagues implemented a novel SNP selection method using public-access resources, including GWAS results databases and web-based GWAS analysis tools, to select SNPs from 16 published GWAS for an obesity GRS [8••]. This type of algorithm may represent a systematic and replicable method for integrating results from a wider variety of sources. Weights When equal weights are assigned to each genetic variant, the score is Bunweighted,^and its construction is based on the assumption that each risk allele confers identical risk. However, for most complex traits, effect sizes across identified SNPs vary (see, for example, [9]). Thus, GRSs are often constructed by weighting SNPs by their GWAS meta-analysis effect sizes, thus giving more weight to variants with stronger effects. 
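As a concrete illustration of the coding, imputation, rescaling, and weighting choices described above, the sketch below computes an unweighted score, a weighted score, and a rescaled score restricted to non-missing SNPs from a tiny genotype matrix of risk-allele counts. All SNP weights and genotypes are invented; real weights would come from the meta-analysis effect sizes discussed in the text.

```python
import numpy as np

# Rows = individuals, columns = SNPs; entries are counts of the risk allele
# (0, 1, or 2) and np.nan marks a missing genotype. All values are invented.
genotypes = np.array([
    [0, 1, 2, 1],
    [1, 1, 0, np.nan],
    [2, 0, 1, 2],
], dtype=float)
weights = np.array([0.12, 0.08, 0.21, 0.05])   # e.g. per-allele effect sizes from a meta-analysis

# Option 1: impute sporadically missing genotypes to the most common genotype per SNP
imputed = genotypes.copy()
for j in range(genotypes.shape[1]):
    column = genotypes[:, j]
    observed = column[~np.isnan(column)].astype(int)
    modal_genotype = np.bincount(observed, minlength=3).argmax()
    imputed[np.isnan(column), j] = modal_genotype

grs_unweighted = imputed.sum(axis=1)   # simple count of risk alleles
grs_weighted = imputed @ weights       # effect-size-weighted score

# Option 2: use only the non-missing SNPs, then rescale to the full set of weights
observed_mask = ~np.isnan(genotypes)
grs_rescaled = (np.nansum(genotypes * weights, axis=1)
                / (observed_mask * weights).sum(axis=1) * weights.sum())

print("unweighted:", grs_unweighted)
print("weighted:  ", grs_weighted)
print("rescaled:  ", grs_rescaled)
```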
Weighted scores may increase statistical power compared to unweighted scores, provided that the weights are accurately determined [10••]. Weights are ideally calculated from consortium-based meta-analysis effect sizes, which are more precise due to large sample sizes. This weighting method is commonly used when the target population (the population in which the GRS is going to be evaluated) has a demographic and ethnic composition similar to that of the meta-analysis population (the population used to estimate the effect sizes). See below for a more detailed discussion of the importance of ethnicity in GRS creation. An unweighted score is often the best option if there are no stable effect estimates available because (1) no GWAS meta-analyses have yet been performed on the trait of interest (and thus SNPs are selected from candidate gene studies or small, un-replicated GWAS), (2) existing meta-analyses comprise studies with different ethnicities or demographic profiles than the population to be studied, or (3) SNPs identified using multiple traits on different measurement scales are to be combined into a single GRS (for example, a GRS that comprises SNPs associated with multiple "intermediate traits," described in greater detail below). Population-Specific Considerations A potential disadvantage of selecting SNPs based on published meta-analysis results is that many meta-analyses for complex traits have been conducted solely in European-ancestry (EA) populations. This practice is problematic because the SNPs most significantly associated with a trait often differ across ethnicities for a variety of reasons, including (1) ethnicity-specific genetic variation, (2) allele frequency differences across ethnicities, and (3) differing patterns of linkage disequilibrium (LD) resulting in ethnicity-specific "tag SNPs" that are associated with the causal variant(s) [11]. Trans-ethnic meta-analyses for cardiovascular traits are beginning to emerge [12,13] and may offer several advantages over single-ethnicity analyses [11]. However, until trans-ethnic analyses become commonplace, other approaches for GRS SNP selection may be required for GRSs constructed for use in non-EA populations. One strategy is to evaluate SNPs from EA meta-analyses for association within the target population and retain only those that have at least a marginal effect. SNPs selected in this manner may also be combined with the most highly significant SNPs from smaller ethnicity-specific GWAS meta-analyses (see, for example, [14,15]). Population-specific factors such as age, sex, and demographics may also be important considerations for SNP selection and weighting. While there is an awareness that confounding and effect modification by population-specific factors may influence both estimates and inferences, the field of genetics has been primarily concerned with race/ethnicity because of the way in which it fundamentally changes the variants that are included and identified in an analysis. Aside from ethnicity, GWASs often include populations with a large range of demographics in order to achieve the sample sizes necessary to obtain enough power to accurately identify SNPs. Nevertheless, glaring demographic differences should be considered when creating GRSs. Estimating and Evaluating Genetic Risk Score Effects The effect sizes for the associations between GRSs and cardiovascular phenotypes are typically reported as beta estimates, odds ratios (ORs), or hazard ratios (HRs), as appropriate for the type of outcome.
Effects may be reported per risk allele (corresponding to a one-allele increase in GRS), per GRS standard deviation (SD), or with respect to a particular comparison such as the contrast between the highest and lowest GRS quartiles. Differing methods of reporting effects often makes comparison across GRS studies difficult. In addition, per allele effect sizes tend to decrease as newly discovered variants are incorporated into GRSs. This is due to smaller effect sizes of the newly discovered variants compared to those discovered in the first wave of GWAS meta-analysis, as has been demonstrated for type 2 diabetes [9]. Thus, reporting effects per GRS SD is more effective for crossstudy and cross-trait comparisons. The contribution of GRSs to quantitative cardiovascular traits, such as CAC, is typically reported as the percent of variation in the trait explained by the GRS. This may be assessed before and after adjustment for traditional cardiovascular risk factors (e.g., body mass index (BMI), lipids, HTN, diabetes, and others). For clinical outcomes, such as CAD or HTN, the predictive capacity of the GRS is most commonly evaluated using metrics for risk discrimination and risk reclassification (reviewed in [16]). Prediction models are constructed before and after including traditional cardiovascular risk factors and/or family history of disease. If the GRS significantly improves prediction after inclusion of traditional risk factors, it demonstrates the potential for clinical utility through more accurate disease risk prediction for patient populations. The area under the receiver operating curve (AUC, or cstatistic) is commonly used to assess discrimination between people with and without disease [17][18][19]. The c-index is the analogous measure for survival data. Higher AUCs indicate more accurate discrimination, and model improvement is assessed by change in AUC across models. However, many have argued that AUC-based methods may not be optimal for predicting risk [19]. The net reclassification improvement index (NRI) is a popular choice for evaluating risk reclassification [16, 20, 21, 22••]. This statistic evaluates a prediction model's ability to correctly reassign individuals into disease classifications when compared to a different model. Positive values of NRI correspond to prediction improvement. Clinical NRI corresponds to correct risk reclassification of individuals at intermediate risk for disease and may be more clinically relevant than the traditional NRI [23]. Other methods for assessing the potential for clinical utility, such as the integrated discrimination improvement (IDI) [20], are also available. Applying GRSs to Cardiovascular Traits A variety of approaches have been used to examine the relationship between GRSs and cardiovascular traits. Below, we discuss the current literature on (1) CAD/CHD-associated SNPs predicting CAD/CHD, (2) blood pressure (BP)-associated SNPs predicting BP and HTN, both within and across ethnic groups, (3) intermediate trait-GRSs predicting cardiovascular outcomes, (4) GRSs predicting composite CVD, and (5) GRSs associated with subclinical measures of heart disease. Representative examples of studies that fall into each of these categories are provided for dichotomous cardiovascular outcomes and quantitative cardiovascular traits in Tables 1 and 2, respectively. 
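Before turning to specific traits, the risk-reclassification idea above can be made concrete. The sketch below implements the standard category-based NRI for two sets of predicted risks, a baseline model and one with a GRS added; the risk-category cut points and all predicted probabilities are invented for illustration.

```python
import numpy as np

def categorical_nri(risk_old, risk_new, event, cuts=(0.05, 0.20)):
    """Category-based net reclassification improvement between two risk models.

    risk_old, risk_new : predicted probabilities from the baseline model and
                         from the model with the GRS added
    event              : 1 if the individual developed disease, else 0
    cuts               : hypothetical low / intermediate / high risk boundaries
    """
    risk_old, risk_new, event = map(np.asarray, (risk_old, risk_new, event))
    cat_old = np.digitize(risk_old, cuts)
    cat_new = np.digitize(risk_new, cuts)
    up, down = cat_new > cat_old, cat_new < cat_old

    cases, controls = event == 1, event == 0
    nri_events = up[cases].mean() - down[cases].mean()          # cases should move up
    nri_nonevents = down[controls].mean() - up[controls].mean() # controls should move down
    return nri_events + nri_nonevents

# Toy example: adding the GRS nudges a few predicted risks across category cuts
baseline = [0.04, 0.10, 0.22, 0.18, 0.03, 0.30]
with_grs = [0.06, 0.08, 0.28, 0.21, 0.02, 0.35]
events   = [0,    0,    1,    1,    0,    1]
print(categorical_nri(baseline, with_grs, events))
```

A positive value indicates net improvement in reclassification, exactly as described above; the "clinical NRI" variant would restrict the calculation to individuals starting in the intermediate-risk category.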
CAD/CHD-Associated SNPs Predicting CAD/CHD The original concept of using GRSs to predict cardiovascular disease focused on using SNPs associated with the trait of interest to predict that same trait (i.e., CAD-GRS to predict CAD). Several studies have used SNPs found to be associated with CAD in consortium-based meta-analyses to construct GRSs for evaluation with respect to CAD/CHD in EA populations. For example, a weighted 24-SNP GRS constructed using CAD-associated SNPs from four meta-analyses was significantly associated with incident CHD in a Finnish cohort of >24,000 individuals (HR=1.27 per GRS SD) [24]. A study of >10,000 Swedes showed similar results for a 46-SNP GRS constructed using CAD-associated SNPs from the largest CAD meta-analysis to date [12] (HR = 1.54 for incident CHD comparing first versus fourth GRS quartile) [25]. In both studies, as is often the case when using GRSs to predict cardiovascular disease outcomes, models including the GRS modestly but significantly improved risk reclassification beyond traditional risk factors, but discrimination was not improved. BP-Associated SNPs Predicting BP and HTN The strength of association between GRSs constructed from BP-associated SNPs and HTN or BP has been examined in EA populations. In 2009, the International Consortium for Blood Pressure Genome-Wide Association Studies consortium (ICBP) conducted a GWAS meta-analysis of HTN and BP phenotypes in >200,000 EA [26]. A GRS was created from 29 SNPs associated with systolic blood pressure (SBP) and/or diastolic blood pressure (DBP) at p<5×10 −9 , weighted by the mean effect size for SBP and DBP. The GRS was evaluated in an independent cohort of 23,294 women and showed an increase of 1.65 and 1.10 mmHg per SD of the GRS for SBP and DBP, respectively, as well as a 23 % increase in the odds of HTN. When this same GRS was evaluated in a longitudinal study of >17,000 Swedes, a 1 SD increase in GRS was significantly associated with an increase of 1.0 and 0.6 mmHg in SBP and DBP, respectively, as well as a 61 % increase in the odds of hypertension at baseline [27]. The proportion of variation explained by the GRS was 1.0, 0.7, and 2.9 % for SBP, DBP, and HTN, beyond traditional risk factors. Changes in SBP (beta=0.03 mmHg), DBP (beta= 0.023 mmHg), and HTN incidence (OR=1.11) were also significantly associated with the GRS. In this study, discrimination for HTN was marginally but not significantly improved by adding the GRS to traditional risk factors. These studies show that while GRSs created for BP and HTN are strongly associated with these traits, clinical utility may be limited. GRSs consisting of BP-associated SNPs have also been evaluated in non-EA ethnicities. A trans-ethnic GWAS metaanalysis with an African-ancestry discovery sample and a multi-ethnic replication sample identified five SNPs credibly associated with blood pressure that were not previously identified through EA meta-analyses [13]. In the African-ancestry discovery sample, a weighted GRS with these five SNPs explained 0.44 and 0.54 % of the variation in SBP and DBP, respectively, after adjustment for age, body mass index, gender, and the top ten genetic principal components (to account for ancestry). A composite score that included the five SNPs along with the 29 ICBP variants [26] explained 0.80 and 1.42 % of the variation in SBP and DBP. 
This illustrates that GRSs constructed from SNPs identified in EA-only metaanalyses are often associated with the same traits in other ethnicities but to a lesser degree than in EA samples. In addition, ethnicity-specific SNP identification often leads to an increase in predictive capacity for GRSs, underscoring the need for GWAS meta-analyses that include multiple ethnic groups. A GWAS meta-analysis with over 80,000 Han Chinese (including discovery and replication samples) identified several SNPs that met genome-wide significance for association with SBP, DBP, and/or HTN [14]. A GRS was constructed that included these SNPs as well as SNPs from previous GWAS meta-analyses that were conducted in EA-only or East Asian-only samples. Prior to inclusion in the GRS, SNPs identified in the EA or East Asian meta-analyses were screened to have at least nominally significant associations with BP in the Han Chinese sample. In a subset of >28,000 subjects, the GRS was significantly associated with HTN (OR=1.66 for the highest versus lowest quintile of GRS) and was also significantly associated with SBP and DBP. This study illustrates a well-powered hybrid approach to GRS SNP selection for use in non-EA ethnic groups. Intermediate Trait-GRSs Predicting Cardiovascular Outcomes In order to gain a better understanding of the relative genetic contribution of specific intermediate pathways or risk factors to cardiovascular traits, GRSs constructed from SNPs associated with intermediate traits (e.g., CAC) may be evaluated for association with cardiovascular outcomes such as CHD, stroke, and CVD. This approach can augment or extend findings from studies using GRSs constructed from trait-specific SNPs. For example, GRSs constructed from BP-associated SNPs (SBP and DBP separately) were associated with incident CHD, IS, and CVD in a large cohort of Finnish subjects (HRs for CHD=1.25 and 1.23; HRs for IS=1.25 and 1.35; HRs for composite CVD=1.23 and 1.26 for GRS SBP and GRS DBP , respectively). This study illustrates that the genetic factors that influence BP also have a significant effect on clinical outcomes [28]. In some cases, constructing GRS based on intermediate traits may be the only option due to the relative lack of studies or replicable significant findings from meta-analyses for the trait itself. For example, large-scale meta-analyses of stroke or IS are just beginning to emerge ( [29,30], but the sample sizes are typically smaller for this cardiovascular trait than other CVD outcomes (such as CHD) or for intermediate traits (such as BP or lipids). Several studies have evaluated the relationship between intermediate trait or risk factor-based GRSs and IS. For example, a GRS constructed from SNPs associated with BP was significantly associated with IS in a Swedish sample of 3,677 stroke cases and 2,415 controls (OR=1.09 per SD increase in GRS) [31]. The addition of the GRS demonstrated weak but significant improvement in risk reclassification. A GRS constructed from atrial fibrillation-associated SNPs was also significantly associated with incident IS in Swedish participants (HR=1.23 comparing top and bottom quintiles of GRS) and modestly but significantly improved risk discrimination and reclassification [32]. A GRS constructed from high-density lipoprotein cholesterol (HDL-C) SNPs, however, was not associated with incident IS in European Americans [33]. Recent studies have now gone further to combine SNPs associated with multiple intermediate traits into a composite risk score. 
For instance, Malik et al. combined SNPs credibly associated with atrial fibrillation, CAD, HTN, and SBP into a single 113-SNP GRS and found that it was significantly associated with IS in both clinic-based case-control and population-based samples (OR=1.06 per GRS SD in casecontrol sample) [34]. Adding the GRS improved prediction of IS beyond a sex-adjusted model in the case-control sample but not in the population-based sample. Another emerging strategy for creating a composite GRS is to combine intermediate trait-associated SNPs with SNPs identified for the trait itself. In a meta-analysis of four population-based EA cohorts, a 324-SNP GRS comprised of SNPs credibly associated with stroke and nine stroke risk factors improved discrimination and risk reclassification for IS beyond a well-validated risk factor prediction model [35•]. Studies have also evaluated whether GRSs constructed from CHD-associated SNPs only, intermediate traitassociated SNPs only, or the combination of both types of SNPs are predictive of CHD [25,36]. A case-control study of individuals in the Netherlands found that the best predictor of CHD was a weighted 29-SNP GRS consisting of CHDassociated SNPs only, with an HR=1.12 per risk allele after adjustment for traditional risk factors [36]. Other GRSs that included intermediate trait SNPs were less strongly associated with CHD and were attenuated after risk factor adjustment. A separate study in Swedes, however, found that the CHDspecific GRS and the GRS constructed from CHD-plus intermediate trait-associated SNPs were similarly associated with CHD (HR=1.5 for first vs. fourth quartile of GRS) after adjustment for traditional risk factors, and that risk reclassification was modestly but significantly improved beyond traditional risk factors for both GRSs, although the AUC was not [25]. Taken together, these studies suggest that adding SNPs from intermediate traits to a trait-specific GRS may not improve prediction of cardiovascular outcomes. GRSs Predicting CVD as a Composite Phenotype To date, there have been few consortium-based GWASs that use a composite CVD phenotype (including both heart disease and stroke) as the outcome measure, although some are beginning to emerge [37•]. Instead, GWAS meta-analyses have focused on specific CVD endpoints (such as CAD/CHD, MI, or stroke), subclinical CVD phenotypes (such as CAC, IMT, and plaque), and intermediate phenotypes (for a review, see [38]). Following this trend, many GRSs evaluated for their prediction of CVD have been constructed using these traitspecific SNPs from consortium-based GWAS meta-analyses. Studies that specifically evaluate the shared genetic variation between CAD and stroke may lead to a more refined set of SNPs that best predict composite CVD. In early work on CVD, Paynter et al. constructed an unweighted 12-SNP GRS from published associations with CVD-related endpoints (p<10 −7 in meta-analysis) and found a significant relationship with incident CVD in EA women (per-allele HR=1.05), although this association was attenuated upon adjustment for traditional risk factors [39]. A second unweighted 101-SNP GRS also included SNPs associated with intermediate traits (cholesterol, BP, diabetes, etc.), but the effect of this GRS on CVD was weaker (per-allele HR= 1.02). Thanassoulis et al. 
found a significant association between incident CVD and an unweighted 13-SNP GRS constructed from MI/CHD-associated SNPs, with HR=1.05 per allele after adjustment for CVD risk factors and parental history of CVD in the Framingham Heart Study [40]. However, an unweighted 102-SNP GRS that included SNPs associated with intermediate traits was not associated with CVD. These studies show that GRSs with CHD/CAD-associated SNPs are more strongly predictive of composite CVD than more comprehensive GRSs that include intermediate trait-associated SNPs. The same 13-SNP GRS used in Thanassoulis was also found to be significantly associated with prior CVD (OR= 1.51) as well as CVD mortality (HR=1. 35) in EAs with diabetes, after adjustment for CVD risk factors [41]. In all studies, the GRS failed to improve discrimination, although it did modestly improve risk reclassification of some CVD cases in the latter two studies. GRSs Associated with Subclinical Measures of Heart Disease Since only a very small number of SNPs have been reliably associated with artery calcification and IMT, studies have primarily focused on the evaluation of GRSs constructed from CAD/MI and/or other risk factor-associated SNPs with subclinical measures of heart disease. In an EA cohort, a GRS created from three SNPs that have been credibly associated with CAC explained 2.4 % of the variation in CAC, and a 45-SNP GRS constructed from CAD-associated SNPs explained an additional 4 % [6]. Another study conducted in EA found that a GRS from the same three CAC-associated SNPs was associated with calcification in multiple vessel beds, but the associations were no longer significant after adjusting for traditional cardiovascular risk factors [42]. A 132-SNP GRS created from SNPs associated with lipids was also associated with vessel bed calcification, though less strongly than the GRS that contained only CAC-associated SNPs. A third study conducted primarily in EAs found a relationship between plaque and GRSs constructed from lipid-associated SNPs but very limited associations between those GRSs and IMT [43]. However, a separate study found that IMT was associated with a GRS that included five fasting glucose-associated SNPs (beta=0.0048 mm per GRS SD) [44]. Overall, these studies illustrate that SNPs associated with intermediate traits may be useful for explaining variation in subclinical phenotypes but that more work is needed to identify the SNPs most strongly associated with artery calcification and IMT. Emerging Uses of GRS Prediction in Clinical and High-Risk Populations Currently, there is interest in exploring whether GRSs are associated with CVD outcomes and subclinical phenotypes in clinical and other high-risk populations. Accurate prediction of CVD events in high-risk populations, such as those with comorbidities, is imperative because patients are treated according to their risk classification. Initial investigations indicate that GRSs are associated with CVD in populations with comorbidities. For example, as discussed previously, GRSs constructed from CAD/CVDassociated SNPs are associated with CVD, CAC, and CVD mortality in EAs with diabetes, even after adjusting for traditional risk factors [41]. There has also been interest in investigating whether GRSs may be useful for secondary prevention, but most studies have indicated that GRSs are not particularly successful at predicting new CVD events in patients with previous CVD. 
For example, in a study of 5,742 patients with symptomatic vascular disease, a 30-SNP GRS constructed from CAD-associated SNPs was not able to significantly improve 10-year risk prediction of a composite CVD outcome consisting of MI, stroke, and vascular death [45•]. In a separate study of subjects undergoing heart catheterization, GRSs constructed from CAD/MI-associated SNPs were associated with prevalent, but not incident, MI [46]. More work is needed to assess whether the use of GRSs will translate to clinical utility in terms of risk assessment and ultimately differential treatment in high-risk populations. GRS-by-Environment Interaction Studies Cardiovascular disease is likely to be due, in part, to interaction between genetic and non-genetic components [47], including demographic, dietary, behavioral, environmental, and social factors. Studies of common, chronic diseases have recently begun to utilize GRSs as the genetic variation in gene-by-environment interaction studies, since GRSs cumulatively explain more trait variation than individual SNPs. GRS-by-environment interaction studies are emerging in obesity, type 2 diabetes, and lipids research. For instance, GRSby-age and GRS-by-BMI interactions have been reported for type 2 diabetes [48,49], and a GRS-by-education interaction was observed for hemoglobin A1c [50]. Several studies have noted GRS-by-diet interactions, including a GRS-by-sugar sweetened beverages interaction for BMI and obesity [51], a GRS-by-macronutrient intake interaction for adiposity traits [52], and a GRS-by-adiposity interaction for triglycerides and HDL-C [53]. This avenue of research is likely to lead to a greater understanding of the etiological factors that underlie the development of complex diseases and thus represents a promising direction for cardiovascular research. GRSs as Instrumental Variables in Mendelian Randomization Studies Mendelian randomization is a method for obtaining an unbiased estimate of the potential causal effect of a risk factor on an outcome of interest using observational data. With this approach, genetic variants are used as an instrumental variable, or a proxy, for the risk factor. GRSs have become a popular choice for instrumental variables because they typically explain more trait variation than single SNPs [10••]. Recent applications of GRSs in Mendelian randomization studies for cardiovascular diseases include the use of a 14-SNP GRS to explore the causal relationship between HDL-C and MI [54] and an 8-SNP GRS to evaluate the relationship between uric acid and multiple cardiometabolic phenotypes [55]. Burgess and Thompson thoroughly review the use of GRSs as instrumental variables in Mendelian randomization studies and provide simulation studies and recommendations for use [10••]. The extension of Mendelian randomization techniques to other data types, such as epigenetic and metabolomic data, may also be a promising area of research for cardiovascular disease and is discussed in [56]. Conclusions As we take stock of the findings from GWAS meta-analyses conducted over the past decade, the GRS has been one of the most promising ways to aggregate multiple sets of results into a single genetic predictor for cardiovascular disease. More work is needed to identify the genetic factors associated with subclinical phenotypes and cardiovascular outcomes, especially in non-EA populations, so that GRSs can most effectively capture relevant genetic variation. 
Questions remain about the ultimate clinical utility of the GRS, but further investigation in high-risk populations and new ways to combine GRSs with traditional risk factors may prove to be fruitful.
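To illustrate the Mendelian randomization use of a GRS described above, the simulation below treats a GRS as an instrument for a confounded risk factor whose true causal effect on the outcome is set to zero. The ordinary regression of outcome on risk factor is biased by the confounder, while the single-instrument Wald ratio (equivalent to two-stage least squares here) recovers the null. All effect sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data: the GRS affects the risk factor (e.g. an intermediate trait),
# a confounder affects both risk factor and outcome, and the true causal effect
# of the risk factor on the outcome is zero.
grs = rng.normal(size=n)
confounder = rng.normal(size=n)
risk_factor = 0.3 * grs + 0.8 * confounder + rng.normal(size=n)
outcome = 0.0 * risk_factor + 0.8 * confounder + rng.normal(size=n)

def ols_slope(x, y):
    """Slope from an ordinary least-squares fit of y on x with an intercept."""
    design = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1]

naive = ols_slope(risk_factor, outcome)   # confounded observational estimate
# Single-instrument Wald ratio: (outcome ~ GRS slope) / (exposure ~ GRS slope)
iv_estimate = ols_slope(grs, outcome) / ols_slope(grs, risk_factor)

print(f"confounded OLS estimate: {naive:.3f}")        # biased away from zero
print(f"IV / Wald estimate:      {iv_estimate:.3f}")  # close to the true zero
```

The same logic underlies the HDL-C and uric acid applications cited above, with the caveat that valid inference requires the usual instrumental-variable assumptions (no pleiotropy, instrument relevance, and independence from confounders).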
6,406.4
2015-07-01T00:00:00.000
[ "Biology", "Medicine" ]
Intergenogroup Recombination in Sapoviruses This first report of intergenogroup recombination for any calicivirus highlights a possible route of zoonoses. Sapovirus, a member of the family Caliciviridae, is an etiologic agent of gastroenteritis in humans and pigs. Analyses of the complete genome sequences led us to identify the first sapovirus intergenogroup recombinant strain. Phylogenetic analysis of the nonstructural region (i.e., genome start to capsid start) grouped this strain into genogroup II, whereas the structural region (i.e., capsid start to genome end) grouped this strain into genogroup IV. We found that a recombination event occurred at the polymerase and capsid junction. This is the first report of intergenogroup recombination for any calicivirus and highlights a possible route of zoonoses because sapovirus strains that infect pig species belong to genogroup III. T he family Caliciviridae contains 4 genera, Sapovirus, Norovirus, Lagovirus, and Vesivirus. The sapovirus (SaV) and norovirus (NoV) strains are etiologic agents of gastroenteritis in humans, although animals such as pigs, cows, and mice can also be infected. SaV strains were originally detected by using electron microscopy, but today the most widely used method is reverse transcription-polymerase chain reaction (RT-PCR), which has a high sensitivity (1). Based on the capsid gene sequence, SaV can be grouped into 5 distinct genogroups (GI to GV) (2). Human SaV belong to GI, GII, GIV, and GV, whereas pig SaV belongs to GIII. The SaV GI, GIV, and GV genomes are believed to each contain 3 main open reading frames (ORFs), whereas the SaV GII and GIII genomes each have only 2 main ORFs (2). ORF1 encodes nonstructural proteins and the capsid protein, while ORF2 and ORF3 encode proteins of yet-unknown functions. Using complete genome sequence analysis, we recently identified the first recombinant (intragenogroup) SaV strains (3). Two SaV strains, Mc10 and C12, both belonging to GII, were identified as recombinants. Phylogenetic analysis of the nonstructural region (i.e., genome start to capsid start) grouped Mc10 and C12 together in 1 GII cluster (or genotype), while the structural region (i.e., capsid start to genome end) grouped Mc10 and C12 into distinct GII genotypes. Evidence suggested that the recombination site occurred at the polymerase and capsid junction on ORF1. This site is highly conserved among SaV strains, which suggests that the recombination event occurs when nucleic acids of parental strains come into physical contact in infected cells, e.g., during copy choice recombination (4), as we have recently described with recombinant NoV strains (5). Materials and Methods We compared the complete genome sequences of 11 SaV strains to analyze suspected novel recombinant SaV strains. For this study, we sequenced the complete genomes of 4 SaV strains (Mc2, SK15, Ehime1107, and SW278). The Mc2 strain was isolated from a child with gastroenteritis in Chiang Mai, Thailand, in 2000 (6); SK15 was isolated from an adult with gastroenteritis in Sakai, Japan, in 2001 (unpub. data); Ehime1107 was isolated from an adult with gastroenteritis in Matsuyama, Japan, in 2002 (unpub. data); and SW278 was isolated from an adult with gastroenteritis in Solna, Sweden, in 2003 (7). The complete genome sequences were amplified and sequenced as described earlier (3). 
Phylogenetic analysis was performed by using the Genetyx program (Genetyx for the Macintosh version 13.0.5, Genetyx Corp., Tokyo, Japan) and ClustalX (Version 1.82; available from http://www.embl.de/~chenna/clustal/darwin/). Trees were drawn by using njplot (for the Macintosh; available from http://pbil.univ-lyon1.fr/software/njplot.html). Results Based on the classification scheme of either the partial or complete capsid sequences in our previous studies, we grouped Manchester into GI; Bristol, Mc2, Mc10, C12, and SK15 into GII; PEC into GIII; and NK24 into GV (6,8,9). For this study and on the basis of the structural region (i.e., capsid start to genome end), we grouped Manchester into GI; Mc2, Bristol, Mc10, C12; and SK15 into GII; PEC into GIII; SW278 and Ehime1107 into GIV, and NK24 into GV ( Figure 1). These genogroups were not maintained when we analyzed the nonstructural region (i.e., genome start to capsid start). We found that SW278 and Ehime1107 clustered into GII for the nonstructural region-based grouping but clustered into GIV for the structural region-based grouping. All genogroups were supported by bootstrap values (10), except for the structural region-based grouping of GI, which had a slightly lower value of 897. Nevertheless, these results indicate that the nonstructural region of SW278 and Ehime1107, i.e., a GII sequence, did not belong to a distinct genogroup, unlike their structural region, which belonged to a distinct genogroup (proposed as GIV). Comparisons of the complete genome sequences showed that SW278 and Ehime1107 shared >97% nucleotide identity and likely represented the same strain, although it was isolated from different countries; however, the lengths were different. Either SW278 or Ehime1107 had a 10-nucleotide insertion or deletion in the nontranslated region at the 3′ terminus. A number of closely matching partial sequences to SW278 and Ehime1107, which included both the polymerase and capsid gene, were available on the database, which indicates the circulation of similar strains in other countries. We next used SimPlot (available from http://sray. med.som.jhmi.edu/SCRoftware/simplot/) with a window size of 100 and an increment of 20 bp (11) to further analyze these novel recombinant SW278 and Ehime1107 strains. We analyzed 7 complete genome SaV sequences. The Mc10 genome sequence was compared to C12, Bristol, Mc2, SK15, SW278, and Ehime1107. We observed a sudden drop in nucleotide similarity after the polymerase region for SW278 and Ehime1107 ( Figure 2A). Nucleotide sequence analysis of the nonstructural region showed that SW278 and Ehime1107 shared between 74.0% to 77.6% nucleotide identity to the Mc2, C12, Mc10, and SK15 sequences, whereas analysis of the structural region showed that SW278 and Ehime1107 had only 54.0%-55.2% nucleotide identity to the Mc2, C12, Mc10, and SK15 sequences (Table); i.e., the nonstructural and structural regions of SW278 and Ehime1107 were »20% different. A similar result was observed with the nonstructural and structural regions of the already-established recombinant Mc10 and C12 strains, which had an 18.6% difference (3). When we analyzed the nonstructural and structural regions of Mc2 and SK15, we found only a 1.5% difference. Likewise, all other SaV strains generally maintained their nucleotide identities over the complete genome (Table). 
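The SimPlot-style scan just described boils down to a sliding-window percent-identity calculation over a pairwise alignment. The sketch below uses the same window (100) and step (20) as in the text but runs on short placeholder sequences; real input would be, for example, SW278 aligned against the Mc10 reference genome, where identity drops sharply downstream of the polymerase/capsid junction.

```python
def sliding_identity(query, reference, window=100, step=20):
    """Percent nucleotide identity of two pre-aligned sequences in sliding windows."""
    assert len(query) == len(reference)
    points = []
    for start in range(0, len(query) - window + 1, step):
        q = query[start:start + window]
        r = reference[start:start + window]
        # ignore alignment gaps when counting compared positions
        pairs = [(a, b) for a, b in zip(q, r) if a != "-" and b != "-"]
        if pairs:
            identity = 100.0 * sum(a == b for a, b in pairs) / len(pairs)
            points.append((start + window // 2, identity))
    return points

# Placeholder aligned sequences: identical in the first half, divergent in the
# second half, mimicking the drop in similarity after the polymerase region.
query = "ATGGCGATTT" * 120
reference = "ATGGCGATTT" * 60 + "ATCCCGTTAA" * 60
for midpoint, identity in sliding_identity(query, reference)[::10]:
    print(midpoint, round(identity, 1))
```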
This result can be best explained as a recombination event at the polymerase and capsid junction for the SW278 and Ehime1107 strains, i.e., the nonstructural region originated from a GII strain, and the structural region originated from a strain belonging to another genogroup. The SaV GI, GIV, and GV genomes are predicted to encode an ORF3, whereas the SaV GII and GIII genomes have 2 main ORFs. We found that SW278 and Ehime1107 each had an ORF3, which is predicted to encode a yet-unknown protein of 161 amino acids. Notably, the structural region-based grouping showed that GI, GIV, and GV grouped in 1 major branch, while GII and GIII represented 2 other branches. These data provide further evidence of the intergenogroup recombination for SW278 and Ehime1107 strains. The SaV subgenomic RNA has not yet been identified, but for other caliciviruses the subgenomic RNA was identified (12)(13)(14). We recently provided evidence that the SaV viral protease was responsible for the cleavage of nonstructural and capsid proteins on ORF1 (15). Therefore, SaV replication may occur through at least 2 pathways: 1) the capsid protein was transcribed as a polyprotein on ORF1 and then cleaved, or 2) the capsid protein was transcribed as subgenomic RNA and then translated. The suspected recombination occurred at the highly conserved polymerase and capsid junction for human SaV, as shown in Figure 3. Recombination is thought to occur when nucleic acids of the parental strains come into physical contact in infected cells, e.g., during copy choice recombination (4). These data suggest that recombinant SaV strains were formed either by full-length RNA template switching or full-length and subgenomic template switching. Discussion These results are noteworthy because this is the first report of intergenogroup recombination for any calicivirus. These findings provide evidence that zoonoses could occur within the Sapovirus genus because strains that infect pig species belong to GIII. Furthermore, since the parent nonstructural region of SW278 and Ehime1107 has not yet been identified, we could not rule out that the parents of SW278 and Ehime1107 came from a strain that infects animals. We have conducted a number of molecular epidemiologic studies using broad-range primers and found that GIV strains were infrequently compared to other genogroups (6,8,9,16,17). This finding suggests 1) the emergence and/or recombination of GIV strains from an animal reservoir, 2) a lower prevalence of GIV strains, though a number of similar sequences were identified in the United States, or 3) our primers were less sensitive in detecting variant GIV sequences. Nevertheless, further complete genome analysis of other SaV strains is needed to identify other recombinant strains and determine the extent of recombination in the Sapovirus genus. Although we cannot easily pinpoint where and when the recombination event took place, screening of animals with primers designed against human SaV strains may also help identity the potential parental strain(s) of these 2 novel recombinants. Conclusions To date, we have identified 4 different recombinant SaV strains, Mc10, C12, SW278, and Ehime1107. Collectively, these strains have 2 kinds of nonstructural sequences but 3 kinds of structural sequences (Figure 1). In addition, all nonstructural sequences belonged to GII. 
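The ORF3 prediction mentioned above (a yet-unknown protein of 161 amino acids in SW278 and Ehime1107) is the kind of feature a simple forward-strand ORF scan can surface. The sketch below is an illustration of such a scan; the length threshold and the way the genome sequence is supplied are assumptions.

```python
# Sketch: naive ORF scan on the forward strand; reports ORFs at least `min_aa` codons long.
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq: str, min_aa: int = 100):
    seq = seq.upper()
    orfs = []
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i
            elif codon in STOPS and start is not None:
                if (i - start) // 3 >= min_aa:
                    orfs.append((start, i + 3, (i - start) // 3))
                start = None
    return orfs  # list of (start, end, length_in_codons)

# Hypothetical usage: an ORF of roughly 161 codons near the 3' end would correspond to
# the predicted ORF3 of SW278/Ehime1107.
# print(find_orfs(sw278_genome, min_aa=150))
```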
These data suggest that SaV could evade host immunity by readily changing their structural region (immunoreactive, i.e., capsid protein) and that GII strains (nonstructural-based grouping) are more capable of recombination than other genogroups. In 1999, Jiang et al. (18) identified the first naturally occurring human recombinant NoV, and several other strains were later described as recombinants (5,6,(19)(20)(21). The site of genetic recombination for NoV was also between the polymerase and capsid genes. Human SaV and NoV strains cannot be cultivated, but the expression of the recombinant capsid protein (rVP1) in a baculovirus expression system results in the self-assembly of viruslike particles (VLPs) that are morphologically similar to native SaV. In a recent study, we genetically and antigenically analyzed 2 recombinant NoV strains (strains 026 and 9912-02F) (17). When polymerase-based grouping was performed, these 2 strains clustered together, but when capsid-based grouping was performed, these 2 strains belonged in 2 distinct genotypes. When we compared the cross-reactivity of these VLPs with an antibody enzyme-linked immunosorbent assay (ELISA), the titers of 026 antiserum against 026 and 9912-02F VLPs were 1:2,058,000 and 1:512,000, respectively, a 4-fold difference, whereas the titers of 9912-02F antiserum against 9912-02F and 026 VLPs were 1:1,024,000 and 1:128,000, respectively, an 8-fold difference. These results demonstrated that 026 and 9912-02F likely represented distinct antigenic types, which correlated with the genetic analysis. The expression of SaV VLPs is also needed to determine the cross-reactivity among these recombinant strains, although our results have shown that GI and GV VLPs (capsid-based grouping) were antigenically distinct by an antibody and antigen ELISA (22), which suggests that these 2 recombinant strains are also antigenically distinct from GII strains. And finally, these results will have a major influence on the future phylogenetic classification of SaV strains. Therefore, the genetic classification of SaV strains needs to be addressed, and a consensus of prototype strains representing genogroups and genotypes should be established to avoid further grouping conflicts. This work was supported by grants-in-aid from the Ministry of Education, Culture, Sports, Science and Technology, Japan, and a grant for research on reemerging infectious diseases from the Ministry of Health, Labour, and Welfare, Japan. G. Hansman received a fellowship from the Human Science Foundation of Japan. Dr Hansman is a scientist at the National Institute of Infectious Diseases, Japan. His research interests include the epi-demiology, virus expression, and cross-reactivity of viruses that cause gastroenteritis in humans, particularly SaV and NoV.
Re-Emergence of HMPV in Gwangju, South Korea, after the COVID-19 Pandemic The non-pharmaceutical interventions implemented to prevent the spread of COVID-19 have affected the epidemiology of other respiratory viruses. In South Korea, Human metapneumovirus (HMPV) typically occurs from winter to the following spring; however, it was not detected for two years during the COVID-19 pandemic and re-emerged in the fall of 2022, which is a non-epidemic season. To examine the molecular genetic characteristics of HMPV before and after the COVID-19 pandemic, we analyzed 427 HMPV-positive samples collected in the Gwangju area from 2018 to 2022. Among these, 24 samples were subjected to whole-genome sequencing. Compared to the period before the COVID-19 pandemic, the incidence rate of HMPV in 2022 increased by 2.5-fold. Especially in the age group of 6–10 years, the incidence rate increased by more than 4.5-fold. In the phylogenetic analysis results, before the COVID-19 pandemic, the A2.2.2 lineage was predominant, while in 2022, the A2.2.1 and B2 lineage were observed. The non-pharmaceutical interventions implemented after COVID-19, such as social distancing, have reduced opportunities for exposure to HMPV, subsequently leading to decreased acquisition of immunity. As a result, HMPV occurred during non-epidemic seasons, influencing the age distribution of its occurrences. Introduction Human metapneumovirus (HMPV) causes respiratory infections in infants and young children (<5 years old) [1,2].These infections are like those caused by the human respiratory syncytial virus (HRSV), ranging from upper respiratory distress to bronchiolitis and pneumonia among infants, young children, older adults, and immunocompromised hosts [3][4][5].HMPV infection disrupts dendritic cell activity and reduces antigen-specific T cell activation, resulting in incomplete virus clearance and an increased likelihood of re-infection [6].Thus, individuals infected with HMPV do not acquire lifelong immunity to the virus, and re-infection occurs [7][8][9]. HMPV is a non-segmented single-stranded RNA virus belonging to the family Pneumoviridae.The HMPV genome is approximately 13 kb long and consists of eight genes that encode nine proteins (N, P, M, F, M2-1, M2-2, SH, G, and L) [10,11].N, P, L, M2-1, and M2-2 are proteins associated with the assembly of the nucleocapsid.The M protein envelops the RNA core, and the exterior of the virus consists of a lipid envelope.The envelope contains three surface glycoproteins: fusion protein (F), small hydrophobic protein (SH), and attachment glycol protein (G) [6].Based on the antigenicity of the surface proteins, HMPV has been classified into two lineages: A and B. The two lineages have each been subdivided into sub-lineages, with A1, A2, B1, and B2 [10].The A2 sub-lineage has been further subdivided into A2a and A2b [12].Subsequently, an A2 genotype with 180-nucleotide duplication and 111nucleotide duplication in the G protein was reported in Japan and has also been reported in Spain, China, and Croatia [13][14][15][16][17].The strain containing a duplication in the G protein is gradually increasing [18,19].However, the sub-classification of the A2 genotype has shown inconsistency in nomenclature among researchers [20][21][22][23].Establishing a consistent classification system for the A2 lineage is essential for understanding the emergence of new lineages and conducting epidemiological and surveillance studies of HMPV [11]. 
Non-pharmaceutical interventions were implemented in response to the global COVID-19 pandemic, and these interventions may affect the circulation of other seasonal respiratory viruses [29,30].Indeed, in South Korea as well, there have been delayed outbreaks of PIV3 after the COVID-19 pandemic.We published the research results on this last year [31].HMPV also exhibits a seasonal distribution.It predominately occurs in the winter to spring months in the northern hemisphere and southern hemisphere [6].The off-season outbreaks of HMPV have been reported in other countries as well following the COVID-19 pandemic.Delayed HMPV outbreaks occurred in Israel from May to June 2021; in Australia, a summer surge followed by a delayed winter season outbreak was observed from the end of 2020; in 2021, off-season outbreaks of HMPV were reported in children and adults in the UK in June and July, and in Spain, an HMPV outbreak in children was reported in November 2021 [9,[32][33][34].Similarly, In South Korea, HMPV was prevalent from late winter to spring prior to the COVID-19 pandemic but was not detected during the COVID-19 pandemic in 2020 or 2021, and an off-season HMPV outbreak was observed in the fall of 2022. Understanding the relationship between the off-season outbreak of HMPV and the virus' characteristics is necessary to predict and develop preventive measures against future HMPV epidemics.Molecular epidemiology studies of HMPV have focused on the G protein and the F protein [35].Partial genome sequencing cannot capture changes occurring outside the targeted regions, which could include important viral variations.Especially, there is a possibility of missing variations like the recently reported G protein's 180-nt duplication or 111-nt duplication using partial genome sequencing.However, whole genome sequencing allows for a comprehensive understanding of the molecular epidemiology of the entire virus [24].Therefore, we conducted a phylogenetic analysis of the whole genome of HMPV to analyze the molecular epidemiological characteristics of HMPV before and after the COVID-19 pandemic. Surveillance and Sample Collection The Korea Influenza and Respiratory Virus Surveillance System (KINRESS) is a program overseen by the Korea Disease Control and Prevention Agency (KDCA), in which primary hospitals nationwide are selected for sampling surveillance.Every week, samples from acute respiratory patients are collected and sent to the designated research institutions for analysis.We participated in KINRESS to monitor Acute Respiratory Infections (ARIs) in South Korea.Throat or nasopharyngeal swabs were collected from outpatients with ARIs throughout the year from three primary collaborating hospitals in the Gwangju area.From 2018 to 2022, a total of 6,334 samples were collected, and we conducted screening for eight types of acute respiratory viruses, including HMPV. RNA Extraction and Real-Time PCR Following the manufacturer's instructions, nucleic acids were extracted from the samples using a QIAamp RNA kit (Qiagen, Hilden, Germany).We used 140 µL samples and 60 µL final nucleic acid elutions. HMPV was identified by using a One-step RSV A&B/HMPV Real-time PCR Kit (Kogene, Seoul, Korea) in accordance with the manufacturer's instructions.The amplification conditions were as follows: 50 • C for 30 min, followed by 95 • C for 10 min and 40 cycles of 95 • C for 15 s and 60 • C for 1 min. 
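Real-time PCR readouts enter the analysis as cycle-threshold (Ct) values, and a Ct cutoff of 25 or lower is used below to pick samples for sequencing. As a reminder of what a Ct difference means in template terms, amplification is roughly exponential per cycle, so a difference of n cycles corresponds to about (1 + E)^n-fold more starting template at efficiency E. The sketch below is illustrative arithmetic, not part of the kit's workflow.

```python
# Sketch: approximate fold-difference in starting template implied by a Ct difference.
# Assumes exponential amplification with per-cycle efficiency E (E = 1.0 means perfect doubling).

def fold_difference(ct_low: float, ct_high: float, efficiency: float = 1.0) -> float:
    """The sample with ct_low has roughly this many times more template than the one with ct_high."""
    return (1.0 + efficiency) ** (ct_high - ct_low)

# Illustrative values: a Ct 25 sample vs. a Ct 32 sample at perfect efficiency.
print(f"{fold_difference(25, 32):.0f}-fold more template")   # ~128-fold
```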
Whole Genome Sequencing Among the 427 HMPV-positive samples, a total of 24 samples were randomly selected for whole-genome sequencing based on Ct values of ≤25. Of these 24 samples, 16 were collected before the COVID-19 pandemic and eight after it. Viral RNA was extracted using a QIAamp Viral RNA mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. A panel was developed using Ion AmpliSeq On-Demand Panel (ThermoFisher Scientific, Carlsbad, CA, USA) technology for use on the Ion Torrent platform (ThermoFisher Scientific, Carlsbad, CA, USA). The customized panel was designed to cover the entire HMPV genome using combinations of 100 and 125 primers divided into two sets (pools) according to the manufacturer's protocol. Reverse transcription was performed using a SuperScript VILO cDNA Synthesis Kit (Thermo Fisher Scientific, Carlsbad, CA, USA) following the manufacturer's recommendations. For library preparation, an Ion AmpliSeq Library 2.0 Kit (Thermo Fisher Scientific, Carlsbad, CA, USA) was used according to the manufacturer's protocol. The automated Ion Chef instrument (ThermoFisher Scientific, Carlsbad, CA, USA) prepared templates from the 25 µL sample pool using Ion 510/520/530 Chef Kits and Ion 530 Chips, which were sequenced on the Ion S5 XL sequencer (ThermoFisher Scientific, Carlsbad, CA, USA). The sequencing reads were aligned, mapped, and subjected to variant analysis using the CLC Genomics Workbench 21.0.3 (Qiagen, Hilden, Germany) program. Phylogenetic Analyses For the phylogenetic analyses, 53 reference strains were selected from GenBank (Table S1). Multiple sequence alignment was performed using the MUSCLE algorithm in MEGA X software. Phylogenetic trees were constructed using the Maximum Likelihood (ML) method with the General Time Reversible model in MEGA X software. The reliability of the branching order was assessed by performing 1000 bootstrap replicates. Epidemiology of HMPV Before the COVID-19 pandemic, human metapneumovirus exhibited a progressive increase from January, reaching its peak in April, followed by a decline during the summer months. During the COVID-19 pandemic, HMPV infections rarely occurred in 2020, and HMPV was not detected in 2021, coinciding with the implementation of non-pharmaceutical interventions against COVID-19. In 2022, according to the results of the KINRESS in the Gwangju area, HMPV reappeared in July, and the number of HMPV-positive cases increased in September and October. Comparison of incidence rates before and after the COVID-19 pandemic showed that the rate after COVID-19 was 2.5 times higher than before, a statistically significant increase. The seasonal distribution of HMPV infections from 2018 to 2022 is shown in Figure 1. The age distribution of HMPV occurrences showed an approximately 2-fold increase in the age groups 0-2 years and 3-5 years. In particular, there was a more than 4.5-fold increase in the age group 6-10 years, and this increase was statistically significant (Table 1). However, there was no significant difference in the incidence rate of HMPV between males and females before and after the COVID-19 pandemic (Table 1).
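The before/after comparison above reduces to incidence-rate ratios and a test for a shift in the age distribution. A minimal sketch of that kind of computation is given below; the counts and denominators are placeholders, not values from Table 1, and scipy is assumed to be available.

```python
# Sketch: incidence rate ratio and a chi-square test for an age-group shift.
# All numbers below are illustrative placeholders, not the study's data.
from scipy.stats import chi2_contingency

def rate_ratio(cases_after, tested_after, cases_before, tested_before):
    return (cases_after / tested_after) / (cases_before / tested_before)

# e.g. positives per samples tested before (2018-2019) vs after (2022) the pandemic
print("rate ratio:", rate_ratio(cases_after=120, tested_after=1500,
                                cases_before=180, tested_before=4800))

# Age-group contingency table: rows = period (before/after), columns = age bands.
table = [[60, 90, 20, 10],    # before: 0-2, 3-5, 6-10, >10 years (placeholder counts)
         [55, 80, 45, 12]]    # after
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```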
Phylogenetic Analysis of HMPV Whole Genome Sequences We analyzed 24 whole-genome sequences and 53 reference sequences obtained from GenBank to determine their lineages. Among the 16 samples collected before the COVID-19 pandemic, 15 clustered within the A2.2.2 lineage, and one belonged to the B2 lineage. Of the 15 samples belonging to the A2.2.2 lineage, 9 were strains carrying a 111-nucleotide duplication in the G protein. Among the eight strains identified after the COVID-19 pandemic, five belonged to the A2.2.1 lineage, and three belonged to the B2 lineage. A1, A2.1, and B1 were not detected in any of the samples analyzed in this study. The 2022 A2.2.1 sequences formed a monophyletic clade with one sequence that circulated in the USA in 2016. However, the 2022 B2 sequences were distributed between two closely related strains, one from Australia in 2020 and the other from the USA in 2019, without forming a clade (Figure 2). The whole genome sequences of this study have been deposited in the NCBI Sequence Read Archive under the BioProject PRJNA987724. Discussion The epidemiological patterns of human metapneumovirus (HMPV) in Korea signify its role as a noteworthy respiratory pathogen. HMPV is recognized for inducing acute respiratory infections, especially among young children and the elderly. Within Korea, HMPV infections exhibit seasonal trends, demonstrating elevated incidence rates during colder months, generally from late fall to early spring. The virus is frequently linked to respiratory symptoms, including fever, cough, and breathing difficulties.
The non-pharmaceutical interventions implemented to prevent COVID-19, such as mandatory mask-wearing, social distancing, and travel restrictions, have affected the prevalence of respiratory viruses [29,30].Social distancing measures were implemented in South Korea following the first COVID-19 outbreak in January 2020 and relaxed by April 2022.Changes in the prevalence of respiratory viruses were also observed during this period.According to the KINRESS results, PIV3, which did not occur in 2020, reemerged in the fall of 2021 [31].In 2022, HMPV reappeared in the fall, which is typically a non-epidemic season, and the magnitude of the epidemic was larger than that before the COVID-19 pandemic. HMPV mainly affects children under five years of age [1,32].In this study, we also observed a significant increase in HMPV incidence among children aged 5 and under before and after the COVID-19 pandemic, consistent with the known pattern of HMPV primarily affecting this age group [1,32].However, after the COVID-19 pandemic, there was a significant increase in the age group of 6-10 years affected by HMPV.An atypical age distribution of acute respiratory viruses after the COVID-19 pandemic has also been observed in RSV [36,37].After the COVID-19 pandemic, HMPV did not occur for two years.Children aged 5 and under who had spent two years with lowered immunity to HMPV, were exposed to HMPV after the COVID-19 pandemic, leading to an increase in the age of occurrence beyond 5 years.In other words, this atypical age distribution might be associated with reduced immunity owing to the lack of exposure to HMPV during the COVID-19 pandemic. This study conducted a whole-genome analysis to investigate the molecular genetic characteristics of HMPV before and after the COVID-19 pandemic.The whole genome sequencing of viruses using NGS as a pathogen has enabled comprehensive molecular epidemiological analysis [35,38,39].Furthermore, using highly detailed molecular biological classification, it is highly useful for understanding the characteristics of viruses with a high level of resolution [24].Analysis of the 24 whole-genome sequences confirmed the circulation of HMPV lineages A2.2.1, A2.2.2, and B2 between 2018 and 2022.Before the COVID-19 pandemic, strains from the A2.2.2 and B2 lineages circulated, and among the 15 strains from the A2.2.2 lineage, 9 had a 111-nt duplication in the G protein.After the COVID-19 pandemic, strains from the A2.2.1 and B2 lineages circulated.Although the number of analyzed samples was limited, and definitive conclusions cannot be drawn, it seems that the circulating lineages changed between before and after the COVID-19 pandemic. Although some researchers have suggested that the A2.2.2 lineage may be more virulent, the variation in virulence among HMPV lineages remains unclear [14,40,41].In this study, the A2.2.1 lineage was responsible for the re-emergence in 2022, and the scale of occurrence was larger than that before the COVID-19 pandemic when the A2.2.2 lineage was prevalent.This could be due to lower herd immunity to the HMPV virus resulting from reduced exposure during social distancing measures rather than differences in virulence between the lineages. 
In this study, all genotype B strains belonged to the B2 lineage. Out of the 24 strains, lineage A strains were detected more often than lineage B strains. This could have been due to selection bias in sampling. The selection of samples was based on Ct values ≤25, and it was previously shown that this can result in a biased selection of samples (Groen et al. [24]). Overall, due to the social distancing measures implemented to prevent the spread of COVID-19, there was a lack of exposure to HMPV, resulting in lower natural immunity to HMPV. With the relaxation of social distancing measures in 2022, HMPV exposure during this period led to an off-season outbreak of HMPV in South Korea. This indicates that the COVID-19 pandemic may have impacted the age and lineage distribution of HMPV, emphasizing the importance of social distancing measures. Strengthening herd immunity is thought to help prevent future epidemics. Therefore, continuous genomic monitoring of HMPV is required for vaccine development and distribution. Conclusions Although the sample size was limited, and definitive conclusions cannot be drawn from this study, the data suggest differences in the circulation of HMPV lineages before and after the COVID-19 pandemic. In addition, a significant increase in HMPV-positive patients was identified in individuals aged 6-10 years after the COVID-19 pandemic compared with the period before the pandemic. This might be explained by the implementation of social distancing measures during the pandemic, which reduced exposure to HMPV in younger children and hindered the build-up of immunity. The relaxation of these measures has subsequently led to off-season outbreaks of not only HMPV infections but also other respiratory infections. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/pathogens12101218/s1, Table S1: Accession numbers of viruses used as reference for phylogenetic analysis of the HMPV-positive samples.
Figure 2. The phylogenetic tree was constructed based on 77 whole HMPV genome sequences. The tree was created using the maximum likelihood method with a GTR+G+I substitution model and tested with 1000 bootstrap replicates. In the tree, HMPV samples before the pandemic are depicted as triangles. Among them, samples containing a 111-nucleotide duplication in the G gene are indicated with orange triangles. HMPV samples after the pandemic are represented by green circles. All sequences from this study have been registered in the SRA (Sequence Read Archive) database (accession number: PRJNA987724, https://www.ncbi.nlm.nih.gov/sra/PRJNA987724; it can be accessed after 31 July 2024). Biosample accession numbers of all strains are indicated in parentheses.
Table 1. Age and sex distribution of HMPV-positive samples from 2018 to 2022 in Gwangju, South Korea.
Inversion of Bayesian Networks Variational autoencoders and Helmholtz machines use a recognition network (encoder) to approximate the posterior distribution of a generative model (decoder). In this paper we study the necessary and sufficient properties of a recognition network so that it can model the true posterior distribution exactly. These results are derived in the general context of probabilistic graphical modelling / Bayesian networks, for which the network represents a set of conditional independence statements. We derive both global conditions, in terms of d-separation, and local conditions for the recognition network to have the desired qualities. It turns out that for the local conditions the property perfectness (for every node, all parents are joined) plays an important role. Introduction A generative model is a set of probability distributions that models the distribution of observed and latent variables.Generative models are used in many machine learning applications.One is often interested in performing inference of the latent variable given an observation, i.e. obtaining the posterior distribution.For complex generative models it is often hard to calculate the posterior distribution analytically.The field of variational Bayesian inference (Wainwright et al., 2008) studies different ways of approximating the true posterior.One approach within this field is called amortised inference (Gershman and Goodman, 2014).This approach distinguishes itself through using one set of parameters for recognition that is optimised over multiple data points.This can be contrasted with "memoryless" inference algorithms, such as the message passing algorithm (Pearl, 1982;Cowell et al., 1999), which finds a separate set of parameters for every data point.Both the variational autoencoder (VAE) (Kingma and Welling, 2013) and Helmholtz machine (Dayan et al., 1995) are examples of amortised inference.In their most general form these consist of a Bayesian network that is used to model the generative distribution.A second network, called the recognition model, is used to model the posterior distribution.Both these networks have the same set of nodes, namely the union of the observed and latent variables.However, in the generative network the arrows point from the latent to the observed nodes but in the recognition network it is the other way around.The recognition network is therefore in some sense an inversion of the generative network.In many applications, one simply flips the direction of the edges of the generative network to obtain the recognition network.However, as the simple example in Figure 1 shows, this does not guarantee that the recognition model is actually able to model the true posterior distribution of the generative model.In this paper, we study the necessary and sufficient properties of the recognition network such that we do have this guarantee.We first discuss these properties in terms of d-separation, subsequently in terms of perfectness, and finally in terms of single edge operation using the Meek conjecture (Meek, 1997). where G ′ is obtained by flipping the direction of the edges in G.The variables z 1 , z 2 represent the latent variables and x the observed variable.The distribution p such that z 1 , z 2 are Bernoulli(0.5)and x = z 1 + z 2 mod 2 can be modelled by G, but the conditional distribution p z1,z2|x cannot be modelled by G ′ . 
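The z1, z2, x example above can be checked by brute-force enumeration: the true posterior p(z1, z2 | x) couples z1 and z2, whereas the flipped graph G′ only admits recognition distributions of the form q(z1 | x) q(z2 | x). The sketch below is a numerical rendering of that argument, not code from the paper.

```python
# Sketch: the XOR example. z1, z2 ~ Bernoulli(0.5) independently, x = (z1 + z2) mod 2.
# The flipped graph x -> z1, x -> z2 forces q(z1, z2 | x) = q(z1 | x) * q(z2 | x),
# which cannot equal the true posterior.
from itertools import product

joint = {}
for z1, z2 in product([0, 1], repeat=2):
    x = (z1 + z2) % 2
    joint[(z1, z2, x)] = joint.get((z1, z2, x), 0.0) + 0.25

def posterior(x):
    px = sum(p for (a, b, xx), p in joint.items() if xx == x)
    return {(a, b): p / px for (a, b, xx), p in joint.items() if xx == x}

post = posterior(0)                       # true posterior given x = 0
marg_z1 = {a: sum(p for (aa, b), p in post.items() if aa == a) for a in (0, 1)}
marg_z2 = {b: sum(p for (a, bb), p in post.items() if bb == b) for b in (0, 1)}
factorised = {(a, b): marg_z1[a] * marg_z2[b] for a in (0, 1) for b in (0, 1)}

print(post)        # {(0, 0): 0.5, (1, 1): 0.5} -- z1 and z2 are perfectly correlated given x
print(factorised)  # every entry 0.25 -- the best any factorised recognition model can do
```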
In practice, one often puts further restrictions on the probability distributions the networks can model by, for example, letting the distribution of an individual node be Gaussian, with the mean (and variance) being a function of the values of the parent nodes.We discuss the general case of a restricted set of probability distributions, and in particular the case of the Gaussian distributions, in the last part of the results section. The question of finding a sparse G ′ that can approximate the posterior distribution of the generative model well is also studied from a more practical perspective, using methods from machine learning.One can use a sparsity prior when learning the recognition model, to encourage that only the edges really necessary for modelling the posterior are added.Löwe et al. (2022); Louizos et al. (2017); Molchanov et al. (2019) present several approaches. Markov equivalence is a property of a pair of Bayesian networks that indicates that they encode the same set of conditional independence statements (Verma and Pearl, 1990;Flesch and Lucas, 2007).A generalisation of this, that we will call Markov inclusion, is when the set of conditional independence statements encoded in one graph is a subset of the conditional independence statements encoded in the other graph (Castelo and Kočka, 2003).We will see in Proposition 1 that the results in this paper can also be viewed as describing under which conditions one Bayesian network is Markov inclusive of another. Example Before giving a formal definition of the problem, we illustrate the topic of this paper by an example.Consider the generative model for diseases and their symptoms in Intuitively it is clear that when someone is congested, the fact whether they have muscle pain or not, does give extra information on how likely it is that that person has hayfever.If someone is congested and also has muscle pain, the congestion is more likely to be caused by the flu.This dependence is however not captured in the graph in Figure 3, because no information can flow from muscle pain to flu.By adding an edge between muscle pain and hayfever, or between flu and hayfever, this dependence can be captured.The example above is intended to give an intuitive idea of the nature of the problem addressed in this paper, and provide context for the more formal treatment below. Notation Graph theory For a comprehensive overview of the theory and terminology of probabilistic graphical models, we refer to (Lauritzen, 1996;Cowell et al., 1999;Studeny, 2005).Let G = (N, E) be a directed acyclic Figure 5: Different subsets of N for a graph G graph (DAG), that we always assume to be connected.We say that two vertices s, t ∈ V are joined if (s, t) ∈ E or (t, s) ∈ E. 
A set of vertices is called complete if all pairs are joined.The set of parents, children, descendants, and non-descendants of a node s ∈ N are denoted pa(s), ch(s), des(s), nd(s) respectively.G is called perfect if for all s, the set pa(s) is complete.For a subset A ⊂ N , the vertex-induced subgraph of G is denoted G[A].We let Leaves(G) = {s ∈ N : ch(s) = ∅} be the set of nodes without children, and Roots(G) = {s ∈ N : pa(s) = ∅} be the set of nodes without parents.Furthermore we let V = Leaves(G) be the set of visible nodes, that corresponds to the set of observed variables (such as the symptoms in the example) and H = N \ Leaves(G) be the set of hidden nodes, which are the variables to be inferred (such as the diseases in the example).See Figure 5.For e = (s, t) ∈ E, let e * = (t, s), E * = {e * : e ∈ E}, G * = (N, E * ) the graph G with its edges reversed, G ∼ = (N, E ∪ E * ), the skeleton (i.e.undirected version) of G.The moral graph of G, denoted G M , is the skeleton of G, with extra (undirected) edges between parents of the same child in G.A path in G from s to t is a sequence of nodes s = u 1 , ..., u n = t such that (u i , u i+1 ) ∈ E for all i ∈ {1, ..., n}.A trail γ in G is a sequence of vertices that forms a path in G ∼ .A trail γ is said to be blocked by S ⊂ N if γ contains a vertex u such that either: (1) u ∈ S and the arrows do not meet head to head at u; (2) u and des(u) are not in S and the arrows do meet head to head at u. Two subsets A, B ⊂ N are said to be d-separated by S if all trails from A to B are blocked by S and we write A ⊥ d B | S. A topological ordering of G is an injective map O : N → N that assigns to every node a number such that, if two nodes are joined, the edge points from the lower to the higher numbered node.When the topological ordering is implied, we will write s < t to mean O(s) < O(t) and say "s is older than t" and the same for ">", with s being younger.Given a topological ordering O, the set of predecessors of a node s, denoted pr O (s), is the set of all nodes with a lower topological number, i.e. pr O (s) = {t ∈ N : O(t) < O(s)}.Note that this set in general depends on the choice of topological ordering (see Figure 6).For alternative DAGs G ′ or Ḡ we denote the above defined symbols with their respective accent, e.g.ch ′ (s), pa(s), ⊥ ′ d , < ′ , etc. Probability on graphs To every node s ∈ N we associate a measurable space (X s , X s ).The state spaces are either real finite-dimensional vector spaces or finite sets and to each measurable space we associate a (σ-finite) base measure µ s which is typically the Lebesgue measure or counting measure respectively.Then we let (X, X ) = (× s∈N X s , ⊗ s∈N X s ) and assign to this space the base measure µ = ⊗ s µ s .In this paper, we consider probability distributions P over the space (X, X ).For every s ∈ N we let X s : X → X s be the random variable projecting onto the individual spaces. For a subset A ⊂ V we let (X A , X A ) = (× s∈A X s , ⊗ s∈A X s ) and similarly X A = (X s ) s∈A and X = X N .A typical element of X s is denoted x s with x A = (x s ) s∈A and x = (x s ) s∈N .We write P A for the pushforward measure of P though X A on (X A , X A ), i.e. 
for A ∈ X A , For A, C ⊂ N disjoint, we say that a map K : Furthermore, we say that K is a (regular) version of the conditional probability of A given C if it is Markov kernel and for all C ∈ X C holds.It can be shown that in our setting, one can always find such a Markov kernel that is unique P C -a.e.(Dudley, 2018).We therefore also denote such a Markov kernel by P A|C .For disjoint subsets A, B, C ⊆ N we says that A is conditionally independent of B given C and write For s ∈ N , a kernel function will be a map A probability distribution P is said to factorise over G if it has a density p w.r.t.µ and there exist kernel functions (k s ) s∈N such that We denote the set of probability distributions on X that factorise over G by We denote the set of such Markov kernels by K G . Problem statement Goal I Given a DAG G = (N, E), find a DAG G ′ = (N, E ′ ) such that Roots(G ′ ) = Leaves(G) and for every P ∈ P G , there exists K ∈ K G ′ that is a version of the conditional distribution P H|V . It turns out (Proposition 1 in the results section) that this goal is equivalent (up to edges between nodes in Leaves(G)) to the following goal: Goal II Given a DAG G = (N, E), find a DAG G ′ = (N, E ′ ) such that there exists a topological ordering of G ′ such that there is no vertex outside Leaves(G) 1 that is older than the vertices in Leaves(G) and P G ′ ⊃ P G . In the remainder of the paper, we will focus on Goal II.Moreover, we sometimes impose the following extra condition: It can be argued that this is a natural condition since this enforces that the hierarchical structure of the generative model G is preserved when finding a suitable G ′ .Note that this condition also guarantees that there exists a topological ordering of G ′ such that Leaves(G) are oldest.Proof.Since pa(s) ⊃ pa(s) for every node s, a density that can be written as s k s (x s |x pa(s) ) can also be written as s k s (x s |x pa(s) ). Lemma 2. Let A, B, S be subsets of N .We have, A ⊥ ⊥ B | S for all P ∈ P G if and only if S d-separates A and B in G. Lemma 3. (Theorem 5.14 in Cowell et al. (1999)) Let G be a DAG with a topological ordering O. For a probability distribution P on X, the following conditions are equivalent: (1) 1 Although G ′ has the required structure, it can happen that not all possible topological orderings reflect this.See Figure 6 for an example. Corollary 1.Let O, Õ be two topological orderings of G.If P satisfies property (4) of Lemma 3 w.r.t.O, then the same is true for Õ. Proof.Note that (1) − (3) of Lemma 3 are independent of the topological ordering.Therefore we have the following implications: for all s we have s ⊥ ⊥ pr O (s) | pa(s) w.r.t.P =⇒ P ∈ P G (with topological ordering O) =⇒ P ∈ P G (with topological ordering Õ) =⇒ for all s we have s ⊥ ⊥ pr Õ (s) | pa(s) w.r.t.P . In the rest of the paper, we fix a topological ordering for every DAG and in light of the corollary, it does not matter which for the purpose of applying Lemma 3. Therefore, we will omit the dependence on the topological ordering when talking about the set of predecessors. Results Equivalence of two goals and S a set of distributions on X that have a density w.r.t.µ.For all P ∈ S there exists a Markov kernel K ∈ K G that is a version of the conditional distribution of N \ Roots(G) given Roots(G) if and only if P G ⊃ S. Proof. 
( =⇒ ) Let P ∈ S with density p and suppose that K is a version of P N \Roots(G)| Roots(G) and K ∈ K G .We need to show P ∈ P G.We can write p as follows: where p x N \Roots(G) |x Roots(G) is the density corresponding to K (Dudley, 2018).From the fact that K ∈ K G we know Since all the nodes in Roots(G) are joined in G we have Combining the above gives and therefore P ∈ P G. ( ⇐= ) Now let P ∈ S again and suppose P ∈ P G and x ∈ X such that p x Roots(G) > 0. We can write where we can switch from pa to pa in the third equality because there are only edges added between nodes in Roots(G) to obtain G.It can be shown that s∈Roots(G) k s x s |x pa(s) = p x Roots(G) (Cowell et al., 1999, p. 70).Dividing by p x Roots(G) on both sides gives: We know that there exists a Markov kernel K that is a version of the conditional distribution of N \ Roots(G) given Roots(G) and that this kernel has density p x N \Roots(G) |x Roots(G) (Dudley, 2018).Equation ( 18) shows that the density factorises and therefore K ∈ K G . Conditions in terms of d-separation Necessary and sufficient conditions for our goal can be deduced from the following theorem: Theorem 1.Let G = (N, E), G ′ = (N, E ′ ) be DAGs.The following statements are equivalent: (1) (2) =⇒ (1) Let P ∈ P G .We need to show This means that P satisfies (2) Lemma 3 w.r.t.G ′ and therefore P ∈ P G ′ . Conditions in terms of perfectness A sufficient condition for our goal can be deduced from the following theorem: Theorem 2. Let G = (N, E), G ′ = (N, E ′ ) be two DAGs.If G ′ contains a subgraph Ḡ′ such that Ḡ′ is perfect and its undirected version Ḡ′∼ contains the moral graph G M then, P G ′ ⊃ P G .Proof.Let P ∈ P G .By Lemma 5.9 from Cowell et al. (1999) we know that P factorises undirectedly2 over the undirected graph G M and thus over any undirected graph H = (N, E H ) containing G M .From Proposition 5.15 in Cowell et al. (1999) we know that P factorises (directedly) over any perfect directed graph Ḡ′ such that Ḡ′∼ = H.Therefore when Ḡ′∼ ⊃ G M we have From this theorem we can conclude that if we flip all the edges of G and then add edges until both G ′ is perfect and G ′∼ ⊃ G M , we obtain an inverse of G that satisfies our goal.The example in Figure 7 shows however that the condition that G ′ needs to contain a perfect subgraph Ḡ′ such that Ḡ′∼ ⊃ G M is not a necessary condition.We do have the following necessary condition on the graph G ′ to satisfy our goal: This theorem is based on the following proposition: Note that the proposition implies that when | Roots(G)| = 1 the conditions of Theorem 2 are both sufficient and necessary.We first prove Proposition 2 and then show how Theorem 3 can be obtained from it. Proof of Proposition 2. Below we introduce an algorithm for inverting G.We show that the end result is a perfect graph, and that all the steps in the algorithm are necessary for obtaining a graph Ḡ′ for which Ḡ′ ⊃ G * and P Ḡ′ ⊃ P G holds.This implies that any G ′ for which G ′ ⊃ G * and P G ′ ⊃ P G holds, needs to contain a subgraph Ḡ′ that can be obtained through this algorithm and is therefore perfect and such that Ḡ′∼ ⊃ G M . The algorithm starts by creating a graph Ḡ′ 0 by flipping all edges of G. 
Now we fix a topological ordering of the nodes3 that is compatible with Ḡ′ 0 .Subsequently all parents in G are joined.The while loop starts with the root of G, r 0 , and every rounds adds more vertices (r i ) to the set R and makes sure that the set pa ′ (r i ) is made complete for every i.The idea is that at every step, this Ḡ′3 ) this is the status halfway the fourth while loop.The red edges have been added by the algorithm between i = 0 and i = 3. set R includes one more node of G and that the induced subgraph Ḡ′ i [R i ] is perfect at every step of the algorithm.See Figure 8 for an example course of the algorithm.Since at the end we have R i = N , we end up with a perfect graph Ḡ′ . End result perfect First note that Ḡ′ 0 [R 0 ] is perfect.Every node r i that enters R i has all its parents joined in Ḡ′ i .After it has entered R i , no new edges will be joined to it.Therefore at every step Ḡ′ All steps are necessary for P Ḡ′ ⊃ P G It is necessary that parents in G are joined At the start of the algorithm we join all nodes in Ḡ′ 0 that are parents of the same node in G.For t 1 , t 2 ∈ pa(s) that are not joined in G, we have that t 1 ⊥ d t 2 | N \{t 1 , t 2 }.However for any graph Ḡ′ for which t 1 and t 2 are not joined and that has G * as a subgraph, we do have Therefore the only way to satisfy condition (2) of Theorem 1 is by joining t 1 and t 2 in Ḡ′ . It is necessary that parents in Ḡ′ i of r i are joined Let t 1 , t 2 ∈ pa ′ (r i ) that are not joined in Ḡ′ i and assume WLOG that t 2 < ′ t 1 .Case 1: There exists a path γ 2 from r By the assumption t 2 < ′ t 1 there is always a path γ 1 in G from r 0 to t 1 not containing t 2 .In order to satisfy property (4) of Theorem 1 we need that the concatenation of the trails γ 1 and γ 2 is blocked by pa ′ (t 1 ).Since all nodes except t 2 are younger in Ḡ′ than t 1 it follows that t 2 must be a parent of t 1 .Case 2: There is no path γ 2 from r 0 to t 2 in G such that γ 2 \ {t 2 } ⊂ R i Let us investigate how the edge (t 2 , r i ) came about.First note that (t 2 , r i ) / ∈ E * since otherwise the path (r 0 , ..., r i , t 2 ) would contradict the assumption of Case 2. Now one of the following must hold: 1. ∃j < i such that r i , t 2 ∈ pa ′ (r j ) 2. ∃s ∈ N such that t 2 , r i ∈ pa(s). In case of option 1, we can ask again how the edge (t 2 , r j ) came about.We have again that (t 2 , r j ) / ∈ E * , for similar reasons as above.The same two options are left (with j taking the role of i): 1. ∃j ′ < j such that r j , t 2 ∈ pa ′ (r j ′ ) 2. ∃s ∈ N such that t 2 , r j ∈ pa(s). Since for j = 0 option 1 is definitely not a valid option any more, we know there must be j * with 0 ≤ j * ≤ i such that option 1 no longer holds for the edge (t 2 , r j * ). At this point, the only option is that the edge (t 2 , r j * ) came about because t 2 and r j * are both parents in G of a node s (see Figure 9).We know that s < ′ t 2 < ′ r i and therefore s / ∈ R i .Furthermore, because there is a path in G from r 0 to s via R i we know by a similar argument as in Case 1, that s must be a parent of t 1 in Ḡ′ .Now, in order to satisfy property (4) of Theorem 1, either the trail (t 1 , ..., r 0 , ..., r j * , s, t 2 ) must be blocked by pa ′ (t 1 ) \ {t 2 } or t 2 ∈ pa ′ (t 1 ), or both.Since s ∈ pa ′ (t 1 ), the v-structure (r j * , s, t 2 ) does not block this path.Since all other nodes on the path, except for t 2 are younger than t 1 in Ḡ′ and there is no other v-structures, it follows that the path is unblocked and therefore t 2 must be a parent of t 1 in Ḡ′ . 
Figure 9: Example situation of Case 2 in the proof of necessity that parents in Ḡ′_i are joined, highlighting the important edges that play a role in the proof.
Remark 1. Note that all arbitrariness of the algorithm is captured in the fixation of the topological ordering of Ḡ′_0. Given a pair of graphs G, G′ such that G′ ⊃ G*, P_{G′} ⊃ P_G and |Roots(G)| = 1, the algorithm can give us a necessary and sufficient subgraph Ḡ′ by fixing the topological ordering of Ḡ′_0 to be compatible with G′. Remark 2. Since any perfect graph with a single leaf has a unique topological ordering, it follows from the proposition that any G′ such that G′ ⊃ G*, P_{G′} ⊃ P_G and |Roots(G)| = 1 has this same property. Lemma 4. If G = (N, E) and Ḡ = (N, Ē) are such that P_G ⊂ P_Ḡ, then the same holds for the vertex-induced subgraphs of both graphs: P_{G[A]} ⊂ P_{Ḡ[A]} for every A ⊂ N. Proof. One can easily check that condition (3) in Theorem 1 remains satisfied when taking vertex-induced subgraphs. Proof of Theorem 3. Consider a DAG G with |Roots(G)| ≥ 1. Note that by Lemma 4, for any s ∈ N, P_{G′} ⊃ P_G implies P_{G′[{s}∪des(s)]} ⊃ P_{G[{s}∪des(s)]}. Since s is the unique root of G[{s} ∪ des(s)], we know from Proposition 2 that this implies that G′[{s} ∪ des(s)] contains a perfect subgraph Ḡ′_s such that Ḡ′_s ⊃ G^M[{s} ∪ des(s)]. In practice, the inverse G′ is often obtained by simply inverting the edges in G. In this case we have the following necessary and sufficient condition to satisfy our goal: for G′ = G*, P_{G′} ⊃ P_G holds if and only if pa(s) and ch(s) are complete for all s ∈ N. Proof. (⇐) If pa(s), ch(s) are complete for all s ∈ N and G′ = G*, this implies that G′∼ ⊃ G^M and G′ is perfect. The result now follows from Theorem 2. (⇒) We will show the contrapositive. Assume first that there exists an s ∈ N such that t_1, t_2 ∈ pa(s) are not joined. Now consider the distribution P ∈ P_G for which X_s = X_{t_1} + X_{t_2} mod 2 and all other nodes are independent Bernoulli(0.5). It is easy to see that P ∉ P_{G′}. Now assume that there exists an s ∈ N such that u_1, u_2 ∈ ch(s) are not joined. Now consider the distribution P ∈ P_G such that X_{u_1} and X_{u_2} are equal to X_s and all other nodes (including s itself) are Bernoulli(0.5). It is again easy to see that P ∉ P_{G′}. Conditions in terms of single edge operations In the proof of Proposition 2, we suggested an algorithm for inverting G that starts by flipping all the edges of G at once and then adds edges where necessary. In this section we look at obtaining an inverse of G by flipping the edges one by one, and potentially adding edges where necessary. The reversal (flipping) of an edge (s, t) is called covered when pa(t) = pa(s) ∪ {s}. Meek (1997) states the following conjecture: Conjecture 1 (Meek conjecture). Let G = (V, E) and G′ = (V, E′) be DAGs. P_{G′} ⊃ P_G if and only if there exists a sequence of DAGs L_1, ..., L_n such that L_1 = G′, L_n = G, and L_{i+1} is obtained from L_i by a covered edge reversal or an edge removal. Chickering (2002) later proved this conjecture. This result suggests the outline of an algorithm for the inversion of a Bayesian network G. This algorithm starts with G and chooses a suitable next edge of G to be inverted. Before the edge can be inverted, it first needs to be covered. This can be done by adding new edges, or changing the direction of edges that were added before. However, all of these operations have to conserve the acyclicity of the graph.
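The construction used in the proof of Proposition 2 (flip every edge of G, join co-parents of G, then keep completing parent sets of the flipped graph along a fixed ordering in which Leaves(G) come first) can be written down compactly. The sketch below is one reading of that outline using networkx; it omits the bookkeeping of the set R and makes no attempt at minimality, so it should be taken as an illustration of the sufficient condition in Theorem 2 rather than the authors' implementation.

```python
# Sketch: "flip all edges, then complete parent sets" inversion of a generative DAG.
import itertools
import networkx as nx

def invert_bayes_net(G: nx.DiGraph) -> nx.DiGraph:
    Gp = G.reverse(copy=True)                              # G*: all edges flipped
    leaves = {n for n in G if G.out_degree(n) == 0}        # Leaves(G) = Roots(G*)
    topo = list(nx.topological_sort(Gp))
    order = [n for n in topo if n in leaves] + [n for n in topo if n not in leaves]
    rank = {n: i for i, n in enumerate(order)}             # fixed ordering, leaves oldest

    def join(u, v):                                        # orient new edges along `order`
        if not (Gp.has_edge(u, v) or Gp.has_edge(v, u)):
            if rank[u] < rank[v]:
                Gp.add_edge(u, v)
            else:
                Gp.add_edge(v, u)

    for s in G:                                            # join co-parents of G (moralisation edges)
        for u, v in itertools.combinations(G.predecessors(s), 2):
            join(u, v)

    changed = True                                         # complete parent sets until perfect
    while changed:
        changed = False
        for s in list(Gp):
            for u, v in itertools.combinations(list(Gp.predecessors(s)), 2):
                if not (Gp.has_edge(u, v) or Gp.has_edge(v, u)):
                    join(u, v)
                    changed = True
    return Gp
```

Since every added edge respects one fixed topological order of G*, the result stays acyclic; by construction it is a perfect DAG whose skeleton contains the moral graph G^M, so by Theorem 2 it can represent every posterior of G, possibly with more edges than strictly necessary.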
Restricting the set of possible kernel functions The results derived in the above discuss the question what conditions G ′ must satisfy such that for every P ∈ P G , K G ′ contains a version of the conditional distribution P H|V .Here it is implied that we allow for all possible kernel functions k s in the definitions of P G and K G ′ .In practice, however, restrictions are often put on the space of possible kernel functions.A common choice (Kingma and Welling, 2013) is to allow for only Gaussian kernel functions, of the form with f some fixed possibly nonlinear function.We will now investigate which results remain valid for the restricted case.Given a subset R of kernel functions, we will denote the restricted spaces of probability distributions and Markov kernels factorising over G by P G R , K G R respectively.Before we dive into the results for general restrictions, we start by examining the case where R f is the set of Gaussian kernel functions defined above.Consider the pair of graphs G, G ′ in Figure 10.It is clear that this pair of graphs satisfies our original Goal I.However, when we restrict to the set Gaussian kernel functions, we are no longer able to model the posterior distribution exactly, as we will show now.Consider the distribution in P G R f given by X s ∼ N (f (X), 1).( 22) If the distribution P t|s would be in K G ′ R f , we would need that the joint density of X t , X s satisfies the following proportionality as a function of where only b may depend on x s .Working out the actual joint density gives We can conclude that we only have that P t|s ∈ K G ′ R f if f is a linear function. 5From this example, we can conclude that the conditions that were sufficient for the unrestricted case, are in general not sufficient in the restricted case.Now we look at the validity of our results for the general restrictions.We start with the equivalence of the two goals, Proposition 1. Recall that the proposition shows that finding a G ′ such that there exists a topological ordering of G ′ for which there is no vertex outside Leaves(G) that is older than the vertices in Leaves(G) and P G ′ ⊃ P G is both a necessary and sufficient condition to satisfy Goal I.It is easy to see that it is still a sufficient condition (reverse implication ( ⇐= ) in Proposition 1).However in order to get the forward implication ( =⇒ ), we used that when all the nodes in Roots(G) are connected, any density function can be written as p(x Roots(G) ) = s∈Roots(s) k s (x s |x pa(s) ).This is no longer the case when we restrict the space of possible kernel functions.We have that the condition is only necessary if for every P ∈ P G R , the marginal distribution P V factorises over a complete directed graph of the leaves of G.A slightly weaker necessary condition for Goal I still holds in general, namely that P For Theorem 1, note that conditions (2)-( 4) only relate to the graph structures of G and G ′ .Therefore these conditions will still be equivalent for the restricted case.The implication (2) =⇒ (1) does not hold in general, which was exemplified by the Gaussian kernel functions above.The implication (1) =⇒ (2), on the other hand, does still hold, under the extra assumption that the the restriction R is such that for any graph G, for all A, B, S ⊂ N such that A ⊥ d B | S, there is a P ∈ P G R for which A ⊥ ⊥ B | S. 
We will sketch how this assumption is satisfied for the Gaussian kernel functions described above. Let A, B, S ⊂ N such that A ⊥'_d B | S. This implies that there is a trail γ : A ∋ a → b ∈ B that is unblocked by S. If we let θ_(s,t) = 1 for all s, t ∈ γ and zero otherwise, it can be shown that for this distribution a and b are not conditionally independent given S, and therefore A and B are not conditionally independent given S. Theorem 2 gives only a sufficient condition, which, by the Gaussian kernel function example, is no longer sufficient in the restricted case. Theorem 3, on the other hand, gives only a necessary condition. The proof of this theorem only uses the necessity of the conditions in Theorem 1, which we showed above are still valid in the restricted case. We conclude that Theorem 3 therefore also still holds in the restricted case. To conclude this section we summarise the results for the restricted case. We saw that we only have a slightly weaker necessary condition for Goal I, namely that for every subset S ⊂ H we need that P . Necessary conditions for this latter condition are then provided by Theorems 1 and 3, which are still valid for the restricted case. Conclusion In this paper, we derived necessary and sufficient conditions for the recognition network to be able to model the exact posterior distribution of a generative Bayesian network. In case the generative network has a single node without parents, the necessary and sufficient conditions coincide. However, for multiple nodes without parents there is still a gap between the two conditions. Further study directions A further direction of study could be to find a single necessary and sufficient condition for the general case. Another interesting question is the following: "What is the smallest number of edges in an inversion G′ of G?". Using the results on single edge operations, one could try to find an algorithm that finds an optimal inversion of G. It is generally believed that the recognition network needs many edges to make exact modelling of the posterior distribution possible (Welling, personal communication, 2022). Therefore, the number of edges in the recognition network will be reduced to make it computationally efficient. In practice, this approximation does not seem to affect the quality of the inference. This phenomenon remains an open problem that is relevant for machine learning.
Figure 4: Recognition model capturing the dependence between muscle pain and hayfever.
Figure 6: Pair of DAGs G, G′ that satisfy the first requirement of Goal II, but for which there exists a topological ordering of G′ (the one on the right) that does not reflect this.
Theorem 1, conditions (2)-(4), and proof of (1) ⇒ (2): (2) For all sets A, B, S ⊂ N such that A ⊥'_d B | S, we have A ⊥_d B | S. (3) For all s ∈ N, we have s ⊥_d nd′(s) | pa′(s). (4) For all s ∈ N, we have s ⊥_d pr′(s) | pa′(s). Proof. (1) ⇒ (2) (by contradiction): Suppose there exist A, B, S such that A ⊥'_d B | S, but S does not d-separate A and B in G. By Lemma 2 this implies there exists a P ∈ P_G for which A and B are not conditionally independent given S. This violates (2) of Lemma 3, and therefore P ∉ P_{G′}.
Figure 7: Pair of DAGs G, G′ that satisfy Goal II, but G′ does not satisfy the condition in Theorem 2.
Figure 8: Example course of the algorithm. (G) is the original graph. (Ḡ′_0) is the version with the edges of G flipped and the parents connected (red arrow). (Ḡ′_3) is the status halfway through the fourth while loop. The red edges have been added by the algorithm between i = 0 and i = 3.
Figure 10: Pair of graphs G, G′.
With this extra assumption we will now show (1) =⇒ (2).Suppose A, B, S ⊂ N such that A ⊥ ′ d B | S.This implies that for all P ∈ P G ′ R , we have A ⊥ ⊥ B | S. Now suppose by contradiction that A ⊥ d B | S. By the assumption, there must be a P ∈ P G R for which A ⊥ ⊥ B | S, which would contradict (1).Therefore A ⊥ d B | S which shows (1) =⇒ (2).
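Returning to the Gaussian-kernel example above (a two-node generative graph in which the observed node is Gaussian around f of its latent parent), the claim that the posterior stops being Gaussian for nonlinear f can also be probed by simulation. The standard-normal prior on the latent node and the choice f(x) = x**2 below are illustrative assumptions made for this sketch.

```python
# Sketch: empirical check that the posterior p(t | s) is non-Gaussian when f is nonlinear.
# Assumed model: X_t ~ N(0, 1), X_s | X_t ~ N(f(X_t), 1).
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x_t = rng.normal(0.0, 1.0, n)

for name, f in [("linear f(x)=x", lambda x: x), ("quadratic f(x)=x**2", lambda x: x**2)]:
    x_s = f(x_t) + rng.normal(0.0, 1.0, n)
    sel = np.abs(x_s - 2.0) < 0.05            # condition on X_s close to 2 (narrow slice)
    cond = x_t[sel]
    # crude bimodality indicator: how much mass sits close to the conditional mean
    near_mean = np.mean(np.abs(cond - cond.mean()) < 0.25 * cond.std())
    print(f"{name}: n={cond.size}, mean={cond.mean():+.2f}, "
          f"std={cond.std():.2f}, mass near mean={near_mean:.2f}")

# For the linear case the slice looks like a single Gaussian bump; for the quadratic case
# it concentrates around +sqrt(2) and -sqrt(2), which a Gaussian q(t | s) cannot capture.
```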
Demonstration of Optical Phase-conjugation in Methyl Green Dye-doped Thin Film Optical Phase-Conjugation (OPC) has been observed in Methyl Green (MG) dye sensitized gelatin films in a Degenerate Four-Wave Mixing (DFWM) configuration at 633 nm radiation from a He-Ne laser of total power 35 mW. The mechanism of Phase-Conjugate (PC) wave generation associated with this dye-doped system is discussed. The dependence of the PC wave generation on the incident angle between the forward-pump and the probe beams and time of evolution were also studied. A maximum phase-conjugate beam reflectivity of about 0.13% has been observed in these dye-doped gelatin films. INTRODUCTION Over the last three decades, Optical Phase-Conjugation (OPC) has been one of the major research subjects in the field of Nonlinear Optics (NLO) and quantum electronics. Optical phase-conjugation usually defines a special relationship between two coherent optical beams propagating in opposite directions with reversed wavefronts and identical transverse amplitude distributions. The unique feature of a pair of Phase-Conjugate (PC) beams is that the aberration imposed on the forward beam (signal) passed through an inhomogeneous or disturbing medium can be automatically removed from the backward beam (phase-conjugated beam) passed through the same disturbing medium. Several technical approaches exist to efficiently produce the backward phase-conjugate beam. The first is based on the Degenerate Four-Wave Mixing (DFWM) process, the second on various backward stimulated scattering processes such as Brillouin, Raman, Rayleigh-wing or Kerr, and the third on one-photon or multi-photon pumped backward stimulated emission (lasing) processes. Among these three techniques, backward DFWM plays an important role in generating the phase-conjugate beam, through the formation of an induced holographic grating and the subsequent wavefront restoration via the backward-pump (read) beam. OPC has a wide range of potential applications in the field of science and technology such as real-time holography, adaptive optics, and spectral filtering [1]. Organic dye-doped solid matrices have been very attractive materials in recent years for the generation of phase-conjugate waves [2][3][4][5][6][7][8][9][10][11][12][13][14]. Dyes having a strong absorption at the laser wavelength and a long lifetime of their triplet state can generate the phase-conjugate wave at a low laser power. Organic dyes embedded in polymers have also been used as holographic recording media and permanent optical memories [15]. In the present work, the DFWM technique was used to generate the phase-conjugate waves of the continuous-wave He-Ne laser radiation of 35 mW at 633 nm. An organic dye (Methyl Green, C.I. 42590)-doped gelatin film was used as the Nonlinear Medium (NM) in the DFWM geometry. MATERIALS AND METHODS The organic dye used to sensitize the gelatin film is Methyl Green (MG), which belongs to the triphenylmethane [16] group. All the dyes of this series are derived from the hydrocarbon triphenylmethane and the tertiary alcohol triphenylcarbinol, which are both colorless. The chromophore of this class is the quinonoid group. The chemical structure and molecular formula of this dye are shown in Fig. 1. The UV-visible absorption spectrum of MG dye was studied using a UV-2401 PC Spectrophotometer and it exhibits the peak absorbance at 631.5 nm as in Fig. 2.
Gelatin films were prepared by removing the silver halide from 10E75 Agfa-Gevaert holographic plates by immersing them in sodium thiosulphate solution. The thicknesses of the films obtained were of the order of 10 microns. These plates were soaked in aqueous solutions of MG with appropriate dye concentrations for 2 minutes and dried at room temperature. The optical quality of the dye-doped gelatin films obtained was very good, and they were used for this study without any further processing. The Optical Density (OD) of the dye-doped gelatin film chosen for this work was approximately 1. Figure 3 shows the experimental configuration used to realize the optical phase-conjugation. In this work we used the standard DFWM geometry to generate the phase-conjugate wavefronts. A He-Ne laser (Coherent, 31-2140-000, 35 mW) beam at 633 nm was divided into three beams: two counter-propagating pump beams E1 and E2, namely the forward-pump and backward-pump beams respectively, and a probe beam E3. The optical path lengths of all three beams were made equal, so that they were coherent at the sample. The spot size of each of these three unfocused beams at the Nonlinear Medium (NM) was 1.25 mm in diameter. The constant intensity ratio of the probe beam (E3), forward-pump beam (E1) and backward-pump beam (E2) used in this work was ~1:10:10. The angle of incidence (θ) between the probe beam and the forward-pump beam was varied between 5° and 10°. The sample was exposed simultaneously to all three beams. The phase-conjugated wave was separated from the probe beam using the beam-splitter BS3 and was detected with a photodetector (FieldMaster GS, Coherent Inc.). The experimental setup was mounted on a vibration isolation table (Melles Griot, Metric version) to prevent destruction of the laser-induced gratings formed in the MG dye-doped gelatin film by mechanical disturbances. RESULTS AND DISCUSSION In the MG dye-sensitized gelatin film, the PC waves originate from two different processes: one is due to saturation of absorption, while the other is due to photobleaching of the dye molecules at the excitation wavelength. Absorption of a photon by MG results in a transition of the MG molecule to the first excited singlet state (S0 → S1). If the singlet-to-triplet crossover is considerable, the dye molecules will switch over to the triplet state (S1 → T1), where they will remain for a relatively long time as the triplet-to-singlet transition is inhibited; consequently, these molecules will not be available for further absorption from the ground state. This results in a saturation of absorption if the triplet lifetime is long enough. Thus, in the medium, the absorption becomes a function of intensity. Therefore, when the two write beams interfere, the intensity pattern modulates the complex refractive index, which results in the formation of a grating. The fringe period (Λ) can be determined from the well-known formula Λ = λ / [2 sin(θ/2)], where λ is the laser wavelength and θ is the angle between the forward-pump and probe beams incident on the nonlinear medium. The second process to be considered is the photobleaching of the dye molecules at the excitation wavelength. The existence of photobleaching can be inferred from a simple experiment described as follows.
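As a quick sanity check on the fringe-period formula Λ = λ / [2 sin(θ/2)], the following sketch evaluates it at the He-Ne wavelength and at the 7° angle reported later as giving the highest reflectivity; the numerical result is only an estimate implied by that formula, not a value measured in the paper.

```python
# Worked example of the fringe-period formula Lambda = lambda / (2 sin(theta/2)).
import math

wavelength = 633e-9                 # He-Ne laser wavelength (m)
theta = math.radians(7.0)           # angle between forward-pump and probe beams
grating_period = wavelength / (2 * math.sin(theta / 2))
print(f"grating period ≈ {grating_period * 1e6:.2f} um")   # ≈ 5.2 um
```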
The MG dye-doped gelatin film was illuminated with 633 nm radiation at two different incident intensities and the corresponding transmittance of the sample was measured as a function of time. We observed that the transmittance of the sample increased from 33% to 40% and from 33% to 45% over a duration of about 30 minutes (Fig. 4), corresponding to incident light intensities of 1.5 W/cm² and 3 W/cm² respectively. The observed increase in transmittance confirms the existence of a light-induced bleaching process in this dye-doped system. At lower intensities the bleaching may be reversible, but at higher intensities it may result in complete decomposition of the dye molecules and hence become irreversible. The phase-conjugate signal measurements were taken using the experimental setup shown in Fig. 3. To obtain maximum reflectivities, the probe and pump beams must overlap exactly in the nonlinear medium. Figure 5 shows the PC reflectivities observed in the MG dye film as a function of the recording angle (θ) between the forward-pump and the probe beams in the DFWM geometry. It is known that the phase-conjugated signal disappears when the angle between the beams is large enough [17]; this is because of reduced overlap between the probe and pump beams in the dye film and the smaller grating periods [18]. The highest phase-conjugate reflectivity (0.13%) was observed at an angle of 7°. Figure 6 shows the measured PC signal in the MG dye-doped gelatin film as a function of recording time. The initial rise to a peak within a few minutes is due to degenerate four-wave mixing and holographic processes; the sudden drop in the intensity of the PC signal after shutting off both write beams E1 and E3 indicates the dominant contribution of the DFWM process. The presence of a PC signal even after E1 and E3 have been shut off is due to the holographic process, which decays rather slowly. A detailed explanation of these effects has been reported by Fujiwara and Nakagawa [4]. CONCLUSION We have observed low-intensity Optical Phase-Conjugation (OPC) in Methyl Green (MG) dye-doped gelatin films using Degenerate Four-Wave Mixing (DFWM) at 633 nm radiation from a He-Ne laser. The maximum phase-conjugate beam reflectivity obtained from these dye films was about 0.13%. The mechanism of PC wave generation associated with this dye-doped system was discussed. The dependence of PC wave generation on the angle between the forward-pump and the probe beams and on the time of evolution was studied. Since the MG dye-doped gelatin film operates at 633 nm, it may be suitable for use with low-power semiconductor lasers in the red wavelength region, making it a promising material for real-time double-exposure phase-conjugate interferometry applications. ACKNOWLEDGEMENT We thank the All India Council for Technical Education (AICTE), Government of India, New Delhi for their generous support of this work.
2,178.4
2005-08-31T00:00:00.000
[ "Materials Science", "Physics" ]
Ferroptosis: Regulated Cell Death Abstract Ferroptosis is a recently identified form of regulated cell death that differs from other known forms of cell death morphologically, biochemically, and genetically. The main properties of ferroptosis are free redox-active iron and consequent iron-dependent peroxidation of polyunsaturated fatty acids in cell membrane phospholipids, which results in the accumulation of lipid-based reactive oxygen species due to loss of glutathione peroxidase 4 activity. Ferroptosis has increasingly been associated with neurodegenerative diseases, carcinogenesis, stroke, intracerebral haemorrhage, traumatic brain injury, and ischemia-reperfusion injury. It has also shown significant therapeutic potential in the treatment of cancer and other diseases. This review summarises current knowledge about ferroptosis and the mechanisms that regulate it. RCD is often synonymous with caspase-dependent apoptosis (apoptotic cell death), but there are many non-apoptotic forms of RCD such as necroptosis, pyroptosis, ferroptosis, parthanatos, autophagy-dependent cell death, alkaliptosis, and oxytosis (6). These differ from one another in biochemical, functional, and morphological terms (7). Table 1 lists major RCD forms, but there are others, such as cellular senescence (irreversible inhibition of the cell cycle) (1,12), alkaliptosis (cell death triggered by intracellular alkalinisation) (6), lysosome-dependent cell death (mediated by hydrolytic enzymes released into the cytosol after lysosomal membrane permeabilisation) (6,13), entotic cell death (a form of cell cannibalism in which one cell engulfs and kills another, occurring mostly in epithelial tumour cells) (14), and netotic cell death [mediated by the release of neutrophil extracellular traps (NETs), extracellular net-like DNA-protein structures released by cells in response to infection or injury] (15). Cell death can generally elicit an immune response to dead-cell antigens, commonly known as "immunogenic cell death" (16). An important role is attributed to the release of DAMPs from dead or dying cells, which include high-mobility group box 1 (HMGB1) proteins, histones, mitochondrial transcription factor A (TFAM), and non-proteinaceous entities such as DNA, RNA, and extracellular ATP (17). Cell death programmes associated with an immune component are apoptosis, necroptosis, ferroptosis, pyroptosis, and parthanatos (9). Necrosis, as a form of ACD, is characterised by rupture of the plasma membrane, cell swelling (oncosis), a decrease in energy and release of DAMPs, which leads to cell lysis and consequent propagation of inflammation (6,9). A variant of necrosis has also been described that involves mitochondrial permeability transition pore opening, characterised by plasma membrane rupture, cell swelling and lysis, energy decline, DAMP release, and mitochondrial swelling (9). The nature, regulation, and physiological and pathological relevance of the various cell death programmes continue to be at the centre of research interest. In recent years, scientific interest has particularly been focused on the features and molecular mechanisms of ferroptosis, which seems to be involved in both health and disease states (3). The aim of this review is to present the latest insights into this form of cell death, including its main mechanisms of action and possibilities for its manipulation.
The data selected for this review were collected by searching the PubMed database for articles published in English between 2000 and 2020 using the following terms: cell death, regulated cell death, ferroptosis, lipid peroxides, iron, and glutathione peroxidase. In addition, we ran a search for possible mechanisms and physiological and pathophysiological significance using the following specific terms: programmed cell death, labile iron pool, apoptosis, and erastin. THE CONCEPT As a new form of cell death, ferroptosis (from the Latin ferrum for iron and the Greek ptosis for decline/failure) was first described in 2012 (18), although some characteristic changes had been known earlier (19). It is an iron-dependent form of RCD, or non-apoptotic, caspase-independent cell death with necrotic morphology, caused by lipid reactive oxygen species (L-ROS, or lipid peroxides) accumulating in cell membranes through iron-mediated lipid peroxidation. It is an adaptive form of RCD, which means that it depends on metabolic conditions in the cell (20). It is also considered a pro-inflammatory, immunogenic form of RCD, since DAMPs are released during the process (21). In its genetic, morphological, and biochemical characteristics, this process differs significantly from other forms of RCD. For example, ferroptotic cells have smaller mitochondria, higher mitochondrial membrane density, reduced or absent mitochondrial cristae, and mitochondrial membrane rupture (Table 1). Some studies suggest that ferroptosis nevertheless shares several biochemical features with oxytosis (22), necroptosis (23), and autophagy or ferritinophagy (24,25). This indicates some interdependence between these cell death programmes, but further research is needed in this regard. The first mechanism of ferroptosis, discovered through in vitro studies (18,20), was that of erastin, a small molecule that inhibits the cystine/glutamate antiporter system Xc−. This depletes the cysteine required for the synthesis of the antioxidant glutathione (GSH), a cofactor of the glutathione peroxidase 4 enzyme (GPX4; also known as phospholipid hydroperoxide glutathione peroxidase, PHGPx), which protects cells from L-ROS accumulation by reducing polyunsaturated-fatty-acid-containing phospholipid hydroperoxides (PL-PUFA(PE)-OOHs, or lipid hydroperoxides, L-OOHs) to the corresponding lipid alcohols and by limiting further formation of highly reactive alkoxyl radicals (L-O*). The result of reduced GPX4 activity is the accumulation of toxic levels of PL-PUFA(PE)-OOH within the cell (18,20). Figure 1 summarises the main pathways of ferroptosis. According to the literature, lipid peroxidation, which is a key factor in ferroptosis, involves various cell organelles, plasma membranes, the endoplasmic reticulum, lysosomes, and mitochondrial membranes (26,27), but opinions differ (18,28). The susceptibility of individual cell organelles to lipid peroxidation is generally considered to depend on the "pool" of lipids in each organelle, iron storage, GSH level, and lipoxygenase (LOX) expression, which differ between cell types. Lipid peroxidation requires certain polyunsaturated fatty acids (PUFAs) in phospholipids (PL), whose production is related to iron metabolism and other factors. Table 1 lists some forms of regulated cell death and their properties (cell death type; morphological and biochemical features; references). Apoptosis: Apoptosis is a prevailing form of RCD that requires activation of caspases, leading to DNA fragmentation without loss of plasma membrane integrity.
It can be extrinsic or intrinsic. Morphological and biochemical features: membrane blebbing, cell shrinkage, retraction of pseudopods, reduction of cellular and nuclear volume, nuclear fragmentation, chromatin condensation, and apoptotic body formation (activation of caspases, e.g. CASP2, CASP8, CASP3; oligonucleosomal DNA fragmentation; cytochrome c release; altered Bcl-2 family protein expression and activation) (6, 8, 9). Ferroptosis: Ferroptosis is a form of cell death caused by iron-dependent lipid peroxidation and L-ROS accumulation. Morphological and biochemical features: normal spherical cells lacking rupture and blebbing of the plasma membrane, rounding up of the cell, small mitochondria with condensed mitochondrial membranes, reduced or vanished mitochondrial cristae, outer mitochondrial membrane rupture, and normal nuclear size (L-ROS accumulation, activation of MAPKs, inhibition of system Xc− with decreased cystine uptake, GSH depletion and increased NADPH oxidation, inhibition of GPX4, release of AA mediators, e.g. 11-HETE and 15-HETE) (8, 9, 10). Pyroptosis: Pyroptosis is a form of lytic cell death that occurs in inflammatory cells in response to pro-inflammatory stimuli. Morphological and biochemical features: inflammasome activation, membrane rupture, cell swelling/oedema and lysis, pore-induced intracellular traps, DNA fragmentation, and nuclear condensation (DAMP release, e.g. HMGB1 and ATP; dependence on caspases 1 and 7; pro-inflammatory cytokine release). MAJOR PATHWAYS REGULATING FERROPTOSIS Iron-dependent lipid peroxidation is considered to be the key to ferroptosis (31). However, this process can also be triggered by physiological conditions such as high extracellular glutamate, by small molecules that block cystine import into the cell by the antiporter system Xc−, by molecules that initiate degradation of or covalently inhibit GPX4, and by genetic deletion of GPX4 (18,32,33). The following is a concise description of the major pathways of ferroptosis: GPX4 inactivation, L-ROS accumulation, and the presence of redox-active iron. Inactivation of GPX4 GPX4 activity may be lowered by direct enzyme inhibition (loss of activity or stimulation of enzyme protein degradation) or by inhibition of the antiporter system Xc−. In the first case, inhibition is most often mediated by RAS-selective lethal 3 (RSL3), an alkylating molecule that irreversibly binds to the selenocysteine in the active site of GPX4 (30), but there are other inducers of ferroptosis that can deplete or degrade the enzyme protein, such as ferroptosis inducer 56 (FIN56) and caspase-independent lethal 56 (CIL56), a molecule that causes non-apoptotic cell death through an acetyl-CoA-carboxylase-1-dependent process (20,34). Ferroptotic cell death can also be induced through genetic inhibition of GPX4 by siRNA (35). As for the other pathway, the inhibition of system Xc−, this membrane-based, sodium-independent and chloride-dependent cystine/glutamate antiporter imports extracellular cystine in exchange for intracellular glutamate (36) and, through the catalytic action of cystine reductase, transforms cystine into cysteine, which is required for the synthesis of GSH and subsequently for GPX4 activity. Under normal conditions GPX4 catalyses the reduction of PL-PUFA(PE)-OOHs into alcohols (37). When this system is inhibited by small molecules such as erastin and its analogues (piperazine erastin and imidazole ketone erastin) and/or sorafenib, GSH levels drop, GPX4 is inactivated, and L-ROS start to accumulate (18,30).
Accumulation of L-ROS Aerobic organisms are continuously exposed to various reactive oxygen species (ROS) such as superoxide radicals (•O2−), hydrogen peroxide (H2O2), and hydroxyl radicals (•OH), as well as lipid peroxides (L-ROS) such as L-OOH and the peroxyl (L-OO•) and alkoxyl (L-O•) radicals (38). While low, controlled L-ROS levels are compatible with normal cell and organism function, higher levels are associated with numerous chronic degenerative processes and acute organ injuries. These conditions are the result of an oxidant-antioxidant imbalance that leads to oxidative stress. The formation of cellular L-ROS involves an iron-catalysed spontaneous chain reaction generating toxic radicals (a non-enzymatic process) (20,39) and enzyme-mediated oxidation of PUFAs (40,41). The lipid compounds that are most sensitive to lipid peroxidation, and are thus involved in triggering ferroptosis, are PL-PUFAs, arachidonic acid (AA) in particular, and its elongation product adrenic acid (AdA) (29). The formation of PL-PUFAs is mediated by acyl-CoA synthetase long-chain family member 4 (ACSL4), which catalyses the formation of the acyl-CoA derivatives AA-CoA and AdA-CoA. Another enzyme, lysophosphatidylcholine acyltransferase 3 (LPCAT3), esterifies these derivatives into the phosphatidylethanolamine (PE) forms AA-PE and AdA-PE, which are incorporated into membrane phospholipids. The resulting PL-PUFA(PE) gives rise to L-ROS, which in turn execute ferroptosis (7). PL-PUFA(PE) oxidation takes place in stages. In the first stage, •OH radicals attack PUFA at the bis-allylic position to create carbon-centred radicals that can react with molecular oxygen and produce L-OO•. In the second stage, L-OO• propagates by abstracting hydrogen from another PL molecule, which leads to the formation of PL-PUFA(PE)-OOH. L-OO• may also add to the bis-allylic position of another PL molecule and produce a PL-OO-PL dimer. In the third stage, the reaction continues until two radicals come together to form a non-radical molecule or the chain reaction is interrupted by a lipophilic antioxidant. If weakly bound or free iron (Fe2+, Fe3+) is present (42,43), PL-PUFA(PE)-OOHs can undergo reductive cleavage producing a toxic lipid alkoxyl radical (L-O•). These lipid radicals can abstract protons from neighbouring PUFAs and start a new cycle of lipid oxidation and damage (44). In addition, secondary lipid peroxidation products such as malondialdehyde (MDA), 4-hydroxynonenal (4-HNE), and oxygenated PLs can be generated by intramolecular rearrangement and cleavage of PL-PUFA(PE)-OOH (33). Enzyme-mediated formation of PL-PUFA(PE)-OOH (35) involves lipid-peroxidising enzymes that contain mononuclear iron centres and can readily receive iron from poly-rC binding chaperone proteins (PCBPs). Their major substrates are AA and linoleic acid. These enzymes are classified according to their positional specificity for AA oxygenation into 5-, 8-, 12- and 15-LOX. They catalyse the introduction of molecular oxygen into PUFAs to produce metabolites such as 15-hydroxyeicosatetraenoic acid and 13-hydroxyoctadecadienoic acid (45,46). The contributions of enzymatically and non-enzymatically mediated lipid peroxidation to ferroptosis differ (41,47). If the resulting L-ROS are not successfully detoxified by GPX4, they accumulate in cell and organelle membranes, which results in membrane disruption and ferroptosis. Although the exact mechanism is not clear, it is assumed that "hydrophilic pores" formed in the membranes change membrane permeability and lay the grounds for an "osmotic catastrophe" (48).
Another assumption is that the resulting secondary, lipophilic electrophiles (e.g. MDA) may act as downstream signalling molecules and as yet undetected protein effectors (49,50). Presence of redox-active iron Current knowledge indicates that LOX enzymes and the non-enzymatic Fenton reaction contribute the most to lipid peroxidation and L-ROS accumulation in ferroptosis (51,52). The Fenton reaction involves oxidation of the ferrous to the ferric form of iron with electron transfer to H2O2 (Fe2+ + H2O2 → Fe3+ + •OH + OH−), producing the very reactive •OH radical (the ferric iron from this reaction can be reduced back to ferrous iron in the presence of superoxide (•O2−) in the Haber-Weiss reaction). Iron is an essential transition metal for all life forms on Earth. It is necessary for erythrocytopoiesis, for numerous enzymes involved in DNA replication, translation, and repair, for the antimicrobial oxidative burst, and for many other biological processes, most often in the form of Fe-S clusters (53). Thanks to its ability to reversibly lose or receive electrons and transit from one valence state to another, it catalyses various biochemical reactions. The presence of free intracellular iron can therefore strongly influence cellular redox status and contribute to oxidative stress in cells. At the cellular level, iron homeostasis is regulated by a post-transcriptional mechanism mediated by the iron regulatory protein-iron responsive element (IRP-IRE) system, while at the systemic level (absorption, utilisation, storage, and recycling) it is regulated by the hepcidin-ferroportin feedback loop and numerous other proteins such as divalent metal transporter 1 (DMT1), ferrireductase, haem oxygenase-1 (HMOX-1), ferroportin, transferrin (Tf), transferrin receptor 1 (TfR1), mitoferrin 1, and frataxin (54)(55)(56). Under normal conditions, iron is delivered to the cell by Tf and TfR1. There, ferric iron is reduced to its ferrous form, which is then introduced into the cytosol by DMT1. If there is excess free, non-transferrin-bound iron (NTBI), it is imported by the transmembrane transporter proteins ZIP8 and ZIP14 (57). Within cells, iron binds to multifunctional PCBPs, which prevent iron from becoming part of the redox-active intracellular "labile iron pool" (LIP) and thus from exerting its cytotoxic effects (57). In erythroid cells, most of the iron from the LIP is transported into mitochondria by mitoferrin 1 and 2 and is stored as mitochondrial ferritin or incorporated into haem and the Fe-S clusters required for all electron transport chain complexes (57). In non-erythroid cells, iron from the LIP or bound to PCBPs is stored in the form of ferritin. When the stored iron is mobilised, ferritin binds to the autophagic nuclear receptor coactivator 4 (NCOA4), which transports it to lysosomes, where it is degraded and the iron released in a process called ferritinophagy (25). Ferritinophagy plays a major role in the recycling of intracellular redox-active iron, which also emphasises the importance of lysosomes for ferroptosis (25). The resulting radicals and lipid peroxidation end products are highly reactive and cause massive oxidative damage, which is not entirely consistent with the regulated nature of ferroptosis. Iron-dependent oxidative metabolism has been an indispensable part of life for billions of years. In view of this well-known fact, it is interesting that a study from 2010 (57), published before ferroptosis was recognised as a distinct process, had already associated dysregulated iron metabolism with cell death. Degenerative changes in many diseases have been found to coincide with dysregulation of iron metabolism (57).
Further research will show how consistent these findings are with the demonstrated mechanisms of ferroptotic cell death in vivo and whether ferroptosis is the oldest and, in many contexts, the most important form of RCD. For now, it can be concluded that lipid peroxidation mediated by free redox-active iron plays a central role in ferroptosis (18). However, the precise role of iron in this process is yet to be determined. What we know is that iron catalyses the formation of L-ROS through the Fenton reaction and/or through iron-dependent LOX (58). A second hypothesis involves iron-independent redox activity, which needs further investigation (33,59). OTHER BIOCHEMICAL PROCESSES ASSOCIATED WITH FERROPTOSIS In addition to iron, lipid peroxidation, and GPX4, other biochemical pathways are also essential for ferroptosis. One of them is the pentose phosphate pathway, which produces nicotinamide adenine dinucleotide phosphate (NADPH). NADPH is necessary for the catalytic activity of glutathione reductase, the enzyme that catalyses the conversion of GSSG to GSH. Another pathway is transsulphuration, which allows de novo cysteine synthesis in the case of deficient extracellular cystine uptake by the cystine/glutamate antiporter system Xc− (60). Lipid metabolism and the mevalonate pathway are also associated with ferroptosis. In addition to ferroptosis inducers, ferroptosis inhibitors deserve particular attention. They can be classified as iron chelators (e.g. deferoxamine, ciclopirox, deferiprone), lipophilic antioxidants (vitamin E, butylated hydroxytoluene, XJB-5-131, liproxstatin-1, ferrostatin-1), LOX inhibitors (baicalein, zileuton), and deuterated polyunsaturated fatty acids, all of which prevent lipid peroxidation. The literature also mentions glutaminolysis inhibitors, the protein synthesis inhibitor cycloheximide, beta-mercaptoethanol, and the neurotransmitter dopamine (20). Some of these inducers and inhibitors of ferroptosis are also suitable for use in vivo (e.g. sorafenib and iron chelators). Possible physiological and pathological roles of ferroptosis Little is known about the role of ferroptosis in the development of tissues and organs. Studies have shown that L-ROS levels are increased in embryonic tissues undergoing cell death and that the process can be controlled with GPX4 and lipophilic antioxidants (19,61). This suggests that ferroptosis is important for maintaining tissue integrity and general homeostasis, although its precise developmental role is still unclear. In vitro and animal studies investigating the role of ferroptosis in tumourigenesis indicate that cancer cell lines derived from the brain, ovary, kidney, bone tissue, and soft tissue are susceptible to ferroptosis, whereas lines derived from the pancreas, breast, stomach, and upper respiratory system are not (30,67). These differences in sensitivity to ferroptosis have been attributed to differences in the basal metabolic state of the particular cell types, especially in lipid metabolism. Ferroptosis in cancer cells seems to be promoted by the tumour suppressor p53, NADPH oxidase (NOX) and HMOX-1, and inhibited by miR-137 and the transcription factors Nrf2 and p53 (10,68). Either way, these factors directly or indirectly target iron metabolism or lipid peroxidation and show a potential for genetic or pharmacological interventions utilising ferroptosis to eliminate malignant cells and treat different types of cancer. However, it has yet to be established in which cancer types ferroptotic therapy would be effective.
Malignant cells usually have high concentrations of iron and are consequently under persistent oxidative stress (69). This is why some cancers contain somatic mutations in the Nrf2/Keap1 pathway that enhance the transcription of antioxidant enzymes (19,70,71). As a consequence, ferroptosis is not frequent during the development of cancers. To stimulate ferroptosis as a therapeutic strategy against cancer, the most common therapeutic targets are system Xc−, GPX4, iron-related genes, GSH, coenzyme Q10 (CoQ10), and LOXs (67). The tumour suppressor p53 (more specifically, the acetylation-defective mutant p53-3KR) is also known to induce ferroptosis (19) by suppressing the system Xc− component SLC7A11. However, the roles of wild-type and mutant p53 in ferroptosis appear to be extremely complex; depending on the context, different cells and different test conditions can stimulate or suppress ferroptosis by different mechanisms (72)(73)(74)(75). These studies are ongoing. Therapeutic role of ferroptosis The mechanisms controlling ferroptosis have also been studied as therapeutic targets in various pathological conditions. For ferroptosis-based cancer therapy, iron-based nanomaterials have recently been designed and synthesised, such as ferumoxytol (otherwise approved by the United States Food and Drug Administration for iron deficiency therapy) and amorphous iron nanoparticles. Because of certain disadvantages of iron-based nanomaterials, other metals with multiple oxidation states (e.g. manganese dioxide-coated mesoporous silica nanoparticles) and GPX4-inhibiting nanomaterials (e.g. a metal-organic network coated on the surface of a polyethylenimine/p53 plasmid complex) are also being tested (74). The role of ferroptosis in the treatment of cancer, however, is not yet sufficiently clear. According to Krysko et al. (76), tumour cell death by ferroptosis is a "double-edged sword": ferroptotic cells can activate an antitumour immune response through immunogenic cell death and thus enhance the effects of anticancer therapy, but they can also suppress the antitumour immune response and contribute to tumour progression. At this stage, no definitive conclusions can be drawn as to whether ferroptosis is an immunogenic or an immunosuppressive form of cell death, which is why some authors call for further basic research on animal models (77). Need for ferroptosis biomarkers To confirm the findings of ferroptosis studies, those in vivo in particular, it is necessary to define reliable molecular biomarkers of this process. Studies conducted so far have mainly been based on the use of different inducers and on monitoring the effects of ferroptosis inhibitors and increased L-ROS values (19). A reliable and specific biomarker has not yet been defined. Candidates include increased expression of prostaglandin-endoperoxide synthase 2 (PTGS2) mRNA, glutathione-specific gamma-glutamylcyclotransferase 1, and HMOX1, which have been reported following erastin-induced ferroptosis (30,78). Other potential ferroptosis biomarkers could be higher cyclooxygenase 2 activity (30,31), higher MDA, and lower NADPH levels (79)(80)(81). Various methodological approaches have been used in vitro to determine cell viability, most commonly flow cytometry (30,82), followed by the measurement of intracellular iron using a specific dye and of GSH depletion (83).
CONCLUSION The main feature of ferroptosis is the accumulation of L-ROS in cell membranes and organelles through iron-dependent lipid peroxidation and inadequate activity of GPX4. There are indications that this form of cell death is relevant in a variety of physiological and pathophysiological contexts and that it could be used to target some cancers. However, current knowledge is still insufficient and does not answer what role ferroptosis has in the normal development of the organism, how ferroptosis signalling pathways are controlled, what role iron plays, what actually happens to cell membranes after L-ROS accumulation, whether LOX is essential for the process, whether ferroptosis involves other enzymes for which iron is a cofactor, and what role the secondary products of lipid peroxidation play. So far, the most useful information regarding ferroptosis originates from in vitro studies that elucidate additional molecular mechanisms and signalling pathways involved in ferroptosis. Further studies are needed to clarify the remaining uncertainties and help transfer new knowledge to clinical settings. Furthermore, to understand ferroptosis under in vivo conditions we need more specific and reliable biomarkers for normal and pathological conditions. This would help us to predict the susceptibility or resistance of certain diseases to ferroptosis and learn how to modulate this process to establish effective therapeutic strategies.
5,112.8
2020-06-01T00:00:00.000
[ "Medicine", "Biology" ]
Quasi-Regression Monte-Carlo Method for Semi-Linear PDEs and BSDEs †: In this work we design a novel and efficient quasi-regression Monte Carlo algorithm to approximate the solution of discrete-time backward stochastic differential equations (BSDEs), and we analyze the convergence of the proposed method. To tackle problems in high dimensions, we propose suitable projections of the solution and efficient parallelizations of the algorithm that take advantage of powerful many-core processors such as graphics processing units (GPUs). Introduction In this work we are interested in numerically approximating the solution (X, Y, Z) of a decoupled forward-backward stochastic differential equation. The terminal time T > 0 is fixed. These equations are considered in a filtered probability space (Ω, F, P, (F_t)_{0≤t≤T}) supporting a q ≥ 1 dimensional Brownian motion W. In this representation, X is a d-dimensional adapted continuous process (called the forward component), Y is a scalar adapted continuous process and Z is a q-dimensional progressively measurable process. Regarding terminology, g(X_T) is called the terminal condition and f the driver. Results Our aim is to solve the discrete-time dynamic programming equation with drivers f_j(x, y) := f(t_j, x, y), f being the driver in (1). In other words, our scheme approximates the solution of the discrete BSDE and of the semi-linear PDE ∂_t u(t, x) + A u(t, x) + f(t, x, u(t, x)) = 0 for t < T, with u(T, ·) = g(·). One important observation is that, due to the Markov property of the Euler scheme, for every i there exist measurable deterministic functions y_i : R^d → R such that Y_i = y_i(X_i) almost surely. A second crucial observation is that the value functions y_i(·) are independent of how we initialize the forward component. Our algorithm takes advantage of this observation: for instance, we may let X_i^i be a random variable in R^d with some distribution ν and let X_j^i (j ≥ i) be the Euler-scheme evolution started from X_i^i; the value functions are unchanged under this re-initialization. Approximating the solution to (3) is actually achieved by approximating the functions y_i(·); in this way we directly approximate the solution to the semi-linear PDE (5). In order to better control the truncation error, we define a weighted modification y_i^(q) of y_i (in the unweighted case y_i^(q) and y_i coincide), and the dynamic programming equation (7) is rewritten accordingly. The introduction of the polynomial factor (1 + |X_i^i|^2)^{q/2} gives higher flexibility in the error analysis: it ensures that y_i^(q) decreases faster at infinity, which provides nicer estimates on the approximation error when dealing with a Fourier basis. Then we define basis functions φ_k which satisfy orthogonality properties in R^d and which span an L^2 space. It turns out that the measure defining this L^2 space has to coincide with the sampling measure ν of X_i^i. Our strategy for defining such basis functions is to start from the trigonometric basis on [0, 1]^d and apply appropriate changes of variable; later, this transform allows us to easily quantify the approximation error when truncating the basis. Under mild conditions on f, g and ν, the path response S_i(X_{i:N}^i) is square-integrable, and therefore y_i^(q) admits an expansion y_i^(q)(x) = Σ_k α_{i,k}^(q) φ_k(x) for some coefficients (α_{i,k}^(q) : k ∈ N^d). Using the orthonormality of the basis functions φ_k, the coefficients α_{i,k}^(q) can be written as expectations of the response against the basis functions and are estimated, for every k in a finite index set Γ, by empirical Monte Carlo averages. Discussion An implementation of the GQRMDP algorithm on GPUs is proposed.
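The projection step sketched above (estimating the coefficients α_{i,k}^(q) by exploiting the orthonormality of the φ_k under the sampling measure ν) can be illustrated with a minimal NumPy example. Everything below is a simplified stand-in: ν is taken uniform on [0,1]^d with a tensorised cosine basis, and response() is a toy function replacing the actual path response S_i; the paper instead works on R^d through a change of variables and uses simulated BSDE responses.

```python
# Minimal sketch of a quasi-regression projection onto an orthonormal basis.
import itertools
import numpy as np

d, M, degree = 2, 20_000, 3                      # dimension, samples, basis degree per axis
rng = np.random.default_rng(0)
Gamma = list(itertools.product(range(degree + 1), repeat=d))   # truncated index set

def basis_fn(k, x):
    """Tensorised cosine basis, orthonormal in L^2([0,1]^d, dx)."""
    out = np.ones(x.shape[0])
    for j, kj in enumerate(k):
        out *= 1.0 if kj == 0 else np.sqrt(2.0) * np.cos(np.pi * kj * x[:, j])
    return out

def response(x):
    """Toy stand-in for the path response S_i; any square-integrable function works."""
    return np.exp(-np.sum(x**2, axis=1))

X = rng.random((M, d))                           # samples of X_i^i drawn from nu (uniform here)
S = response(X)                                  # simulated responses
alpha = {k: np.mean(S * basis_fn(k, X)) for k in Gamma}   # alpha_k ≈ E[S * phi_k(X)]

def y_hat(x):
    """Truncated expansion sum_k alpha_k phi_k(x) approximating the value function."""
    return sum(a * basis_fn(k, x) for k, a in alpha.items())

print(y_hat(np.array([[0.3, 0.7]])))
```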
It includes two kernels: one simulates the paths of the forward process and computes the associated responses; the other computes the regression coefficients (α_{i,k}^(q), k ∈ Γ). In the first kernel, the initial value of each simulated path of the forward process is stored in a device vector in global memory and is read later in the second kernel. In order to minimize the number of memory transactions, and therefore maximize performance, all accesses to global memory have been implemented in a coalesced way. The random numbers needed for the path generation of the forward process were generated on the fly (inline generation), taking advantage of the NVIDIA cuRAND library [1] and the MRG32k3a generator proposed by L'Ecuyer in [2]. Therefore, inside this kernel the random number generator is called as needed. Another approach would be the pre-generation of the random numbers in a separate, previous kernel, storing them in GPU global memory and reading them back from device memory in the next kernel. Both alternatives have advantages and drawbacks. In this work we have chosen inline generation, having in mind that this option is faster and saves global memory. Besides, no register spilling was observed in the implementation, and the quality of the obtained solutions is similar to the accuracy of purely sequential, traditional CPU solutions obtained with more complex random number generators. In the second kernel, in order to compute the regression coefficients, a parallelization not only over the multi-indices k ∈ Γ but also over the simulations 1 ≤ m ≤ M was proposed. Thus, blocks of threads parallelize the outer loop over all k ∈ Γ, whilst the threads inside each block carry out in parallel the inner loop traversing the vectors of responses and simulations. Conflicts of Interest: The authors declare no conflict of interest.
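A rough Python analogue of the first kernel can be written with Numba's CUDA backend. This is only a sketch under stated assumptions: the paper's implementation is in CUDA C with cuRAND's MRG32k3a, whereas here Numba's xoroshiro128+ generator is substituted, the forward model is a toy 1-D Brownian motion, and the terminal payoff is a placeholder; running it requires a CUDA-capable GPU.

```python
# Sketch of a path-simulation kernel: one thread per simulated Euler path.
import math
import numpy as np
from numba import cuda
from numba.cuda.random import create_xoroshiro128p_states, xoroshiro128p_normal_float32

N_STEPS, T, X0 = 50, 1.0, 0.0           # Euler steps, horizon, initial value (toy 1-D model)

@cuda.jit
def simulate_paths(rng_states, responses):
    m = cuda.grid(1)                    # absolute thread index = path index (coalesced writes)
    if m >= responses.size:
        return
    dt = T / N_STEPS
    x = X0
    for _ in range(N_STEPS):            # X_{j+1} = X_j + sqrt(dt) * G (zero drift, unit volatility)
        g = xoroshiro128p_normal_float32(rng_states, m)
        x += math.sqrt(dt) * g
    responses[m] = math.exp(-x * x)     # placeholder terminal response g(X_N)

M = 1 << 16
responses = cuda.device_array(M, dtype=np.float32)
rng_states = create_xoroshiro128p_states(M, seed=42)
threads = 256
blocks = (M + threads - 1) // threads
simulate_paths[blocks, threads](rng_states, responses)
print(responses.copy_to_host()[:5])     # these responses feed the regression-coefficient kernel
```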
1,304.8
2019-08-06T00:00:00.000
[ "Mathematics", "Engineering", "Computer Science", "Physics" ]
Design of a Femtosecond Laser Percussion Drilling Process for Ni-Based Superalloys Based on Machine Learning and the Genetic Algorithm Femtosecond laser drilling is extensively used to create film-cooling holes in aero-engine turbine blade processing. Investigating and exploring the impact of laser processing parameters on achieving high-quality holes is crucial. The traditional trial-and-error approach, which relies on experiments, is time-consuming and has limited optimization capabilities for drilling holes. To address this issue, this paper proposes a process design method using machine learning and a genetic algorithm. A dataset of femtosecond laser percussion drilling was first established to train the models. An optimal method for building a prediction model was determined by comparing and analyzing different machine learning algorithms. Subsequently, the Gaussian support vector regression model and the genetic algorithm were combined to optimize the taper and material removal rate within and outside the original data ranges. Ultimately, comprehensive optimization of drilling quality and efficiency was achieved relative to the original data. The proposed framework offers a highly efficient and cost-effective solution for optimizing the femtosecond laser percussion drilling process. Introduction With the constant improvement in engine efficiency and the increase in the thrust-to-weight ratio in the aerospace manufacturing field, turbine blade inlet temperatures are rising, resulting in higher service temperature demands [1,2]. However, even the most advanced nickel-based single-crystal superalloys cannot withstand these temperature requirements, making cooling solutions essential [3,4]. In this regard, film-cooling hole technology is widely used and can be considered an effective approach for ensuring higher turbine blade service temperatures [5,6]. Conventional methods for film hole processing include the use of long-pulse lasers, electric sparks and electrolyte beams [7,8]; however, femtosecond laser drilling has become the mainstream method for processing high-quality film-cooling holes [9]. Compared to traditional drilling methods, femtosecond laser drilling offers several advantages, such as fewer thermal defects and higher drilling precision and efficiency, due to its extremely short pulse width (hundreds of femtoseconds) [10] and more concentrated laser energy [11]. However, the processing quality of film-cooling holes also significantly affects the service life of turbine blades [12]. Achieving optimal processing parameters and ensuring the quality of micro-holes in femtosecond laser percussion drilling is challenging because of the complex coupling of the parameters involved [13]. Traditionally, the search for an optimized process is based on a trial-and-error approach, in which the process parameters and micro-hole quality are explored experimentally. For example, A. Corcoran et al.
[14] qualitatively investigated the effects of laser pulse energy and pulse width on the quality of femtosecond-laser-drilled holes through orthogonal experiments. They found that higher pulse energy and smaller pulse width reduced the generation of micro-cracks, thus improving hole quality. Nevertheless, the trial-and-error method involves numerous complex variables related to both thermal and non-thermal parameters, making the exploration time-consuming and costly [15,16]. In addition, globally exploring the entire parameter space is challenging, resulting in limited optimization. Consequently, there is an urgent need for an efficient and cost-effective process optimization method to quickly determine the laser processing parameters. In recent years, machine learning, with its powerful data analysis capabilities, has been used to build prediction models linking process parameters and performance objectives. Combined with global optimization algorithms, machine learning can find optimal plans on the basis of an objective function, thus achieving high-efficiency and low-cost optimization. This approach has been widely applied in the fields of laser cutting and milling. For example, Chaki et al. [17] established a regression prediction model for the laser cutting of aluminum alloy using an artificial neural network and performed multi-objective optimization using a particle swarm optimization algorithm. Their results showed that the kerf width and surface roughness were decreased by 36.75% and 14.94%, respectively, and the material removal rate was increased by 24%, a good performance enhancement. Addona et al. [18] used a CO2 laser and an artificial neural network algorithm to model the relationship between six-dimensional inputs and two-dimensional outputs in the milling of permanent magnets; their model provided accurate predictions, with an average accuracy greater than 87%. Hence, it is evident that process design incorporating machine learning and optimization algorithms has been successfully applied in the field of laser material processing. In this study, we propose a comprehensive process optimization framework for femtosecond laser percussion drilling using machine learning and a genetic algorithm (GA). Based on machine learning, the Gaussian support vector regression (G-SVR) model enables accurate prediction of the relationship between process parameters and micro-hole quality. The GA is then applied to collaboratively and quantitatively design multiple parameters within and outside the original data ranges, achieving comprehensive optimization of the taper and material removal rate (MRR) of the micro-holes. This method provides a reliable and effective plan for the optimization of femtosecond laser percussion drilling. Materials and Experimental Facilities In this research, the DD6 nickel-based single-crystal superalloy was selected for its good performance and low cost, making it widely used for turbine blades. For the experimental setup, a commercial laser from Light Conversion (Lithuania), with a wavelength of 1030 nm, was used for femtosecond laser percussion drilling. The laser spot diameter was set to approximately 55 µm. The intensity of the laser beam had a Gaussian spatial distribution, and a beam splitter, along with a half-wave plate, was used to control the laser energy and monitor the laser power in real time using a pyroelectric detector. The experimental drilling equipment is shown in Figure 1.
A 0.6 mm thick plate made of the DD6 nickel-based superalloy was used in the study. Before drilling, all sample surfaces were subjected to ultrasonic cleaning in ethyl alcohol to remove impurities. Percussion drilling was performed by focusing the laser beam on the surface of the samples. The entrance and exit diameters of the micro-holes produced by the laser drilling were observed and measured accurately using a scanning electron microscope. According to Equation (1), the taper of the micro-holes can be obtained as Taper = arctan[(d_entrance − d_exit)/(2h)], where d_entrance and d_exit represent the entrance and exit diameters, respectively, and h is the depth of the micro-holes [19,20]. Considering the difficulty of measuring the quality of individual micro-holes due to their small size, we calculated the total quality changes and their average across 60 micro-holes [21,22]. Subsequently, we recorded the number of pulses and calculated the volume changes using the material density to determine the MRR of each individual micro-hole. According to Equation (2), the MRR of the micro-holes can be obtained as MRR = Δm/(ρ·n), where Δm represents the change in mass, ρ represents the density of the DD6 alloy and n represents the number of laser pulses. Simulation of Machine Learning In this paper, using the scikit-learn platform in Python®, five kinds of machine learning algorithms were selected to construct the regression prediction models. These algorithms included support vector regression (SVR), multilayer perceptron (MLP), random forest (RF), gradient boosting regression (GBR) and extreme gradient boosting (XGB). The SVR algorithms were divided into linear and Gaussian support vector regression depending on the kernel function used in this research [23]. All selected algorithms are suitable for the data problem addressed in this study.
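A small worked example of Equations (1) and (2) may help fix units. The diameters, removed mass, pulse count and DD6 density below are illustrative assumptions, not measurements from this study, and the taper is computed as the arctangent of the diameter difference over twice the depth, as reconstructed above.

```python
# Illustrative evaluation of Eq. (1) (taper) and Eq. (2) (MRR); all inputs are made up.
import math

d_entrance = 60e-6      # entrance diameter (m), illustrative
d_exit     = 40e-6      # exit diameter (m), illustrative
h          = 0.6e-3     # hole depth = plate thickness (m)

taper_deg = math.degrees(math.atan((d_entrance - d_exit) / (2 * h)))   # Eq. (1)

delta_m = 2.0e-9        # mass removed per hole (kg), illustrative
rho     = 8780.0        # assumed density of the DD6 alloy (kg/m^3)
n       = 5000          # number of laser pulses per hole, illustrative
mrr = delta_m / (rho * n)                                              # Eq. (2): removed volume per pulse (m^3/pulse)

print(f"taper = {taper_deg:.2f} deg, MRR = {mrr * 1e18:.1f} um^3/pulse")
```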
It was necessary to standardize the dataset to eliminate any dimensional differences among the features. Furthermore, we divided the dataset into a training set and a testing set using a ratio of 8:2 prior to regression model building. By randomly partitioning the dataset 50 times, the average prediction results and their standard deviations were obtained. The optimal ratio and number of divisions were determined through a series of pre-tests. To objectively evaluate the generalization ability of the various models and select the optimal ones, the squared correlation coefficient (R²) and mean absolute error (MAE) of the prediction results were calculated [24,25], where n is the number of samples and f(x_i) and y_i denote the predicted and experimental values of the ith sample, respectively. The goal was to obtain an algorithm model with an R² value close to 1 and a low MAE value, indicating a higher level of accuracy. The experimental procedure of this research is shown in Figure 2.
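A minimal scikit-learn sketch of this evaluation protocol (standardization, repeated 8:2 splits, R² and MAE averaged over 50 partitions) is given below. The arrays X_data and y_data are random placeholders for the 81-run dataset, Gaussian SVR stands in for the candidate models, and the standard r2_score and mean_absolute_error definitions are assumed for the R² and MAE referred to above.

```python
# Sketch of the repeated 8:2 evaluation protocol with placeholder data.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error

rng = np.random.default_rng(0)
X_data = rng.random((81, 4))                        # power, pulse width, frequency, defocus (toy values)
y_data = X_data @ np.array([1.0, -0.5, 0.1, 0.3]) + 0.05 * rng.standard_normal(81)  # toy taper target

scores = []
for seed in range(50):                              # 50 random 8:2 partitions, as described above
    X_tr, X_te, y_tr, y_te = train_test_split(X_data, y_data, test_size=0.2, random_state=seed)
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))   # G-SVR stand-in
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    scores.append((r2_score(y_te, pred), mean_absolute_error(y_te, pred)))

r2_mean, mae_mean = np.mean(scores, axis=0)
print(f"mean test R^2 = {r2_mean:.2f}, mean test MAE = {mae_mean:.3f}")
```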
Dataset Collection The percussion drilling experiments described above were conducted to generate a dataset consisting of 81 sets of data. These data included four process parameters: laser power, pulse width, frequency and defocusing amount. In addition, two performance parameters, namely taper and MRR, were measured. The selection of the process parameter ranges was based on a series of pre-tests and on equipment feasibility, with three levels set for each parameter. Specifically, the laser power levels were 5, 10 and 20 W, the pulse width levels were 300, 500 and 800 fs, the frequency ranged from 100 to 200 kHz in increments of 50 kHz, and the defocusing amount ranged from −150 to 150 µm in increments of 150 µm. Taper and the MRR were used to characterize hole quality and efficiency, respectively; in this regard, this study aimed to minimize taper while maximizing the MRR. To investigate the effects of the various process parameters on taper and the MRR, a full factorial design (3^4 = 81) was employed. Table 1 presents the critical statistics for the model features derived from the experimental data. Selection of Machine Learning Algorithms This study compared generalization ability and analyzed overfitting in order to select machine learning methods with good performance. Overfitting refers to making overly stringent assumptions, resulting in excellent predictions on the training set but poor results on the testing set. In contrast, underfitting occurs when the model fails to fit the data well, resulting in poor results on both the training and testing sets [26,27]. The four process features, including power, pulse width, frequency and defocusing amount, were used as model inputs, while taper and the MRR were treated as the prediction targets of the model. Fifty models were built using an 8:2 division of the data into training and testing sets, and 4-fold cross-validation was employed to ensure the validity and robustness of the results. The R² and MAE values of the models were compared in order to analyze the prediction ability of the different algorithms, as portrayed in Figure 3. Each bar graph represents the average of 50 divisions, with error bars indicating the general behavior of the models rather than outstanding individual results. From the R² and MAE figures, it can be observed that L-SVR has the poorest prediction accuracy for both taper and the MRR, indicating its inability to effectively address the problem identified in this study. MLP and RF exhibit severe overfitting in both the training and testing sets. In addition, MLP shows long error bars, suggesting high instability. In contrast, the G-SVR, GBR and XGB methods demonstrate excellent performance, indicating no significant overfitting or underfitting issues. For taper, the R² is 80% and 60% in the training and testing sets, while the MAE is 0.7° and 1.1°; for the MRR, the R² is 90% and 85% in the training and testing sets, with MAEs of 85 and 125 µm³·pulse⁻¹, respectively. It is worth mentioning that there is no significant difference in the predictions of the MRR among these three algorithms, although G-SVR is slightly less accurate and stable than GBR and XGB when predicting taper.
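For concreteness, the 3^4 full factorial design described at the start of this subsection can be enumerated directly; the level values are those quoted in the text, while the column order and variable names are assumptions.

```python
# Enumeration of the 3^4 = 81 full factorial design over the stated parameter levels.
from itertools import product

power     = [5, 10, 20]          # W
pulse     = [300, 500, 800]      # fs
frequency = [100, 150, 200]      # kHz
defocus   = [-150, 0, 150]       # um

design = list(product(power, pulse, frequency, defocus))
print(len(design))               # 81 parameter combinations
print(design[:3])                # first few runs of the design
```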
To further assess the extent of overfitting in each algorithmic model, an analysis was conducted on the 50 model prediction results mentioned earlier. This analysis was based on the absolute difference between the R² values of the training and testing sets. The results were classified into three levels: 10~20%, 20~30% and >30%. Figure 4 illustrates the number of overfitting models at each level, with larger values indicating more severe overfitting problems. In particular, MLP and RF produced over 35 overfitting models in the 20~30% and >30% ranges for taper, indicating significant overfitting issues. Although the number of overfitting models for the MRR was not as high in the 20~30% and >30% ranges, MLP and RF still produced over 30 overfitting models across all three ranges (>10%), compared to fewer than 20 for the other four algorithms. This suggests an extremely severe overfitting problem, which is consistent with the previous analysis. In contrast, the remaining four algorithms, namely L-SVR, G-SVR, GBR and XGB, produced relatively low numbers of overfitting models. This indicates their potential for accurate prediction and further optimization, except for L-SVR, which exhibited very low accuracy. Thus, G-SVR, GBR and XGB were selected for further optimization of the subsequent prediction model.
Feature Analysis and Building Prediction Models To further improve the accuracy of the models and reduce overfitting, this study used a dimensionality reduction method, as outlined in a previous study [28]. The dimensionality reduction process involved removing features with low correlation between inputs and outputs based on Pearson correlation coefficient and mean decrease accuracy (MDA) analyses. The Pearson correlation coefficient is defined as the covariance between two variables divided by the product of their standard deviations. It measures the degree of linear correlation between variables, ranging from −1 to 1, with absolute values closer to 1 indicating a stronger correlation [29]. MDA analysis involves permuting the values of an input parameter and observing the effect on the output characteristics; larger values indicate a greater influence on the performance parameters [30]. The results of the Pearson correlation coefficient and MDA analyses are illustrated in Figure 5. All of the features show measurable correlation and importance values, confirming the rationality of the selected parameters. Based on the shade of the color and the magnitude of the values, the relevance and importance of the input parameters for the output parameters can be determined. Among the four process parameters, frequency has the lowest value for both taper and the MRR; it is therefore apparent that the frequency feature has the least relevance and importance for model performance.
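A hedged sketch of this feature-analysis step is shown below, computing the Pearson correlation of each input with the target and a permutation-importance score as an MDA-style measure. The data, the random-forest surrogate and the feature names are placeholders; the paper's actual MDA computation may differ in detail.

```python
# Pearson correlation and permutation importance on placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X_data = rng.random((81, 4))
y_data = 2.0 * X_data[:, 0] - X_data[:, 1] + 0.1 * rng.standard_normal(81)
features = ["power", "pulse_width", "frequency", "defocus"]

# Pearson correlation of each input feature with the target
pearson = [np.corrcoef(X_data[:, j], y_data)[0, 1] for j in range(X_data.shape[1])]

# MDA-style importance: drop in score when a feature's values are permuted
model = RandomForestRegressor(random_state=0).fit(X_data, y_data)
mda = permutation_importance(model, X_data, y_data, n_repeats=20, random_state=0)

for j, name in enumerate(features):
    print(f"{name:12s} Pearson r = {pearson[j]:+.2f}   importance = {mda.importances_mean[j]:.3f}")
```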
The results of the Pearson correlation coefficient and MDA analyses are illustrated in Figure 5. All of the results show non-negligible values, confirming the rationality of the selected parameters. The shade of the color and the magnitude of the values indicate the relevance and importance of each input parameter for the output parameters. Among the four process parameters, frequency has the lowest value for both taper and the MRR; it is therefore apparent that the frequency feature has the least relevance and importance for model performance. According to the results of the feature analysis, only the three features of power, pulse width and amount of defocus were selected as inputs, excluding frequency. Three algorithms, G-SVR, GBR and XGB, were used to establish prediction models based on the findings described in Section 3.2. The dataset was randomly divided using an 8:2 ratio of training set to testing set, and the models were trained 50 times. Figure 6 illustrates the R² prediction results with MAE values for the three algorithms (G-SVR, GBR and XGB) for both taper and the MRR. For taper, all three algorithms showed mean values of R² and MAE concentrated around 70% and 0.87° in the training set and 62% and 1.05° in the testing set, respectively. For the MRR, the models exhibited higher accuracy and greater stability, achieving about 90% and 100 µm³·pulse⁻¹, and 85% and 120 µm³·pulse⁻¹, in the training and testing sets, respectively. Moreover, the G-SVR algorithm demonstrated the lowest MAE value for the MRR and the least overfitting for taper. Therefore, the G-SVR model was chosen as the optimal algorithm for the final prediction of taper and the MRR. This was then combined with the GA for the subsequent process optimization.
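The following sketch mirrors this model-building step: three regressors trained over 50 random 8:2 splits on the three retained features, reporting mean R² and MAE. The XGBoost import assumes the xgboost package is installed; the hyperparameters, data loading and column names are placeholders rather than the authors' pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor
from xgboost import XGBRegressor

def evaluate(model_factory, X, y, n_runs=50):
    """Mean train/test R2 and MAE over repeated random 8:2 splits."""
    scores = []
    for seed in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
        m = model_factory().fit(X_tr, y_tr)
        scores.append([r2_score(y_tr, m.predict(X_tr)), r2_score(y_te, m.predict(X_te)),
                       mean_absolute_error(y_tr, m.predict(X_tr)),
                       mean_absolute_error(y_te, m.predict(X_te))])
    return np.mean(scores, axis=0)   # [R2_train, R2_test, MAE_train, MAE_test]

models = {
    "G-SVR": lambda: SVR(kernel="rbf", C=10.0, gamma="scale"),   # illustrative hyperparameters only
    "GBR": lambda: GradientBoostingRegressor(random_state=0),
    "XGB": lambda: XGBRegressor(n_estimators=300, random_state=0),
}
# X: columns [power, pulse_width, defocus]; y: taper or MRR (loaded from the experiments, not shown).
# for name, factory in models.items():
#     print(name, evaluate(factory, X, y))
```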
Process Design within the Range of the Original Dataset

The specific GA procedure used in this study is portrayed in Figure 7, providing an approximate explanation of the basic principles involved, which consist of four processes [31]. First, a population of a certain size is randomly generated, including different individuals within the specified ranges. Each individual in the population represents a combination of laser power, pulse width and defocusing amount. Second, the objective (taper or the MRR) is calculated using the G-SVR prediction models and sorted based on the fitness function value. Third, genetic manipulation is applied, including selection, crossover and mutation inspired by biological principles, to improve the population by favoring individuals with better objective function values. An elite strategy is introduced to preserve the individuals with the highest fitness values in each generation, enhancing the quality of the population. Simultaneously, crossover and mutation introduce new individuals and explore unexplored regions of the solution space. As the number of genetic generations increases, the results of each generation become progressively superior to the previous one until convergence. Finally, the termination condition is met when the results stabilize after several genetic generations. A minimal single-objective sketch of this loop is given below.

Since this experiment addresses taper and MRR performance, constituting a multi-objective problem, the NSGA-II genetic algorithm (non-dominated sorting GA of the second generation) [32] was adopted. This algorithm follows a similar principle to single-objective optimization, the only difference being the calculation and sorting of the objectives in step 2: for single-objective optimization, individuals are ranked directly by their objective values, while for multi-objective optimization they are selected based on the Pareto front and the crowding distance. Furthermore, the hyperparameters of the GA used in this study, for both single-objective and multi-objective outputs, were set as listed in Table 2. The optimal configurations of these hyperparameters were determined through an exhaustive search over the population size, number of generations, crossover rate and mutation rate. This ensured the efficient and accurate attainment of the optimal solution during the evolutionary process for subsequent performance optimization.
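A minimal sketch of such a single-objective GA with elitism, assuming a fitted G-SVR surrogate is available as predict_taper (a hypothetical callable mapping [power, pulse_width, defocus] to a predicted taper); the parameter bounds and GA settings here are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
BOUNDS = np.array([[10.0, 40.0],     # power (W)        -- illustrative ranges, not the paper's
                   [200.0, 1000.0],  # pulse width (fs)
                   [0.0, 200.0]])    # defocus (um)

def ga_minimize(objective, pop_size=50, n_gen=200, cx_rate=0.9, mut_rate=0.09, elite=2):
    """Simple real-coded GA: binary tournament selection, uniform crossover, Gaussian mutation, elitism."""
    lo, hi = BOUNDS[:, 0], BOUNDS[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, 3))
    for _ in range(n_gen):
        fit = np.array([objective(ind) for ind in pop])
        order = np.argsort(fit)                      # smaller is better
        new_pop = [pop[i] for i in order[:elite]]    # elite strategy
        while len(new_pop) < pop_size:
            # binary tournament selection of two parents
            a, b = (pop[min(rng.integers(pop_size, size=2), key=lambda i: fit[i])] for _ in range(2))
            child = np.where(rng.random(3) < 0.5, a, b) if rng.random() < cx_rate else a.copy()
            if rng.random() < mut_rate:
                child = child + rng.normal(scale=0.05 * (hi - lo))
            new_pop.append(np.clip(child, lo, hi))
        pop = np.array(new_pop)
    fit = np.array([objective(ind) for ind in pop])
    return pop[np.argmin(fit)], fit.min()

# best_params, best_taper = ga_minimize(predict_taper)   # predict_taper: fitted G-SVR surrogate
```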
Table 2. GA hyperparameter settings.

Type                             Population   Generation   Crossover   Mutation
Single-objective optimization    50           200          0.9         0.09
Double-objective optimization    100          500          0.9         0.1

Based on the results presented in Section 3.3, G-SVR was deemed most suitable for further optimization and design in conjunction with the GA. Using R² as the criterion, the G-SVR regression prediction models were filtered from the initial set of 50 models; only models with high testing-set R² values and no overfitting for both taper and the MRR were retained, whereas illogical and less accurate results were discarded. Subsequently, single-objective performance optimization was conducted for both taper and the MRR. Figure 8a,b depicts the results of this optimization compared with the original data. All the taper results were evidently smaller than the original values, concentrated around 3°. Furthermore, the MRR consistently achieved excellent results, with values predominantly around 1800 µm³·pulse⁻¹.
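For the double-objective case, the same surrogates can be plugged into an off-the-shelf NSGA-II implementation. The sketch below uses the pymoo library with the Table 2 settings (population 100, 500 generations); the surrogate callables predict_taper and predict_mrr and the variable bounds are stand-in assumptions, and pymoo must be installed.

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

# Stand-in surrogates so the sketch runs; replace with the fitted G-SVR models.
predict_taper = lambda x: float(x[1] / 1000.0 + x[2] / 200.0)
predict_mrr = lambda x: float(x[0] * 50.0 + x[1])

class DrillingProblem(ElementwiseProblem):
    """Minimize taper and maximize MRR (negated) as predicted by the surrogates."""
    def __init__(self):
        super().__init__(n_var=3, n_obj=2,
                         xl=np.array([10.0, 200.0, 0.0]),      # illustrative lower bounds
                         xu=np.array([40.0, 1000.0, 200.0]))   # illustrative upper bounds

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = [predict_taper(x), -predict_mrr(x)]

res = minimize(DrillingProblem(), NSGA2(pop_size=100), ("n_gen", 500), seed=1, verbose=False)
# res.X holds the Pareto-optimal process parameters, res.F the corresponding objective values.
```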
To further explore the synergistic optimization of taper and the MRR, a high-throughput multi-objective GA was employed, enabling both quality and efficiency in percussion drilling. Figure 8c,d illustrates the outcomes of the comprehensive optimization, represented by the blue dots, as well as the optimal design result for the process and performance. Comparing these results to the best values obtained from the original dataset for both taper and the MRR, the power remains unchanged, while the pulse width and defocusing amount increase by 167% and 125 µm, respectively. The optimization process therefore generated process parameters that are notably distinct from the original data. Furthermore, substantial improvements were observed for both taper and the MRR, with a decrease of 68% and an increase of 42%, respectively.

Process Design Outside of the Range of the Original Dataset

According to the optimization results within the original data range presented in Section 3.4, impressive optimization and design were achieved considering both taper and the MRR. To further enhance the generalizability of the design in this study, a process design beyond the original data range is proposed, with the selection scope based on the feasibility of the devices.
Starting with the single-objective performance optimization for taper and the MRR separately, the results demonstrate a higher degree of optimization. The taper optimization yielded a maximum value of 2.8° and a minimum value of just 0.2°, which is even lower than the minimum achieved within the original data range, let alone the original data themselves. Regarding the MRR, the externally optimized results fall within the range of 1800~2400 µm³·pulse⁻¹, surpassing the previous internal optimization range of 1700~1800 µm³·pulse⁻¹ and significantly exceeding the maximum MRR of 1888 µm³·pulse⁻¹ observed in the original data. Consequently, these outcomes demonstrate that optimization beyond the original data range can produce higher-quality film-cooling holes.

The NSGA-II algorithm was used for the drilling design beyond the original data range, and the optimization results are depicted in Figure 9c,d. Within this extended range, the taper of the holes can be reduced to 0~1°, while the MRR can be increased to 2500~2700 µm³·pulse⁻¹, indicating the formation of a favorable Pareto front and an exceptional result. Subsequently, the optimal dataset was selected from several Pareto results, in the upper right corner. In this case, the power is 36 W, the pulse width is 1000 fs and the defocusing amount is 184 µm, while the taper is reduced by 93% and the MRR is increased by 107% compared to the original data optimum. Furthermore, compared to the optimization results within the original data range, the taper is reduced by 79% and the MRR is increased by 46%. This demonstrates the feasibility of applying this process solution in practical percussion drilling. In other words, the process of femtosecond laser percussion drilling can be optimized and designed efficiently and cost-effectively, facilitating the collaborative optimization of taper and the MRR.

In order to verify the reliability of the optimized process, experimental validation was performed outside the range of the original dataset. Using the process parameters designed outside the range of the original dataset, an SEM image of the resulting micro-hole was obtained, as shown in Figure 10a. The taper of this hole is 0.7° and the MRR is 2515 µm³·pulse⁻¹. In contrast, from the original dataset with 81 sets of data, the optimum hole drilled by the femtosecond laser was selected and a corresponding SEM image was obtained; the taper of this hole is 6.8° and the MRR is 1280 µm³·pulse⁻¹, as shown in Figure 10b. Compared with the original micro-hole, the optimized hole was greatly improved in terms of both taper and the MRR; taper was reduced by 90% and the MRR was increased by 96%, which is almost consistent with the optimization results outside the range of the original dataset. Therefore, these experimental results validate the soundness of the design process and achieve the comprehensive optimization of quality and efficiency.
Conclusions

In this study, a multi-parameter collaborative optimization design framework for femtosecond laser percussion drilling was successfully established. Optimization was carried out both within and outside the range of the original dataset, resulting in comprehensive improvements in the taper and MRR of micro-holes. This approach provides efficient and low-cost performance optimization and process design. The primary conclusions are as follows:

1. Through a comparison of generalization ability and an overfitting analysis, six machine learning methods were gradually refined. G-SVR, GBR and XGB were selected for the further establishment of the prediction model owing to their good levels of accuracy and stability. In addition, it was observed that the frequency feature had the lowest Pearson correlation coefficient and MDA value for both taper and the MRR.

2. To enhance the model accuracy and reduce overfitting, G-SVR, GBR and XGB prediction models were established using three input features, excluding the frequency feature based on the feature analysis. Among these, the G-SVR prediction model demonstrated higher accuracy for taper and the MRR; the R² values for the training and testing sets were 69.5% and 62.9% for taper, and 90% and 85.8% for the MRR, respectively, while the MAE values for the training and testing sets were 0.865° and 1.03° for taper, and 97.1 and 115 µm³·pulse⁻¹ for the MRR, respectively.

3. Combined with the GA, the double-objective performance was optimized within the range of the original data. The optimal result for taper was reduced by 68%, while the MRR was increased by 42% compared to the original optimal data.

4. To achieve more generalized design outcomes, the GA was employed to optimize the process beyond the range of the original dataset; the designed parameters reduced the taper by 93% and increased the MRR by 107% relative to the original optimum, and experimental validation of the resulting micro-hole (taper 0.7°, MRR 2515 µm³·pulse⁻¹) confirmed these improvements.

Figure 2. Experimental procedure of this study.
Figure 3. Results of different models using the initial six methods: (a) results of R² for taper; (b) results of MAE for taper; (c) results of R² for the MRR; (d) results of MAE for the MRR.
Figure 4. Number of overfitting models among various algorithms: (a) outcomes for taper; (b) outcomes for the MRR.
Figure 5. Feature analysis between process parameters and performance parameters: (a) Pearson correlation coefficient; (b) MDA.
Figure 6. Mean prediction results for G-SVR, GBR and XGB models using machine learning: (a) results of R² for taper; (b) results of MAE for taper; (c) results of R² for the MRR; (d) results of MAE for the MRR.
Figure 7. Flow chart of the GA.
Figure 8. Optimization and design results within the range of the original dataset: (a) optimization results for taper; (b) optimization results for the MRR; (c) optimization results for both taper and the MRR; (d) design results compared to the original results.
Figure 9. Optimization and design results outside the range of the original dataset: (a) optimization results for taper; (b) optimization results for the MRR; (c) optimization results for both taper and the MRR; (d) design results compared to the original results.
Table 1. Inputs and outputs from experimental data.
9,101.4
2023-11-01T00:00:00.000
[ "Engineering", "Materials Science", "Computer Science" ]
On Stein's Method for Multivariate Self-Decomposable Laws This work explores and develops elements of Stein's method of approximation, in the infinitely divisible setting, and its connections to functional analysis. It is mainly concerned with multivariate self-decomposable laws without finite first moment and, in particular, with $\alpha$-stable ones, $\alpha \in (0,1]$. At first, several characterizations of these laws via covariance identities are presented. In turn, these characterizations lead to integro-differential equations which are solved with the help of both semigroup and Fourier methodologies. Then, Poincar\'e-type inequalities for self-decomposable laws having finite first moment are revisited. In this non-local setting, several algebraic quantities (such as the carr\'e du champs and its iterates) originating in the theory of Markov diffusion operators are computed. Finally, rigidity and stability results for the Poincar\'e-ratio functional of the rotationally invariant $\alpha$-stable laws, $\alpha\in (1,2)$, are obtained; and as such they recover the classical Gaussian setting as $\alpha \to 2$. Introduction The present notes form a sequel to the works [1,2] where Stein's method for general univariate and multivariate infinitely divisible laws with finite first moment has been initiated. Introduced in [53,54], Stein's method is a collection of methods allowing to control the discrepancy, in a suitable metric, between probability measures and to provide quantitative rates of convergence in weak limit theorems. Originally developed for the Gaussian and the Poisson laws ( [16]), several nonequivalent investigations have focused on extensions and generalizations of Stein's method outside the classical univariate Gaussian and Poisson settings. In this regard, let us cite [7,10,37,43,27,35,38,25,56] and [8,31,29,48,45,15,47,39,40,50,30,2] for univariate and multivariate extensions and generalizations. Moreover, for good introductions to the method, let us refer the reader to the standard references and surveys [24,9,51,18,14]. In all the works just cited, the target probability distribution admits, at the very least, a finite first moment. The very recent [20], where the case of the univariate α-stable distributions, α ∈ (0, 1], is studied, seems to be the only instance of the method bypassing the finite first moment assumption. Below, we develop a Stein's method framework for non-degenerate multivariate self-decomposable distributions without Gaussian component. Let us recall that self-decomposable distributions form a subclass of infinitely divisible distributions. Moreover, they are weak limits of normalized sums of independent summands and, as such, they naturally generalize the Gaussian/stable distributions. Originally introduced by Paul Lévy in [34], self-decomposable distributions and their properties have been studied in depth by many authors (see, e.g., [36,52]). The methodology developed here for multivariate self-decomposable distributions relies on a specific semigroup of operators already put forward in our previous analyses [1,2]. The generator of this semigroup is an integro-differential operator whose non-local part depends in a subtle way on the Lévy measure of the target self-decomposable distribution (see Lemma 4.1). Indeed, the non-local part of this operator differs from the one obtained in [1,2] since the Fourier symbols of the associated semigroup of operators do not exhibit C 1 -smoothness. 
However, by exploiting the polar decomposition of the Lévy measure of the target self-decomposable distribution together with the monotonicity of the associated k-function, C 1 -regularity is reached and as such natural candidates for the corresponding Stein equation and its solution are put forward. Moreover, this equation reflects the Lévy-Khintchine representation used to express the characteristic function of the target self-decomposable distribution. This naturally induces three types of equations reminiscent of the following classical distinction between stable laws: α ∈ (0, 1), α = 1 and α ∈ (1, 2). With these new findings, we revisit Poincaré-type inequalities for self-decomposable distributions with finite first moment. Initially obtained in [17] (see also [32]), these Poincaré-type inequalities reflect the infinite divisibility of the reference measure (without Gaussian component) and as such put into play a non-local Dirichlet form contrasting with the standard local Dirichlet form associated with the Gaussian measures. Our new proof of these Poincaré-type inequalities is based on the semigroup of operators already put forward (and used to solve the Stein equation) in [2] and is in line with the proof of the Gaussian Poincaré inequality based on the differentiation of the variance along the Ornstein-Uhlenbeck semigroup (see, e.g., [6]). Moreover, in this non-local setting, we compute several algebraic quantities (such as the carré du champs and its iterates) originating in Markov diffusion operators theory in order to reach rigidity and stability results for the Poincaré-ratio (U -) functional defined in (70) and associated with the rotationally invariant α-stable distributions. Rigidity results for infinitely divisible distributions with finite second mo-ment were obtained in [19,Theorem 2.1] whereas the corresponding stability results were obtained in [2,Theorem 4.5] through Stein's method and variational techniques inspired by [22]. Here, for the rotationally invariant α-stable distribution, α ∈ (1, 2), we revisit the method of [22] using the framework of Dirichlet forms. Coupled with a truncation procedure, rigidity and stability of the Poincaré U -functional are stated in Corollary 5.1, Corollary 5.2 and Theorem 5.3. This truncation procedure allows us to build an optimizing sequence for the U -functional. This sequence of functions can be spectrally interpreted as a singular sequence verifying a Weyl-type condition associated with the corresponding Poincaré constant (see Conditions (71) and (86) below). Let us further describe the content of our notes: The next section introduces notations and definitions used throughout this work and prove a characterization theorem for multivariate infinitely divisible distributions with finite first moment. In Section 3, using the previous characterization and truncation arguments, we obtain several characterization theorems for self-decomposable laws lacking finite first moment. These results highlight the role of the Lévy-Khintchine representation of the characteristic function of the target self-decomposable distribution and apply, in particular, to multivariate stable laws with stability index in (0, 1]. In Section 4, a Stein equation for nondegenerate multivariate self-decomposable distributions without finite first moment is at first put forward. It is then solved, under a low moment condition, via a combination of semigroup techniques and Fourier analysis. 
In the last section, Poincaré-type inequalities for self-decomposable laws with finite first moment are looked a new. Several algebraic quantities originating in Markov diffusion operators theory are computed in this non-local setting. In particular, for the rotationally invariant α-stable laws with α ∈ (1, 2), a Bakry-Émery criterion is shown to hold, recovering as α → 2, the classical Gaussian theory involving the carré du champs and its iterates. Finally, rigidity and stability results for the Poincaré U -functional of the rotationally invariant α-stable distributions, α ∈ (1, 2), are obtained using elements of spectral analysis and Dirichlet form theory. A technical appendix finishes our manuscript. Notations and Preliminaries Throughout, let · and ·; · be respectively the Euclidean norm and inner product on R d , d ≥ 1. Let also S(R d ) be the Schwartz space of infinitely differentiable rapidly decreasing real-valued functions defined on R d , and finally let F be the Fourier transform operator given, for f ∈ S(R d ), by On S(R d ), the Fourier transform is an isomorphism and the following inversion formula is well known Next, C b (R d ) is the space of bounded continuous functions on R d endowed with the uniform norm For µ a probability measure on R d and for 1 ≤ p < +∞, L p (µ) is the Banach space of equivalence classes of functions defined µ-a.e.on R d such that f p L p (µ) = R d |f (x)| p µ(dx) < +∞, f ∈ L p (µ). Similarly, L ∞ (µ) is the space of equivalence classes of functions bounded everywhere and µ-measurable. For any bounded linear operator, T , from a Banach space (X , · X ) to another Banach space (Y, · Y ) the operator norm is, as usual, More generally, for any r-multilinear form F from (R d ) r , r ≥ 1, to R, the operator norm of F is F op := sup |F (v 1 , ..., v r )| : v j ∈ R d , v j = 1, j = 1, ..., r . Through the whole text, a Lévy measure is a positive Borel measure on R d such that ν({0}) = 0 and R d (1 ∧ u 2 )ν(du) < +∞. An R d -valued random vector X is infinitely divisible with triplet (b, Σ, ν) (written X ∼ ID(b, Σ, ν)), if its characteristic function ϕ writes, for all ξ ∈ R d , as with b ∈ R d , Σ a symmetric positive semi-definite d × d matrix, ν a Lévy measure on R d and D the closed Euclidean unit ball of R d . The representation (3) is mainly the one to be used, from start to finish, with the (unique) generating triplet (b, Σ, ν). However, other types of representations are also possible and two of them are presented next. First, if ν is such that u ≤1 u ν(du) < +∞, then (3) becomes where b 0 = b − u ≤1 uν(du) is called the drift of X. This representation is cryptically expressed as X ∼ ID(b 0 , Σ, ν) 0 . Second, if ν is such that u >1 u ν(du) < +∞, then (3) becomes where b 1 = b + u >1 uν(du) is called the center of X. In turn, this last representation is now cryptically written as X ∼ ID(b 1 , Σ, ν) 1 . In fact, b 1 = EX as, for any p > 0, E X p < +∞ is equivalent to u >1 u p ν(du) < +∞. Also, for any r > 0, Ee r X < +∞ is equivalent to u >1 e r u ν(du) < +∞. In the sequel, we are also interested in some distinct classes of infinitely divisible distributions, namely the stable ones and the self-decomposable ones. Recall that an ID random vector X is α-stable, 0 < α < 2, if b ∈ R d , if Σ = 0 and if its Lévy measure ν admits the following polar decomposition where σ is a finite positive measure on S d−1 , the Euclidean unit sphere of R d . When α ∈ (0, 1), then u ≤1 |u j |ν(du) < +∞, for all 1 ≤ j ≤ d, ϕ and so with, again, b 0 = b − u ≤1 uν(du). 
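Since several of the displayed formulas in this extraction are elided, the following LaTeX sketch records the standard forms they refer to; the correspondence with the representation (3) and with the stable polar decomposition described above is inferred from the surrounding text.

```latex
% Lévy–Khintchine representation of X ~ ID(b, \Sigma, \nu), with D the closed unit ball:
\varphi(\xi) = \exp\!\Big( i\langle b,\xi\rangle - \tfrac{1}{2}\langle \xi,\Sigma\xi\rangle
  + \int_{\mathbb{R}^d}\big( e^{i\langle u,\xi\rangle} - 1 - i\langle u,\xi\rangle\,\mathbf{1}_{D}(u)\big)\,\nu(du)\Big),
  \qquad \xi\in\mathbb{R}^d .

% Polar decomposition of the Lévy measure of an \alpha-stable law, 0 < \alpha < 2:
\nu(B) = \int_{S^{d-1}} \sigma(dx) \int_0^{+\infty} \mathbf{1}_B(rx)\,\frac{dr}{r^{1+\alpha}},
  \qquad B \in \mathcal{B}(\mathbb{R}^d).
```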
Now, recall that an ID random vector X is self-decomposable (SD) if b ∈ R d , if Σ = 0, and if its Lévy measure ν admits the polar decomposition where σ is a positive finite measure on S d−1 and where k x (r) is a function which is nonnegative, decreasing in r, (k x (r 1 ) ≤ k x (r 2 ), for 0 < r 2 ≤ r 1 ) and measurable in x. In the sequel, without loss of generality, k x (r) is assumed to be right-continuous in r ∈ (0, +∞), to admit a left-limit at each r ∈ (0, +∞) and +∞ 0 (1 ∧ r 2 )k x (r)dr/r is independent of x. Next, (see, e.g., [44,Chapter 12]) let us denote by V b a (g) the variation of a function g over the interval [a, b] (0, +∞), where the supremum is taken over all subdivisions Since k x (r) is of bounded variation in r on any (a, b) (0, +∞), a > 0, b > 0 and a ≤ b, and right-continuous in r ∈ (0, +∞), the following integration by parts formula holds true for all f continuously differentiable on (a, b) such that lim Let us now introduce some natural distances between probability measures on R d . Let N d be the space of multi-indices of dimension d. For any α ∈ N d , |α| = d i=1 |α i | and D α denote the partial derivatives operators defined on smooth enough functions f , by D α (f )(x 1 , ..., Moreover, for any r-times continuously differentiable function, h, on R d , viewing its ℓth-derivative D ℓ (h) as a ℓ-multilinear form, for 1 ≤ ℓ ≤ r, let For r ≥ 0, H r is the space of bounded continuous functions defined on R d which are continuously differentiable up to (and including) the order r and such that, for any such function f , with M 0 (f ) := sup x∈R d |f (x)|. Then, the smooth Wasserstein distance of order r, between two random vectors X and Y having respective laws µ X and µ Y , is defined by Moreover, for r ≥ 1, d Wr admits the following representation (see [2,Lemma A.2.]) where C ∞ c (R d ) is the space of infinitely differentiable compactly supported functions on R d . In particular, for r ≥ 1, As usual, for two probability measures, µ 1 and µ 2 , on R d , µ 1 is said to be absolutely continuous with respect to µ 2 , denoted by µ 1 << µ 2 , if for any Borel set, B, such that µ 2 (B) = 0, it follows that µ 1 (B) = 0. To end this section, let us state the following characterization result of ID random vectors with finite first moment, valid, for example, for stable random vector with stability index α ∈ (1, 2) has its origin in the univariate result [1, Theorem 3.1]. Theorem 2.1. Let X be a random vector such that E|X i | < +∞, for all i ∈ {1, . . . , d}. Let ν be a Lévy measure on R d such that u ≥1 u ν(du) < +∞. Then, for all f bounded Lipschitz function on R d , if and only if X is an ID random vector with Lévy measure ν (and b = EX − u >1 uν(du)). Proof. Let us assume that X is an ID random vector with finite first moment and with Lévy measure ν. Then, from [32,Proposition 2], for all f and g bounded Lipschitz functions on R d , where (X z , Y z ) is an ID random vector in R 2d defined through an interpolation scheme as in [32,Equation (2.7)]. Now, since X has finite first moment, one can take for g the function g t (x) = t; x , for all x ∈ R d and for some t ∈ R d . Then, by linearity since X z = d X, where = d stands for equality in distribution. This concludes the direct part of the proof. Conversely, let us assume that Then, for all ξ ∈ R d and all t ∈ R, where the equality is understood to be in R d . 
In particular, one has Denoting by Φ t the function defined by Φ t (ξ) = E e it X;ξ , for all ξ ∈ R d , the previous equality boils down to Moreover, one notes that Φ 0 (ξ) = 1. Then, for all ξ ∈ R d and all t ∈ R, Taking t = 1, the characteristic function of X is then given, for all ξ ∈ R d , by namely, X is ID with Levy measure ν (and b = EX − u >1 uν(du)). , whenever Σ ε , is non-singular for every ε ∈ (0, 1], the following two conditions are equivalent: (a) As ε → 0 + ,X ε = Σ −1/2 ε X ε converges in distribution to a centered multivariate Gaussian random vector with identity covariance matrix. for c α,d given by Clearly, as α → 2 − , X α converges in distribution to a centered Gaussian random vector Z with identity covariance matrix. Next, by Theorem 2.1, for all f bounded Lipschitz function on R d , and observe, at first, that for all f ∈ S(R d ), , and observe now that the Fourier symbol, σ α , of this operator satisfies, for all ξ ∈ R d , Finally, for all f ∈ S(R d ) so that the characterizing identity (21) is preserved when passing to the limit, converging, again, for all f ∈ S(R d ), to Characterizations of Self-Decomposable Laws In this section, we provide various characterization results, for stable distributions and some selfdecomposable ones, not covered by Theorem 2.1. However, the direct parts of these results are simple consequences of Theorem 2.1 together with truncation and discretization arguments. The stable results recover, in particular, the one-dimensional results independently obtained in [20]. Below, and throughout, we will make use of the transformation T c applied to positive (Lévy) measures and defined for all c > 0 and all Borel sets, B, of R d by Theorem 3.1. Let X be a random vector in R d . Let b ∈ R d , α ∈ (0, 1) and let ν be a Lévy measure such that, for all c > 0, Then, , for all f ∈ S(R d ) if and only if X is a stable random vector with parameter b, stability index α and Lévy measure ν. Proof. Let us first assume that X is a stable random vector in R d with parameters b ∈ R d , stability index α ∈ (0, 1) and Lévy measure ν. Then, [52,Theorem 14.3,(ii)], ν is given by where σ is a finite positive measure on the Euclidean unit sphere of R d , and and, let X R be the ID random vector defined through its characteristic function by Note, in particular, that X R is such that E X R < +∞. Then, by Theorem 2.1, for all g ∈ S(R d ), Now, choosing g = ∂ i (f ) for some f ∈ S(R d ) and for i ∈ {1, . . . , d}, it follows that To continue, project the vectorial equality (23) onto the direction e i = (0, . . . , 0, 1, 0, . . . , 0), to get, for all i ∈ {1, . . . , d}, where X R,i and b 0,i are the i-th coordinates of X R and of b 0 respectively. Adding-up these last identities, for i ∈ {1, . . . , d}, leads to Now, observe that X R converges in distribution towards X since by the Lebesgue dominated convergence theorem, Moreover, from the polar decomposition of the Lévy measure ν R , Next, for all z ∈ R d Set H z (r) = S d−1 f (z + rx)σ(dx), for all r > 0 and all z ∈ R d . Moreover, for all r > 0 Thus, A standard integration by parts argument, combined with α ∈ (0, 1), implies that Next, integrating with respect to the law of X R , one gets that Again, since α ∈ (0, 1), f ∈ S(R d ) and σ S d−1 < +∞, Finally, to conclude the direct implication, one needs to prove that To this end, for all R > 1 and all Since α ∈ (0, 1) and f ∈ S(R d ), it is clear that both functions are well-defined, bounded and continuous on R d . 
Moreover, for all R > 1 and all z ∈ R d Thus, F R converges uniformly on R d towards F . Finally, since X R converges in distribution to X, which concludes the first part of the proof. To prove the converse implication, let us assume that, for all f ∈ S(R d ), Denoting ϕ X the characteristic function of X, the equality (25) can be rewritten as Using standard Fourier arguments and the fact that f ∈ S(R d ), where the left-hand side has to be understood as a duality bracket between the Schwartz function F(f ) and the tempered distribution ξ; ∇(ϕ X ) . Since ϕ X is continuous on R d , for all ξ ∈ R d with ξ = 0 Moreover, ϕ X (0) = 1. Now, in order to solve the previous linear partial differential equation of order one, let us change the coordinates system (ξ 1 , . . . , ξ d ) into the hyper-spherical one (r, θ 1 , . . . , θ d−1 ) where r > 0, θ i ∈ [0, π], for all i ∈ {1, . . . , d − 2} and θ d−1 ∈ [0, 2π). Noting that and using the scaling property of the Lévy measure ν, i.e., (22), one gets For any fixed x ∈ S d−1 , this linear differential equation admits a unique solution which is given by since ϕ X (0) = 1. Then, X is a stable random vector in R d with parameter b, stability index α and Lévy measure ν. This ensuing result deals with the Cauchy case. Theorem 3.2. Let X be a random vector in R d . Let b ∈ R d and let ν be a Lévy measure on R d such that, for all c > 0 Moreover, let σ, the spherical part of ν, be such that Then, for all f ∈ S(R d ) if and only if X is a stable random vector in R d with parameter b, stability index α = 1 and Lévy measure ν. Proof. The proof is similar to the one of Theorem 3.1. The direct part goes with a double truncation procedure together with an integration by parts and, then, passing to the limit. Let us first assume that X is stable with parameter b, stability index α = 1, Lévy measure ν and σ the spherical part. Then, [52, Theorem 14.3, (ii)], Let R > 1 be a truncation parameter, let and, let X R be the ID random vector defined through its characteristic function by Note, in particular, that X R is such that E X R < +∞. Then, by Theorem 2.1, for all g ∈ S(R d ), Performing computations similar to those in the proof of Theorem 3.1, for all f ∈ S R d , Now, since X R converges in distribution towards X, as R tends to +∞, Next, let us study the second term on the right-hand side of (27). First, since R > 1, From the polar decomposition of the Lévy measure ν R , Then, for all z ∈ R d , Setting H z (r) = S d−1 f (z + rx)σ(dx), for all r > 0 and all z ∈ R d , it follows that Thus, A standard integration by parts argument implies that Integrating with respect to the law of X R , one gets Then, since f ∈ S(R d ) and σ S d−1 < +∞, Let us now study the convergence, as R → +∞, of To this end, let F R and F be the bounded and continuous functions on R d respectively defined, by and by Now, note that, for all z ∈ R d and all R > 1, where, Then, by standard inequalities, since f ∈ S(R d ) and σ(S d−1 ) < +∞, which implies that F R converges uniformly to F , as R → +∞. Thus, and also Combining (27), (28) and (29), one obtains which is the direct part of the theorem. To prove the converse, assume that, for all f ∈ S(R d ), Denoting by ϕ X the characteristic function of X, the identity (30) can then be rewritten as Reasoning as in the proof of Theorem 3.1 gives, for all r > 0 and all x ∈ S d−1 , To conclude, note that the previous equality can be interpreted as an ordinary differential equation in the radial variable. 
Its solution is given, for all r ≥ 0 and all x ∈ S d−1 , by where G and J are defined, for all R > 0 and all x ∈ S d−1 , by Straightforward computations, and the fact that S d−1 xσ(dx) = 0, finally imply that which concludes the proof. Remark 3.1. The quantity S d−1 xσ(dx) reflects the asymmetry of the Lévy measure ν. In case S d−1 xσ(dx) = 0, a careful inspection of the proof of Theorem 3.2 reveals that the identity (26) becomes, for all f ∈ S(R d ), The next results provide extensions of both Theorem 3.1 and Theorem 3.2 to subclasses of selfdecomposable distributions with regular radial part, on (0, +∞), and some specific asymptotic behaviors at the edges of (0, +∞) in any directions of S d−1 . where σ is a finite positive measure on S d−1 and where k x (r) is a nonnegative continuous function decreasing in r ∈ (0, +∞), continuous in x ∈ S d−1 and such that Letν be the positive measure on R d defined bỹ Then, , for all f ∈ S(R d ) if and only if X is self-decomposable with parameter b, Σ = 0 and Lévy measure ν. Proof. Let us start with the direct part. Let X be a SD random vector of R d with parameter b and Lévy measure ν such that u ≤1 u ν(du) < +∞ and whose polar decomposition is given by (31). Let R > 1 and let (σ n ) n≥1 be a sequence of positive linear combinations of Dirac measures which converges weakly to σ, the spherical component of ν. Then, for all R > 1 and all n ≥ 1, let and denote by X R,n the SD random vector with parameter b and Lévy measure ν R,n . Similarly, let, for all n ≥ 1, and denote by X n the SD random vector with parameter b and Lévy measure ν n . Performing computations similar to those in the proof of Theorem 3.1, for all f ∈ S(R d ), all R > 1 and all Now, since, as R → +∞, X R,n converges in distribution to X n , for all n ≥ 1, Moreover, from the polar decomposition of the Lévy measure ν R,n , mutatis mutandis, where, for all R > 1 and all n ≥ 1, Then, since lim Next, one needs to prove that whereν n is given, for all R > 1 and all n ≥ 1, bỹ To this end, for all R > 1, all n ≥ 1 and all z ∈ R d , set (33), and since f ∈ S(R d ), it is clear that both functions are well-defined, bounded and continuous on R d . Moreover, Thus, as R tends to +∞, F R,n converges to F n uniformly on R d , for all n ≥ 1. Finally, since X R,n converges in distribution to X n , for all n ≥ 1, Then, for all n ≥ 1 Now, observe that, (X n ) n≥1 converges in distribution to X since (σ n ) n≥1 converges weakly to σ and since +∞ 0 To conclude the proof of the direct part of the theorem, let us study the convergence of: Now, since (X n ) n≥1 converges in distribution to X, then lim n→+∞ ϕ n (ξ) = ϕ(ξ), for all ξ ∈ R d . In turn, let us prove the following: Observe that, for all ξ ∈ R d and all n ≥ 1 Since (σ n ) n≥1 converges weakly to σ, let us prove that the function H(x, ξ) = +∞ 0 The second term on the right-hand side of (35) converges to 0 as n tends to +∞, by the Lebesgue dominated convergence theorem since +∞ 0 (1 ∧ r)dk x (r) < +∞. For the first term of (35), observe that For the second term on the right-hand side of (36), for all n ≥ 1, so that by (32), this term converges to 0. Finally, integrating by parts, for all n ≥ 1, (1)). Now, the second term on the right-hand side of (37) converges to 0, as n tends to +∞ and, by the Lebesgue dominated convergence theorem, the first term does converges to 0, as n tends to +∞. This proves that lim Now, reasoning as in the second part of the proof of Theorem 3.1, Let us develop the second term inside the above parenthesis a bit more. 
First, The radial equation (39) then becomes, for all r > 0 and all x ∈ S d−1 , For any fixed x ∈ S d−1 , this linear differential equation admits a unique solution which is given by since ϕ X (0) = 1. Then, X is a SD random vector with parameter b and Lévy measure ν. The next result is the SD pendant of the Cauchy characterization obtained in Theorem 3.2. Theorem 3.4. Let X be a random vector in R d . Let b ∈ R d and let ν be a Lévy measure on R d with polar decomposition where σ is a finite positive measure on S d−1 and where k x (r) is a nonnegative continuous function decreasing in r ∈ (0, +∞), continuous in x ∈ S d−1 , and such that Letν be the positive measure on R d defined bỹ Then, for all f ∈ S(R d ) if and only if X is self-decomposable with parameter b, Σ = 0 and Lévy measure ν. Proof. The proof is a direct extension of the proof of Theorem 3.2 so that it is only outlined by highlighting the main differences. Let us start with the direct part. Let X be a SD random vector with parameter b and Lévy measure ν. Let R > 1 and let (σ n ) n≥1 be a sequence of positive linear combinations of Dirac measures converging weakly to σ, the spherical component of ν. Then, for all R > 1 and all n ≥ 1, let and denote by X R,n the SD random vector with parameter b and Lévy measure ν R,n . Similarly, for all n ≥ 1, let and denote by X n the SD random vector with parameter b and Lévy measure ν n . As in the proof of Theorem 3.2, for all f ∈ S R d and all R > 1, Now, since, as R → +∞, X R,n converges in distribution to X n , for all n ≥ 1 Moreover, for all R > 1 and all n ≥ 1, From the limiting behavior of k x at +∞ and at 0 + , for all n ≥ 1, Next, consider the term defined, for all z ∈ R d and all n ≥ 1, by By a standard integration by parts, for all n ≥ 1, Then, observe that, for all x ∈ S d−1 , lim (1), and, for all n ≥ 1, Finally, for all n ≥ 1, Now, since (σ n ) n≥1 converges weakly to σ and since +∞ 0 converges in distribution to X. Hence, To conclude the direct part of the proof, let us consider the following terms: First, for all n ≥ 1 Since (X n ) n≥1 converges in distribution to X, as n tends to +∞, (ϕ n (ξ)) n≥1 converges to ϕ(ξ), for all ξ ∈ R d . Moreover, Then, by the Lebesgue dominated convergence theorem, Similarly, for all n ≥ 1, and proceeding as in the proof of Theorem 3.3, The direct part of the theorem is proved. For the converse part, mutatis mutandis, based on (42), for all r > 0 and all x ∈ S d−1 whereG andJ are respectively defined, for all R > 0 and all x ∈ S d−1 , bỹ Finally, straightforward computations together with Fubini's Theorem and the fact that lim Remark 3.2. (i) Let us recast the previous results in dimension one, i.e., for d = 1. In this case, the Lévy measure of a SD law is absolutely continuous with respect to the Lebesgue measure and is given by where k is a nonnegative function increasing on (−∞, 0) and decreasing on (0, +∞). Now, assume, for simplicity only, that k is continuously differentiable on (−∞, 0) and on (0, +∞) and that Theorem 3.4 gives the following characterizing identity when X is a SD random variable with parameter b ∈ R and Lévy measure ν: In a similar fashion, it is possible to provides a characterization result for SD random variables with Lévy measure ν such that |u|≤1 |u|ν(du) < +∞ and such that k is continuously differentiable on (−∞, 0) and on (0, +∞) with via Theorem 3.3. 
(ii) From [52,Theorem 28.4], under the assumptions that the function k is continuously differentiable on (−∞, 0) and on (0, +∞) and satisfies (43) Then, the associated SD distribution admits a Lebesgue density infinitely differentiable on R. If the function k is continuously differentiable on (−∞, 0) and on (0, +∞) and satisfies (44), then c can be either finite or infinite, implying different types of regularity for the Lebesgue density of the associated SD distribution. (iii) Let X be a SD random vector with Lévy measure ν as in Theorem 3.4 and such that u ≥1 u ν(du) < +∞. Then, integrating by parts, for all f ∈ S(R d ), Let us now present a simple example for which Theorem 3.3 and Theorem 3.4 apply and which is not covered in the relevant existing literature. Rotationally invariant self-decomposable distributions are covered by Theorem 3.3 and Theorem 3.4. Indeed, let λ be the uniform measure on S d−1 and let with u ≤1 u ν(du) < +∞ and with k satisfying the assumptions of Theorem 3.3. Then, the corresponding self-decomposable distribution is rotationally invariant. The Stein Equation for Self-Decomposable Laws Throughout this subsection, X is a non-degenerate self-decomposable random vector in R d , without Gaussian component, with law µ X , characteristic function ϕ given by (3) with parameter b ∈ R d and Lévy measure ν given by where k x (r) is a nonnegative function decreasing in r ∈ (0, +∞) and where σ is a finite positive measure on S d−1 . The following assumptions are assumed to hold true throughout this subsection: These assumptions insure that the positive measureν given bỹ is a well defined Lévy measure on R d . Let us introduce next a collection of ID random vectors, X t , t ≥ 0, defined through their characteristic function, for all t ≥ 0 and all ξ ∈ R d , by By changing variables, this function is, for all ξ ∈ R d and all t ≥ 0, equal to which is a well-defined characteristic function since X is SD. Denoting by µ t the law of X t , let us introduce the following continuous family of operators ( For t = 0, set µ 0 = δ 0 , with δ 0 the Dirac measure at 0, so that P ν 0 is the identity operator. Based on the computations of [2, Lemma 3.1], observe that the continuous family of operators ( The next lemma identifies the generator of (P ν t ) t≥0 on S R d . be the semigroup of operators defined by (49). Letν be the Lévy measure on R d given by (47). The generator of (P ν t ) t≥0 is given, for all f ∈ S(R d ) and all x ∈ R d , by Proof. Let f ∈ S(R d ). By Fourier inversion, for all x ∈ R d and all t ∈ (0, 1), Then, a direct application of the Lebesgue dominated convergence theorem together with Fourier duality imply that which concludes the proof of the lemma. Based on the previous lemma, it is natural to consider the following Stein equation for selfdecomposable distributions with polar decomposition given by (45) (under appropriate assumptions on the function k x (r)) : . By semigroup theory, a candidate solution to (50) is given by, The next proposition proves the existence of the function f h given by (51), studies its regularity and proves that this function is a strong solution of (50) on R d . Theorem 4.1. Let X be a non-degenerate SD random vector without Gaussian component, with law µ X , characteristic function ϕ, Lévy measure ν having polar decomposition given by (45) where the function k x (r) is continuous in r ∈ (0, +∞), continuous in x ∈ S d−1 and satisfies (46). 
Assume that there exists ε ∈ (0, 1) such that E X ε < +∞ and that there exist β 1 > 0, β 2 > 0 and β 3 ∈ (0, 1) such that the function k x (r) in (45) satisfies and that, Let X t , t ≥ 0, be the random vector defined through the characteristic function ϕ t given by (48) and assume that, Let (P ν t ) t≥0 be the semigroup of operators defined by (49). Then, for any h ∈ H 1 , the function f h , given, for all x ∈ R d , by is well defined and continuously differentiable on whereν is given by (47) and whereb = b − S d−1 k y (1)yσ(dy). Proof. To start with, let us prove that, for any h ∈ H 1 , the function f h defined by (51) does exist. For all x ∈ R d and all t > 0, where we have used Proposition A.1 in the last line. Then, the function f h is well defined on R d . Moreover, reasoning as in [2, Proposition 3.4], one gets that and with M 2 (f h ) ≤ 1/2. To conclude let us prove that f h is a strong solution of (50) on R d . Set u(t, x) = P ν t (h)(x), for t ≥ 0 and x ∈ R d . First, let us prove that, for all t ≥ 0 and all x ∈ R d , Since h ∈ C ∞ c R d , by Fourier inversion, for all t ≥ 0 and all x ∈ R d , Moreover, for all x ∈ R d , all ξ ∈ R d and all t ≥ 0, Then, for all t ≥ 0 and all x ∈ R d , where the Fourier symbol of A and the Fourier representation of u(t, x) have been used in the last equality. To pursue, let 0 < T < +∞ and let us integrate out the equation (53) between 0 and T . Then, for all x ∈ R d , then, letting T → +∞ and the ergodicity of the semigroup (P ν t ) t≥0 give: Next, let us prove that +∞ 0 |A(P ν t (h))(x)| dt < +∞, for all x ∈ R d . To do so, one needs to estimate ∇(P ν t (h))(x) and From the commutation relation and the fact that h ∈ H 2 , Now, let us bound (I). For all x ∈ R d and all t > 0 Let us start with the second term on the right-hand side of (54). Again, via the commutation relation and an integration by parts Note also that +∞ 0 k y (e t )dt = +∞ 1 k y (r)dr/r < +∞, for y ∈ S d−1 . This concludes the bounding of the second term on the right-hand side of (54). For the first term, for all x ∈ R d and all t ≥ 0 where, Then, by commutation and a change of variables ; ry (−dk y (e t r))σ(dy) . Next, let us discuss the condition (52) in the particular case α ∈ (0, 1). (A similar discussion can be performed in the case α = 1 but requires different estimates. ) Since α ∈ (0, 1), the random vector X t , t ≥ 0, defined through (48) has the characteristic function given, for all ξ ∈ R d and all t ≥ 0, by with ν as in (6). Then, 1 αX whereX is α-stable with b 0 = 0 and α ∈ (0, 1). It is then straightforward to check that E X t ε is uniformly bounded in t for any ε ∈ (0, α). (ii) Now, let X be a non-degenerate SD random vector as in Theorem 4.1 such that Let f h be the solution to the Stein equation (50) defined by (51), for h ∈ H 2 ∩ C ∞ c (R d ). Then, by an integration by parts, for all so that, in this case, f h is a solution to the following Stein equation In particular, if X is α-stable with α ∈ (0, 1), then the equation (58) boils down to (iii) Next, let X be a non-degenerate SD random vector as in Theorem 4.1 and such that Let f h be the solution to the Stein equation (50) defined by (51), for h ∈ H 2 ∩ C ∞ c (R d ). Then, integrating by parts twice, for all so that f h is a solution to the following Stein equation Applications to Functional Inequalities for SD Random Vectors This section discusses Poincaré-type inequalities for self-decomposable random vectors, providing in particular new proofs based on the semigroup of operators (P ν t ) t≥0 defined in (49). 
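For context, the Poincaré-type inequality for infinitely divisible laws obtained in [17,32], which the discussion below specializes to the self-decomposable setting, is classically stated as follows; this is a sketch of the standard form, since the precise display of Proposition 5.1 is elided in this extraction.

```latex
% Poincaré-type inequality for X ~ ID(b, \Sigma, \nu) with finite variance (cf. [17,32]):
\operatorname{Var}\big(f(X)\big) \;\le\; \mathbb{E}\,\langle \nabla f(X),\, \Sigma\,\nabla f(X)\rangle
  \;+\; \mathbb{E}\int_{\mathbb{R}^d} \big| f(X+u) - f(X) \big|^2 \,\nu(du),
% which, for the pure-jump (\Sigma = 0) self-decomposable laws considered here, reduces to
\operatorname{Var}\big(f(X)\big) \;\le\; \mathbb{E}\int_{\mathbb{R}^d} \big| f(X+u) - f(X) \big|^2 \,\nu(du).
```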
This proof is in line with the standard proof of the Gaussian Poincaré inequality based on the differentiation of the variance along the Ornstein-Uhlenbeck semigroup. In the literature, standard references regarding Poincaré-type inequalities for infinitely divisible random vectors are [17,32]. In [17], the proof is based on stochastic calculus for Lévy processes and the Lévy-Itô decomposition whereas in [32], the proof is based on a covariance representation for infinitely divisible random vectors. Let us also mention that Poincaré-type inequalities for stable random vectors have been obtained in [49,55]. Proposition 5.1. Let X be a centered SD random vector with Lévy measure ν such that where σ is a finite positive measure on S d−1 and where k x (r) is a nonnegative continuous function decreasing in r ∈ (0, +∞), continuous in x ∈ S d−1 with lim r→+∞ k x (r) = 0, lim Then, for all f ∈ C ∞ c R d with Ef (X) = 0 Proof. Let X be a SD random vector with characteristic function ϕ and Lévy measure ν satisfying the hypotheses of the proposition. Let (P ν t ) t≥0 be the semigroup of operators given by (49). In particular, on C ∞ c R d , for all t ≥ 0 and all x ∈ R d , This operator admits the Fourier representation, i.e., for all x ∈ R d , Next, let f ∈ C ∞ c R d be such that Ef (X) = 0. Then, for all t ≥ 0, where A is defined, for all f ∈ C ∞ c R d and all x ∈ R d , by Thus, for all t ≥ 0 Next, from Theorem 2.1, observe that, for all f ∈ C ∞ c R d and all t ≥ 0, and so, for all t ≥ 0, Next, using Fourier arguments as in the proof of [33,Proposition 4.1], for all f ∈ C ∞ c R d and all x ∈ R d , Moreover, an integration by parts in the radial coordinate gives, for all ξ ∈ R d and thus, for all x ∈ R d , Then, for all t ≥ 0 But, from a change of variables, Jensen inequality and invariance, With an integration by parts, observe that, for x ∈ S d−1 , Finally, integrating with respect to t (between 0 and +∞) leads to Remark 5.1. (i) Let X be a rotationally invariant α-stable random vector, α ∈ (1, 2), with characteristic function ϕ given by Then, by Proposition 5.1, for all f ∈ S(R d ) with Ef (X) = 0 where c α,d = −α(α − 1)Γ((α + d)/2) 4 cos(απ/2)Γ((α + 1)/2)π (d−1)/2 Γ(2 − α) . (ii) Throughout the proof of Proposition 5.1, the following integration by parts formula has been obtained and used, for all f ∈ C ∞ c (R d ), where µ X is the law of X and Γ is a bilinear symmetric application defined, for all f, g ∈ C ∞ c (R d ) and all x ∈ R d , by with σν(ξ, ζ) = R d e i u;ξ − 1 e i u;ζ − 1 ν(du), for ξ, ζ ∈ R d . A straightforward computation in the Fourier domain shows that this bilinear symmetric application is the "carré du champs" operator associated with the generator A of the semigroup (P ν t ) t≥0 (see, e.g., [6] for a thorough exposition of these topics in the setting of Markov diffusion operators). Namely, for all f, g ∈ C ∞ c (R d ) and all x ∈ R d , Standard objects of interest in the setting of Markov diffusion operators are iterated "carré du champs" of any orders n ≥ 1 defined through the following recursive formula, for all f, g ∈ C ∞ c (R d ) and all with the convention that Γ 0 (f, g)( , and x ∈ R d . The forthcoming simple lemma provides a representation of the Γ 2 as a pseudodifferential operator whose symbol is completely explicit. 
where σ is a finite positive measure on S d−1 and where k x (r) is a nonnegative continuous function decreasing in r ∈ (0, +∞), continuous in x ∈ S d−1 with Let A be the operator defined, for all f ∈ S(R d ) and all x ∈ R d , by Then, for all f, g ∈ S(R d ) and all x ∈ R d where σν(ξ, ζ) and ρν(ξ, ζ) are given, for all ξ, ζ ∈ R d , by σν(ξ, ζ) = Proof. First, by definition, for all f, g ∈ S(R d ) and all x ∈ R d , Let us compute Γ 1 (A(f ), g)(x). Using the Fourier representation, for all x ∈ R d , so that, for all ξ ∈ R d , Thus, for all x ∈ R d , Similarly, for all x ∈ R d , At first, observe that, Next, by straightforward computations, Then, for all x ∈ R d , where, for all ξ, ζ ∈ R d ρν (ξ, ζ) = This concludes the proof of the lemma. The next proposition asserts that the Bakry-Emery criterion still holds for the rotationally invariant α-stable distribution with α ∈ (1, 2). Proposition 5.2. Let α ∈ (1, 2) and let X α be a rotationally invariant α-stable random vector of R d with law µ α and with Lévy measure given by where λ is the uniform measure on S d−1 and where . where Γ and Γ 2 are respectively the "carré du champs" operator and the iterated "carré du champs" operator of order 2 associated with ν α . Proof. By Remark 5.2, observe that, for all ξ, ζ ∈ R d , where, Then, by Lemma 5.1 and Fourier inversion, for all f ∈ C ∞ c R d , and all x ∈ R d , Thus, for all f ∈ C ∞ c R d and all x ∈ R d , Let us study rigidity and stability phenomena for the rotationally invariant α-stable distributions with α ∈ (1, 2) based on the Poincaré-type inequality of Proposition 5.1. To reach such results let us adopt a spectral point of view. This is a natural strategy to obtain sharp forms of geometric and functional inequalities as done, e.g., in [12,23,13]. First, observe that, since α ∈ (1, 2) and since X α considered in Proposition 5.2 is centered, the function g(x) = x, x ∈ R d , is an eigenfunction of the semigroup of operators (P ν t ) t≥0 given in (49) with ν = ν α as in (19). Indeed, for all x ∈ R d and all t ≥ 0 Then, by its very definition, A(g)(x) = −g(x), for x ∈ R d , so that g is an eigenfunction of A with associated eigenvalue −1. However, since α ∈ (1, 2), g does not belong to L 2 (µ α ), with µ α being the law of X α . To circumvent this fact, let us build an optimizing sequence by a smooth truncation procedure. For all j ∈ {1, . . . , d} and all R ≥ 1, let g R,j be defined, for all x ∈ R d , by with ψ ∈ S(R d ), ψ(0) = 1 and 0 ≤ ψ(x) ≤ 1, for x ∈ R d . Take, for instance, ψ(x) = exp(− x 2 ), for x ∈ R d . Now, let us state some straightforward facts about the functions g R,j : for all j ∈ {1, . . . , d}, Eg R,j (X α ) = 0 and, as R → +∞, Next, by studying precisely the rate at which both the last two terms diverge, we intend to prove that, for all j ∈ {1, . . . , d}, The first technical lemma investigate the rate at which Eg R,j (X α ) 2 diverges as R tends to +∞. , for x ∈ R d . Let α ∈ (1, 2) and X α be a rotationally invariant α-stable random vector of R d with characteristic function ϕ given, for all ξ ∈ R d , by Then, for all j ∈ {1, . . . , d}, as R tends to +∞, where Proof. First, for all R ≥ 1, set ψ R (x) = ψ(x/R), for x ∈ R d , and, for all j ∈ {1, . . . , d}, where X α is a rotationally invariant α-stable random vector with α ∈ (1, 2) and X α,j is its j-th coordinate. By Fubini's theorem, standard Fourier analysis, two integrations by parts and a change of variables, it follows that where ∂ 2 ξ j is the partial derivative of order 2 in the ξ j coordinate. 
Moreover, since α ∈ (1, 2) and since ψ 2 ∈ S(R d ), all the following integrals converge Hence, as R −→ +∞, which concludes the proof of the lemma. This second technical lemma provides the rate of divergence, as R tends to +∞, of for all j ∈ {1, . . . , d}. ∈ (1, 2) and X α be a rotationally invariant α-stable random vector of R d with characteristic function ϕ given, for all ξ ∈ R d , by Then, for all j ∈ {1, . . . , d}, as R tends to +∞, where g R,j (x) = x j ψ(x/R), for x ∈ R d , and where Γ is the "carré du champs" operator associated with X α . From the above lemma, and from a spectral point of view, the correct functional to observe rigidity phenomenon for the rotationally invariant α-stable distribution, α ∈ (1, 2), is the functional defined, for all µ ∈ M 1 (R d ) (M 1 (R d ) is the set of probability measures on R d ), by where X ∼ µ and where H α is the set of functions f from R d to R such that Var(f (X)) < +∞ and 0 < E R d |f (X + u) − f (X)| 2 ν α (du) < +∞. Therefore, the next result is a direct consequence of the Poincaré-type inequality for the rotationally invariant α-stable distribution, α ∈ (1, 2), and of the existence of an optimizing sequence as built above. To continue, let us state and prove a converse to the above corollary. In particular, note that, for all j ∈ {1, . . . , d}, Indeed, this is a direct consequence of (67) and (69) since the divergent terms cancel out and the remaining terms converge to 0 as R → +∞. To end this section, let us investigate stability results for rotationally invariant α-stable laws. A natural strategy to reach stability put forward in [22,2] is to use Stein kernels. This strategy relies on the Lax-Milgram theorem to ensure the existence of Stein kernels under appropriate assumptions. More precisely, the Stein kernel is seen as the solution to a variational problem linked to the covariance identity characterizing the target probability measure. In the sequel, we develop an approach based on Dirichlet forms to obtain the existence of Stein kernels. Adopting the notations, the definitions and the terminology of [26, Chapter 1], let us start with an abstract result which then leads to the existence of Stein kernels in known and in new situations. Note that this result as well as its geometric generalizations and consequences will be further analyzed in the ongoing work [3]. Theorem 5.1. Let H be a real Hilbert space with inner product ·; · H and induced norm · H . Let E be a closed symmetric non-negative definite bilinear form in the sense of [26] with dense linear domain D(E). Let {G α : α > 0} and {P t : t > 0} be, respectively, the strongly continuous resolvent and the strongly continuous semigroup on H associated with E. Moreover, assume that, there exists a closed linear subspace H 0 ⊂ H such that, for all t > 0 and all u ∈ H 0 , for some C P > 0 independent of u and of t. Let G 0 + be the operator defined by where the above integral is to be understood in the Bochner sense. Then, for all u ∈ H 0 , G 0 + (u) belongs to D(E) and, for all v ∈ D(E), Moreover, for all u ∈ H 0 , Proof. First, from [26, Theorem 1.3.1], there is a one to one correspondence between the family of closed symmetric forms on H and the family of non-positive definite self-adjoint operators on H. 
Then, let A, {G α : α > 0} and {P t : t > 0} be, respectively, the generator, the strongly continuous resolvent and the strongly continuous semigroup on H associated with E such that, for all α > 0 and all u ∈ H, (Again the above integral is to be understood in the Bochner sense.) Then, from [26, Lemma 1.3.3], for all α > 0, all u ∈ H and all v ∈ D (E), Then, in order to establish (74) from (76), one needs to pass to the limit in (76) as α −→ 0 + . First, since (72) holds, G 0 + given by (73) is well defined on H 0 . Moreover, for all α > 0 and all u ∈ H 0 , Then, G α (u) converges strongly in H to G 0 + (u), as α tends to 0 + . It therefore follows that, for all u ∈ H 0 and all v ∈ H, Next, let us prove that, for all u ∈ H 0 , First, note that, for all α, β > 0, Then, from (76), and similarly for E(G β (u), G β (u)), as β tends to 0 + . Now, for the crossed term, The closedness of E then ensures that G 0 + (u) belongs to D(E) and that This gives (74), while the inequality (75) follows from (74), the Cauchy-Schwarz inequality, the triangle inequality and (72), concluding the proof of the theorem. The next remark explores how the absract Theorem 5.1 recovers various known results and provides new ones. Remark 5.3. (i) First, let γ be the centered Gaussian probability measure on R d with the identity matrix as its covariance matrix. Let H be the space of R d -valued square-integrable functions on R d with respect to γ, let H 0 be the functions in H with mean 0 with respect to γ and let E be the symmetric non-negative definite bilinear form defined, for all f, g ∈ C ∞ c (R d , R d ), by where ·; · HS is the Hilbert-Schmidt product for real matrices of size d × d. It is a standard fact of Gaussian analysis that the above form is closable and its closed extension gives rise to the Ornstein-Uhlenbeck generator and its semigroup. Moreover, note that the function, h(x) = x, x ∈ R d , belongs to H 0 and that γ satisfies the following Poincaré inequality: for all smooth f : Then, by Theorem 5.1, for all f ∈ D(E) where G 0 + (h) is given, for all x ∈ R d , by where div is the standard divergence operator. Thus, (78) is the integration by parts formula associated with γ. (ii) Let µ be a centered probability measure on R d with finite second moment such that, for all smooth f : for some C P > 0 independent of f . Moreover, assume that the bilinear symmetric non-negative is closable (sufficient conditions for the closability of the above form have been addressed in [26, Chapter 3.1] and in [11,Chapter 2.6]). Note that the function h defined by, h(x) = x, x ∈ R d , belongs to H, the space of square integrable functions on R d with respect to µ, and that R d h(x)µ(dx) = 0. Then, by Theorem 5.1, for all f ∈ D(E), so that a Gaussian Stein kernel of µ exists (in the sense of [22, Definition 2.1]) and is given by Moreover, with X ∼ µ, (75) reads Thus, one retrieves the results of [22]. (iii) Let α ∈ (1, 2) and let µ α be a rotationally invariant α-stable probability measure on R d with Lévy measure defined by where c α,d is given by (20). Let H be the space of square-integrable functions on R d with respect to µ α . 
Let E be the symmetric non-negative definite bilinear form defined, for all f, g ∈ C ∞ c (R d ), by Since ν α * µ α is absolutely continuous with respect to µ α , it is standard to check that the above form is closable and its smallest closed extension gives rise (see [26,Theorem 1.3.1]) to a non-positive definite self-adjoint operator A on H with corresponding symmetric contractive semigroup (P t ) t>0 on H. Moreover, from Theorem 5.1, for all smooth f : where H 0 is the space of square-integrable functions on R d with respect to µ α having mean zero. Then, by Theorem 5.1, for all f ∈ D (E) and all h ∈ H 0 Next, observe that the function h(x) = x, for x ∈ R d , does not belong to L 2 (µ α ). The next technical lemma describes the link between the semigroup of operators obtained from the form E given by (79) and the semigroup of operators (P ν t ) t≥0 given by (49) with ν = ν α as in (19) and with α ∈ (1, 2). With the help of this lemma, it is then possible to obtain the spectral properties of this semigroup of symmetric operators based on those of (P ν t ) t≥0 . Lemma 5.4. Let α ∈ (1, 2), let ν α be the Lévy measure given by (19) and let µ α be the corresponding rotationally invariant α-stable probability measure on R d . Let E be the smallest closed extension of the symmetric non-negative definite bilinear form given by (79). Let (P t ) t>0 be the strongly continuous semigroup of symmetric contractions on L 2 (µ α ) associated with E. Let (P να t ) t≥0 be the semigroup of operators defined by (49) and let ((P να t ) * ) t≥0 be its dual semigroup in L 2 (µ α ). Then, for all f ∈ L 2 (µ α ) and all t > 0, Moreover, for all x ∈ R d and all t > 0, Proof. Since the form E is the smallest closed extension of the bilinear symmetric non-negative definite form, given by (79) where D(A) is the domain of the operator A. Let us denote by (P t ) t>0 the corresponding strongly continuous semigroup on L 2 (µ α ) whose existence and uniqueness is ensured by [26,Lemma 1.3.2]. Now, recall that the semigroup of operators (P να t ) t≥0 extends to every L p (µ α ), p ≥ 1, as seen using the representation (49) and the bound, Moreover, it is a C 0 -semigroup on L p (µ α ) and its L p (µ α )-generator A α,p coincides with A α on S(R d ) which is now defined, for all f ∈ S(R d ) and all x ∈ R d , by and for which the following integration by parts formula holds, Then, by polarization, for all f, g ∈ S(R d ), Moreover, since S(R d ) is dense in L 2 (µ α ), the adjoint of A α,2 is uniquely defined so that, for all f ∈ S(R d ) and all g ∈ S(R d ) ∩ D A * α,2 , where D(A * α,2 ) is the domain of the operator A * α,2 . Then, S(R d ) ∩ D A * α,2 ⊂ D(A) and, for all f ∈ S(R d ) ∩ D A * α,2 and all x ∈ R d , Thus, which implies (thanks to [46,Theorem X.51]), for all t > 0 and all f ∈ L 2 (µ α ), where (P να t nα ) t≥0 is the extension to L 2 (µ α ) of the semigroup of operators given by (49), after the time change t → t/(nα), while ((P να t nα ) * ) t≥0 is its dual semigroup in L 2 (µ α ) (see, e.g., [42,Chapter 1.10]). Next, by Fourier duality and since α ∈ (1, 2), for all f ∈ S(R d ) and all j ∈ {1, . . . , d}, This implies that, for all j ∈ {1, . . . , d} and for all t ≥ 0, where g j (x) = x j , for x ∈ R d . This last observation concludes the proof of the lemma. The following long remark summarizes some basic properties of the semigroups. (iv) As noticed above, for j ∈ {1, . . . , d}, the functions g j (x) = x j , x ∈ R d , do not belong to L 2 (µ α ) so that Theorem 5.1 does not directly apply with u = g j . 
To circumvent this fact, one can apply a smooth truncation procedure as in (iii). Thus, by Theorem 5.1, for all R ≥ 1, all j ∈ {1, . . . , d} and all f ∈ D(E), and, as R −→ +∞, for all f bounded on R d . Moreover, from Lemma 5.4, for all j ∈ {1, . . . , d} and all x ∈ R d , G 0 + (g j )(x) = x j . Then, since µ α * ν α << µ α , as R −→ +∞, for all f bounded and Lipschitz on R d , Putting together these last two facts into (82) gives, for all f bounded and Lipschitz on R d , Next, let us state a result ensuring the existence of a Stein kernel with respect to the rotationally invariant α-stable distributions, α ∈ (1, 2), for appropriate probability measures on R d . Before doing so, recall that a closed, symmetric, bilinear, non-negative definite form on L 2 (µ) is said to be Markovian if [26, (E.4)] holds. Now, from [26,Theorem 1.4.1], this is equivalent to the fact that the corresponding semigroup P t is Markovian for all t > 0, namely, for all 0 ≤ f ≤ 1, µ-a.e., then 0 ≤ P t (f ) ≤ 1, µ-a.e. To finish this section, a stability result for probability measures on R d close to the rotationally invariant α-stable ones, α ∈ (1, 2), is presented. ∈ (1, 2), let ν α be the Lévy measure given by (19) and let µ α be the associated rotationally invariant α-stable distribution. Let β ∈ (1, α) and let µ be a centered probability measure on R d with R d x β µ(dx) < +∞ and with µ * ν α << µ. Let E µ be the closable, Markovian, symmetric, bilinear, non-negative definite form defined, for all f, g ∈ S(R d ), by Moreover, assume that, Then, for some C α,d > 0 only depending on α and on d. Proof. The proof partly relies on the methodological results contained in [2]. First, as in [2,Proposition 3.4], for all h ∈ H 1 ∩ C ∞ c (R d ), let f h , be defined, for all x ∈ R d , by with (P να t ) t≥0 given in (49) with ν = ν α and X α ∼ µ α . Next, let X ∼ µ. Then, for all h ∈ where g j (x) = x j , x ∈ R d and j ∈ {1, . . . , d}. Let g R,j be the smooth truncation of g j as defined by (65) with ψ(x) = exp(− x 2 ), x ∈ R d . Moreover, g j − g R,j L p (µ) → 0, as R tends to +∞, for all p ≤ β. Since, (see [2,Proposition 3.4]) Now, for all j ∈ {1, . . . , d} and all R ≥ 1, Cutting the integral on u into a small jumps part and a big jumps part and using M 1 (f h ) ≤ 1 and M 2 (f h ) ≤ C α,d , for some C α,d > 0 depending only on α and on d, imply Since g j − g R,j L p (µ) → 0, as R tends to +∞, and since µ * ν α << µ, along a subsequence, Now, for all x ∈ R d , all u ∈ R d , all R ≥ 1 and all j ∈ {1, . . . , d}, for some constant C j,d > 0 depending only on j and on d. Thus, Lebesgue's dominated convergence theorem implies that, for all j ∈ {1, . . . , d}, along a subsequence Finally, for all R ≥ 1 and all j ∈ {1, . . . , d}, The first term on the right hand-side of (87) is bounded, for all R ≥ 1 and all j ∈ {1, . . . , d}, by To conclude the proof, let us deal with the second term on the right-hand side of (87). Then, by the Cauchy-Schwarz inequality, for all j ∈ {1, . . . , d} and all R ≥ 1, Now, for all j ∈ {1, . . . , d}, The condition (86) concludes the proof of the theorem. A Appendix Lemma A.1. Let ν be a Lévy measure with polar decomposition given by (45) where the function k x (r) is continuous in r ∈ (0, +∞), is continuous in x ∈ S d−1 and satisfies (46). Then, for all ξ ∈ R d , the function t → ψ t (ξ) where, is continuously differentiable on [0, +∞) and for all ξ ∈ R d and all t ≥ 0, whereν is given by (47). Proof. 
First, for all ξ ∈ R d and all t ≥ 0, Now, observe that, for all ξ ∈ R d and all t ≥ 0 Then, by Leibniz's integral rule, for all ξ ∈ R d and all t ≥ 0 Moreover, by Fubini's theorem, for all ξ ∈ R d and all t ≥ 0 Then, by Leibniz's integral rule, for all ξ ∈ R d and all t ≥ 0 x; ξ (e t k x (e t ) − k x (1))σ(dx). Thus, for all ξ ∈ R d and all t ≥ 0 x; ξ (e t k x (e t ) − k x (1))σ(dx) Finally, straightforward computations conclude the proof of the lemma. Proof. The strategy of the proof is similar to the one of [2, Theorem A.1] but without the first moment assumption. The proof of [2, Theorem A.1] is divided into 3 steps; the last two depending on the finiteness of the first moment. First of all, from Step 1 of the proof of [2, Theorem A.1], for Z and Y two random vectors of R d and for all r ≥ 1, while H r is the set of functions which are r-times continuously differentiable on R d such that D α (f ) ∞ ≤ 1, for all α ∈ N d with 0 ≤ |α| ≤ r. Step 2 : This last step also follows the lines of the proof of Step 3 of [2, Theorem A.1] so that only the main differences are highlighted. Let h ∈ C ∞ c (R d ) ∩ H d+3 . Let Ψ R be a compactly supported infinitely differentiable function on R d , with support contained in the closed Euclidean ball centered at the origin and of radius R + 1, with values in [0,1], and such that Ψ R (x) = 1, for all x such that x ≤ R. First, for all t > 0 A similar bound holds true for |Eh(X)(1 − Ψ R (X))| since E X ε < +∞. Then, combining (96) together with the previous bounds implies for someC d,ε > 0 depending only on d and on ε. Next, as in Step 3 of the proof of [2, Theorem A.1], observe that hΨ R ∞ , ∂ d+1 j (hΨ R ) ∞ , ∂ d+2 j (hΨ R ) ∞ and ∂ d+3 j (hΨ R ) ∞ are uniformly bounded in R and in h for R ≥ 1 and since h ∈ C ∞ c (R d ) ∩ H d+3 (for an appropriate choice of Ψ R ). The last step is an optimization in R which depends on the behavior of R ε +C d,1 b e −t (R + 1) d + 2C d,2 (R + 1) d γ 1 e −β1t +C d,3 (R + 1) d γ 2 e −β2t +C d,4 (R + 1) d γ 3 e −β3t , for someC d,ε > 0,C d,1 > 0,C d,2 > 0,C d,3 > 0 andC d,4 > 0. Set β = min (1, β 1 , β 2 , β 3 ). Choosing R = e βt/(d+1) and reasoning as in the last lines of [2, Theorem A.1] concludes the proof of the proposition.
Photon Background in DIRC Fused Silica Bars The DIRC (acronym for Detection of Internally Reflected Cherenkov radiation) is the ring imaging Cherenkov detector of the BaBar detector at the Pep-II ring of SLAC. The Cherenkov radiators consist of 4.9 m long rectangular fused silica bars each glued together from four equal pieces. The photon detector is a water tank equipped with an array of 10,752 conventional photomultipliers. The current study attempts to identify sources of photonic background generated in the DIRC bars. A conclusion of this work is that there are two major sources: one such component consists of photons created by the delta-ray electrons in the fused silica, which in turn can produce Cherenkov light. The second component comes from the reflections of photons from the EPOTEK-301-2 glue-fused silica interface while they are traveling in the bars. The reflection occurs because of a slight mismatch of the refraction indices. Introduction The background in the fused silica bars has been studied several times [2,3]. It has been argued that there are two dominant contributions: the delta ray initiated radiative processes and the scintillation mechanism. It was also suggested that there is also a third component, which is caused by the randomly scattered Cherenkov photons on the bar imperfections, i.e., it depends on the quality [2]. Our study confirms that the delta ray contribution is indeed very significant process. However, we show that the scintillation mechanism is negligible in fused silica. Instead, the second dominant component comes from the reflections of photons from the EPOTEK-301-2 glue/fused silica interfaces. Although the reflection coefficient is relatively small, the effect is significant because one deals with a very large number of the Cherenkov photons at large dip angles (easily more than 1000 photons can bounce in the bar). We offer these arguments in favor of our interpretation of the background: • The direct measurement of the scintillation in the bar measurement with the Fe 55 X-ray source placed directly on the bar surface shows a small rate of the photon activity in the bar, which cannot explain the measured rate of the background photons in the 4m-long bar tests. • The measured rate of the photon background generated by cosmic ray muons in the 4m-long bar test, presented in this paper, can be explained with just two dominant processes present on the Monte Carlo program: delta rays and reflections from the glue joints. The 4m-long bar measurement "separates" the background photons from the Cherenkov signal photons in time by choosing a large incident muon angle relative to the bar axis so that all Cherenkov signal photons travel away from the PMT. When the bar is equipped with a mirror, there is about 30ns available to study the background time structure before a large Cherenkov signal returns to the PMT. When the bar has a photon trap, the Cherenkov signal disappears completely and one has a time window of about 70ns to study the background. • With a detailed Monte Carlo simulation we explain the photon background in the cosmic ray muon test with the 4m-long bar. This program uses the Fluka generator of delta-electrons, and includes effects such as the multiple scattering of the delta rays, the Fresnel reflection on the fused silica/glue interface including the polarization effect, etc. • The measurement of the EPOTEK-301-2 refraction index and a direct measurement of the reflection from the glue/fused silica interface [see Ref. 4 for details]. 
This improves our knowledge of the refraction index compared to the information from the Epotek Co. that was used in Ref. 5.

Refraction Index of EPOTEK-301-2 Glue and the Fresnel Reflection.

Ref. 4, motivated by the present work, provided the measurement of the refraction index of the EPOTEK-301-2 glue, and also a direct measurement of the reflectivity of the EPOTEK-301-2 glue/fused silica interface. In this paper, we present only a summary of this study. Figure 1 shows the EPOTEK refraction index [4] together with similar fits to data for fused silica and water. Based on this result, one can calculate the Fresnel reflection on the EPOTEK-301-2 glue/fused silica interface. Figure 2 shows an example of such a calculation as a function of incident angle for two laser polarizations at 442 nm. In principle, the measurement of the refraction index of fused silica and of the glue as a function of wavelength is sufficient to calculate the reflectivity according to the Fresnel theory. Nevertheless, Ref. 4 measured this reflectivity directly at 442 nm and obtained a somewhat surprising result: the measured reflectivity was significantly higher than what was predicted by the Fresnel theory. Figure 3 shows the result. It is not presently clear why the reflection from this interface seems to be more complicated than the theory would suggest. The Monte Carlo program, used to explain the 4m-long bar test data, agrees with the Fresnel prediction up to an angle of ~50°, as one can see in Fig. 3 (see more details in chapters 4 and 5).

Fig. 3 - Fits to two different direct measurements of the reflectivity per single bounce from the EPOTEK-301-2 glue/fused silica interface. The graph also shows the Fresnel reflectivity curve, which is based on the measurement of the EPOTEK-301-2 glue refraction index [4], and also the Monte Carlo curve, which is used to explain the 4m-long bar photon background data in this paper (see chapter 5). It agrees with the Fresnel prediction up to an angle of about 50°.

Measurement of Scintillation in DIRC Fused Silica Bar.

The most direct and cleanest way to look for scintillation in fused silica is, perhaps, to use an Fe 55 source, which emits primarily 5.9 keV X-rays. Its contamination by more energetic Gamma rays is negligible (probability of less than 1.28×10⁻⁷). Such Gamma rays would produce Compton electrons, which can have enough energy to be above the Cherenkov threshold, and this would confuse the photon counting results of this particular study. On the other hand, the 5.9 keV X-rays from the Fe 55 source can create scintillation photons only via the photoeffect mechanism in fused silica. The photoeffect (γ + bound e⁻ → free e⁻) produces a soft electron (~5.9 keV) with only enough energy to excite nearby atoms, which can then emit scintillation photons, infrared photons, phonons, etc. Our photomultiplier detector can detect only scintillation photons with wavelengths between 300 and 800 nm; photons outside of this range are not detected. The same applies to the BaBar DIRC photon detector. The electron energy is of a similar magnitude to the average dE/dx excitations left by a fast particle, and therefore we think the Fe 55 source simulates the problem reasonably well. There are many other sources of soft X-rays, for example Cu, Rb, Mo, Ag, Ba and Rb. Unfortunately, they are not useful for our scintillation study because all these sources have a substantial contamination from more energetic Gamma rays produced in parallel to the softer X-ray production.
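To make the threshold argument quantitative, here is a minimal sketch of the Cherenkov threshold for electrons, assuming a nominal visible-band refractive index n ≈ 1.47 for fused silica (an assumption of this sketch, not a value quoted in this note):

```python
import math

M_E = 0.511  # electron rest energy [MeV]

def cherenkov_threshold_kinetic_energy(n):
    """Kinetic energy [MeV] above which an electron radiates Cherenkov light
    in a medium of refractive index n (condition beta > 1/n)."""
    beta_thr = 1.0 / n
    gamma_thr = 1.0 / math.sqrt(1.0 - beta_thr ** 2)
    return M_E * (gamma_thr - 1.0)

n_fused_silica = 1.47  # assumed nominal value in the visible
t_thr = cherenkov_threshold_kinetic_energy(n_fused_silica)
print(f"Cherenkov threshold for electrons: ~{t_thr * 1000:.0f} keV")
# A 5.9 keV Fe-55 photoelectron is far below this threshold, so it can only
# scintillate; Compton electrons from harder gammas could exceed it.
```

The resulting threshold is of order 200 keV, far above the 5.9 keV photoelectron energy, which is exactly why the Fe 55 source is the clean choice for isolating scintillation.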
The calculated external activity at the Fe 55 source exit window, in terms of the E γ = 5.9keV X-rays, is calculated to be R o ~3.6x10 7 counts/min, based on the source age (~12 years), its initial activity when purchased (20mC), and a knowledge of the detailed geometry of the source. The Fe 55 source was placed directly on the DIRC fused silica bar surface (no absorber in between), and the observed background-subtracted signal rate in PMT was R s ~ (1.96±0.2) x 10 3 counts/min -see Fig. 4. We assume that this rate is caused entirely by the scintillation mechanism in fused silica, and that its production is isotropic. According to the Monte Carlo simulation the PMT acceptance is ~5%. Therefore, the detector-independent probability that a single 5.9keV Xray entering the DIRC fused silica bar will produce a scintillation photon is [(1.96x10 3 /3.6x10 7 )/0.05] ~1.1x10 -3 . It is a small number but certainly not zero. The question is how does the scintillation mechanism using the 5.9 keV X-rays relate to that of a passing muon through the fused silica bar. In the absence of a more sophisticated dE/dx model, one possible method is to make the relation using energy equivalence. From the measured scintillation rate of ~(1. Measurement of the Photon Background using the Cosmic Ray Muons. The measurement is an indirect one. We want to "isolate" the background photons from the Cherenkov signal photons, and simulate the background component with the Monte Carlo. The best method to separate the two groups of photons is by time measurement. This is achieved by choosing a large incident muon angle relative to the bar axis so that all Cherenkov photons travel away from the PMT towards the mirror -see Fig. 5. When a bar is equipped with the mirror, one has about 30ns available to study the background time structure before a large Cherenkov signal returns. On the other hand, when a bar has the photon trap, the Cherenkov signal disappears completely and one has a time window of about 70ns to study the background activity. Our Monte Carlo program has many new features compared to the standard DIRC Monte Carlo program. These include the proper treatment of the glue/fused silica interfaces, the delta ray generation using the FLUKA program, the multiple scattering of delta ray electrons, Cherenkov photon polarization, etc. In the former case, a window of ~30ns is available to study the "early" photon background activity before the Cherenkov signal returns; in the latter case a 70ns window is available. The lead shield is ~12" thick. Top bar had also a veto counter to eliminate cosmic ray showers. Picture is not to scale. Figure 5 shows schematically the experimental arrangement to study the photon background using the cosmic ray muons in the natural quartz and the synthetic fused silica bars, located directly above each other. The particle trajectory is defined by two entrance scintillation counters limiting the angular acceptance to ± 5 degrees with respect to the mean of 56.5 degrees. The natural quartz material is Vitrosil type. The full size bar is glued together from three bars with dimensions ~15 x ~46 x ~1200 mm. The synthetic fused silica bar material is identical to BaBar DIRC bars. The full size bar is produced by gluing three individual bars with dimensions 17 x 31.86 x ~1220 mm (Bars #247, 248 and 249 from the DIRC production). These bars are nearly perfect in terms of chips, scratches, etc. The Quantacon XP2020 Philips PMT was attached directly to the bar with the EPOTEK-301-2 glue. 
In the Monte Carlo program we simulate the PMT's end as a composition of four interfaces: (a) fused silica bar's end, (b) EPOTEK-301-2 glue, (c) Borosilicate glass and (d) bi-alkali photocathode with their respective refraction indices. The bars were glued following procedure as the DIRC production, i.e., the EPOTEK-301-2 glue thickness was 0.001" thick. The mirror was just air coupled with a spring applying a pressure, as is done in the DIRC. Initially, both bars were equipped with the mirrors. The Cherenkov signal was therefore reflected back to PMT, allowing to study the photon background for 30ns after a passage of cosmic ray muon. For some portion of the run, the synthetic bar was connected to a photon trap, which entirely absor bed the Cherenkov signal. The photon trap is shown in Fig. 6. This allowed extending the length of the time window to 70ns and estimating the total rate of the background photons. The PMTs signal was first amplified 10x with LeCroy fast PMT amplifier. Its output was digitized with the HP digital scope, which was read out by a MAC IIcx computer with the CAMAC-based GPIB interface. The on-line program was Fortran-based, the off-line analysis used a PAW-based programming. We applied a sophisticated pulse finding algorithm using several methods: (a) a simple peak finding method, (b) peak finding with the mathematically correct de-convolution algorithm, and (c) a software-based leading edge "single hit" algorithm. The simple peak finding method discarded the single photon pulse shape and just followed the waveform contour. The de-convolution algorithm took into account the correct single photon pulse shape. In addition, the hardware used LeCroy TDC, which allowed one to make a simple leading edge "single hit" measurement, which was compared to the software-based leading edge "single hit" algorithm. Figure 7 shows typical digital scope waveforms recorded by the experiment when both bars were equipped with the mirrors. One clearly sees a large Cherenkov signal on both traces (top trace corresponds to the natural quarz, the bottom trace to the synthetic fused silica). One also clearly sees the photon background in front of the Cherenkov signal. One has only 30ns available to study the background in this case. The background consists of tightly overlapping pulses, which requires a waveform analysis treatment. The simple peak finding algorithm follows a waveform channel by channel and finds a peak if the waveform starts dropping for at least 5 consecutive channels. Figure 8a shows the time distribution of pulses found with the simple peak finding method. Fused Silica Bar with a Mirror. The Cherenkov peak appears at the scope channel ~150. The very first peak near channel ~115 correspond to the delta ray electrons traveling fast enough to produce the Cherenkov signal. A shoulder near channel ~130 corresponds to photons reflecting from the very first bar-to-bar glue joint; the bump near channel ~150 corresponds to the second glue joint. A three bin-wide "hole" just prior to channel ~150 is an artifact of imperfect pulse finding algorithm trying to avoid counting the huge Cherenkov signal. Figure 8b shows the time distribution using the single hit threshold software-based time finding algorithm. In this case the differentiation The angle of the track is such that all Cherenkov photons travel first away from the PMT, which allows to see a background photon activity before a saturating Cherenkov signal comes back from the mirror reflection. 
One has about ~30ns to perform a meaningful pulse finding algorithm to measure the background. The horizontal axis is time in terms of scope channels (1 scope channel = 0.4 ns) and the vertical axis is the amplitude. between various background contributions is even more apparent. It is interesting to compare the softwarebased time distribution of Fig. 8 with the hardware single hit time distribution using the LeCroy TDC, shown on Fig. 9. Figure 9a corresponds to a 35mV threshold on the PMT amplifier discriminator, Figure 9b corresponds to a 300mV threshold. In the latter case, we detect only large pulses, which are the Cherenkov signal pulses arriving ~30ns after the first delta-ray signal. One clearly recognizes the similar features between the hardware and software-based spectra, the most dominant feature being that the second peak follows the first one bỹ 15ns; the second peak corresponds to the reflection from the first bar-to-bar joint. However, the relative strength of individual contribution depends on the discriminator threshold, as is clearly seen comparing Figures 9a and 9b. The natural quartz data show similar features. The de-convolution algorithm will be discussed in the next chapter. Fused Silica Bar with a Photon Trap. The photon trap, shown in Fig. 6, proved to be a very effective tool to kill the Cherenkov signal traveling in the direction away from the PMT. Figure 10a shows the measured raw scope waveforms. One can easily notice the absence of otherwise dominant Cherenkov signal. Figure 10b shows the de-convoluted waveforms. The deconvolution algorithm assumes a standard PMT's amplifier pulse shape of a form t*exp(-t/τ), where the shaping constant τ is assumed to be 20ns. The method converts the raw scope waveform into the de-convoluted one using the following equation: where T is the scope sampling time (0.4ns), r i is the i-th PMT output sample, and d i is the i-th de-convoluted sample. The de-convoluted waveform is then subject to a threshold cut to eliminate the unwanted noise pulses. Figure 10 also shows the peak values found by both the simple-minded peak finding and the de-convolution method. One can see that both methods agree on average, perhaps, in more busy regions the de-convolution method may do slightly better. Fig. 7). All Cherenkov signal photons are absorbed in the photon trap, which enables one to study what otherwise would be under the Cherenkov peak, which gives a total range of ~70ns. Figure demonstrates the pulse finding method using (a) a simple peak finding algorithm, or (b) a peak finding using the pulse de-convolution algorithm. The horizontal axis is time in terms of scope channels (1 scope channel = 0.4 ns) and the vertical axis is amplitude. Figure shows numbers indicating how many and where the algorithms found the peaks. Figure 11a shows the time distribution using the de-convoluted peaks. One notices the absence of the dominant Cherenkov signal, and one can notice that the measured background rate is slowly diminishing as one approaches the end of the 70ns time window. Again, one clearly recognizes the first, the second and the hint of the third peak, corresponding to the delta rays, the first and the second bar-to-bar joints. Figure 11b shows the time distribution using the single hit threshold software-based time finding algorithm. The second peak, corresponding to the reflection from the bar-to-bar interface is again clearly visible. Figure 12 shows the hardware single hit time distribution using the LeCroy #2229 TDC, where Fig. 
12a corresponds to a 30mV threshold on the PMT amplifier discriminator, and Fig. 12b corresponds to a 300mV threshold. Again the second peak is clearly visible. Notice the absence of the Cherenkov signal in Fig. 12b between 35-40ns, which confirms a very good efficiency of the photon trap. Figure 13a shows the multiplicity distribution of background photon pulses found in the time window 0-70ns for the synthetic fused silica bar equipped with the photon trap and using the simple peak finding Figure 13b shows the same for the de-convolution algorithm. Figure 14a shows the distribution of total number of pulses found in the time window 0-30ns for the natural quartz bar equipped with the mirror using the de-convolution algorithm. Figure 14b shows the same for the synthetic fused silica bar. Table 1 summarizes these results. One can see that we measure about five background photons in a 70ns time window. In the first 30ns, we typically detect about two background photons. Figure 15 shows a comparison of the time distribution from the combined data and the Monte Carlo simulation. The time stamp for each entry into the plot was created using the simple peak finding algorithm. The comparison is done only in the very first time interval of 0-30ns, i.e., before the Cherenkov signal arrives (mirror data). The first peak in the data corresponds to the Cherenkov radiation created by the energetic delta ray electrons. The first peak is then followed by the second peak, which is caused by the reflection of the Cherenkov signal photons from the first bar-to-bar joint. The third peak is less noticeable. It is caused by the reflection of the Cherenkov signal photons from the second bar-to-bar joint. The Monte Carlo curve near the first peak is not fit to the data, but shows the predicted absolute number of photons for the corresponding number of tracks. The height of the second and the third peak in the Monte Carlo has been "tuned" empirically with the reflectivity of the EPOTEK-301-2 glue/fused silica interface. This "tuned" reflectivity is shown in Fig. 3. Figure 15 indicates that the Monte Carlo simulation explains the basic features of the data very well. One can see that the MC overestimates somewhat a number of hits in the first peak. Figure 16 shows a comparison between the photon trap data only from the 4m-long bar and the Monte Carlo simulation over the entire available time interval of 0-70ns. The time stamp for each data entry into the plot was created using the de-convolution algorithm in this case. The Monte Carlo simulates the photon arrival as a delta-function in time (no pulse simulation is performed). In this case, the Monte Carlo prediction seems to fade sooner compared to the data, however, again, the basic features are reproduced very well. Fig. 13 and Table 1 is very good in terms of the most probable number. Figure 17 shows the Monte Carlo multiplicity distribution of background photon pulses found in the time window 0-70ns for the synthetic fused silica bar equipped with the photon trap. Its most probable value agrees very well with the data shown in Figure 13b, which indicates that the multiplicity of the background photons is 4.6 ± 1.8. Of course, in BaBar we do not have the photon trap, and therefore Fig. 17 can be used only to confirm that the simulation in Monte Carlo is realistic because it agrees with the equivalent data. 
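For completeness, the waveform treatment behind these time and multiplicity spectra (the simple peak finder that requires the waveform to fall for five consecutive channels, and a de-convolution with the assumed single-photon response t·exp(−t/τ), τ = 20 ns) can be sketched as follows. The FFT division used here is only one possible discretization and is not claimed to be the exact formula of the analysis.

```python
import numpy as np

DT = 0.4    # scope sampling time [ns]
TAU = 20.0  # assumed amplifier shaping constant [ns]

def single_photon_response(n_samples, dt=DT, tau=TAU):
    """Assumed pulse shape t*exp(-t/tau), normalized to unit area."""
    t = np.arange(n_samples) * dt
    h = t * np.exp(-t / tau)
    return h / h.sum()

def deconvolve_fft(waveform, eps=1e-3):
    """Divide out the single-photon response in the Fourier domain.
    eps regularizes the division (a stand-in for the threshold cut applied
    to the de-convoluted waveform in the text)."""
    h = single_photon_response(len(waveform))
    H = np.fft.rfft(h)
    W = np.fft.rfft(waveform)
    D = W * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(D, n=len(waveform))

def simple_peak_finder(waveform, n_falling=5):
    """Flag a peak when the waveform drops for n_falling consecutive channels."""
    peaks = []
    for i in range(1, len(waveform) - n_falling):
        rising = waveform[i] > waveform[i - 1]
        falling = all(waveform[i + k + 1] < waveform[i + k] for k in range(n_falling))
        if rising and falling:
            peaks.append(i)
    return peaks
```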
One should point out that the present Monte Carlo does not treat the individual photons as pulses with a finite shaping time, and therefore it tends to enhance a tail in the distributions compared to data. Figure 18 shows the Monte Carlo prediction for the multiplicity of pulses found in the time window of 0-30ns. This is to be compared to Fig. 14. Table 1 is reasonable in terms of the most probable number. Detailed comparison of the 4m-long bar data with the Monte Carlo. In the Monte Carlo, we switch off the Cherenkov signal at the mirror end. Figure 19a shows the multiplicity distribution of all background photon pulses in the time window 0-70ns for the synthetic fused silica. We see that the most probable multiplicity is about five. However, the distribution has a long tail. Figures 19b,c shows the breakdown of this multiplicity count into either the delta-ray contribution only or the Cherenkov signal-induced background from the glue reflections. We expect ~96 photoelectrons from the Cherenkov signal at this dip angle and the time interval. We conclude that the most probable number of background hits represents ~5% compared to the "proper" DIRC Cherenkov signal at this dip angle, however, the distribution has a long tail, i.e., some events have much larger number of background hits. Up to this point, all comparisons between the test data and the Monte Carlo simulations neglected a contribution from the scintillation. In fact, Fig. 15 indicates that we are explaining data quite well already with this assumption, which is based on our scintillation rate measurements in chapter 3. It would be, perhaps, of some theoretical interest to investigate if the scintillation would have effect on the result, if we artificially increase the detected rate of the scintillation photons compared to results presented in chapter 3. We want to show that it is not possible to explain the data with just contributions from the delta-rays plus the scintillation alone, as was originally suggested [2,3]. To make this point, we switch off the glue reflections. Figure 20a shows a total multiplicity distribution of the background photons in the time window of 0-70ns for the synthetic fused silica, and Figures 20b,c show breakdown of this multiplicity count into either the delta-ray only or the scintillation only contributions. The scintillation rate is set artificially to be the same as in Fig. 19c, i.e., about four photons per muon, and it is assumed to be distributed randomly along the muon track and with a random photon direction. The result is shown in Fig. 21, which proves that we cannot reproduce the second peak observed in the data. It is clear that one cannot differentiate between the scintillation photons and the delta-ray photons using the timing information. Therefore, the rate of the scintillation photons is determined separately, as described in chapter 3. Fig. 19c). Finally, we will try to explain the data using all three contributions assuming that the scintillation rate is artificially increased by a factor of five compared to the experimental results presented in chapter 3. We will try to explain the photon background data using three contributions: (a) the delta ray-generated Cherenkov photons, (b) the scintillation rate of ~0.6photons/muon (5 times larger than what was measured in chapter 3) and (c) the Cherenkov photons, which reflected back from the glue/fused silica interfaces. 
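For contribution (c), the natural starting point is the Fresnel reflectance of the fused silica/glue interface; a minimal sketch, with illustrative index values rather than the measured dispersion of Ref. 4, is shown below.

```python
import numpy as np

def fresnel_reflectance(n1, n2, theta_i_deg):
    """Power reflectance for TE (s) and TM (p) polarization at a planar
    interface, for light going from index n1 into index n2."""
    ti = np.radians(theta_i_deg)
    sin_t = n1 * np.sin(ti) / n2
    if sin_t >= 1.0:  # total internal reflection (only possible if n1 > n2)
        return 1.0, 1.0
    tt = np.arcsin(sin_t)
    r_te = (n1 * np.cos(ti) - n2 * np.cos(tt)) / (n1 * np.cos(ti) + n2 * np.cos(tt))
    r_tm = (n2 * np.cos(ti) - n1 * np.cos(tt)) / (n2 * np.cos(ti) + n1 * np.cos(tt))
    return r_te ** 2, r_tm ** 2

# Illustrative values only; the indices actually used come from Ref. 4.
n_silica, n_glue = 1.47, 1.55
for angle in (0.0, 30.0, 50.0, 70.0):
    R_te, R_tm = fresnel_reflectance(n_silica, n_glue, angle)
    print(f"{angle:5.1f} deg  R_TE = {R_te:.2e}  R_TM = {R_tm:.2e}")
```

Even a per-crossing reflectance of order 10⁻³ matters here, because on the order of a thousand Cherenkov photons cross each glue joint, so a handful of them are sent back toward the PMT.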
We tune the reflectivity of the glue/fused silica interface to get agreement between the data and Monte Carlo on the second peak.. We eliminate the scintillation mechanism from our problem using two arguments, (a) one that the experimental results of chapter 3 show that the scintillation rate is negligible, and (b) second that Monte Carlo explains the test data very well without a need to introduce the scintillation (Fig. 15). Figure 15 using the delta ray and the scintillation mechanisms only, i.e., the Cherenkov photons do not participate via the reflection mechanism. The data are crosses, the Monte Carlo prediction is a smooth line. The reflections from the glue/fused silica interfaces were switched off. The scintillation rate was adjusted to be the as that of Fig. 19c. Since number of MC events in the first peak would be overestimated, we apply a fudge factor of 0.45 to get an agreement with the data. Figure 15 using all three mechanisms, i.e., using the delta ray, the scintillation (~0.6 photons/muon ~ 5 times larger rate than what was measured in chapter 3) and the reflected Cherenkov photon. The data are crosses, the Monte Carlo prediction is a smooth line. Because of the addition of the scintillation, the reflection coefficient had to be adjusted (curve #2 in Figure 3). Monte Carlo Program developed for this study. We started from a FORTRAN Monte Carlo program "DIRC Bar Simulation" which already had a basic simulation of the Cherenkov photon production, photon propagation through the bar geometry, including the PMT detection. In this program, photons are emitted randomly along the track of the traversing charged particle. The wavelength dependent production of the Cherenkov photons follows the equation given by Ref. 6. It gives a total number of photons N 0 = 1450 for a wavelength of λ = 400nm and a dip angle of about 56.5 0 . The acceptance of the photon is given by the quantum of the photomuliplier. The quantum efficiency is obtained from the Electron Tube Ltd. Photons are generated with random polar Cherenkov angles Φ c with respect to the track. The wavelength dependence of the Cherenkov angle Θ c enters via the refraction index n = n(λ) of fused silica, which was provided by the fused silica manufacturers. 4 The photons are transformed into the bar coordinate system. If the photon vector hits one of the six surfaces the algorithm decides on base of the refraction index if the photon is reflected or exits the bar. The program successively bounces photons through the bar allowing for the incorporation of detailed effects such as scratches, chips or other deterioration along the bar. If the photon reaches the mirror end, it is reflected. For a photon reaching the readout end a total survival probability is calculated as a product of: • attenuation in fused silica bulk material was assumed to be 0.997 per meter • internal reflection coefficient value of 0.99968 per bounce was used • mirror reflectivity value of 0.937 was used. In the following we describe additions to the "DIRC Bar Simulation" by the present effort. Production of Delta Ray Electrons Using the FLUKA Program. When a muon passes through the fused silica bar, among the many interactions that take place is the generation of delta rays. Some of these delta rays (electrons) are traveling at speeds that are greater than the phase velocity of light in fused silica and so generate their own Cherenkov photons, which belong to a category of the background photons. 
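As an order-of-magnitude check on how much light an above-threshold delta electron can radiate, the Frank-Tamm yield over the 300-800 nm band can be sketched as follows (the flat index n ≈ 1.47 is an assumption of this sketch, not a number taken from the simulation):

```python
import math

ALPHA = 1.0 / 137.036  # fine-structure constant

def cherenkov_photons_per_cm(beta, n, lam1_nm=300.0, lam2_nm=800.0):
    """Frank-Tamm yield dN/dx integrated over [lam1, lam2], photons per cm,
    assuming a wavelength-independent refractive index n."""
    sin2_theta_c = 1.0 - 1.0 / (beta * n) ** 2
    if sin2_theta_c <= 0.0:
        return 0.0  # below Cherenkov threshold
    lam1, lam2 = lam1_nm * 1e-7, lam2_nm * 1e-7  # nm -> cm
    return 2.0 * math.pi * ALPHA * (1.0 / lam1 - 1.0 / lam2) * sin2_theta_c

n = 1.47
print("beta = 1.00:", round(cherenkov_photons_per_cm(1.00, n)), "photons/cm")
print("beta = 0.75:", round(cherenkov_photons_per_cm(0.75, n)), "photons/cm")
```

This is the number of photons emitted per cm of electron path; the number of detected photoelectrons is much smaller once quantum efficiency and transport losses are folded in.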
We generate these electrons using the FLUKA (FLUctuating KAskade) program [7]. FLUKA determines how long a particle travels before an interaction takes place. For particles for which the cross section is constant between two consecutive processes, the interaction points are exponentially distributed, P(ℓ) dℓ = Σ_tot exp(−Σ_tot ℓ) dℓ, where Σ_tot is the total macroscopic cross section, given by Σ_tot(E) = Σ_i Σ_j N_A (ρ_i / P_Ai) σ_ij(E) + Σ_d. Here the first sum over i is for the different types of atoms that are present in a given material (N_A is Avogadro's number, ρ_i is the partial density, and P_Ai is the atomic weight of the atom of type i). The second sum is over the k possible kinds of interactions; as usual, σ_ij(E) is the microscopic cross section pertaining to the atom of type i and the interaction of kind j. The second term, Σ_d = m/(pτ), is the macroscopic decay cross section, where m is the mass of the object, p is the momentum, and τ is the mean lifetime. Once the interaction point is determined, the kind of process must be chosen according to the relative probabilities Σ_ij/Σ_tot for atomic/nuclear interactions and Σ_d/Σ_tot for decay. However, for charged particles the situation is more complicated because the cross section changes between two consecutive interactions. The solution that is adopted is to sample the interaction points according to the above equation, but instead of using the macroscopic cross section that corresponds to the particle being at the beginning of the step, it uses the maximum value of the macroscopic cross section along the step length. Then, its new energy and total cross section will be completely determined by the processes that occurred during the step. This interaction, which is initially picked at random, is accepted with the probability Σ_tot(E)/Σ_max. This algorithm (called the "fictitious sigma" method) is an example of a rejection method, which can be used to exactly distribute the interaction points. Once the interaction point is sampled and a delta ray is chosen as the type of interaction, the kinetic energy of the knock-on electron is chosen according to the corresponding differential cross section, where T_max, the maximum kinetic energy transfer to the electron allowed by kinematics, is T_max = 2 m_e c² β²γ² / (1 + 2γ m_e/m_µ + (m_e/m_µ)²). Although the muon energy was not fixed in this test, a lower limit can be estimated because the particle must traverse about 30 cm of lead before triggering the bottom scintillator (see Fig. 5). Using an average value of dE/dx for lead (1.123 MeV/(g/cm²)), one can calculate the minimum muon energy as approximately 0.4 GeV. A value of about 2 GeV, and a muon angle of 56.5°, was used as an input to FLUKA. The procedure, therefore, is to have FLUKA produce tracks through the quartz bars. A file is then created with all the necessary information that can be used in the Monte Carlo simulation of the bars. Even though the above two numbers (energy and angle) were used in the majority of simulations, several different energies (0.4 and 100 GeV) and angles (61 and 51 degrees) were also simulated to understand how the simulation results varied when these two parameters were changed.

Delta Ray Treatment in the Monte Carlo.

The Monte Carlo program first reads the data generated by FLUKA and decides which electrons are above the Cherenkov threshold (typically ~4 per muon). It then proceeds to transport these electrons through the fused silica bar until their energy falls below the Cherenkov threshold.
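The kinematic limit T_max and the ~0.4 GeV estimate quoted above can be checked with a short sketch; the lead density used here is the standard 11.35 g/cm³ and is an input of this sketch rather than a number quoted in the note:

```python
import math

M_MU = 105.66  # muon mass [MeV/c^2]
M_E = 0.511    # electron mass [MeV/c^2]

def t_max(p_mu_mev):
    """Maximum kinetic energy [MeV] transferable to an electron by a muon
    of momentum p_mu (standard delta-ray kinematics)."""
    e_mu = math.hypot(p_mu_mev, M_MU)
    beta_gamma = p_mu_mev / M_MU
    gamma = e_mu / M_MU
    return (2.0 * M_E * beta_gamma ** 2 /
            (1.0 + 2.0 * gamma * M_E / M_MU + (M_E / M_MU) ** 2))

# Minimum muon energy to cross the lead shield: ~30 cm of lead at an assumed
# density of 11.35 g/cm^3 with the quoted <dE/dx> of 1.123 MeV/(g/cm^2).
de_dx = 1.123                   # MeV cm^2 / g
rho_pb = 11.35                  # g / cm^3 (assumed standard value)
e_loss = de_dx * rho_pb * 30.0  # MeV
print(f"Energy lost in 30 cm of lead: ~{e_loss / 1000:.2f} GeV")
print(f"T_max for a 2 GeV/c muon:    ~{t_max(2000.0) / 1000:.2f} GeV")
```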
The stepping is done in small 3µm-long steps and energy is degraded using the Bethe energy loss formula [8], and it also included the multiple scattering [8,9]. If a photon is emitted in a given step, the program determines the Cherenkov angle and the corresponding direction cosines. One should say that the addition of the multiple scattering had little effect on our results. The delta rays are responsible for the first peak appearing in time spectrum of Fig. 15. New effects added since the version of this note from Sept. 19, 2001. Previous version of our Monte Carlo program did not take into account the secondary electron interactions, such as are bremsstrahlung and Moller scattering, properly. The main effect of this problem was to generate more photons than expected. To rectify the problem, one had to determine which delta ray had a secondary interaction and do proper accounting of all electron track segments to create the correct amount of the Cherenkov radiation. As one might imagine, this reduced the number of photoelectrons recorded in the PMT. Previously, one had to normalize the Monte Carlo to the data with an arbitrary fudge factor, but now this is not necessary any more. In addition, the curve for the reflection probability changed, making it more in agreement with the Fresnel theory. Basic wavelength-dependent effects We have included the absolute wavelength dependency of the internal reflection coefficient, mirror reflectivity according to Ref. 4. The wavelength dependency of the photon attenuation in the bulk material of fused silica was taken from Ref. 10. Simulation of Glue / Fused Silica Interfaces. The next step is to understand why there is the second peak in Fig. 15, which appears before arrival of the Cherenkov signal. It was thought that this could be the result of reflection from the glue/fused silica interfaces due to the difference in the respective refraction indices. We tried two methods of calculations: the first one was based on our measurement of the refraction index and the second one was based on a direct measurement of the reflectivity from the glue/fused silica interface [4]. Let's first discuss the first method. A fit to our data of the refraction index is where λ is given in nm. Figures 23 and 24 show the simulated distribution based on this dependency assuming the Fresnel reflection theory, one assuming TM mode and the other TE mode of reflection. In Fig. 23 (TM mode) there a second peak at approximately 10ns after the first and a third at approximately 25ns. In the data (see Fig. 15), the time between the first and second is about 15ns, while the third is not measured well. An important number is the ratio between the first and second peaks (defined as ratio of the peak integrals over the ranges between 0-10ns and 10-20ns of the histogram in Fig. 23). This number is 7.03, while in the data it is approximately 1.1, i.e., the second peak is much more pronounced in data. Figure 24 shows the same simulation assuming the TE mode reflection (the E-field vector perpendicular to the plane of incidence). Notice that the deep valley between the second and the third peaks is more pronounced in the TM mode compared to the TE mode. In the data (see Fig. 15) we have, of course, contributions from modes. Delta ray photons account solely for the first peak, and the last two peaks result from the reflection from glue/fused silica interfaces. The glue reflection is modeled by Fresnel's theory using the TM mode (E-field vector parallel to the plane of incidence). 
The time between the first and second peak is about 10ns and the ratio between the first and second (defined as the ratio of the integral between 0-10ns and 0-20ns) is 7.03. These numbers should be compared to data, which are 15ns and 1.1. Fig. 24 -The glue reflection is modeled by Fresnel's theory using the TE mode (E-field vector perpendicular to the plane of incidence). In this histogram, the valley is absent as a result of the qualitative difference between the TE and TM mode curves. The time between the first and second peak is about 10ns and the ratio between the first and second is 1.97. Finally, the most accurate reflection model was developed in which the polarization vector was determined at the photon's creation and carried through as the photon propagates down the bar. Therefore, when the photon hits the glue planes, the reflection probability is determined by a linear combination of TE and TM modes. The result of this simulation is shown in Fig. 25. Up to this point the reflection on the glue/fused silica interface was calculated using the Fresnel reflection theory using our measured refraction index [4]. However, we have also measured this reflection directly at the wavelength of 442nm (for TE mode) [4]. Using this measured result and normalizing it to the reflection probability at zero degree incident angle as calculated by the Fresnel theory, one obtains a polynomial fit to the data as follows: reflection probability = 1.56 × 10 The peak corresponding to the first glue/fused silica interface is now at approximately 14ns, and also the amount of photons reflecting from the first and second glue planes is greatly increased compared to the Fresnel theory discussed so far. It is evident that the result of the shift is due to the fact that more photons at steeper incident angle are being reflected. In the final simulation, which is shown in Fig. 22, we used a scintillation rate of 0.6 photons/track (see chapter 3). After some empirical tuning of the 1-st coefficient, the reflection probability from the glue/fused silica interface was found to be the following: Based on Fig. 22, we conclude that the simulation of the timing spectrum agrees well with the data as far as the normalization of the second peak. However, the time between the first and second peaks is about 1 to 2 ns off from the data, and this discrepancy was never really satisfactorily explained. This histogram has the same 10 ns separation between peaks and a ratio between the first two peaks of 3.106. Fig. 26 -Instead of using Fresnel theory to model the reflection from glue, this histogram is a result of using data from an independent experiment that measured reflection from the glue [4]. A polynomial fit was obtained to the data and inserted into the Monte Carlo program. The amount of photons reflecting from the first glue plane is greatly increased, and in fact ratio of the first and second peaks is .528, which is an overshoot as compared to the data. However, the timing is improved, as now the time separation of the first two peaks is 14ns. Group Velocity. We use group velocity to determining the arrival of time the photon at the PMT was used: This had a significant effect for a long propagation times above ~35ns. Figs. 28 -The muon entrance angle was changed to the specified values. Both timing and the ratio of peak values of the 1-st and the 2-nd peak are somewhat sensitive to the angle change. Variation of Muon Angle. 
The muon angle was changed by 5° to check how the results would change if the measurement had such a systematic error. The expected statistical error in the angle is less than 1°. Figure 28 shows the result of running at the two incident angles of 61.44° and 51.44°. It is interesting to note that the number of photons coming into the PMT is significantly affected, but the actual time is not. Variation of Muon Energy. In addition to the muon energy of 2 GeV, two other energies (0.4 and 100 GeV) were simulated. The lower limit of 0.4 GeV was chosen because this was the minimum energy that could trigger the counters. The upper limit of 100 GeV was chosen because this was a reasonable upper bound on the energy of muons at the earth's surface. Figure 29 shows that the change of muon energy did not significantly affect the results. Figs. 29 - The muon energy was changed to the specified values. Both the timing and the ratio of the peak values of the first and second peaks are insensitive to this change. Comparison of Data and Monte Carlo Program in BaBar. Figs. 30 - The single photon Cherenkov angle distribution for muon tracks from the reaction e+e- → µ+µ- as reconstructed in the BaBar DIRC (dots) and the Monte Carlo prediction using the "official" DIRC MC program, which includes the delta rays only (no scintillation or glue reflections). It does not have the improvements mentioned in this note. Figs. 31 - The effect of various contributions on the MC tail distribution of the single photon solutions of the Cherenkov angles in BaBar muon tracks. Figure 30 shows the single photon Cherenkov angle distribution for muon tracks from the reaction e+e- → µ+µ- as reconstructed in the DIRC of BaBar. Combinatorial background is removed by requiring that a photon lies closest to the expected Cherenkov angle and arrival time. The remaining background under the signal peak contributes up to 15%. Only about 60% of this background level is presently explained by the BaBar DIRC Monte Carlo. The only significant contribution is generated by delta rays. However, it should be pointed out that in this work we have used the FLUKA [7] generator to simulate the delta rays. It remains to be seen whether the BaBar generation of delta rays, and their subsequent use to generate Cherenkov photons, is equivalent to our procedure. The BaBar DIRC Monte Carlo does not presently include scintillation in the quartz. We have shown in this work that a significant source of secondary light, in addition to the delta rays and of comparable size, is the reflection of Cherenkov photons from the glue joints. We have demonstrated experimentally that the scintillation contribution is negligible and is not needed to explain our data. Figs. 32 - The MC distribution for the single photon solutions of the Cherenkov angles in BaBar muon tracks. The graphs show the difference distribution between glue reflection "in" and "out" (the glue "removes" ~3.1% of photons in the Cherenkov peak area only). Motivated by this work, we have now added the glue reflection to the "official" BaBar DIRC MC program. Figures 31 and 32 show the effect of the glue reflection on the BaBar DIRC MC distributions of the single photon solutions for the Cherenkov angles and time of arrival for muon tracks from the reaction e+e- → µ+µ- as reconstructed in the DIRC of BaBar. However, the BaBar DIRC MC program does not include the "FLUKA way" of generating photons, nor many of the other improvements described in this note.
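To make two of the simulation ingredients discussed above concrete, the following minimal Python sketch evaluates the Fresnel reflection probability at a glue/fused-silica interface for the TE and TM polarizations (and their average), together with the group velocity used to compute photon arrival times. It only illustrates the standard formulas: the refraction indices in the example call are placeholder values, not the measured indices or the empirically tuned polynomial used in this note.

```python
import numpy as np

def fresnel_reflectance(n1, n2, theta_i):
    """Reflection probability at a planar n1 -> n2 interface for the TE and TM
    polarizations (Fresnel equations); theta_i is measured from the normal."""
    sin_t = n1 * np.sin(theta_i) / n2
    if abs(sin_t) >= 1.0:                      # total internal reflection
        return 1.0, 1.0, 1.0
    theta_t = np.arcsin(sin_t)
    cos_i, cos_t = np.cos(theta_i), np.cos(theta_t)
    r_te = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    r_tm = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    return r_te**2, r_tm**2, 0.5 * (r_te**2 + r_tm**2)

def group_velocity(n, dn_dlambda, wavelength_nm):
    """Group velocity in a dispersive medium, v_g = c / (n - lambda * dn/dlambda),
    returned in mm/ns; dn_dlambda is the slope of the refraction index in 1/nm."""
    c = 299.792458                             # speed of light in mm/ns
    return c / (n - wavelength_nm * dn_dlambda)

# Example: a Cherenkov photon hitting a fused-silica/glue boundary at 45 degrees.
# n = 1.47 (fused silica) and n = 1.45 (glue) are placeholder values only.
R_te, R_tm, R_avg = fresnel_reflectance(1.47, 1.45, np.radians(45.0))
```

In the full simulation the unpolarized average would be replaced by the linear combination of TE and TM reflectances weighted by the polarization vector carried with each photon, as described above.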
Conclusions • We show experimentally that the scintillation mechanism induced by a passing muon is negligible in the DIRC fused silica bars. It represents less than 1% of the total photon background in the fused silica bar. • Instead, we argue that there are two other major components in the DIRC photon background. One component consists of photons created by the delta-ray electrons in the fused silica, which in turn produce Cherenkov light. The second component comes from the reflections of all photons from the EPOTEK-301-2 glue/fused silica interfaces. The reflection occurs because of a slight mismatch of the refraction index between the optical glue and the fused silica. • We conclude that the most probable number of background hits represents ~5% relative to the number of "proper" DIRC Cherenkov photons measured at a dip angle of 56.5°. However, the distribution has a long tail due to the delta-ray contribution, i.e., some events have a much larger number of background hits. • This note will be useful for estimating the photon background multiplicity and provides a recipe for simulating it in the BaBar Monte Carlo for each minimum-ionizing particle passing through the DIRC bar.
10,042.2
2001-11-04T00:00:00.000
[ "Physics" ]
EARTHQUAKE HAZARD AND RISK ASSESSMENT STUDY FOR THE CANTERBURY REGION, SOUTH ISLAND, NEW ZEALAND: OUTLINE OF PROGRAMME DEVELOPMENT In recognition of the earthquake threat to Canterbury, and its statutory responsibilities, Environment Canterbury initiated a comprehensive, staged multi-year earthquake hazard and risk assessment study programme in 1997. In this paper the general framework and philosophy behind Environment Canterbury's Earthquake Hazard and Risk Assessment Programme is outlined. The results of the Stage 1A earthquake source characterisation and the Stage 1B probabilistic seismic hazard assessment for the Canterbury region are presented in companion papers in this volume. The programme participants have ongoing earthquake hazard research projects, and are also involved as practitioners in land-use planning and development of relevance to the Canterbury region. The coordinated programme is primarily designed to facilitate the integration of a diverse range of independent studies, so making relevant earthquake hazard and risk information readily available to a wide range of end-users, including other professionals (engineers and scientists), planners, civil defence and emergency management staff, utility operators, and developers. In addition the programme provides up-to-date, relevant information for public education and awareness purposes. The first stage of the programme has been completed, and includes identification and characterisation of earthquake sources, probabilistic hazard assessment, and formulation of earthquake scenarios. The long-term staged study programme will address the earthquake hazard, the risks posed, possible mitigation options and the mitigation implementation methods available. INTRODUCTION Environment Canterbury has developed a comprehensive earthquake hazard and risk assessment study programme for the Canterbury region. The programme was developed in consultation with the Institute of Geological and Nuclear Sciences Ltd (IGNS) with input from other key stakeholders in the region including the Natural Hazards Research Centre (NHRC), University of Canterbury. Successful and cost-effective regional-scale earthquake hazard mitigation programmes have been completed in other parts of New Zealand (for example, Auckland and Wellington), and a similar programme was considered appropriate for the Canterbury region (Figure 1). In developing the programme for Canterbury it was recognised that there are many potential earthquake sources located throughout a geographically large region, and that significant (and vulnerable) urban centres and infrastructure are also located throughout the region. The staged programme (Table 1) allows for the progressive and logical assessment of the various earthquake hazard components followed by an earthquake risk assessment and an economic impact assessment. The final part of the programme will be to prepare an earthquake hazard mitigation strategy. The first stage of the programme, divided into three component reports (Stages 1A-1C), has been completed over a period of three years. The results of Stage 1A (earthquake source characterisation) are summarised in a following companion paper in this issue of the bulletin (Pettinga et al., this volume), and the results of Stage 1B (probabilistic hazard assessment) are also presented (Stirling et al., this volume).
In addition to the strategy, an important output of the study will be a series of informative, innovative and user-friendly products including maps, explanatory booklets and brochures. The first of these, based on the completed Stages 1A and 1B, are now published and available from Environment Canterbury. AIM OF PROGRAMME The fundamental outcome of any seismic hazard and risk assessment study is to reduce the vulnerability of the regional community to the impact of earthquakes by providing local authorities and other organisations, individuals, and politicians with sufficient and accurate information to make logical, justifiable, and defendable decisions. The main aim of the study is to make available information that will lead to increased public awareness of the earthquake vulnerability and risk in the Canterbury region. The desired outcome is better decision making by local authorities and the community, thereby reducing exposure to earthquake risk. The overall objectives of Environment Canterbury's long-term earthquake hazard and risk assessment study are to: (1) Define the nature and extent of earthquake hazards in the region, including active faulting, fault-propagated active folds, ground shaking, liquefaction, slope stability, and tsunami; (2) Identify and quantify the earthquake risk to the regional community; (3) Present earthquake hazard information in a format that will encourage the regional community to take steps to reduce their vulnerability; (4) Ensure that adequate information in an appropriate format is available to Environment Canterbury as well as the territorial local authorities in the region in order to make logical, defendable and justifiable decisions for land-use planning, development, and emergency management; and (5) Ensure all engineering and science practitioners have relevant and up-to-date information available, or know where to source such information, so ensuring that, as far as is practical, sound and consistent professional advice is provided to end-users. WHAT IS DRIVING THE PROGRAMME? While it is not possible to reduce the incidence of earthquakes in the Canterbury region, Environment Canterbury recognised that steps need to be taken to reduce the vulnerability of the community to their impacts. Earlier studies have highlighted aspects of the earthquake hazard either with respect to the region as a whole (e.g. Owens et al., 1994), or more specifically to the Christchurch area (Elder et al., 1991; Centre for Advanced Engineering, University of Canterbury, 1997). Over the last decade a significant amount of new research data has become available regarding the active tectonic setting and the related earthquake activity in the Canterbury region. Accordingly this earthquake hazard and risk assessment study is timely and is needed in order to position the community to take full advantage of the new knowledge now available from scientific and engineering investigations.
Effectively, the driving forces for Environment Canterbury's programme include: (1) Canterbury's susceptibility to significant damaging earthquakes; (2) The general public perception that the earthquake threat is low; (3) Local government responsibilities under the Resource Management Act 1991 and the potential consequences of failing to fulfil statutory functions; (4) The recognition given to natural hazards in Environment Canterbury's "Regional Policy Statement"; (5) The lack of a co-ordinated approach to earthquake hazard mitigation work in Canterbury; (6) The need to resolve several significant scientific issues, in particular the probability of occurrence of damaging earthquakes; and (7) The lack of earthquake hazard information for urban areas in the region other than Christchurch. In its capacity for regional planning, environmental management, and emergency management, Environment Canterbury can influence community decision-making. For this reason Environment Canterbury believes it is well placed to take a lead role in promoting the availability and use of earthquake hazard research and hazard mitigation initiatives throughout the region. PROGRAMME OUTLINE AND PROGRESS TO DATE The earthquake hazard and risk assessment programme comprises five main stages (Table 1) and reflects the application-driven (planning, environmental management, emergency management, and public education) information requirements of Environment Canterbury. Stage 1 (Part A) of the study is complete (Pettinga et al. 1998). The aim of Stage 1 (Part A) is to identify and characterise the active geological structures in Canterbury, as well as the immediately surrounding regions, capable of generating moderate to large earthquakes likely to impact on Canterbury. This involved: (1) Compiling existing records of historical and instrumental seismicity in the region; (2) Compiling existing information on active or potentially active faults and other tectonic structures in Canterbury and nearby that may impact on the region. As part of this stage a preliminary compilation of offshore data was also included from a review of the published literature. However, it was realised that this did not adequately account for all the major seismogenic structures offshore, especially in the light of ongoing geological oceanographic research by the National Institute for Water and Atmospheric Research (NIWA). Consequently it is planned to undertake a more comprehensive review of all known offshore earthquake source structures at a later stage in the programme. This work was planned for 2000, but has now been deferred; (3) Undertaking aerial photograph studies and reviewing map databases for south Canterbury to determine the location of active faults and other structures; (4) Developing a methodology for a probabilistic seismic hazard assessment; (5) Developing a methodology for defining appropriate earthquake scenarios; (6) Outlining additional work that could be undertaken to better identify and characterise earthquake sources in Canterbury. The significant achievements of Stage 1 (Part A) are not reviewed here, but are presented in a following companion paper (Pettinga et al. this volume). Stage 1 (Part B) of the study, now also completed (Stirling et al.
1999), built on the results of Part 1A, and involved three components of work: (1) A detailed probabilistic seismic hazard assessment was undertaken in order to provide estimates of Modified Mercalli Intensity (MMI), Peak Ground Acceleration (PGA), and response spectral ordinates throughout the Canterbury region for return periods of 50, 142 (nominally 150), 475 (nominally 500), and 1000 years (the corresponding 50-year exceedance probabilities are illustrated in the brief sketch further below); (2) Because of the wide geographic extent of the region, it was decided to prepare an outline of three typical earthquake scenarios likely to impact on the region. The three scenario events selected include: i) a local moderate magnitude (~M5-6) earthquake; ii) a large (~M7-7.5) event located in the eastern foothills of the Southern Alps; and iii) a great (~M8) earthquake rupture of the Alpine Fault. These three scenarios are required for later stages of the programme in order to provide the basis for impact analysis and for defining the implications for disaster preparedness and emergency management in the region; and (3) A review of historic earthquakes which have impacted on Christchurch was undertaken. The significant achievements of Stage 1 (Part B) are also not reviewed here, but are presented in a following companion paper (Stirling et al., this volume). The aim of Stage 2 is to identify and quantify, for the selected urban and surrounding areas, the geographic variation in site conditions with respect to ground shaking and liquefaction potential during future earthquakes. The focus of this work will be on the main urban areas including Kaiapoi-Woodend (Stage 2A in 2000 and now completed) and Christchurch (Stage 2B planned for 2001). Further studies at other centres such as Timaru and Kaikoura may also be warranted, based on further assessment of the geological and geotechnical conditions indicative of site amplification and liquefaction susceptibility. Stage 3 of the study will address other earthquake hazards including slope instability and tsunami. The slope instability study will be restricted to identifying and quantifying the slope failure potential in main urban areas, along significant transport and other lifeline corridors, and in river gorges. The scope of the tsunami study has not been formulated at this time, but will probably include analysis of near-field and far-field tsunami hazard. Stages 1-3 provide the information needed to undertake an assessment of earthquake risk (Stage 4). The earthquake risk assessment will involve combining hazard information with vulnerability information such as building replacement costs, building occupancies, value of domestic properties and replacement costs for lifeline services. These data will then be combined to determine monetary losses and casualty rates during earthquake scenario events. Stage 5 of the study will look at the economic and social impact of an earthquake on the Canterbury region. This study should be of significant value to key community decision-makers. The results of the study will help set priorities for allocation of resources for future technical studies, emergency service planning, ownership and operation of community services and the level of investment in community education.
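As a brief aside on the return periods quoted in component (1) above, the nominal values follow from fixed exceedance probabilities over a standard 50-year exposure: under the usual Poisson assumption, 475-year and 142-year return periods correspond to roughly 10% and 30% probability of exceedance in 50 years. The short sketch below illustrates the conversion; it is a generic textbook relation, not part of the published hazard model.

```python
import math

def exceedance_probability(return_period_years, exposure_years=50.0):
    """Probability of at least one exceedance during the exposure time,
    assuming earthquake occurrence is a Poisson (memoryless) process."""
    return 1.0 - math.exp(-exposure_years / return_period_years)

for T in (50, 142, 475, 1000):
    print(f"{T:>4}-year return period: "
          f"{exceedance_probability(T):.1%} chance of exceedance in 50 years")
# 50 -> 63.2%, 142 -> 29.7%, 475 -> 10.0%, 1000 -> 4.9%
```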
The programme will culminate with the preparation of a detailed earthquake hazard mitigation strategy.The strategy will contain a series of actions or initiatives to ensure that the risks associated with earthquakes are explicitly recognised, quantified, and either accepted or mitigated.The strategy will help to define the role of Environment Canterbury with respect to earthquake hazard mitigation and the relationship it seeks with other relevant organisations in the region.It is hoped that a common framework can be developed within which priorities for action can be identified, responsibilities and accountabilities accepted and, where appropriate collaborative work programmes developed.This should lead to better communication and information exchange, and efficiencies in the use of limited resources. The implementation of the full earthquake hazard and risk assessment programme is dependent on the allocation of financial resources through Environment Canterbury's annual plan process.The staged programme is suited to the annual funding allocation process, providing for some flexibility in terms of scheduling the multi-year work plan, and also providing for progressive accountability in terms of satisfactory standards for work completion, with clear flowon benefits to the regional community. As outlined earlier, one of the key aims of the programme is to ensure that information compiled at each stage of the programme is made widely available.A critical element of this strategy is to proactively develop public awareness of the earthquake hazard.The approach taken by Environment Canterbury includes: • The preparation and publication of comprehensive technical reports at each stage of the programme; • The formal presentation of the results contained in the report to all Canterbury region territorial local authorities, emergency management and educational organisations, as well as the media.This has been facilitated by holding formal meetings to launch each completed stage of the programme.This has proved to be a particularly successful approach, achieving excellent attendance and feedback from those attending these meetings, and a high profile in the local and national news media; and • The preparation of information for public educational purposes.For example the large colourful Canterbury earthquake source poster (Canterbury Regional Council, 1999) which is based on the results contained in the Stage lA and 1B reports, was widely circulated throughout the region. A further anticipated development is the preparation of an earthquake web site, targeted especially for schools to access relevant regional information about the earthquake hazard, and provide up to date readily available information in a non-technical format suitable as a science information resource. HAZARD INFORMATION AND ITS USE -A COMMENT It is an unfortunate fact that we do not al ways make full use of available hazard information.The reasons for this are varied, but may include factors such as staff time, financial resources, as well as information which may not be presented in a language or format that is easily understood or usable.The facilitation of improved communication between earthquake hazard experts and the community is necessary if research is to be effectively translated into actions that mitigate hazards. 
Hazard information prepared by scientists or engineers is often unsuitable or unusable for immediate use by nontechnical users.Most local authority planners and civil defence/emergency management staff do not have the necessary training or experience to apply earthquake hazard information.Furthermore, their experience with natural hazards is often restricted to flood related issues.Equally, users who are unfamiliar with or not proficient in using technical hazard information are likely to misuse it or, as is more common, not use it at all.Clearly there is a need for further training and improved communication in order to facilitate the use of hazard data.While technical hazard data may exist, its availability may be dependent on the provision of staff and financial resources to ensure it is fully utilised by regional and local government organisations. Planners, civil defence/emergency management staff, utility operators, and developers all use hazard information in different ways to scientists and engineers.Therefore, there is considerable scope to be innovative, and by breaking new ground, in the way information is translated and transferred. Providers of hazard information and those responsible for its dissemination are beginning to recognise the difficulty of applying technical hazard information for practical mitigation purposes.The New Zealand Building Code is an excellent and most effective example where this is already being done.Progress is being made and the gap between scientists and end-users is closing.Scientists have improved understanding of the potentially wide application of their findings, and planners are gaining an improved level of technical knowledge and understanding of scientific information.This process is assisted by a contestable funding regime whereby applicants for research funds benefit from showing that their work has practical application and is supported by hazard information users. Even when hazard information is available and it has been translated and used for hazard reduction, it may still not be used effectively.Key reasons include: • The limited available staff time; • The limited available funding; • The perception that the hazard was so low that the existing effort was adequate; • The perception of potential public opposition to politically sensitive programmes; • A lack of leadership, as well as a lack of attention from management and elected representatives due to competing day-to-day issues; • A lack of interest or commitment. Environment Canterbury's earthquake hazard mitigation strategy has identified the importance of having high quality scientific information as a prerequisite for effective hazard mitigation. The strategy recognises the importance of translating information, in partnership with the science providers, into a useable form and its effective transfer to non-technical users.Actions or initiatives likely to improve the effective use of scientific information by non-scientists are also being addressed by the programme, and several of the developments adopted have been outlined in the previous section above. CONCLUDING REMARKS In this paper we have outlined Environment Canterbury's multi-year co-ordinated programme which addresses the earthquake hazard and risk assessment for the Canterbury region.The process of establishing a long-term co-ordinated earthquake hazard and risk assessment programme has provided an ideal opportunity for research and consultancy organisations to work closely and effectively with local government. 
The approach to the study programme hinges on bringing together complementary databases from different organisations for the purpose of earthquake hazard mitigation. Because of the scope of the project and the size of the Canterbury region it is considered essential that the work be staged over a period of about five to seven years, dependent on the annual levels of funding support provided. The long-term framework provides flexibility for setting objectives for each future stage. The successful conclusion of the programme is dependent on performance achievements at each stage and continued funding via the annual planning process of Environment Canterbury. Figure 1: (A) The geographic extent of the Canterbury region in the South Island. (B) The eleven Territorial Local Authorities within the Canterbury region; major urban centres are also shown.
4,139.8
2001-12-31T00:00:00.000
[ "Geology" ]
Generalised Fractional Indexes Approximation with Application to Discrete-Time Generalised Weyl Symbol Computation Dynamical, linear discrete-time system can be described by finite set of coefficients of difference equations or state space model. The set define dynamics of the linear time-invariant system for all times k (infinite time horizon). In contradistinction the description of discrete-time linear time-varying systems requires in general definition of an infinite number of coefficients. In order to describe dynamics of time-varying discrete-time systems one can use following state space description equations with time-dependent matrices [1, 2, 16, 18]: Introduction Dynamical, linear discrete-time system can be described by finite set of coefficients of difference equations or state space model.The set define dynamics of the linear time-invariant system for all times k  (infinite time horizon).In contradistinction the description of discrete-time linear time-varying systems requires in general definition of an infinite number of coefficients. In order to describe dynamics of time-varying discrete-time systems one can use following state space description equations with time-dependent matrices [1,2,16,18]: where Above model can be converted into more general operators description [1,2,16,18] with transfer operator defined by set of impulse responses 0,0 Nevertheless analyzing or processing data with infinite dimensional size is impossible.Additional simplifying assumptions allow one to describe the timevarying system with finite set of coefficients.Linear timevarying systems can be classified with respect to the simplifying assumption.Generally following classes of time-varying systems can be distinguished [3]: general time-varying, periodic time-varying, almost periodic timevarying, almost time-invariant.Independently on the class of the system, but especially for time-varying systems in the general form analysis can be realized only on finite time horizon.It mean that accessible system data is limited by two constraints for indexes There are no assumptions about past and future system behaviour. Time-frequency methods for continuous time systems are well known [4][5][6][7][8][9][10] as well as frequency methods for discrete-time systems [11][12][13][14].Many investigations has been made until now.Recently there are also known successful applications of time-varying approximations for nonlinear systems [17].The time-frequency transform is formulated as parameterized extension of Laplace transform.General form of the transform for continuous time systems can be defined by generalised Weyl symbol [10,15]. Discrete-time formula of the Generalised Weyl Symbol can be written using digital set of parameterised impulse responses (5) and the Discrete Fourier Transform (DFT) in following way where   R is arbitrary real number, usually bounded such that 0.5 Time-frequency transformation can be computed directly from eq. ( 6) only for =  0.5 (time-varying Zadeh transfer function [4] 0.5   ).For 0   one can apply fractional indexes approximation introduced in [15].Main aim of the paper is to develop new generalised fractional indexescomputational method which allows to determine generalised Weyl symbol for arbitrary real 0.5,0.5 ,   not only for 0.5   (integer indexes method) and 0   (fractional indexes [15]).Parameter  allows to shape the set of parameterised impulse responses. 
The selection of the parameter  in the generalised Weyl symbol enables selection of the best accuracy region for the time-frequency transform. Generalised fractional indexes approximation Definition.Generalised fractional index discrete time is defined as linear interpolation of h taken in following way Definition.Generalised fractional indexes discrete time response value of two variables h p m h p m h p m h p m h p m Taking account (5) we have: where floor denotes round toward minus infinity. Generalized fractional impulse response can be written as follows kn h h p m h p m h p m h p m Thus generalised discrete-time Weyl symbol approximation can be defined by substituting (10) in (6), in the following form: where variables , , , , , a a b b  are defined above (9). Application of the generalised fractional indexes for generalised Weyl symbol computation. Time-invariant systems are always defined on infinite time horizon, thus all elements of the impulse response are always definite.Responses for time-varying systems do not need to be definite in general for all k  .The system is defined only on some bounded time horizon (4).The high accuracy are in the beginning-middle part of the time horizon.Accuracy for the end and beginning part of the time horizon is worse.Applying for computations generalised Weyl symbol with fractional indexes it is possible to choose precisely the part of the time horizon to compute with the high accuracy. Conclusion Time-frequency transformation is well known tool for systems and signals analysis.Accuracy of discrete-time, time-frequency diagrams depends mostly on the length of the time window.For systems defined on finite time horizon the length samples outside the time horizon are inaccessible.Generally there are two ways to analyse the system: use very short time-window, at least 2 times shorter then time horizon, or analyse the system on the full time horizon with incomplete data (without data outside the defined time horizon. Short time horizons results in boundary effects (boundary distortions) on the time-frequency diagramthe beginning and the end of time horizon.Using additional parameter  one can continuously choose the best accuracy region from the finite time horizon.Negative values of the transformation parameter close to 0.5   results in the best accuracy at the beginning of the time horizon, whereas positive values close to 0.5   gives the best accuracy at the end of the time horizon.Middle values of  close to zero ensures the best accuracy in the middle of the time horizon. Fig. 3 . 5  Fig. 3. 3D Magnitude-Time-Frequency diagram calculated for 0.5   using fractional indexes impulse responses for system defined on finite time horizon Applying parameterized impulse response (5) for the discrete-time low-pass filter mentioned above defined on finite horizon, three following 3D time-frequency diagrams are calculated and plotted in Fig. 2, Fig. 3 (integer indexes 0.5   ) and 4 (generalised fractional indexes Fig. 4 . Fig. 4. 3D Magnitude-Time-Frequency diagram calculated for 0.3   using fractional indexes impulse responses for system defined on finite time horizon Fig. 2 shows Kohn-Nirenberg symbol [9] with 0.5   and the high accuracy at the beginning part of the time horizon while in fig. 3 is plotted time-varying transfer function [4] with 0.5   and the high accuracy at the end part of the time horizon.Fig. 
4 is the 3D magnitude of the Generalised Weyl Symbol with the transformation parameter equal to 0, calculated using the fractional indexes approximation for the impulse responses. (The parameterised impulse response used here denotes the system response at time k1 to a Kronecker delta shifted by time k0.)
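Because the defining equations above did not survive reproduction here, the following Python sketch only illustrates the general idea: values of a two-index impulse response at fractional (non-integer) indexes are approximated by bilinear interpolation of the four neighbouring integer-index samples, and a DFT over the lag variable then yields a generalised-Weyl-symbol-like time-frequency representation. The particular index convention (n + (0.5 - alpha)m, n - (0.5 + alpha)m) is an assumption borrowed from the continuous-time generalised Weyl symbol, not a reproduction of the paper's formulas.

```python
import numpy as np

def h_fractional(h, p, m):
    """Bilinearly interpolate the two-index response h[p, m] at fractional
    indexes p and m (the 'generalised fractional indexes' approximation)."""
    p0, m0 = int(np.floor(p)), int(np.floor(m))
    dp, dm = p - p0, m - m0
    return ((1 - dp) * (1 - dm) * h[p0, m0] + dp * (1 - dm) * h[p0 + 1, m0] +
            (1 - dp) * dm * h[p0, m0 + 1] + dp * dm * h[p0 + 1, m0 + 1])

def generalised_weyl_symbol(h, alpha, n, n_freq=64):
    """Time-frequency slice at time index n for a parameter alpha in [-0.5, 0.5]:
    a DFT over the lag m of the kernel evaluated at an alpha-dependent,
    generally fractional, pair of indexes (assumed convention, see above)."""
    out = np.zeros(n_freq, dtype=complex)
    for k in range(n_freq):
        acc = 0.0 + 0.0j
        for m in range(-n_freq // 2, n_freq // 2):
            p1 = n + (0.5 - alpha) * m
            p2 = n - (0.5 + alpha) * m
            if 0 <= p1 < h.shape[0] - 1 and 0 <= p2 < h.shape[1] - 1:
                acc += h_fractional(h, p1, p2) * np.exp(-2j * np.pi * k * m / n_freq)
        out[k] = acc
    return out
```

For alpha = ±0.5 the indexes are integers and no interpolation is needed, which corresponds to the integer-index cases mentioned in the text; intermediate values of alpha require the fractional-index approximation.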
1,499.4
2012-04-09T00:00:00.000
[ "Mathematics" ]
Using a whole-body 31P birdcage transmit coil and 16-element receive array for human cardiac metabolic imaging at 7T Purpose Cardiac phosphorus magnetic resonance spectroscopy (31P-MRS) provides unique insight into the mechanisms of heart failure. Yet, clinical applications have been hindered by the restricted sensitivity of the surface radiofrequency-coils normally used. These permit the analysis of spectra only from the interventricular septum, or large volumes of myocardium, which may not be meaningful in focal disease. Löring et al. recently presented a prototype whole-body (52 cm diameter) transmit/receive birdcage coil for 31P at 7T. We now present a new, easily-removable, whole-body 31P transmit radiofrequency-coil built into a patient-bed extension combined with a 16-element receive array for cardiac 31P-MRS. Materials and methods A fully-removable (55 cm diameter) birdcage transmit coil was combined with a 16-element receive array on a Magnetom 7T scanner (Siemens, Germany). Electromagnetic field simulations and phantom tests of the setup were performed. In vivo maps of B1+, metabolite signals, and saturation-band efficiency were acquired across the torsos of eight volunteers. Results The combined (volume-transmit, local receive array) setup increased signal-to-noise ratio 2.6-fold 10 cm below the array (depth of the interventricular septum) compared to using the birdcage coil in transceiver mode. The simulated coefficient of variation for B1+ of the whole-body coil across the heart was 46.7% (surface coil 129.0%); and the in vivo measured value was 38.4%. Metabolite images of 2,3-diphosphoglycerate clearly resolved the ventricular blood pools, and muscle tissue was visible in phosphocreatine (PCr) maps. Amplitude-modulated saturation bands achieved 71±4% suppression of PCr in chest-wall muscles. Subjects reported that they were comfortable. Conclusion This easy-to-assemble, volume-transmit, local receive array coil combination significantly improves the homogeneity and field-of-view for metabolic imaging of the human heart at 7T. Introduction Phosphorus magnetic resonance spectroscopy (31P-MRS) plays an important role in the assessment of tissue metabolism, through measurement of high-energy metabolites, such as phosphocreatine (PCr) and adenosine triphosphate (ATP), in vivo [1,2]. The PCr/ATP ratio is of particular interest in cardiovascular medicine, serving as a valuable biomarker that changes in most major heart diseases [3][4][5], and which even predicts mortality in patients with dilated cardiomyopathy [3]. Impaired cardiac PCr/ATP ratios also occur in systemic diseases, such as type-II diabetes [6] and obesity [7]. Yet 31P-MRS is still rarely applied in clinical medicine, principally because of its comparatively long scan times, signal-to-noise ratios that are too low for single-subject comparisons, and the challenges of recording regionally-resolved spectra across the heart. Moving to ultra-high field (i.e.
7T) from clinical MR systems (i.e. 3T) leads to a 2.5-fold increase in SNR [8,9] and increased spectral resolution [10,11]. The use of dedicated receivearrays further improves the SNR and extends coverage to more of the heart at 7T [5]. However, so far receive array coils for cardiac 31 P-MRS have used surface coils for transmission. Surface coils' radiofrequency (RF) transmit field strength (B 1 + ) inherently drops-off rapidly with increasing distance from the coil to the volume of interest [12,13]. This makes spatiallyresolved 31 P-MRS imaging ( 31 P-MRSI) across the heart challenging. Even with custom-built surface transmit coils, and optimised adiabatic B 1 + -insensitive excitation pulses, regionallyresolved measurement across the whole heart in clinically feasible times remains elusive [14]. Recently, Löring et al inserted a whole-body-sized (52 cm in diameter) 31 P birdcage RF-coil into the bore of a Philips 7T MR system (Philips Healthcare, Best, Netherlands). This showed relatively uniform spectral profiles for 31 P-MRSI examinations with a rectangular RF pulse in a cylindrical phantom and in a human subject [15]. However, their coil had to be inserted into the magnet bore after complete removal of the original patient bed. They also used the RF screen inside the patient tube decreasing the patient space significantly and affecting subject comfort and study inclusion criteria. An alternative design, not requiring complex preparation, and allowing fast installation/removal would be preferred. Furthermore, the previously described proof-of-concept coil operated only in transceiver mode, which led to low resolution MRSI matrixes in order to compensate for the inherently low SNR. In this study, we report initial results of a collaborative project to design, build and test a new, easily-removable, high-pass birdcage, whole-body (55cm-diameter) 31 P transmit RF-coil for use on a Magnetom 7T MR scanner (Siemens Healthcare, Erlangen, Germany), integrated into an extension of the scanner's motorized patient bed; we use it in conjunction with a 16-element anterior receive array (Rapid Biomedical, Rimpar, Germany) for cardiac 31 P-MRSI at 7T. Materials and methods The underlying design of our whole-body coil was similar to that previously reported [15]. However, to allow the desired easily removable setup, the design of the whole-body coil was adjusted by MR Coils (MR Coils BV, Zaltbommel, Netherlands) so that the lower rungs of the birdcage were integrated into an extension of the scanner's motorised patient bed. This is driven onto custom-built support rails at the service-end of the magnet for subject access ( Fig 1E). As the coil had to be inserted without removing the patient tube, while maximizing space for the subject, the existing RF shield integrated in the scanner's gradient coil was used as the RF shield for the body coil. Therefore, only the rings and rods of the birdcage had to be shifted inside the bore of 58 cm, leading to a setup with inner coil diameter of 55 cm. The upper part of the birdcage coil was made detachable to ease patient positioning. The coil was tuned to 120.3 MHz, the frequency for 31 P-MR on a Magnetom 7T MR scanner. The standard Siemens 8-kW RF power amplifier was used at this stage. Simulations of electro-magnetic fields were performed out using CST Studio Suite 2016 (CST Computer Simulation Technology AG, Darmstadt, Germany). A whole-body coil matching the proposed design, i.e. 
a 40-cm long, 20-rung birdcage coil with a 55 cm diameter, was simulated inside an RF shield matching the diameter and length of the shield of the scanner. Two voxel models, "Gustav" and "Laura", were simulated in order to identify the 10g local, global head and global body specific absorption rate (SAR) values for 1W input power. In order to get the worst case for the 10g local SAR values, one arm of the voxel model was placed in direct contact with the inner lining of the transmit coil. A rectangular (26×28 cm 2 ) surface transmit coil, previously used in our lab for 31 P excitation at 7T [5], was also simulated. The coil performance was first tested in a two compartment phantom consisting of a 18 L container (outer dimensions 46×24×17 cm 3 ), filled with NaCl (aq) (73 mM), and a 2cm cube, filled with KH 2 PO 4(aq) (1.8 M), set at a 10cm depth. A series of fully relaxed (TR = 10 s), non- localized, single average 31 P-MRS spectra were acquired increasing the peak voltage of a 10 ms rectangular pulse from 25 V to 400 V in 25 V steps. The SNR and peak B 1 + were compared between a quadrature surface 31 P RF-coil in transceiver mode (two 15 cm diameter loops, with overlap decoupling, as described in [14]), the new whole-body (55-cm diameter) birdcage coil in transceiver mode; and the combination of the whole-body coil for transmission and a 16-element receive array, consisting of 4×4 matrix of overlapping 8×5.5 cm 2 flexible receive loops [16], (Rapid Biomedical) for reception. Additional tests were performed to check the compatibility of the whole-body coil with the 16-channel receive array, e.g., scattering parameters measurements, and a heating test using fibre-optic temperature probes (Neoptix Inc, Quebec, Canada) and an infrared camera (FLIR Systems Inc, Wilsonville, Oregon). Ultimately, scanning human subjects with the receive array inside the whole-body transmit coil was found to be safe. Eight healthy volunteers (one female, mean age 28 ± 5 years, ages ranging from 21 to 35 years) were approached between October 2016 and January 2017 and all were consecutively recruited for our three in vivo experiments. A written informed consent was obtained in compliance with ethical and legal requirements. Oxford Central University Research Ethics Committee provided approval for this technical development work. No further demographic characteristics were recorded for the recruited volunteers. The individual whose photograph is shown in Fig 1 has given written informed consent (as outlined in PLOS consent form) to publish this photograph. All subjects were positioned supine inside the whole-body coil and CINE 1 H FLASH images were acquired using a single 1 H, transmit/receive, fractionated dipole antenna RF-coil (MR Coils) positioned over the heart. The 1 H RF-coil was then replaced with the 16-channel receive array for the 31 P experiments. The first experiment recorded high-quality 3D-resolved spectra using our established cardiac 31 P-MRS protocol. Specifically: acquisition-weighted 3D-UTE-CSI [17] spectra were acquired over a 16×16×8 matrix covering a 500×500×400 mm 3 field-of-view (FOV) in three volunteers, using a 1 ms long amplitude-modulated excitation pulse [8]. Eight averages using TR = 1 s were acquired in 23 minutes 52 seconds. The second experiment recorded metabolite maps and tested the performance of amplitude-modulated RF "saturation bands". 
Four volunteers were examined in the second experiment using two acquisition-weighted, transverse 2D-UTE-CSI experiments with a 24×24 matrix over 500×500 mm 3 FOV, and 60 mm slice thickness. Slice selective 2.5 ms long sinc pulses were used for excitation. Relatively short TR = 300 ms was used allowing for 32 averages within 19 minutes 26 seconds. Two amplitude-modulated saturation bands (10 ms duration) were used to suppress signal from chest muscles in one of the acquisitions. The third (and final) experiment recorded B 1 + field maps to quantify the transmit B 1 + homogeneity of the new whole-body RF-coil in vivo. Four subjects underwent B 1 + field mapping using a Bloch-Siegert sequence [13] with similar 2D resolution and excitation as in the second experiment. The Fermi pulse (8 ms duration), placed at ±2 kHz from PCr, required a minimum TR = 400 ms. The number of averages was 32 or 48, leading to a scan time of 26 minutes or 38 minutes for each of the ±2 kHz Fermi pulse frequency offsets. Signals from individual receive elements were combined using whitened singular value decomposition [18], and the combined spectra were fitted using a Matlab (MathWorks, Natick, MA) implementation of the AMARES time domain fitting routine [19]. The B 1 + field maps were calculated in all voxels with sufficient SNR, as defined in [13]. Results The results of our simulations are depicted in Fig 1A-1D. The simulated local 10 g SAR efficiency, global SAR efficiency and global head SAR efficiency of the designed coil were 3.4, 0.28 and 0.25 W/kg/μT 2 , respectively. Our simulated worst-case local 10 g SAR in the body was 0.145 W/kg and global body SAR was 0.013 W/kg. The mean simulated B 1 + for 1 W delivered power was 0.160 ± 0.075 μT for the heart and 0.122 ± 0.068 μT for liver (mean ± standard deviation for voxels of tissue type "heart" or "liver" in the 3D results). The coefficients of variation (CV), i.e. the standard deviation divided by the mean, for simulated B 1 + across the "heart" type voxels were 46.7% for the new birdcage coil (55cm-diameter) and 129.0% for the rectangular surface coil. The results of the phantom SNR and peak B 1 + comparison between the coils are given in Table 1. The SNR of the combined whole-body coil transmit and 16-channel receive was 2.6 times higher in comparison to the whole-body coil in transceiver mode. The peak B 1 + of the combined whole-body transmit and local receivers at a 10 cm depth below the coil was 3.5 times lower compared to the quadrature surface coil, however, the SNR achieved was comparable. Fig 2 depicts transverse 2D spatial distribution in vivo maps of PCr and 2,3-diphosphoglycerate (2,3-DPG) acquired using the body coil in transceiver mode and with the combined whole-body transmit and 16-channel receive setup. While the detection of signal from the heart region is challenging if the whole-body coil is used in transceiver mode, i.e. no 2,3-DPG signal visible, the heart is clearly delineated when the combined setup is used. Representative in vivo spectra acquired with the 3D-UTE-CSI protocol in human heart and liver using the combined setup are depicted in Fig 3. 
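As a side note for readers unfamiliar with the Bloch-Siegert technique used in the third experiment, the sketch below shows the usual reconstruction of a B1+ map from two acquisitions with the off-resonant pulse applied at symmetric positive and negative frequency offsets. The pulse-specific constant K_BS (which depends on the Fermi pulse shape and the ±2 kHz offset) is assumed to be precomputed; this is a generic illustration of the published method, not the authors' reconstruction code.

```python
import numpy as np

def bloch_siegert_b1_map(img_plus, img_minus, k_bs):
    """Peak B1+ map (in tesla) from complex images acquired with the
    off-resonance pulse at +offset (img_plus) and -offset (img_minus).
    k_bs is the pulse-specific constant in rad/T^2 (assumed precomputed)."""
    # The Bloch-Siegert phase is half the phase difference between the two
    # acquisitions; background phase cancels in the difference.
    phi_bs = np.abs(0.5 * np.angle(img_plus * np.conj(img_minus)))
    return np.sqrt(phi_bs / k_bs)
```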
The applied non-adiabatic saturation bands suppressed an average of 71 ± 4% of the PCr signal in the chest muscles (target region), while suppressing an average of 19 ± 3% of the PCr signal in the heart (not targeted); the suppression efficiency is depicted in Fig 3C and 3D. Discussion In our study, we propose a new easily removable whole-body RF transmit coil combined with a 16-element receive array for cardiac 31 P-MR at 7T. This setup provides homogeneous excitation through the whole chest without the need for adiabatic excitation pulses, as demonstrated by our CST simulations and 31 P B 1 maps measured in vivo. In addition, the receive array increases the SNR of our setup compared to the body coil in transmit/receive mode, allowing high-resolution 31 P-MRSI experiments in vivo. The increased 31 P B 1 + homogeneity allows the use of conventional amplitude-modulated pulses to suppress chest wall muscles instead of the SAR-demanding BISTRO approach [20] that has previously been used with surface transmit coils [8]. Our new coil was designed to easily move in and out of the bore of a Magnetom 7T MR scanner using the system's motorized patient table. Therefore, the maximal inner diameter of the transmit coil was limited by the scanner bore diameter (58 cm). Our constructed whole-body coil had an inner diameter of 55 cm. Although this means a reduction of the available patient space, it is still larger than the diameter of the previously reported whole-body coil that had a 52 cm diameter [15], and since it is integrated with the extension of the patient table, this allows us to scan a range of subjects in comfort. Additionally, the upper part of the birdcage coil is easily detachable, which facilitates straightforward positioning of subjects before their scan and rapid evacuation if a patient became acutely unwell. Using a volume transmit coil improved the transmit uniformity as expected, quantified by simulations and confirmed by our Bloch-Siegert 31 P measurement in vivo. Our simulated SAR efficiency values were similar to the previously reported ones [15], i.e. 3.4 vs. 3.8 (local), 0.28 vs. 0.24 (global body) and 0.25 vs. 0.33 W/kg/μT 2 (global head). To increase receive sensitivity, we combined the whole-body transmit coil with a 16-element receive array. No changes in the S 11 and S 12 parameters of the whole-body coil ports were observed using a network analyser (8712C, Hewlett Packard, Palo Alto, California) when the 16-channel array was inserted (regardless of its position). Adding the receive array led to a 2.6-fold increase in achieved SNR in phantom experiments, in comparison to the use of the whole-body coil in transceiver mode alone. This is comparable to the improvement seen on a 3T TIM Trio scanner (Siemens) between body coil 1 H MRI and using half of a 32-channel cardiac receive array (Invivo, Gainesville, Florida). A further increase in SNR might be gained by placing another receive array beneath the volunteer. Active detuning of the body coil during signal acquisition could also potentially lead to an increase in SNR. Every increase in single-element SNR will also lead to an increase in the precision of the complex "weights" used in coil combination algorithms, e.g., WSVD. The final SNR will therefore increase both due to the increased single-element SNR, but even more because of the better complex "weights" estimation in the combination step [18].
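The WSVD coil combination mentioned above can be sketched as follows: the per-channel FIDs are first noise-whitened using the measured channel noise covariance, and the rank-1 singular value decomposition of the whitened data matrix then supplies both the coil weights and the combined signal. The function below is a simplified illustration of that idea; the exact normalisation and phase conventions of the published algorithm [18] are not reproduced, and the function and variable names are ours.

```python
import numpy as np

def wsvd_combine(fids, noise):
    """Simplified whitened-SVD combination.
    fids:  (n_coils, n_points) complex FIDs from the receive array.
    noise: (n_coils, n_samples) noise-only samples used for whitening."""
    cov = np.cov(noise)                              # channel noise covariance
    whitener = np.linalg.inv(np.linalg.cholesky(cov))
    fids_white = whitener @ fids                     # decorrelate and scale noise
    u, s, vh = np.linalg.svd(fids_white, full_matrices=False)
    combined_fid = s[0] * vh[0].conj()               # rank-1 combined FID
    coil_weights = u[:, 0]                           # up to an overall phase/scale
    return combined_fid, coil_weights
```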
On the other hand, the peak B 1 achieved by the whole-body transmit coil in the phantom experiments at a depth of 10 cm was 3.5 times lower than that of our dedicated quadrature surface transmit coil as expected. However, the B 1 + homogeneity of the whole-body transmit coil, allowing for short-TR scans with uniform Ernst-angle excitation or multi-echo readout [21] compensates for this. Furthermore, it may be feasible to drive the whole-body coil with a highpower 31 P RF-amplifier, such as the 35 kW 123 MHz amplifiers used for 1 H-MRI on Siemens 3T scanners. Dedicated low-loss transmit cabling could also be used. We estimate that these changes would give peak B 1 + output comparable to our quadrature surface coil at 10 cm depth, while also retaining the coverage and uniformity advantages of the whole-body coil. The in vivo data demonstrate that while our whole-body RF-coil in transceiver mode could be used to acquire PCr signal from skeletal muscles, its SNR is too low to allow detection of metabolites of lower concentration, e.g., 2,3-DPG. However, in combination with the 16-element receive array, good quality spectra and well-resolved maps of PCr and 2,3-DPG can be acquired in vivo. As expected, the PCr signal was localized to the skeletal muscle wall as well as the heart, while the 2,3-DPG signal was restricted only to the ventricular blood pools. This confirms the high spatial resolution of the acquired 31 P-MRSI maps. The acquired in vivo 31 P B 1 + maps showed a high level of uniformity with a coefficient of variation <39% across all voxels with sufficient SNR. To demonstrate the use of this B 1 + uniformity of the whole-body coil, we showed the effectiveness (>70%) of conventional amplitude-modulated saturation bands to suppress the signal from chest and abdominal muscles that can otherwise contaminate cardiac 31 P-MRS data [8,22]. In conclusion, we have designed, constructed and tested an easily removable, whole-body transmit RF-coil and its combination with 16-element receive array that is straightforward and comfortable to use for cardiac 31 P-MR at 7T. This apparatus allowed us to measure in vivo the homogeneity of the 31 P transmit field, confirming the results of electromagnetic field simulations. It allows us to record anatomically-consistent 31 P metabolite maps, and to use saturation bands with amplitude-modulated pulses (and hence low SAR demands) to suppress signals from skeletal muscle. This combination of hardware is a step towards regionally-resolved, whole-heart cardiac 31 P-MRS studies at 7T.
4,370.8
2017-10-26T00:00:00.000
[ "Medicine", "Physics" ]
Effect of market information quality, sharing and utilisation on the innovation behaviour of smallholder pig producers Abstract Although pig farming can accelerate Uganda’s economic development, the value chain is undeveloped with poorly organized informal markets. Buyers take advantage of farmers paying low prices, pointing to the poor quality of pigs and pork. Farmer innovation can remedy this situation by enabling farmers to reduce costs, improve pig productivity and quality of pigs and pork. Leveraging farmer innovation behaviour calls for appropriate agricultural information. However, the effect of market information quality, sharing, and utilization on the innovation behaviour of pig producing farmers is not fully known. This study sought to determine the effect of information quality, sharing, and utilisation on the innovation behaviour of pig producing farmers in Northern Uganda. A cross-section survey of 239 respondents selected through multiple stages of purposive and random sampling was done. Data were analysed by Structural Equation Modeling (SEM). The results show that information quality contributes significantly to innovation behaviour directly (β = 0.247; P < 0.01) as well as indirectly through the partial mediation of information utilization (β = 0.176; 95% CI = 0.040∼0.349). Therefore, interventions that seek to enhance smallholder farmer innovation should provide quality information and support farmers to utilise it. J. Mugonya is the founder and Managing Director of Mugagga Holdings Limited, a private company engaged in agricultural marketing, research and consultancy in Uganda. His research interest is smallholder farmer's innovation behaviour and transitions. He is interested in the innovations that smallholder farmers make in response to biotic and abiotic shocks and their transitions from dependency on state support or humanitarian aid for their production and consumption to self-reliance. Currently, he is working with ICRISAT and WFP to study the transition readiness of cash transfer beneficiaries in Mogadishu and Puntland, Somalia. The present paper focusing on information quality shades light on how information can be used to enhance the innovation behaviour of smallholder farmers. PUBLIC INTEREST STATEMENT Although information system can accelerate the development of agriculture in developing countries, the quality of the information provided by the system can be a limiting factor. If farmers get inaccurate information regarding market demand, price, growing problems and weather conditions, they make wrong production and marketing decisions which affect the profitability of their enterprises. This paper focuses on the effect of information quality, sharing and utilisation on the innovation behaviour of pig producers. It emerges that the quality (timeliness, cost-effectiveness, usability, accuracy and relevance) of information significantly affects the innovation behaviour of pig producers. Farmer innovation only succeeds if their information quality needs are met. Therefore, interventions that seek to enhance smallholder farmer innovation should provide quality information to farmers for success. Introduction The pig sector can accelerate Uganda's economic development through the improvement of the welfare of smallholder farmers and the provision of employment (Mulindwa, 2016;Tatwangire, 2013). 
However, this potential is undermined by systemic market barriers, including limited access to market information, poor market linkages, inadequate access and high cost of feeds, credit and extension services (Muhanguzi et al., 2012). Scholars have suggested that to remedy these challenges; pig producing farmers need to innovate ways of reducing costs and dependence on external inputs, improve organization, production and productivity, and quality of pigs and pork (Baliwada et al., 2017;Creaney et al., 2015;Reij & Waters-Bayer, 2001;Wiskerke & Roep, 2007). This innovation would enable farmers to increase the competitiveness and performance of their piggery enterprises and earn an income commensurate to their work in the value chain (López-fernández et al., 2016;Rojo-Ramírez et al., 2020). Still, innovation would enhance the ability of farmers to react and adapt to risks, market failures and environmental distresses (Chopeva et al., 2015). However, producer market information needs have to be met before such innovation behaviour takes root (Sousa et al., 2016). For instance, (Kante et al., 2019) find that the use of an ICT model is highly predictive of the increased adoption of farm input information by small-scale farmers in developing countries. Several other studies have confirmed that access to market information significantly affects farmer innovation (Arshad et al., 2016(Arshad et al., , 2017Ullah et al., 2020;Zulfiqar & Thapa, 2017). However, these studies did not consider the quality of the information and the mediating effect of information utilisation on innovation behaviour. Opposed to the linear approach to innovation (transfer of technology), a central notion of the innovation system (IS) approach is the fact that innovation is not spontaneous, a "one-time off event" but rather a process that takes place over some time (Hermans et al., 2013;Knickel et al., 2009;Schut et al., 2015). It does not only involve technology uptake by the recipient farmers but also influences within the environment of farmers such as social support structures, markets, and other factors (Dolinska & D'Aquino, 2016). Particularly, the IS approach demands that in contributing to the innovation process, joint problem-solving and therefore participatory technology development is necessary through a discursive space for all stakeholders, namely the beneficiary farmers, scientists, change agents, and support services (Läpple et al., 2015;Leitgeb et al., 2011;Naouri et al., 2020;Reij & Waters-Bayer, 2001;Röling, 2009). It places the farmers as beneficiaries at the centre of determining their destiny in the innovation process. This is because integrating farmers, like the smallholder pig producers, in the innovation process has been linked to stimulating farmer learning and strengthening relationships within the value chain (Vecchio et al., 2020). That is, if pig producing farmers gain a strong bargaining position in the value chain, they can develop durable, mutually beneficial social and economic relationships with other players in the market as well as demand for support services. Therefore, quality market information can enable pig producing farmers to maintain and reap from existing market linkages. For example, it enables the farmers to know and orient their production towards specific needs and wants of target buyers, who will likely reward them by paying a better price for the better quality pig products generated through innovation. 
The long-term impact will be improved livelihoods of farmers arising from enhanced specialization and efficiency in creating market value. Although access to quality information has been said to enable farmers to improve their livelihoods in several ways, including increasing efficiency and productivity and supporting innovation behaviour (Ofuoku et al., 2008), there is a paucity of empirical evidence on its effect on innovation behaviour. Therefore, this study sought to examine the effect of market information quality, sharing and utilisation on the innovation behaviour of smallholder pig producers in northern Uganda. The findings will contribute to the understanding of the effect of different constructs of agricultural information on farmer innovation behaviour.

Hypotheses development and operationalization
The model links market information quality to the innovation behaviour of pig producers both directly and indirectly via the utilisation and sharing of information (Figure 1). Innovation refers to the generation or use of a new method, idea or practice to create greater value for one's own satisfaction or for the customer from available resources (Brugere et al., 2020; Lowitt et al., 2020). Innovation behaviour is defined as the tendency of an individual or a business to introduce or adopt something new, or the attitude towards information and the market demand of innovations (Chopeva et al., 2015; Tirfe, 2014). A farmer innovation is a technology, practice or organization along a given value chain that is different from common or traditional practice and is developed primarily by a farmer or a group of farmers with or without external assistance from extension agents, researchers, or development workers (Wünscher & Tambo, 2016). Therefore, farmer innovation behaviour is the tendency of a farmer or a group of farmers to develop a new technology, practice, or organization along a given value chain with or without external support from extension agents, researchers or development workers. For convenience and clarity to the respondents, this study investigated information quality through five attributes. The attributes were operationalized as timeliness [the extent to which information is up to date and is accessible on time for its planned use], cost-effectiveness [the degree to which information access is affordable by the user], usability [the ease of understanding information and thus being able to apply it], accuracy [the extent of the user's perception of information as correct and reliable] and relevance [the level of match between supplied information and that which is required to make a decision] (Kumar & Jakhar, 2010; Ofuoku et al., 2008).

H1: Market information quality positively affects information utilisation
The utilisation of information for the intended purpose is hampered if the source does not give credible and quality information. For example, in an adoption study, Schipmann-Schwarze et al. (2014) contend that access to extension officers is not the major barrier to farmers being informed about improved varieties; it is rather the quality of the information provided by extension officers that plays a role in adoption. If extension officers are accessible but are not well informed about improved varieties, awareness, and by extension adoption, of improved varieties will remain low.
Therefore, information quality is posited to have a direct positive effect on information utilisation.

H2: Information utilisation positively affects innovation behaviour
In most cases, information utilisation tends to ignite the spirit of curiosity and creativity among users (Keh et al., 2007; Tadesse, 2008). Therefore, farmers with high levels of innovation behaviour are likely to be active in information utilisation. For this reason, information utilisation is hypothesized to positively impact innovation behaviour. Also, because the acquisition of quality information is irrelevant without its application by receivers, information utilisation is postulated to be a partial mediator of the relationship between information quality and innovation behaviour.

H3: Market information quality has a direct positive relationship with innovation behaviour
Information is a vital resource in farming practice, and quality information enables farmers to improve their livelihoods in several ways, including supporting innovation. Therefore, quality information is expected to have a direct positive effect on pig producers' innovation behaviour.

H4: Market information quality positively affects information sharing
Information sharing is the exchange of critical information amongst chain partners (Li et al., 2005). It has two dimensions, connectivity and willingness (Marinagi et al., 2015), and delivers value based on four features: content, frequency, direction, and modality (Jonsson & Myrelid, 2016). Quality information is more likely to be shared and applied by intended users (Marinagi et al., 2015). Therefore, it is postulated that information quality has a positive relationship with information sharing.

H5: Information sharing mediates the relationship between market information quality and innovation behaviour
Chindime et al. (2017) argue that information sharing enhances the innovation performance of farmers. Therefore, information sharing is expected to positively impact pig producers' innovation behaviour.

Description of the study area
The study was conducted in Paicho sub-county, Gulu district, and Koro sub-county, Omoro district, Northern Uganda, between October and November 2018. The geographical coordinates are 2.8186° N, 32.4467° E and 2.7152° N, 32.4920° E for Gulu and Omoro, respectively. In this area, most farmers are smallholders with an average landholding of two acres per household, and the majority of pig rearing households keep between 6 and 20 pigs. Pigs are largely managed through tethering, feeding on locally available fodder and/or domestic food residues, with 60% of the labour provided by women (Ikwap et al., 2014). The study took place in a setting in which pigs have been prioritised by many interventions for transforming farmer livelihoods. As such, several government programs have been supplying pigs to farmsteads as a way of achieving increased production. Some of these programs include the National Agricultural Advisory Services (NAADS), the Northern Uganda Social Action Fund (NUSAF) and the Youth Livelihood Program (YLP).

Research design and sampling
The study employed a cross-sectional survey because the time and financial resources available could not support a longitudinal or experimental study. A multi-stage sampling technique was used to select study participants. First, two districts and then one sub-county per district were selected purposively. The sub-counties of Paicho (Gulu district) and Koro (Omoro district) were selected.
With only 8.9% of households keeping pigs, Gulu district (the mother district of Omoro) was rated among the districts with the lowest number of pig rearing households in the northern region of Uganda (Tatwangire, 2014). Yet, pigs and pork in these administrative units have been reported to have a lucrative market and high turnover (Ikwap et al., 2014). Paicho and Koro sub-counties were both identified by their respective district production offices as the sub-counties with the highest number of pig rearing households in the district. Second, three parishes that benefited from the NAADS program were purposively selected from each sub-county. In Paicho sub-county, Pagik, Kal-umu and Kal-ali parishes were selected, while in Koro sub-county, Pageya, Labwoch and Guna parishes were selected. Third, a complete list of all pig rearing households in the selected parishes that benefited from the NAADS program was obtained from the respective sub-county headquarters. The list had 393 farmers from Paicho and 201 from Koro, constituting a sampling frame of 594 pig producing farmers. Fourth, systematic sampling was used to select the study sample of 239 respondents; the number was determined using Yamane's formula (Yamane, 1967): n = N / (1 + N e²), where N = population size (the sampling frame of 594), n = sample size, and e = margin of error at the 95% confidence level (e = 0.05). The 239 respondents were distributed between Paicho and Koro in portions of 143 and 96 pig producing farmers, respectively.
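The reported sample of 239 follows directly from Yamane's formula applied to the sampling frame of 594 pig producing farmers. A minimal sketch of that computation in Python (illustrative only; the split of 143 and 96 respondents per sub-county is taken from the text rather than computed here):

# Yamane's (1967) formula for a finite population: n = N / (1 + N * e^2),
# with N the size of the sampling frame and e the margin of error (0.05 at the 95% level).

def yamane_sample_size(population: int, margin_of_error: float = 0.05) -> int:
    n = population / (1 + population * margin_of_error ** 2)
    return round(n)  # round to the nearest whole respondent

frame = 393 + 201  # Paicho + Koro pig rearing households on the NAADS lists
print(yamane_sample_size(frame))  # -> 239, matching the reported sample size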
Data collection
Before data collection, the study obtained approval from the Gulu University Research Ethics Committee (GUREC) under application number GUREC-094-18. Face-to-face interviews were conducted using a pre-tested structured questionnaire. Pre-testing was done on ten pig producing farmers in Unyama sub-county because the area had many pig producing farmers and was near the study area, yet it was not part of the study. After the pretest, some amendments were made to the questionnaire, such as re-wording and re-ordering of some questions, to ensure clarity, logical question sequence, and instruction adequacy. Informed consent was sought from every respondent before commencing the interview. The questionnaires were administered in the local dialect (Acholi), but responses were recorded in English. The questionnaire comprised closed and Likert-scale questions in which participants were requested to rate various items; questions were kept clear so that respondents could answer them easily. The data collection tool consisted of three sections. Section one captured household socio-economic data including the age of the household head, education level, sex, household size, marital status, non-farm employment and group membership. Section two focused on information quality, sharing and utilisation. Market information quality data were collected using 15 items capturing the five attributes of the construct. The farmers' ratings concerning cost-effectiveness, usability, timeliness, accuracy and relevance of market information were recorded on a five-point Likert scale. A sample item from the dimension of timeliness reads: "in case of a disease outbreak, the information reaches me fast enough to enable me to take appropriate actions". The data on information sharing were collected with six items capturing the two dimensions of information sharing, connectivity and willingness, as adapted and modified from Marinagi et al. (2015) and Yusuf (2012). A sample item from the dimension of willingness reads: "I share pig price information with peer farmers". Data on information utilisation were gathered with six items that captured the action-oriented use of information by respondents, as adapted and modified from Marinagi et al. (2015) and Yusuf (2012). A sample item from this construct reads as follows: "I use buyer information to make decisions on where to sell pigs." All items were rated on a 5-point Likert scale where 0 = not at all, 1 = rarely, 2 = occasionally, 3 = frequently and 4 = always, as adapted with modifications from Sullivan and Artino (2013). Lastly, in the third section, farmer innovation behaviour was captured under four dimensions, namely: i) exploration, ii) experimentation, iii) adaptation of new pig rearing techniques/practices, and iv) modification of existing farm practices, as adapted with modification from previous research (Ajayi et al., 2018; Aubert et al., 2012; Coussy, 2015; P. Wilson et al., 2014). A total of 12 items were used to collect data on innovation behaviour. Each item was rated on a 5-point Likert scale where 0 = not at all, 1 = rarely, 2 = occasionally, 3 = frequently and 4 = always, as adapted with modifications from Sullivan and Artino (2013). Sample items on innovation behaviour include: i) from the dimension of exploration, "I am very curious about learning how to appropriately feed pigs"; ii) experimentation, "I like to experiment with new ways of erecting pig housing structures"; iii) adaptation, "I adjust new parasite and disease control practices to suit my farming situation"; and iv) modification, "I use new knowledge to modify existing pig feeding practices on the farm".

Data analysis
Exploratory factor analysis was done to reduce the number of items for each construct and obtain the best-fitting model. This was achieved by principal component analysis using Varimax rotation with Kaiser normalization, a criterion of eigenvalues over one, and suppression of items with factor loadings below 0.4 in SPSS. Correlations were run among the specified constructs to test for the existence of the postulated associations and to rule out the possibility of multicollinearity, which would impede the use of SEM for analysis (Swati & Rajib, 2015). The resulting set of components was imported into AMOS for subsequent analysis using Structural Equation Modeling (SEM) and Confirmatory Factor Analysis (CFA) of the hypothesized relationships. SEM was used because it enables simultaneous estimation of multiple cause-effect relationships among various predictor, mediating, and response variables (Kalule et al., 2019; Swati & Rajib, 2015). Convergent validity, which measures the contribution of each observable variable to the total variance of a construct, was tested using factor loadings and the Average Variance Extracted (AVE). Construct reliability was assessed using composite reliability (CR). Discriminant validity was tested by comparing the square root values of AVE with the values of the construct correlations. The independent variables, in this case, were information quality, information sharing, and information utilisation, while the dependent variable was the innovation behaviour of pig producing farmers. Mediation analysis was done to ascertain whether the hypothesized mediation among the variables existed.
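The item-reduction step just described (sampling adequacy checks, then principal component extraction with Varimax rotation, eigenvalues over one, and a 0.4 loading cutoff) can be sketched as follows. This is an illustrative reconstruction in Python using the factor_analyzer package rather than the SPSS/AMOS workflow actually used in the study; the DataFrame of Likert-scored responses, `items`, is a hypothetical placeholder.

import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def reduce_items(items: pd.DataFrame, loading_cutoff: float = 0.4) -> pd.DataFrame:
    """Sampling adequacy checks, then PCA with Varimax rotation (Kaiser criterion)."""
    # Sampling adequacy: KMO above roughly 0.7 and a significant Bartlett test
    chi_square, p_value = calculate_bartlett_sphericity(items)
    _, kmo_overall = calculate_kmo(items)
    print(f"KMO = {kmo_overall:.3f}; Bartlett chi-square = {chi_square:.1f} (p = {p_value:.4f})")

    # Kaiser criterion: retain components whose correlation-matrix eigenvalues exceed 1
    eigenvalues = np.linalg.eigvalsh(items.corr().to_numpy())
    n_components = int((eigenvalues > 1).sum())

    # Principal component extraction with Varimax rotation, mirroring the SPSS step
    fa = FactorAnalyzer(n_factors=n_components, rotation="varimax", method="principal")
    fa.fit(items)
    loadings = pd.DataFrame(fa.loadings_, index=items.columns)

    # Suppress loadings below the 0.4 cutoff used in the study
    return loadings.where(loadings.abs() >= loading_cutoff)

# items: one column per Likert item, one row per respondent (hypothetical data)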
Results
Most respondents (73.64%) were male, and small proportions had access to credit (34.31%) and extension services (35.98%) [Table 1]. Farmers attributed the low access to extension services to the limited number of government agricultural extension staff covering the large area of the administrative units in this study. The KMO value was 0.831, and Bartlett's Test of Sphericity was significant (Chi-Square: 1496.415, df: 153, Sig. 0.000). A KMO value above 0.70 suggests that the data are fit for factor analysis (Kaiser, 1974). The total variance explained was 68.245%. The Average Variance Extracted of the independent variables and the dependent variable reached the threshold of 0.5. All constructs had a composite reliability above 0.7 (Table 2). Therefore, acceptable convergent validity and measurement reliability of all the constructs in the model were achieved (Hair et al., 2017). All the square root values of AVE were greater than the construct correlations, which confirmed the discriminant validity of the constructs. The correlations ranged from weak to moderate (r = 0.057 to r = 0.476), pointing to the existence of relationships amongst the study variables. Since there was no high correlation amongst the variables, the assumption of "no multicollinearity" was confirmed. Altogether, the construct items achieved the required factor loading threshold of 0.5 [Table 3] (Hair et al., 2009). The model exhibited an acceptable level of fit as observed from the fit indices (Figure 2), following Hair et al. (2017), Awang (2015) and Hair et al. (2010), who recommend that for a good fit the Goodness of Fit Index (GFI) > 0.8, the Adjusted Goodness of Fit Index (AGFI) > 0.8, the Normed Fit Index (NFI) > 0.8 and the Root Mean Square Error of Approximation (RMSEA) < 0.08. The results of hypothesis testing in Table 4 show that market information quality (β = 0.245; P < 0.01) is a positive and significant predictor of market information utilisation for innovation. Market information utilisation (β = 0.794; P < 0.01) positively and significantly predicts farmer innovation behaviour. As predicted, information quality directly affects innovation behaviour (β = 0.247; P < 0.01). Information quality positively affects information sharing (β = 0.194; P < 0.05). The path from information sharing to innovation behaviour is positive but non-significant. The mediation effect of information utilisation (β = 0.176; 95% CI = 0.040–0.349) between information quality and farmer innovation behaviour was significantly different from zero (Table 5). The biggest significant total effect on innovation behaviour is exerted by information utilisation (β = 0.643; 95% CI = 0.482–0.782), followed by information quality (β = 0.375; 95% CI = 0.137–0.527). These two findings meet the criterion of practical relevance of β ≥ 0.2 (Kalule et al., 2019). Also, the causal relationship between information quality and information utilisation (β = 0.244; 95% CI = 0.057–0.442) was statistically significant and satisfied the requirement of practical meaningfulness. The direct relationship between information quality and farmer innovation behaviour (β = 0.199; 95% CI = 0.137–0.527) was significant and met the criterion of practical relevance, while that between information sharing and innovation behaviour (β = 0.096; 95% CI = −0.051 to 0.255) was not significant.
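The convergent and discriminant validity checks reported above follow standard formulas: AVE is the mean of the squared standardized loadings, composite reliability is (Σλ)² / [(Σλ)² + Σ(1 − λ²)], and the Fornell-Larcker criterion requires √AVE to exceed the construct's correlations with the other constructs. A small illustrative sketch of these computations in Python (the loadings and correlations below are placeholders, not the study's data):

import numpy as np

def ave(loadings):
    """Average Variance Extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return float(num / (num + np.sum(1 - lam ** 2)))

def discriminant_ok(ave_value, correlations_with_other_constructs):
    """Fornell-Larcker criterion: sqrt(AVE) must exceed every inter-construct correlation."""
    return float(np.sqrt(ave_value)) > float(np.max(np.abs(correlations_with_other_constructs)))

# Placeholder loadings for one construct (e.g., four information quality items)
iq_loadings = [0.72, 0.68, 0.81, 0.75]
iq_ave = ave(iq_loadings)
print(f"AVE = {iq_ave:.3f} (threshold 0.5), CR = {composite_reliability(iq_loadings):.3f} (threshold 0.7)")
print("Discriminant validity:", discriminant_ok(iq_ave, [0.476, 0.31, 0.057]))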
Discussion
As predicted by Jonsson and Myrelid (2016), the results of hypothesis testing presented in Table 4 show that information quality positively affects information utilisation (P < 0.01), implying that farmers who receive quality information are likely to use it to alter their pig production and marketing activities for better gains than those who do not get quality information. Also, the results revealed that market information utilisation positively affects innovation behaviour (P < 0.01). Farmers who use market information to make decisions on how to rear pigs tend to show higher innovative activity, which translates into better competitiveness. This finding is in line with the study by Uwandu et al. (2018), in which agricultural information utilisation was found to enhance farmer innovativeness in Imo State, Nigeria. Related studies by Adetimehim et al. (2018) in Nigeria, Acheampong et al. (2017) in the Ejisu-Juaben Municipality of Ghana and Aonngernthayakorn and Pongquan (2017) in central Thailand confirmed a relationship between access to extension services and the utilisation of agricultural information. Therefore, to enhance farmer innovation through information utilisation, policymakers need to improve farmers' access to extension services. Predictably, information quality was found to have a direct positive relationship with innovation behaviour (P < 0.01). This suggests that farmers who access quality information are better able to explore new ideas, experiment, adapt new practices, and improve existing pig rearing practices than those who do not have access to quality information. This is attributable to the fact that quality information is relevant, accurate, timely and usable, which prompts users to utilise it. Consistent with previous research by Marinagi et al. (2015), information quality was positively related to information sharing (P < 0.05). This result points to the fact that quality information is more likely to be shared amongst the users of the information. This could be attributed to the fact that quality information is relevant, timely and usable, which makes recipients trust it and share it with their peers. In contrast to a previous study by Chindime et al. (2017), information sharing among pig producers had no significant effect on their innovation behaviour. Chindime et al. (2017) reported that information sharing among farmers through networking had a positive, significant effect on their innovation behaviour. This discrepancy is perhaps because the previous study did not consider the quality and utilisation of the information being shared. This finding indicates that sharing quality information among farmers does not necessarily affect their innovation behaviour unless the information is put into use. The result further supports the argument that ambiguous information-sharing causes information overload, which reduces the potential of small producers to use the information for innovation (Jonsson & Myrelid, 2016; Wesseler & Brinkman, 2002). Therefore, information sharing performs no mediation role in the relationship between information quality and innovation behaviour, as shown by the bootstrapping results (Table 5). Interestingly, information utilisation exerts the largest total effect on farmers' innovation behaviour (β = 0.643; Table 5), which implies that information utilisation is the single most important information factor affecting farmer innovation behaviour. Therefore, interventions to boost farmer innovation should enhance information utilisation by farmers through institutional support such as extension service provision.

Conclusion and recommendations
Market information quality enhances farmer innovation behaviour both directly and indirectly through the partial mediation of information utilisation. It can be argued that the quality of market information received and its utilisation at the farm level are important for the kind of innovation behaviour that smallholder pig producers exhibit.
Therefore, interventions that seek to enhance smallholder farmer innovation should provide quality information and support farmers to utilise it. An extension of the study would be to analyse the effect of other dimensions of information utilisation, such as knowledge-enhancing use and affective use of information, on the innovation behaviour of farmers. It may also be worthwhile to place the study in a longitudinal perspective to understand how these influences play out in the long run and to discern how interventions to improve the pig value chain should be designed using an information systems and farmer innovation approach.
5,871.6
2021-01-01T00:00:00.000
[ "Agricultural and Food Sciences", "Economics" ]
Sepsis: from bench to bedside. Sepsis is a syndrome related to severe infections. It is defined as the systemic host response to microorganisms in previously sterile tissues and is characterized by end-organ dysfunction away from the primary site of infection. The normal host response to infection is complex and aims to identify and control pathogen invasion, as well as to start immediate tissue repair. Both the cellular and humoral immune systems are activated, giving rise to both anti-inflammatory and proinflammatory responses. The chain of events that leads to sepsis is derived from the exacerbation of these mechanisms, promoting massive liberation of mediators and the progression of multiple organ dysfunction. Despite increasing knowledge about the pathophysiological pathways and processes involved in sepsis, morbidity and mortality remain unacceptably high. A large number of immunomodulatory agents have been studied in experimental and clinical settings in an attempt to find an efficacious anti-inflammatory drug that reduces mortality. Even though preclinical results had been promising, the vast majority of these trials actually showed little success in reducing the overwhelmingly high mortality rate of septic shock patients as compared with that of other critically ill intensive care unit patients. Clinical management usually begins with prompt recognition, determination of the probable infection site, early administration of antibiotics, and resuscitation protocols based on “early-goal” directed therapy. In this review, we address the research efforts that have been targeting risk factor identification, including genetics, pathophysiological mechanisms and strategies to recognize and treat these patients as early as possible. INTRODUCTION The word sepsis is derived from the Greek term for rotten or "to make putrid". Sepsis, defined as the systemic host response to microorganisms in previously sterile tissues, is a syndrome related to severe infections and is characterized by end-organ dysfunction away from the primary site of infection. To meet the definition of sepsis, patients need to satisfy at least two of the Systemic Inflammatory Response Syndrome (SIRS) criteria in association with having a suspected or confirmed infection. 1,2,3,4,5 The severity and mortality increase when this condition is complicated by predefined organ dysfunction (severe sepsis) and cardiovascular collapse (septic shock). 6 The normal host response to infection is complex, aiming to both identify and control pathogen invasion and start immediate tissue repair. Both the cellular and humoral immune systems are activated, giving rise to anti-inflammatory and proinflammatory responses. Exacerbating these mechanisms can cause a chain of events that leads to sepsis, promoting massive liberation of mediators and the progression of multiple organ dysfunction. 7 Morbidity and mortality remain unacceptably high despite increasing knowledge about the pathophysiological pathways and processes involved in sepsis. It still is one of the most prevalent causes of intensive care units (ICU) morbidity and mortality worldwide. 1,8 More than 750,000 sepsis cases occur in the United States every year, leading to approximately 220,000 deaths. 9,10 Consistent data on the incidence, outcome and costs of sepsis patients in Brazil and Latin America are scarce. However, a recent Brazilian study showed that up to 25% of ICU patients will meet sepsis diagnostic criteria during an ICU stay. 
11 Even with the best treatment available, the mortality rates of septic shock can be as high as 50% 1,6,8,9,11 or up to 75% on longer follow-ups. 12 In an attempt to find an efficacious anti-inflammatory drug that reduces mortality, a large number of immunomodulatory agents have been studied in experimental and clinical settings. However, the vast majority of these trials showed little success in reducing the overwhelmingly high mortality rates of septic shock patients as compared to those of other critically ill ICU patients, 9,13,14,15 despite promising preclinical results. Clinical management usually begins with prompt recognition, determination of the probable infection site, early administration of antibiotics, and resuscitation protocols based on "early-goal" directed therapy. 1,10 In this review, we address the research efforts that have been targeting risk factor identification, including genetics, pathophysiological mechanisms and strategies to recognize and treat these patients as early as possible.

Definitions
The syndrome currently known as sepsis has had many definitions over the years. 2 In 1991, a consensus conference organized by the American College of Chest Physicians and the Society of Critical Care Medicine clinically defined the terms SIRS, sepsis, severe sepsis and septic shock (Table 1). 1,2,3,4 Even though the definition has high sensitivity and low specificity, it has been helpful in improving patient care, enrollment in clinical trials and communication between ICUs. A second conference held in 2001 attempted to refine the definitions, increase specificity by emphasizing prompt recognition and add a list of common symptoms and signs of sepsis. 2,3,5,10 The current definitions are as follows: • Infection: pathologic process caused by invasion of normally sterile tissue, fluid or body cavity by pathogenic or potentially pathogenic microorganisms.

EPIDEMIOLOGY
Sepsis has been recognized as a major public health problem in population-based and ICU-based epidemiological studies. Two studies have reported the sepsis incidence in the United States. 8,9,11,13,14 Briefly, the methodology used by these reports is primarily based on the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes for the principal hospital discharge diagnosis. The databases are linked with state and national population data from the US Census for the same year to generate population-based incidence rates. In order to identify cases of severe sepsis, the authors selected all cases with an ICD-9 code for bacterial or fungal infection and a diagnosis of acute organ dysfunction. Then, the authors compared the patients selected by ICD-9 with the standard clinical criteria for the definitions of severe sepsis 9 and sepsis. 13 Martin et al. 13 estimated the incidence of sepsis in the US as 240 cases per 100,000 people, and Angus et al. 9 reported 300 cases of severe sepsis per 100,000 people. The incidence was projected to increase by 1.5% per annum. The mortality rate reported in these studies was also similar, ranging from 17.9% for sepsis 13 to 28.6% for severe sepsis. 9 These numbers translate into approximately 750,000 new episodes of severe sepsis, with approximately 220,000 deaths annually (29%) in the US. Sepsis is the tenth most common cause of death in the US, causing more deaths than Acquired Immune Deficiency Syndrome, breast cancer, colon cancer or a first episode of acute myocardial infarction.
9,13 A French study found that septic shock was the reason for ICU admission in 8.4% of cases, with a mortality rate of 60%. 14 Recently, "Sepsis Occurrence in Acutely Ill Patients (SOAP)" in Europe reported more than 35% of ICU patients meet sepsis criteria during their ICU stay, with a mortality rate of 27%. 17 Silva et al., in the "Brazilian Sepsis Epidemiological Study (BASES study)", 11 evaluated 1383 consecutive patients in 5 ICUs of tertiary care hospitals distributed over two different regions of the country. They collected daily data on SIRS, sepsis, severe sepsis and septic shock. The median age was 65.2 years-old, and the overall 28-day mortality rate was 21.8%. The incidence density for sepsis, severe sepsis and septic shock were 61.4, 35.6 and 30.0 per 1000 patient-days, respectively. The mortality rates were 24.3% (SIRS), 34.7% (sepsis), 47.3% (severe sepsis) and 52.2% (septic shock). The respiratory tract was the most common source of infection. 8,11 This study also showed that there are important regional differences in mortality rates, as well as differences related to the type of hospital administration (public or private). This was accounted for by the fact that Brazil is a continental country with a heterogeneous population and unequal access to health care facilities; thus, social factors may play a more significant role in determining infection patterns and mortality of septic patients. These findings suggest that issues related to the care of septic patients need to be discussed by the whole society. 11 The recent microbiological patterns of infections that cause septic shock and severe sepsis have changed significantly. Gram-negative bacteria used to be the most frequent germs involved in the pathogenesis of these syndromes. However, Gram-positive bacteria are currently as common as Gram-negative, and fungi are also responsible for a large portion of infections. It is not possible to isolate a patho-gen in about a third of sepsis episodes; in some patients, it is difficult to obtain material for culture. Cultures are often not positive after the initiation of antibiotics. 2,8,9,13,14 Upward trends in sepsis incidence and patient survival are consistent in the medical literature; however, more people are exposed every year, and the total number of deaths is actually increasing. Patients who have met diagnostic criteria for severe sepsis and septic shock have higher overall mortality rates than other critically ill patients. PATHOPHYSIOLOGY Sepsis describes a complex clinical syndrome that develops when the initial, appropriate host response to an infection becomes amplified and then dysregulated. Determining the structural components of the bacteria that are responsible for initiating the septic process has been important, not only in understanding the underlying mechanisms, but also in identifying potential therapeutic targets. These bacterial motifs, which are recognized by the innate immune system, have been called pathogen-associated molecular patterns (PAMPs), although it might be more accurate to call them microorganism-associated molecular patterns as it is by no means clear how the host distinguishes between signals from pathogens and commensals. 18,19 In Gram-negative bacteria, lipopolysaccharide (LPS; known also as endotoxin) plays a dominant role. LPS is embedded in the outer membrane, and the lipid A portion of the molecule anchors LPS in the bacterial cell wall. 
Conformational changes of LPS seem to correlate with its ability to activate host cell membranes. There is no endotoxin in Gram-positive bacteria, but an important feature is the production of potent exotoxins. These Gram-positive exotoxins are of great interest because they exhibit the properties of superantigens, that is, they are able to bind promiscuously to major histocompatibility complex class II and a restricted repertoire of T-lymphocyte receptor (TCR) Vb domains. In so doing, they cause massive T-cell activation and release of pro-inflammatory lymphokines. The best known examples are toxic shock syndrome caused by toxic shock syndrome toxin-1 (TSST-1)-producing strains of Staphylococcus aureus and the pyrogenic exotoxins from Streptococcus pyogenes. Peptidoglycan and lipoteichoic acid from Gram-positive cell walls can bind to cell-surface receptors and are pro-inflammatory, although they are much less active on a weight-for-weight basis than LPS. Their role in the pathogenesis of clinical sepsis remains uncertain. 20,21,22

Host recognition of microbial components
The inability to identify an 'LPS receptor' was for many years a barrier to understanding how Gram-negative bacteria initiate the septic response; activation of host cells is dependent on the presence of LPS-binding protein (LBP) and the opsonic receptor CD14. Although CD14 was originally identified as the essential co-receptor that mediated LPS activation of monocytes, subsequent work has shown that it also participates in the activation by Gram-positive cell wall components, such as peptidoglycan, mediates macrophage apoptosis and is important in shuttling LPS between serum proteins that have the capacity to bind LPS, such as LBP and serum lipoproteins. 23,33 Although the discovery of CD14 represented a significant step forward in understanding host responses to LPS, the fact that membrane-bound CD14 (mCD14) has no intracellular tail meant that it remained unclear how ligation of the LPS-LBP complex led to cellular activation. This uncertainty was resolved by the discovery of the family of Toll-like receptors (TLRs). The TLRs have an intracellular domain that is homologous with the IL-1 and IL-18 receptors. Adapter proteins facilitate binding to IL-1 receptor-associated kinase, which in turn induces TNF receptor-associated factor-6, leading to nuclear translocation of nuclear factor-κB (NF-κB) and ultimately to activation of cytokine gene promoters. 24,25 A family of (currently) ten TLRs has been identified with a wide range of ligand specificity, including bacterial, fungal and yeast proteins. Thus, TLR4 is the LPS receptor, TLR2 is predominantly responsible for recognizing Gram-positive cell-wall structures, TLR5 is the receptor for flagellin and TLR9 recognizes CpG elements in bacterial DNA. 25,26,27 An additional layer of complexity has been provided by the discovery that there are several additional pathways by which cells recognize microbial components. The triggering receptor expressed on myeloid cells (TREM-1) and the myeloid DAP12-associating lectin (MDL-1) are two recently identified receptors involved in monocytic activation and the inflammatory response. TREM-1 is upregulated in the presence of various microorganisms, although its ligand is unknown. 28,29,30 Monocytic intracellular proteins NOD1 and NOD2 (for nucleotide-binding oligomerization domain), which seem to have the ability to bind to and confer responsiveness to LPS, were recently described.
Genotypic variations in NOD2 might be associated with phenotypic variations in LPS responsiveness. Peptidoglycan-recognition proteins (PGRPs) were identified in moths, and a family of PGRP genes was subsequently found in humans. Different PGRPs can distinguish between Grampositive and Gram-negative bacteria 31,32 . Signal amplification Following the initial host-microbial interaction, there is widespread activation of the innate immune response, which coordinates a defensive response involving both humoral and cellular components 33 . Mononuclear cells release the classic pro-inflammatory cytokines IL-1, IL-6 and TNFα, but in addition, an array of other cytokines, including IL-12, IL-15 and IL-18, and a host of other small molecules are released. TNF-α and IL-1 are the prototypic inflammatory cytokines that mediate many of the immunopathological features of LPS-induced shock. They are released during the first 30-90 minutes after exposure toLPS, activate a second level of inflammatory cascades in turn, including cytokines, lipid mediators and reactive oxygen species and up-regulate cell adhesion molecules, which initiates inflammatory cell migration into tissues. 34,35 . One of the most intriguing concepts related to host recognition and signal amplification after a challenge with microbes is tolerance. Exposure of macrophages to LPS or other proinflammatory stimuli, such as cytokine TNF-α, can induce a state of tolerance, in which reduced activation is found upon subsequent exposure to LPS or the proinflammatory mediator. Among the proposed mechanisms, reduced TLR expression has been speculated. Brunialti et al. 36 have elegantly demonstrated that the expression of TLRs 2 and 4 in monocytes from septic patients is preserved, although they found a lower production of cytokines after inflammatory stimuli. These findings suggest that the down-regulation observed in patients with severe sepsis and septic shock appears to be related to intracellular pathways and not due to TLR expression. Indirect evidence from the authors had already demonstrated this, using biotinylated LPS and flow cytometry to study LPSmonocyte interaction and LPS-induced cellular activation in whole blood from septic patients 37 . Furthermore, the same group 38 has demonstrated that neutrophils from septic patients preserve their capacity for phagocytosis and generate reactive oxygen species. Taken together, these findings suggest that tolerance is a phenomenon linked to macrophage response and is not related to TLR expression. More recently, a novel cytokine has been extensively evaluated. High mobility group B1 (HMGB1) has been identified as a cytokine-like product of macrophages that appears much later after LPS stimulation. It stabilizes nucleosomes, facilitates gene transcription and modulates the activity of steroid hormone receptors 39 . Subsequently, patients with sepsis have elevated serum levels of HMGB1, and higher levels, in some studies are associated with an increased mortality, suggesting that clinical intervention by blocking or neutralizing HMGB1 might be a viable treatment option 41,42 . Recently, we have shown that neutrophils from volunteers and septic patients show a different pattern of gene expression after culture with HMGB1. How-ever, neutrophils from septic patients preserve intracellular activation and express proinflammatory genes, suggesting that neutrophils are not anergic or tolerant, in contrast to macrophages 42 . These data are in agreement with those mentioned before. 
Another macrophage-derived cytokine that has been identified as a potential therapeutic target in sepsis is macrophage migration inhibitory factor (MIF). It mediates shock caused by Gram-positive and Gram-negative bacteria. MIF has a curious relationship with glucocorticoids, which are normally thought of as being anti-inflammatory, because low doses of glucocorticoids paradoxically induce macrophage MIF. Once released, MIF then acts as a proinflammatory agent, overriding the ability of glucocorticoids to prevent shock in animal models of sepsis 43,44,45 .

Neutrophil migration
Neutrophils have a dual role in sepsis. On one hand, these cells are crucial for local control of bacterial growth and, consequently, for the prevention of bacterial dissemination. On the other hand, neutrophils play an important role in endothelial activation and the development of organ failure. Cunha's group has shown that impaired neutrophil migration to the infectious focus is associated with high mortality and increased numbers of bacteria in peritoneal exudate and blood in a cecal ligation and puncture (CLP) model in rats. Conversely, in sublethal sepsis, neutrophil migration was not suppressed, and the bacterial infection was restricted to the peritoneal cavity; consequently, no significant mortality was observed. This group has also addressed the mechanisms of neutrophil migration impairment. They have demonstrated that the nitric oxide pathway 46 and TLR signaling 47 are both involved in this process.

The coagulation cascade
Cytokines are also important in inducing a procoagulant effect in sepsis. Coagulation disorders are common in sepsis, and 30-50% of patients have the more severe clinical form, disseminated intravascular coagulation. Coagulation pathways are initiated by LPS and other microbial components, inducing expression of tissue factor in mononuclear and endothelial cells. Tissue factor activates a series of proteolytic cascades that result in the conversion of prothrombin to thrombin, which in turn generates fibrin from fibrinogen. Simultaneously, normal regulatory fibrinolytic mechanisms (fibrin breakdown by plasmin) are impaired because of high plasma levels of plasminogen-activator inhibitor type-1 (PAI-1), which prevent the generation of plasmin from its precursor plasminogen. The net result is enhanced production and reduced removal of fibrin, leading to the deposition of fibrin clots in small blood vessels, impairing tissue perfusion and organ function. Proinflammatory cytokines, in particular IL-1 and IL-6, are powerful inducers of coagulation; conversely, IL-10 regulates coagulation by inhibiting the expression of tissue factor in monocytes. An additional cause of the procoagulant state in sepsis is the down-regulation of three naturally occurring anticoagulant proteins: antithrombin, protein C and tissue factor pathway inhibitor 48,49,50 . These natural anticoagulants are of particular interest because they have anti-inflammatory properties in addition to their effect on thrombin generation. These effects include reducing the release of monocyte-derived TNF-α by inhibiting activation of the transcription factors NF-κB and activator protein (AP)-1. Particular attention has focused on protein C, which is converted to its activated form (aPC) when thrombin complexes with thrombomodulin, an endothelial transmembrane glycoprotein.
Once aPC is formed, it dissociates from an endothelial protein C receptor (EPCR) before binding protein S, resulting in inactivation of factors Va and VIIIa and, thus, blockade of the coagulation cascade. It has been shown recently that aPC uses EPCR as a co-receptor for cleavage of proteaseactivated receptor 1 (PAR1). Gene profiling showed that PAR1 signaling could account for the activation of aPCinduced protective genes, including the immunomodulatory monocyte chemoattractant protein-1 (MCP-1), which suggests a role for PAR-1 activation in protection from sepsis. In septic patients, aPC levels are reduced, and expression of endothelial thrombomodulin and EPCR are impaired, providing some support for the notion that replacement of aPC might have therapeutic value 51,52 . The counter-inflammatory response The profound proinflammatory response that occurs in sepsis is balanced by an array of counter-regulatory molecules that attempt to restore immunological equilibrium. Counter-inflammatory cytokines include antagonists such as the soluble TNF receptors and IL-1 receptor antagonist, decoy receptors such as IL-1 receptor type II, inactivators of the complement cascade and anti-inflammatory cytokines, of which IL-10 is the prototype. In concert with this, the host response to injury includes profound changes in metabolic activity (increased cortisol production and release of catecholamines), induction of acute-phase proteins and endothelial activation with upregulation of adhesion molecules and release of prostanoids and platelet-activating factor (PAF). Another facet of down-regulation of immunity that occurs in sepsis is the development of lymphocyte apoptosis; subset analysis of autopsy tissue samples has shown that there is selective depletion of B and CD4 + lymphocytes. This process and its functional consequences are viewed as part of a more general state of immunosuppression, characterized by T-cell hypo-responsiveness and anergy, which occurs to some extent in most septic patients and is seen as a counter-balancing response (and sometimes, over-response) to the initial proinflammatory state. It is because of this overresponse that some investigators view the counter-inflammatory response as the cause of an inadequate host defense against infection and thus as a potential 'mediator' of sepsis and progressive organ failure. Several researchers have pursued the notion that reversal of this immunosuppressive state might be of therapeutic value. For instance, mice transfected with the human gene bcl-2, which overexpresses the antiapoptotic protein Bcl-2, are protected from death after caecal ligation and puncture, and patients that received IFN-γ in a small, nonrandomized clinical study showed up-regulation of HLA-DR in their monocytes and a better-than-anticipated survival. 53 Mechanisms of organ failure The ultimate cause of death in patients with sepsis is multiple organ failure. There is a close relationship between the severity of organ dysfunction on admission to an ICU and the probability of survival and between the numbers of organs failing and the risk of death. The mechanisms involve widespread fibrin deposition that causes microvascular occlusion, the development of tissue exudates that further compromise adequate oxygenation and disorders of microvascular homeostasis that result from the elaboration of vasoactive substances such as PAF, histamine and prostanoids. 
Cellular infiltrates, particularly neutrophils, damage tissue directly by releasing lysosomal enzymes and superoxide-derived free radicals. TNF-α and other cytokines increase the expression of inducible nitric oxide synthase, and increased production of nitric oxide causes further vascular instability and may also contribute to the direct myocardial depression that occurs in sepsis. 54,55,56 The tissue hypoxia that develops in sepsis is reflected in the oxygen debt, i.e., the difference between oxygen delivery and oxygen requirements. The extent of the oxygen debt is related to the outcome from sepsis, and strategies designed to optimize oxygen delivery to the tissues can improve survival 57 . In addition to hypoxia, cells may be dysoxic; i.e., unable to properly utilize available oxygen. Recent data suggest that this may be another consequence of excess nitric oxide production because skeletal muscle biopsies from septic patients show evidence of impaired mitochondrial respiration, which is inhibited by nitric oxide 58,59 . Cross-talk between cytokines and neurohormones is the cornerstone of restoration of homoeostasis during stress. Production and release of vasopressin and corticotropin-releasing hormone are enhanced by circulating TNF and interleukins-1, -6 and -2, by locally expressed interleukin 1 and NO and by afferent vagal fibers. Moreover, cortisol synthesis is modulated by locally expressed interleukin-6 and TNF-α. Up-regulated hormones help maintain cardiovascular homoeostasis and cellular metabolism and contain foci of inflammation. Impaired endocrine responses to sepsis might result from cytokines, neuronal apoptosis, metabolic and ischemic derangements in the hypothalamic-pituitary and adrenal glands or drug administration. Deficiencies in adrenal gland function and vasopressin production occur in about a half and a third of septic shock cases, respectively, and contribute to hypotension and death. Other endocrine disorders during sepsis have unclear mechanisms and consequences 60,61 . Genetic polymorphisms Among this vast array of host molecules that orchestrate the response to sepsis, there are many examples of genetic variability that influence physiological activity. Various genetic polymorphisms are associated with increased susceptibility to infection and poor outcomes. Markers of susceptibility could include single nucleotide polymorphisms of genes encoding cytokines (e.g., TNF, lymphotoxin-α, interleukin-10, interleukin-18, interleukin-1 receptor antagonist, interleukin 6 and interferon α), cell surface receptors (e.g., CD14, MD2, toll-like receptors 2 and 4 and Fc-gamma receptors II and III), lipopolysaccharide ligand (lipopolysaccharide binding protein, bactericidal permeability increasing protein), mannose-binding lectin, heat shock protein 70, angiotensin I-converting enzyme, plasminogen activator inhibitor and caspase-12. Use of genotype combinations could improve the identification of high-risk groups 62,63,64,65 . THERAPEUTIC APPROACHES Despite the extraordinary developments in understanding the immunopathology of sepsis, therapeutic advances have been painfully slow. Septic shock remains a major source of both short-and long-term morbidity and mortality and places a large burden on the healthcare system 66 . The recent identification of molecules in humans that sense microbial determinants has been an important step in understanding the molecular and cellular basis of sepsis. 
Characterizing the links between inflammation, coagulation and the immune and neuroendocrine systems has led to the development of international guidelines. New knowledge about apoptosis, leukocyte reprogramming, epithelial dysfunction and factors involved in sepsis holds promise for the development of new therapeutic approaches. A group of international critical care and infectious disease clinicians, experts in the diagnosis and management of infection and sepsis who represent 11 organizations, came together to develop guidelines that the bedside clinician could use to improve outcome in severe sepsis and septic shock. This process represented phase II of the Surviving Sepsis Campaign, an international effort to increase awareness and improve outcomes in severe sepsis 67 . Besides the guidelines, the Severe Sepsis Bundles are designed to allow teams to follow the timing, sequence and goals of the individual elements of care in order to achieve the goal of a 25% reduction in mortality from severe sepsis (Table 3). Individual hospitals should use the bundles to create customized protocols and pathways specific to their institutions. However, all of the elements in the bundles must be incorporated in these protocols. The addition of other strategies not found in the bundles is not recommended. The bundle will form the basis for the measurements that improvement teams will conduct to follow hospitals' progress as they make changes. Hospitals should implement two different Severe Sepsis Bundles. Each bundle articulates requirements for specific timeframes.
• Sepsis Resuscitation Bundle: tasks that should begin immediately but must be done within 6 hours for patients with severe sepsis or septic shock.
• Sepsis Management Bundle: tasks that should begin immediately but must be done within 24 hours for patients with severe sepsis or septic shock. 67,68

Initial Resuscitation
The resuscitation of a patient with severe sepsis or sepsis-induced tissue hypoperfusion should begin as soon as the syndrome is recognized. An elevated serum lactate concentration identifies tissue hypoperfusion in patients at risk who are not hypotensive. During the first 6 hrs of resuscitation, the goals of initial resuscitation of sepsis-induced hypoperfusion should include all of the following as one part of a treatment protocol: central venous pressure (CVP) 8-12 mm Hg, mean arterial pressure (MAP) ≥ 65 mm Hg, urine output ≥ 0.5 mL/kg/hr, and central venous (superior vena cava) or mixed venous oxygen saturation ≥ 70% 68,69 . Predictive factors of fluid responsiveness have been evaluated in order to select patients who might benefit from volume expansion and avoid ineffective or even deleterious volume expansion. Static ventricular preload parameters are of minimal value, and the use of dynamic parameters in the decision-making process regarding volume expansion is more effective. In sedated patients receiving mechanical ventilation in a volume-controlled mode with a tidal volume of at least 8 ml/kg and with acute circulatory failure related to sepsis, the use of pulse pressure variation is an accurate indicator of fluid responsiveness 70 .
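The quantitative first-6-hour targets above (CVP 8-12 mm Hg, MAP ≥ 65 mm Hg, urine output ≥ 0.5 mL/kg/hr, central or mixed venous oxygen saturation ≥ 70%) lend themselves to a simple programmatic check. The sketch below (Python) merely encodes those published thresholds for illustration; it is not a clinical decision tool, and the function and field names are our own.

from dataclasses import dataclass

@dataclass
class ResuscitationStatus:
    cvp_mmhg: float            # central venous pressure
    map_mmhg: float            # mean arterial pressure
    urine_ml_per_kg_hr: float  # hourly urine output normalized to body weight
    scvo2_percent: float       # central (or mixed) venous oxygen saturation

def unmet_first_6h_goals(s: ResuscitationStatus) -> list[str]:
    """Return the early goal-directed therapy targets (as listed in the text) not yet reached."""
    unmet = []
    if not (8 <= s.cvp_mmhg <= 12):
        unmet.append("CVP 8-12 mm Hg")
    if s.map_mmhg < 65:
        unmet.append("MAP >= 65 mm Hg")
    if s.urine_ml_per_kg_hr < 0.5:
        unmet.append("urine output >= 0.5 mL/kg/hr")
    if s.scvo2_percent < 70:
        unmet.append("ScvO2/SvO2 >= 70%")
    return unmet

# Example: a hypothetical patient who remains hypotensive with a low ScvO2
print(unmet_first_6h_goals(ResuscitationStatus(10, 58, 0.6, 64)))
# -> ['MAP >= 65 mm Hg', 'ScvO2/SvO2 >= 70%']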
Diagnosis
Appropriate cultures should always be obtained before antimicrobial therapy is initiated. Two blood cultures should be obtained, with at least one drawn percutaneously and one drawn through each vascular access device, unless the device was recently (<48 hrs) inserted. Cultures from other sites should be obtained before antibiotic therapy is initiated, as the clinical situation dictates 71,72,73 .

Antibiotic Therapy
Intravenous antibiotic therapy should be started within the first hour after severe sepsis is recognized (and is still reasonable up to the third hour), once appropriate cultures have been obtained. Establishing a supply of premixed antibiotics is an appropriate strategy to enhance the likelihood that antimicrobial agents will be infused promptly. The initial selection of an empirical antimicrobial regimen should be sufficiently broad; there is ample evidence that failure to initiate appropriate therapy promptly has adverse consequences on outcome. All patients should receive an initial full loading dose of each antimicrobial. De-escalation of antibiotic therapy should be tailored according to clinical status, severity of illness and culture results 73,74,75 .

Source Control
Every patient presenting with severe sepsis should be evaluated for the presence of a focus of infection amenable to source control measures; specifically, the drainage of an abscess or local focus of infection, the debridement of infected necrotic tissue, the removal of a potentially infected device or the definitive control of a source of ongoing microbial contamination 76,77 .

Fluid Therapy
Fluid resuscitation may consist of natural or artificial colloids or crystalloids. There is no evidence-based support for one type of fluid over another. As the volume of distribution is much larger for crystalloids than for colloids, resuscitation with colloids requires less fluid to achieve the same end points. A fluid challenge in patients with suspected hypovolemia may be given at a rate of 500-1000 mL of crystalloids or 300-500 mL of colloids over 30 min and repeated on the basis of response (e.g., increase in blood pressure and urine output) and tolerance (e.g., evidence of intravascular volume overload) 78,79,80 . In animal models, ethyl pyruvate (EP) reduces organ system damage in ischemia/reperfusion injury and hemorrhagic and endotoxic shock, at least in part through its antioxidant action. In addition, EP appears to have direct beneficial effects on cytokine expression and proinflammatory gene regulation. These findings could be a rationale for the use of EP in septic patients; however, more studies are needed to support its use 81 . Our group has tested the hypothesis that a hypertonic solution could be an alternative solution for sepsis. In a previous study, we found that an early, large volume of crystalloid after live Escherichia coli injection in dogs promoted partial and transient benefits during the fluid infusion, which were especially poor in the splanchnic bed 82,83 . Subsequently, we tested the hypothesis that hypertonic solution (HS) infusion promotes better systemic and regional benefits than conventional isotonic crystalloid infusion in experimental sepsis (infusion of E. coli at a dose of 1.2 × 10^10 cfu/kg). A large volume of lactated Ringer's solution or a small volume of HS promoted similar transient hemodynamic benefits that were unable to restore sepsis-induced perfusional deficits. However, a single bolus of HS did promote sustained systemic and mesenteric oxygen extraction reductions without deterioration of perfusional markers such as lactate levels and pCO2 gradients 84 . A clinical trial should be carried out to validate these findings.
Vasopressors
When an appropriate fluid challenge fails to restore adequate blood pressure and organ perfusion, therapy with vasopressor agents should be started. Use of an arterial catheter provides a more accurate and reproducible measurement of arterial pressure. Either norepinephrine or dopamine (through a central catheter as soon as available) is the first-choice vasopressor agent for correcting hypotension in septic patients. Low-dose dopamine should not be used for renal protection as part of the treatment of severe sepsis. Vasopressin use may be considered in patients with refractory shock despite adequate fluid resuscitation and high-dose conventional vasopressors. It should be administered at infusion rates of 0.01-0.04 units/min 85,86 .

Inotropic Therapy
In patients with low cardiac output despite adequate fluid resuscitation, dobutamine may be used to increase cardiac output. If used in the presence of low blood pressure, it should be combined with vasopressor therapy. A strategy of increasing the cardiac index to achieve an arbitrarily predefined elevated level is not recommended 87 .

Steroids
Intravenous corticosteroids (e.g., hydrocortisone, 200-300 mg/day for 7 days in three or four divided doses or by continuous infusion) are suggested in patients with septic shock who respond poorly to both fluid replacement and vasopressors. Some experts would use a 250 µg adrenocorticotropic hormone (ACTH) stimulation test to identify responders (i.e., a >9 µg/dL increase in plasma cortisol 30-60 min post-ACTH administration) and discontinue therapy in these patients 88 . However, in a recent trial (CORTICUS), the ACTH test was not able to discriminate patients who will eventually respond to steroid treatment. Hence, this test has been discouraged. Because hydrocortisone has intrinsic mineralocorticoid activity, there is controversy as to whether fludrocortisone should be added. A post hoc analysis of the effect of treatment with low doses of hydrocortisone and fludrocortisone on mortality in patients with septic shock showed that a 7-day treatment with low doses of corticosteroids was associated with better outcomes in septic shock-associated early ARDS non-responders but not in responders or in septic shock patients without ARDS 89,90,91 .

Recombinant Human Activated Protein C (rhAPC)
Using rhAPC is suggested in patients at high risk of death (e.g., APACHE II ≥ 25, sepsis-induced multiple organ failure, septic shock) and with no absolute contraindication related to bleeding risk or relative contraindication that outweighs the potential benefit of rhAPC. The ENHANCE study has shown that patients treated within 0-24 hrs from their first sepsis-induced organ dysfunction had a lower observed mortality rate than those treated after 24 hrs. The ADDRESS study indicated that rhAPC should not be used in patients with severe sepsis who are at a low risk for death, such as those with single-organ failure or an APACHE II score less than 25. In these patients, there was an absence of a beneficial treatment effect, coupled with an increased incidence of serious bleeding. 92,93 More recently, a new industry-funded randomized clinical trial has been prepared to test the efficacy of rhAPC infusion only in high-risk septic patients.
Mechanical Ventilation of Sepsis-Induced Acute Lung Injury (ALI)/ARDS High tidal volumes that are coupled with high plateau pressures should be avoided in ALI and/or ARDS; the goal is a reduction in tidal volumes over 1-2 hrs to a "low" tidal volume (i.e., 6 mL per kilogram of predicted body weight) in conjunction with end-inspiratory plateau pressures <30 cmH 2 O. Hypercapnia can be tolerated in order to minimize plateau pressures and tidal volumes. The use of hypercarbia is limited in patients with preexisting metabolic acidosis and is contraindicated in patients with increased intracranial pressure. A minimum amount of positive end-expiratory pressure should be set to prevent lung collapse at end-expiration. Setting positive end-expiratory pressure based on the severity of the oxygenation deficit and guided by the FiO 2 required to maintain adequate oxygenation is one acceptable approach. Some experts titrate positive end-expiratory pressure according to bedside measurements of thoracopulmonary compliance in order to obtain the highest compliance, reflecting lung recruitment 94,95,96,97 . One trial has shown that, in patients with ALI and ARDS who receive mechanical ventilation with a tidal-volume goal of 6 mL per kilogram of predicted body weight and an end-inspiratory plateau-pressure limit of 30 cm of water, clinical outcomes are similar whether lower or higher positive end expiratory pressure (PEEP) levels are used 98 . The currently suggested strategy of ventilation with low lung vol-umes can aggravate lung collapse and potentially produce lung injury through shear stress at the interface between aerated and collapsed lung, as a result of repetitive opening and closing of alveoli. An 'open lung strategy' focused on alveolar patency has therefore been recommended. In animal studies, recruitment maneuvers clearly reverse the derecruitment associated with low tidal volume ventilation, improve gas exchange and reduce lung injury. Data regarding the use of recruitment maneuvers in patients with ARDS show mixed results, with increased efficacy in those with a short duration of ARDS, good chest wall compliance, and extrapulmonary ARDS. More data are needed to support this strategy 102,103 . In experienced facilities, prone positioning should be considered in ARDS patients who require potentially injurious levels of FiO 2 or plateau pressure who are not at high risk for adverse consequences of positional changes 100 . A weaning protocol should be in place, and mechanically ventilated patients should undergo a spontaneous breathing trial to evaluate their ability to discontinue mechanical ventilation when they satisfy the following criteria: a) arousable; b) hemodynamically stable (without vasopressor agents); c) no new potentially serious conditions; d) low ventilatory and end-expiratory pressure requirements; and e) requiring levels of FiO 2 that could be safely delivered with a face mask or nasal cannula. If the spontaneous breathing trial is successful, extubation should be considered 101,102,103 . Glucose Control Following initial stabilization of patients with severe sepsis, blood glucose should be maintained at <150 mg/ dL. Studies supporting the role of glycemic control have used continuous infusion of insulin and glucose. With this protocol, glucose should be monitored frequently after initiation of the protocol (every 30-60 min) and on a regular basis (every 4 hrs) once the blood glucose concentration has stabilized. 
A large single-center trial of postoperative surgical patients showed significant improvement in survival when continuous-infusion insulin was used to maintain glucose between 80 and 110 mg/dL 104 . Another trial in a medical ICU showed that intensive insulin therapy significantly reduced morbidity but not mortality. Although the risk of subsequent death and disease was reduced in patients treated for three or more days, these patients could not be identified before therapy 105,106 . CONCLUSION Sepsis syndromes are still a daily challenge for intensivists all over the world. Despite great improvements in the understanding of epidemiology, pathophysiology and genetic predisposition to sepsis, both morbidity and mortality associated with severe sepsis and septic shock remain unacceptably high. Recent efforts aiming to provide a framework of uniform definitions and terms have been performed in order to facilitate communication, research and patient care. There has been progress in clinical management based on widely known guidelines, and interventions targeting specific pathologic pathways have led to important breakthroughs, as well as some deceptions. In the future, we hope that it will be possible to tailor treatment strategies based on knowledge of an individual's genetic profile, comorbidities and phenotypic expressions derived from environmental influences and host-pathogen interactions.
Measurement of the tt¯ production cross-section using eµ events with b-tagged jets in pp collisions at √s = 7 and 8 TeV with the ATLAS detector The inclusive top quark pair ( tt ) production cross-section σ tt has been measured in proton–proton collisions at √ s = 7 TeV and √ s = 8 TeV with the ATLAS experiment at the LHC, using tt events with an opposite-charge e μ pair in the final state. The measurement was performed with the 2011 7 TeV dataset corresponding to an integrated luminosity of 4.6 fb − 1 and the 2012 8 TeV dataset of 20.3 fb − 1 . The numbers of events with exactly one and exactly two b -tagged jets were counted and used to simultaneously determine σ tt and the efficiency to reconstruct and b -tag a jet from a top quark decay, thereby minimising the associated systematic uncertainties. The cross-section was measured to be: Introduction The top quark is the heaviest known fundamental particle, with a mass (m t ) that is much larger than any of the other quarks, and close to the scale of electroweak symmetry breaking.The study of its production and decay properties forms a core part of the ATLAS physics programme at the CERN Large Hadron Collider (LHC).At the LHC, top quarks are primarily produced in quark-antiquark pairs (tt), and the precise prediction of the corresponding inclusive cross-section (σ tt ) is a substantial challenge for quantum chromodynamics (QCD) calculational techniques.Precise measurements of σ tt are sensitive to the gluon parton distribution function (PDF), the top quark mass, and potential enhancements of the cross-section due to physics beyond the Standard Model. Within the Standard Model (SM), the top quark decays almost exclusively to a W boson and a b quark, so the finalstate topologies in tt production are governed by the decay modes of the two W bosons.This paper describes a measurement in the dileptonic eμ channel, tt → W + bW − b → e ± μ ∓ ννbb, selecting events with an eμ pair with oppositesign electric charges, 1 and one or two hadronic jets from the b quarks.Jets originating from b quarks were identified ('tagged') using a b-tagging algorithm exploiting the long lifetime, high decay multiplicity, hard fragmentation and high mass of B hadrons.The rates of events with an eμ pair and one or two tagged b-jets were used to measure simultaneously the tt production cross-section and the combined probability to reconstruct and b-tag a jet from a top quark decay.Events with electrons or muons produced via leptonic τ decays t → W b → τ νb → e/μνννb, were included as part of the tt signal. The main background is W t, the associated production of a W boson and a single top quark.Other background con-tributions arise from Z → τ τ → eμ+jets (+4ν) production, diboson+jets production and events where at least one reconstructed lepton does not arise from a W or Z boson decay. Theoretical predictions for σ tt are described in Sect.2, followed by the data and Monte Carlo (MC) simulation samples in Sect.3, the object and event selection in Sect.4, and the extraction of the tt cross-section in Sect. 5. Systematic uncertainties are discussed in Sect.6, the results, including fiducial cross-section measurements, the extraction of the top quark mass from the measured cross-section and a limit on the production of supersymmetric top squarks, are given in Sect.7, and conclusions are drawn in Sect.8. 
Theoretical cross-section predictions Calculations of σ tt for hadron collisions are now available at full next-to-next-to-leading-order (NNLO) accuracy in the strong coupling constant α s , including the resummation of next-to-next-to-leading logarithmic (NNLL) soft gluon terms [1][2][3][4][5][6].At a centre-of-mass energy of √ s = 7 TeV and assuming m t = 172.5 GeV, these calculations give a prediction of 177.3 ± 9.0 +4.6 −6.0 pb, where the first uncertainty is due to PDF and α s uncertainties, and the second to QCD scale uncertainties.The corresponding prediction at √ s = 8 TeV is 252.9 ± 11.7 +6.4 −8.6 pb.These values were calculated using the top++ 2.0 program [7].The PDF and α s uncertainties were calculated using the PDF4LHC prescription [8] with the MSTW2008 68 % CL NNLO [9,10], CT10 NNLO [11,12] and NNPDF2.35f FFN [13] PDF sets, and added in quadrature to the QCD scale uncertainty.The latter was obtained from the envelope of predictions with the renormalisation and factorisation scales varied independently by factors of two up and down from their default values of m t , whilst never letting them differ by more than a factor of two.The ratio of cross-sections at √ s = 8 TeV and √ s = 7 TeV is predicted to be 1.430 ± 0.013 (PDF+α s ) ±0.001 (QCD scale).The total relative uncertainty is only 0.9 %, as the cross-section uncertainties at the two centre-of-mass energies are highly correlated. The NNLO+NNLL cross-section values are about 3 % larger than the exact NNLO predictions, as implemented in Hathor 1.5 [14].For comparison, the corresponding nextto-leading-order (NLO) predictions, also calculated using top++ 2.0 with the same set of PDFs, are 157±12±24 pb at √ s = 7 TeV and 225 ± 16 ± 29 pb at √ s = 8 TeV, where again the first quoted uncertainties are due to PDF and α s uncertainties, and the second to QCD scale uncertainties.The total uncertainties of the NLO predictions are approximately 15 %, about three times larger than the NNLO+NNLL calculation uncertainties quoted above. Data and simulated samples The ATLAS detector [15] at the LHC covers nearly the entire solid angle around the collision point, and consists of an inner tracking detector surrounded by a thin superconducting solenoid magnet producing a 2 T axial magnetic field, electromagnetic and hadronic calorimeters, and an external muon spectrometer incorporating three large toroid magnet assemblies.The inner detector consists of a highgranularity silicon pixel detector and a silicon microstrip tracker, together providing precision tracking in the pseudorapidity2 range |η| < 2.5, complemented by a transition radiation tracker providing tracking and electron identification information for |η| < 2.0.A lead/liquid-argon (LAr) electromagnetic calorimeter covers the region |η| < 3.2, and hadronic calorimetry is provided by steel/scintillator tile calorimeters for |η| < 1.7 and copper/LAr hadronic endcap calorimeters.The forward region is covered by additional LAr calorimeters with copper and tungsten absorbers.The muon spectrometer consists of precision tracking chambers covering the region |η| < 2.7, and separate trigger chambers covering |η| < 2.4.A three-level trigger system, using custom hardware followed by two software-based levels, is used to reduce the event rate to about 400 Hz for offline storage. 
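The scale-variation prescription described above (vary the renormalisation and factorisation scales independently by factors of two around m_t, excluding points where the two scales differ by more than a factor of two, and take the envelope) can be sketched as follows. The per-point cross-section values are placeholders, not the paper's numbers.

```python
# Hypothetical sigma_tt predictions (pb) at each (muR/m_t, muF/m_t) point; placeholder values.
sigma_at_scale = {
    (0.5, 0.5): 183.0, (0.5, 1.0): 180.5, (0.5, 2.0): 178.9,
    (1.0, 0.5): 179.8, (1.0, 1.0): 177.3, (1.0, 2.0): 174.9,
    (2.0, 0.5): 176.4, (2.0, 1.0): 175.6, (2.0, 2.0): 172.8,
}

nominal = sigma_at_scale[(1.0, 1.0)]

# Drop the points where muR and muF differ by more than a factor of two, then take the envelope.
allowed = [s for (mur, muf), s in sigma_at_scale.items() if 0.5 <= mur / muf <= 2.0]

up = max(allowed) - nominal
down = nominal - min(allowed)
print(f"sigma_tt = {nominal:.1f} +{up:.1f} -{down:.1f} pb (QCD scale envelope)")
```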
The analysis was performed on the ATLAS 2011-2012 proton-proton collision data sample, corresponding to integrated luminosities of 4.6 fb −1 at √ s = 7 TeV and 20.3 fb −1 at √ s = 8 TeV after the application of detector status and data quality requirements.Events were required to pass either a single-electron or single-muon trigger, with thresholds chosen in each case such that the efficiency plateau is reached for leptons with p T > 25 GeV passing offline selections.Due to the high instantaneous luminosities achieved by the LHC, each triggered event also includes the signals from on average about 9 ( √ s = 7 TeV) or 20 ( √ s = 8 TeV) additional inelastic pp collisions in the same bunch crossing (known as pileup). Monte Carlo simulated event samples were used to develop the analysis, to compare to the data and to evaluate signal and background efficiencies and uncertainties.Samples were processed either through the full ATLAS detector simulation [16] based on GEANT4 [17], or through a faster simulation making use of parameterised showers in the calorimeters [18].Additional simulated pp collisions generated either with Pythia6 [19] (for √ s = 7 TeV simulation) or Pythia8 [20] (for √ s = 8 TeV) were overlaid to simulate the effects of both in-and out-of-time pileup, from additional pp collisions in the same and nearby bunch crossings.All simulated events were then processed using the same reconstruction algorithms and analysis chain as the data.Small corrections were applied to lepton trigger and selection efficiencies to better model the performance seen in data, as discussed further in Sect.6.The baseline tt full simulation sample was produced using the NLO matrix element generator Powheg [21][22][23] interfaced to Pythia6 [19] with the Perugia 2011C tune (P2011C) [24] for parton shower, fragmentation and underlying event modelling, and CT10 PDFs [11], and included all tt final states involving at least one lepton.The W → ν branching ratio was set to the SM expectation of 0.1082 [25], and m t was set to 172.5 GeV.Alternative tt samples were produced with the NLO generator MC@NLO [26,27] interfaced to Herwig [28] with Jimmy [29] for the underlying event modelling, with the ATLAS AUET2 [30] tune and CT10 PDFs; and with the leading-order (LO) multileg generator Alpgen [31] interfaced to either Pythia6 or Herwig and Jimmy, with the CTEQ6L1 PDFs [32].These samples were all normalised to the NNLO+NNLL cross-section predictions given in Sect. 2 when comparing simulation with data. Backgrounds were classified into two types: those with two real prompt leptons from W or Z boson decays (including those produced via leptonic τ decays), and those where at least one of the reconstructed lepton candidates is misidentified, i.e. a non-prompt lepton from the decay of a bottom or charm hadron, an electron from a photon conversion, hadronic jet activity misidentified as an electron, or a muon produced from an in-flight decay of a pion or kaon.The first category with two prompt leptons includes W t single top production, modelled using Powheg + Pythia6 [33] with the CT10 PDFs and the P2011C tune; Z → τ τ +jets modelled using Alpgen + Herwig + Jimmy ( √ s = 7 TeV) or Alpgen + Pythia6 including LO matrix elements for Zbb production, with CTEQ6L1 PDFs; and diboson (W W , W Z, Z Z) production in association with jets, modelled using Alpgen + Herwig + Jimmy.The W t background was normalised to approximate NNLO cross-sections of 15.7 ± 1.2 pb at √ s = 7 TeV and 22.4 ± 1.5 pb at √ s = 8 TeV, determined as in Ref. 
[34].The inclusive Z cross-sections were set to the NNLO predictions from FEWZ [35], but the normalisation of Z → τ τ → eμ4ν backgrounds with b-tagged jets were determined from data as described in Sect.5.1.The diboson background was normalised to the NLO QCD inclusive cross-section predictions calculated with MCFM [36].Production of tt in association with a W or Z boson, which contributes to the sample with same-sign leptons, was simulated with Madgraph [37] interfaced to Pythia with CTEQ6L1 PDFs, and normalised to NLO cross-section predictions [38,39]. Backgrounds with one real and one misidentified lepton include tt events with one hadronically decaying W ; W +jets production, modelled as for Z +jets; W γ +jets, modelled with Sherpa [40] with CT10 PDFs; and t-channel single top production, modelled using AcerMC [41] interfaced to Pythia6 with CTEQ6L1 PDFs.Other backgrounds, including processes with two misidentified leptons, are negligible after the event selections used in this analysis. Object and event selection The analysis makes use of reconstructed electrons, muons and b-tagged jets.Electron candidates were reconstructed from an isolated electromagnetic calorimeter energy deposit matched to an inner detector track and passing tight identification requirements [42], with transverse energy E T > 25 GeV and pseudorapidity |η| < 2.47.Electron candidates within the transition region between the barrel and endcap electromagnetic calorimeters, 1.37 < |η| < 1.52, were removed.Isolation requirements were used to reduce background from non-prompt electrons.The calorimeter transverse energy within a cone of size ΔR = 0.2 and the scalar sum of track p T within a cone of size ΔR = 0.3, in each case excluding the contribution from the electron itself, were each required to be smaller than E T and η-dependent thresholds calibrated to separately give nominal selection efficiencies of 98 % for prompt electrons from Z → ee decays. Muon candidates were reconstructed by combining matching tracks reconstructed in both the inner detector and muon spectrometer [43], and were required to satisfy p T > 25 GeV and |η| < 2.5.In the √ s = 7 TeV dataset, the calorimeter transverse energy within a cone of size ΔR = 0.2, excluding the energy deposited by the muon, was required to be less than 4 GeV, and the scalar sum of track p T within a cone of size ΔR = 0.3, excluding the muon track, was required to be less than 2.5 GeV.In the √ s = 8 TeV dataset, these isolation requirements were replaced by a cut I < 0.05, where I is the ratio of the sum of track p T in a variable-sized cone of radius ΔR = 10 GeV/ p μ T to the transverse momentum p μ T of the muon [44].Both sets of isolation requirements have efficiencies of about 97 % for prompt muons from Z → μμ decays. 
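A minimal sketch of the √s = 8 TeV muon isolation requirement described above (track isolation in a variable-size cone of ΔR = 10 GeV / pT(μ), with I < 0.05); the object attributes and helper names are assumptions made for illustration, not ATLAS software.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    dphi = math.remainder(phi1 - phi2, 2 * math.pi)  # wrap delta-phi into [-pi, pi]
    return math.hypot(eta1 - eta2, dphi)

def muon_is_isolated(muon, tracks, i_max=0.05):
    """Mini-isolation: scalar sum of track pT (GeV) in a cone of size
    deltaR = 10 GeV / pT(mu), excluding the muon's own track, divided by pT(mu)."""
    cone = 10.0 / muon["pt"]  # the cone shrinks for high-pT muons
    sum_pt = sum(t["pt"] for t in tracks
                 if t is not muon["track"]
                 and delta_r(muon["eta"], muon["phi"], t["eta"], t["phi"]) < cone)
    return sum_pt / muon["pt"] < i_max
```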
Jets were reconstructed using the anti-k t algorithm [45,46] with radius parameter R = 0.4, starting from calorimeter energy clusters calibrated at the electromagnetic energy scale for the √ s = 7 TeV dataset, or using the local cluster weighting method for √ s = 8 TeV [47].Jets were calibrated using an energy-and η-dependent simulation-based calibration scheme, with in-situ corrections based on data, and were required to satisfy p T > 25 GeV and |η| < 2.5.To suppress the contribution from low-p T jets originating from pileup interactions, a jet vertex fraction requirement was applied: at √ s = 7 TeV jets were required to have at least 75 % of the scalar sum of the p T of tracks associated with the jet coming from tracks associated with the event primary vertex.The latter was defined as the reconstructed vertex with the highest sum of associated track p 2 T .Motivated by the higher pileup background, in the √ s = 8 TeV dataset this requirement was loosened to 50 %, only applied to jets with p T < 50 GeV and |η| < 2.4, and the effects of pileup on the jet energy calibration were further reduced using the jet-area method as described in Ref. [48].Finally, to further suppress non-isolated leptons likely to have come from heavy-flavour decays inside jets, electrons and muons within ΔR = 0.4 of selected jets were also discarded. Jets were b-tagged as likely to have originated from b quarks using the MV1 algorithm, a multivariate discriminant making use of track impact parameters and reconstructed secondary vertices [49,50].Jets were defined to be b-tagged if the MV1 discriminant value was larger than a threshold (working point) corresponding approximately to a 70 % efficiency for tagging b-quark jets from top decays in tt events, with a rejection factor of about 140 against light-quark and gluon jets, and about five against jets originating from charm quarks. Events were required to have at least one reconstructed primary vertex with at least five associated tracks, and no jets failing jet quality and timing requirements.Events with muons compatible with cosmic-ray interactions and muons losing substantial fractions of their energy through bremsstrahlung in the detector material were also removed.A preselection requiring exactly one electron and one muon selected as described above was then applied, with at least one of the leptons being matched to an electron or muon object triggering the event.Events with an opposite-sign eμ pair constituted the main analysis sample, whilst events with a same-sign eμ pair were used in the estimation of the background from misidentified leptons. 
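A sketch of the √s = 8 TeV-style jet selection and b-tag counting described above; the dictionary keys and the numerical MV1 threshold are placeholders (the actual discriminant value at the 70 % working point is not given in the text).

```python
MV1_CUT_70 = 0.78  # placeholder threshold for the ~70 % b-tagging efficiency working point

def select_jets(jets):
    """Keep jets with pT > 25 GeV and |eta| < 2.5; apply the jet-vertex-fraction
    requirement (> 0.5) only to jets with pT < 50 GeV and |eta| < 2.4, as a pileup veto."""
    selected = []
    for j in jets:
        if j["pt"] <= 25.0 or abs(j["eta"]) >= 2.5:
            continue
        if j["pt"] < 50.0 and abs(j["eta"]) < 2.4 and j["jvf"] <= 0.5:
            continue
        selected.append(j)
    return selected

def count_btags(jets):
    """Count jets passing the MV1 working point among the selected jets."""
    return sum(1 for j in jets if j["mv1"] > MV1_CUT_70)
```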
Extraction of the tt cross-section
The tt production cross-section σ_tt was determined by counting the numbers of opposite-sign eμ events with exactly one (N_1) and exactly two (N_2) b-tagged jets. No requirements were made on the number of untagged jets; such jets originate from b-jets from top decays which were not tagged, and from light-quark, charm-quark or gluon jets from QCD radiation. The two event counts can be expressed as
N_1 = L σ_tt ε_eμ 2 ε_b (1 − C_b ε_b) + N_1^bkg and N_2 = L σ_tt ε_eμ C_b ε_b^2 + N_2^bkg, (1)
where L is the integrated luminosity of the sample, ε_eμ is the efficiency for a tt event to pass the opposite-sign eμ preselection and C_b is a tagging correlation coefficient close to unity. The combined probability for a jet from the quark q in the t → Wq decay to fall within the acceptance of the detector, be reconstructed as a jet with transverse momentum above the selection threshold, and be tagged as a b-jet, is denoted by ε_b. Although this quark is almost always a b quark, ε_b thus also accounts for the approximately 0.2 % of top quarks that decay to Ws or Wd rather than Wb, slightly reducing the effective b-tagging efficiency. Furthermore, the value of ε_b is slightly increased by the small contributions to N_1 and N_2 from mistagged light-quark, charm-quark or gluon jets from radiation in tt events, although more than 98 % of the tagged jets are expected to contain particles from B-hadron decays in both the one and two b-tag samples. If the decays of the two top quarks and the subsequent reconstruction of the two b-tagged jets are completely independent, the probability ε_bb to tag both b-jets is given by ε_bb = ε_b^2. In practice, small correlations are present for both kinematic and instrumental reasons, and these are taken into account via the tagging correlation C_b = ε_bb/ε_b^2 = 4 N_eμ^tt N_2^tt / (N_1^tt + 2 N_2^tt)^2, where N_eμ^tt is the number of preselected eμ tt events and N_1^tt and N_2^tt are the numbers of tt events with one and two b-tagged jets. Values of C_b greater than one correspond to a positive correlation, where a second jet is more likely to be selected if the first one is already selected, whilst C_b = 1 corresponds to no correlation. This correlation term also compensates for the effect on ε_b, N_1 and N_2 of the small number of mistagged charm-quark or gluon jets from radiation in the tt events. Background from sources other than tt → eμννbb also contributes to the event counts N_1 and N_2, and is accounted for by the terms N_1^bkg and N_2^bkg. These were estimated using a combination of simulation- and data-based methods, allowing the two equations in Eq. (1) to be solved numerically, yielding σ_tt and ε_b.
A total of 11796 events passed the eμ opposite-sign preselection in √s = 7 TeV data, and 66453 in √s = 8 TeV data. Table 1 shows the number of events with one and two b-tagged jets, together with the estimates of non-tt background and their systematic uncertainties discussed in detail in Sect. 5.1 below. The samples with one b-tagged jet are expected to be about 89 % pure in tt events, with the dominant background coming from Wt single top production, and smaller contributions from events with misidentified leptons, Z+jets and dibosons. The samples with two b-tagged jets are expected to be about 96 % pure in tt events, with Wt production again being the dominant background.
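The pair of tag-counting relations in Eq. (1) can also be solved in closed form, which makes the structure of the method explicit. The sketch below uses illustrative placeholder inputs of roughly the magnitude discussed in the text (20.3 fb−1, ε_eμ of about 0.8 %); the event counts and background estimates are not the measured values.

```python
def solve_tag_counting(N1, N2, N1_bkg, N2_bkg, lumi, eps_emu, C_b):
    """Solve N1 = L*sigma*eps_emu*2*eps_b*(1 - C_b*eps_b) + N1_bkg and
       N2 = L*sigma*eps_emu*C_b*eps_b**2 + N2_bkg for (sigma, eps_b)."""
    S1, S2 = N1 - N1_bkg, N2 - N2_bkg            # background-subtracted event counts
    eps_b = 2.0 * S2 / (C_b * (S1 + 2.0 * S2))   # from the ratio S1/S2
    sigma = S2 / (lumi * eps_emu * C_b * eps_b ** 2)
    return sigma, eps_b

# Placeholder inputs (lumi in pb^-1); the output lands in the few-hundred-pb range.
sigma, eps_b = solve_tag_counting(N1=21500, N2=11500, N1_bkg=2400, N2_bkg=460,
                                  lumi=20300.0, eps_emu=0.008, C_b=1.007)
print(f"sigma_tt ~ {sigma:.1f} pb, eps_b ~ {eps_b:.3f}")
```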
Distributions of the number of b-tagged jets in opposite-sign eμ events are shown in Fig. 1, and compared to the expectation from simulation, with the simulation normalised to the same number of entries as the data. The lepton |η| distributions reflect the differing acceptances and efficiencies for electrons and muons, in particular the calorimeter transition region at 1.37 < |η| < 1.52. In general, the agreement between data and simulation is good, within the range of predictions from the different tt simulation samples.
The value of σ_tt extracted from Eq. (1) is inversely proportional to the assumed value of ε_eμ, with (dσ_tt/dε_eμ)/(σ_tt/ε_eμ) = −1. Uncertainties on ε_eμ therefore translate directly into uncertainties on σ_tt. The value of ε_eμ was determined from simulation to be about 0.8 % for both centre-of-mass energies, and includes the tt → eμννbb branching ratio of about 3.2 % including W → τ → e/μ decays. Similarly, σ_tt is proportional to the value of C_b, also determined from simulation, giving a dependence with the opposite sign, (dσ_tt/dC_b)/(σ_tt/C_b) = 1. The systematic uncertainties on ε_eμ and C_b are discussed in Sect. 6. With the kinematic cuts and b-tagging working point chosen for this analysis, the sensitivities of σ_tt to knowledge of the backgrounds are (dσ_tt/dN_1^bkg)/(σ_tt/N_1^bkg) = −0.12 and (dσ_tt/dN_2^bkg)/(σ_tt/N_2^bkg) = −0.004. The fitted cross-sections are therefore most sensitive to the systematic uncertainties on N_1^bkg, whilst for the chosen b-tagging working point, the measurements of N_2 serve mainly to constrain ε_b. As discussed in Sect. 6.1, consistent results were also obtained at different b-tagging efficiency working points that induce greater sensitivity to the background estimate in the two b-tag sample.
Background estimation
The Wt single top and diboson backgrounds were estimated from simulation as discussed in Sect. 3. The Z+jets background (with Z → ττ → eμ4ν) at √s = 8 TeV was estimated from simulation using Alpgen + Pythia, scaled by the ratios of Z → ee or Z → μμ accompanied by b-tagged jets measured in data and simulation. The ratio was evaluated separately in the one and two b-tag event samples. This scaling eliminates uncertainties due to the simulation modelling of jets (especially heavy-flavour jets) produced in association with the Z bosons. The data-to-simulation ratios were measured in events with exactly two opposite-sign electrons or muons passing the selections given in Sect. 4 and one or two b-tagged jets, by fitting the dilepton invariant mass distributions in the range 60-120 GeV, accounting for the backgrounds from tt production and misidentified leptons. Combining the results from both dilepton channels, the scale factors were determined to be 1.43 ± 0.03 and 1.13 ± 0.08 for the one and two b-tag backgrounds, after normalising the simulation to the inclusive Z cross-section prediction from FEWZ [35]. The uncertainties include systematic components derived from a comparison of results from the ee and μμ channels, and from studying the variation of scale factors with Z boson p_T. The average p_T is higher in selected Z → ττ → eμ4ν events than in Z → ee/μμ events due to the momentum lost to the undetected neutrinos from the τ decays. The same procedure was used for the √s = 7 TeV dataset, resulting in scale factors of 1.23 ± 0.07 (one b-tag) and 1.14 ± 0.18 (two b-tags) for the Alpgen + Herwig Z+jets simulation, which predicts different numbers of events with heavy-flavour jets than Alpgen + Pythia.
The background from events with one real and one misidentified lepton was estimated using a combination of data and simulation. Simulation studies show that the samples with a same-sign eμ pair and one or two b-tagged jets are dominated by events with misidentified leptons, with rates comparable to those in the opposite-sign sample. The contributions of events with misidentified leptons were therefore estimated using the same-sign event counts in data after subtraction of the estimated prompt same-sign contributions, multiplied by the opposite- to same-sign misidentified-lepton ratios R_j = N_j^mis,OS / N_j^mis,SS estimated from simulation for events with j = 1 and 2 b-tagged jets. The procedure is illustrated by Table 2, which shows the expected numbers of events with misidentified leptons in the opposite- and same-sign samples. The contributions where the electron is misidentified, coming from a photon conversion, the decay of a heavy-flavour hadron or other sources (such as a misidentified hadron within a jet), and where the muon is misidentified, coming either from heavy-flavour decay or other sources (e.g. decay in flight of a pion or kaon), are shown separately. The largest contributions come from photon conversions giving electron candidates, and most of these come from photons radiated from prompt electrons produced from t → Wq → eνq in signal tt → eμννbb events. Such electrons populate both the opposite- and same-sign samples, and are treated as misidentified-lepton background.
The ratios R_j were estimated from simulation to be R_1 = 1.4 ± 0.5 and R_2 = 1.1 ± 0.5 at √s = 7 TeV, and R_1 = 1.2 ± 0.3 and R_2 = 1.6 ± 0.5 at √s = 8 TeV. The uncertainties were derived by considering the range of R_j values for different components of the misidentified-lepton background, including the small contributions from sources other than photon conversions and heavy-flavour decays, which do not significantly populate the same-sign samples. As shown in Table 2, about 25 % of the same-sign events have two prompt leptons, which come mainly from semileptonic tt events with an additional leptonically decaying W or Z boson, diboson decays producing two same-sign leptons, and wrong-sign tt → eμννbb events where the electron charge was misreconstructed. A conservative uncertainty of 50 % was assigned to this background, based on studies of the simulation modelling of electron charge misidentification [42] and uncertainties in the rates of contributing physics processes.
The simulation modelling of the different components of the misidentified-lepton background was checked by studying kinematic distributions of same-sign events, as illustrated for the |η| and p_T distributions of the leptons in √s = 8 TeV data in Fig. 4 (where the simulation prediction is normalised to the same integrated luminosity as the data and broken down into contributions where both leptons are prompt, or one is a misidentified lepton from a photon conversion originating from a top quark decay or from background, or from heavy-flavour decay; in the p_T distributions, the last bin includes the overflows).
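A sketch of the same-sign extrapolation described above, with a simple quadrature combination of the quoted uncertainty sources; the event counts are made-up placeholders and the function name is illustrative.

```python
import math

def misid_background(n_ss_data, n_ss_prompt, ratio, ratio_err):
    """Opposite-sign misidentified-lepton estimate from the same-sign control sample:
       N_mis_OS = R * (N_SS_data - N_SS_prompt)."""
    n_mis_ss = n_ss_data - n_ss_prompt
    estimate = ratio * n_mis_ss
    # Combine the same-sign Poisson uncertainty, the 50 % uncertainty assigned to the
    # prompt same-sign contribution, and the uncertainty on R in quadrature.
    err = math.sqrt((ratio * math.sqrt(n_ss_data)) ** 2
                    + (ratio * 0.5 * n_ss_prompt) ** 2
                    + (ratio_err * n_mis_ss) ** 2)
    return estimate, err

# Illustrative counts for the one b-tag sample at 8 TeV, using R1 = 1.2 +- 0.3 from the text.
print(misid_background(n_ss_data=600, n_ss_prompt=150, ratio=1.2, ratio_err=0.3))
```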
The simulation generally models the normalisation and shapes of distributions well in both the one and two b-tag event samples.The simulation modelling was further tested in control samples with relaxed electron or muon isolation requirements to enhance the relative contributions of electrons or muons from heavy-flavour decays, and similar levels of agreement were observed. Systematic uncertainties The systematic uncertainties on the measured cross-sections σ tt are shown in detail in Table 3 together with the individual uncertainties on eμ and C b .A summary of the uncertainties on σ tt is shown in Table 4.Each source of uncertainty was evaluated by repeatedly solving Eq. ( 1) with all relevant input parameters simultaneously changed by ±1 standard deviation.Systematic correlations between input parameters (in particular significant anti-correlations between eμ and C b which contribute with opposite signs to σ tt ) were thus taken into account.The total uncertainties on σ tt and b were calculated by adding the effects of all the individual systematic components in quadrature, assuming them to be independent.The sources of systematic uncertainty are discussed in more detail below; unless otherwise stated, the same methodology was used for both √ s = 7 TeV and √ s = 8 TeV datasets. tt modelling: Uncertainties on eμ and C b due to the simulation of tt events were assessed by studying the predic- tions of different tt generators and hadronisation models as detailed in Sect.3. The prediction for eμ was found to be particularly sensitive to the amount of hadronic activity near the leptons, which strongly affects the efficiency of the lepton isolation requirements described in Sect. 4. These isolation efficiencies were therefore measured directly from data, as discussed below.The remaining uncertainties on eμ relating to lepton reconstruction, identification and lepton-jet overlap removal, were evaluated from the differences between the predictions from the baseline Powheg + Pythia tt sample and a sample generated using MC@NLO + Herwig, thus varying both the hard-scattering event generator and the fragmentation and hadronisation model.The MC@NLO + Herwig sample gave a larger value of eμ but a smaller value of C b .Additional comparisons of Powheg + Pythia samples with the AUET2 rather than P2011C tune and with Powheg + Herwig, i.e. changing only the fragmentation/hadronisation model, gave smaller uncertainties.The Alpgen + Herwig and Alpgen + Pythia samples gave values of eμ up to 2 % higher than that of Powheg+Pythia, due largely to a more central predicted η distribution for the leptons.However, this sample uses a leading-order generator and PDFs, and gives an inferior description of the electron and muon η distributions (see Fig. 3c, e), so was not used to set the systematic uncertainty on eμ .In contrast, the Alpgen samples were considered in setting the uncertainty on C b , taken as the largest difference between the predictions of Powheg + Pythia and any of the other generators. 
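The re-solving procedure described above (shift all inputs affected by a given source coherently by ±1σ, re-solve Eq. (1), and add the components in quadrature) can be sketched generically as follows; the shift bookkeeping and names are assumptions, and `solve` would be, for example, the closed-form solver sketched earlier.

```python
import math

def propagate_systematics(solve, nominal, variations):
    """Evaluate each systematic by re-solving the tag-counting equations with the
    relevant inputs moved to their +1/-1 sigma values, then sum in quadrature.
    `nominal` is a dict of solver inputs; `variations` maps a source name to
    {'up': {...}, 'down': {...}} dictionaries of shifted inputs."""
    sigma_nom = solve(**nominal)[0]
    up_sq = down_sq = 0.0
    for shifted in variations.values():
        d_up = solve(**{**nominal, **shifted["up"]})[0] - sigma_nom
        d_down = solve(**{**nominal, **shifted["down"]})[0] - sigma_nom
        up_sq += max(d_up, d_down, 0.0) ** 2
        down_sq += min(d_up, d_down, 0.0) ** 2
    return sigma_nom, math.sqrt(up_sq), math.sqrt(down_sq)
```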
The effect of extra radiation in tt events was also considered explicitly by using pairs of simulation samples with different Pythia tunes whose parameters span the variations compatible with ATLAS studies of additional jet activity in tt events at √s = 7 TeV [51], generated using both AcerMC + Pythia and Alpgen + Pythia. These samples predicted large variations in the lepton isolation efficiencies (which were instead measured from data), but only residual variations in other lepton-related uncertainties and in C_b, within the uncertainties set from other simulation samples.
Parton distribution functions: The uncertainties on ε_eμ, C_b and the Wt single top background due to uncertainties on the proton PDFs were evaluated using the error sets of the CT10 NLO [11], MSTW 2008 68 % CL NLO [9,10] and NNPDF 2.3 NLO [13] sets. The final uncertainty was calculated as half the envelope encompassing the predictions from all three PDF sets along with their associated uncertainties, following the PDF4LHC recommendations [8].
QCD scale choices: The lepton p_T and η distributions, and hence ε_eμ, are sensitive to the choices of QCD renormalisation and factorisation scales. This effect was investigated using √s = 8 TeV generator-level Powheg + Pythia tt samples where the two scales were separately varied up and down by a factor of two from their default values of Q^2 = m_t^2 + p_T,t^2. The systematic uncertainty for each scale was taken as half the difference in ε_eμ values between the samples with increased and decreased QCD scale, and the uncertainties for the renormalisation and factorisation scales were then added linearly to give a total scale uncertainty of 0.30 % on ε_eμ, assumed to be valid for both centre-of-mass energies.
Single top modelling: Uncertainties related to Wt single top modelling were assessed by comparing the predictions from Powheg + Pythia, Powheg + Herwig, MC@NLO + Herwig, and AcerMC + Pythia with two tunes producing different amounts of additional radiation, in all cases normalising the total production rate to the approximate NNLO cross-section prediction. The resulting uncertainties are about 5 % and 20 % on the one and two b-tag background contributions. The background in the two b-tag sample is sensitive to the production of Wt with an additional b-jet, a NLO contribution to Wt which can interfere with the tt final state. The sensitivity to this interference was studied by comparing the predictions of Powheg with the diagram-removal (baseline) and diagram-subtraction schemes [33,52], giving additional single-top/tt interference uncertainties of 1-2 % and 20 % for the one and two b-tag samples. The production of single top quarks in association with a Z boson gives contributions which are negligible compared to the above uncertainties. Production of single top quarks via the t- and s-channels gives rise to final states with only one prompt lepton, and is accounted for as part of the misidentified-lepton background.
Background cross-sections: The uncertainties on the Wt single top cross-section were taken to be 7.6 % and 6.8 % at √s = 7 TeV and √s = 8 TeV, based on Ref. [34]. The uncertainties on the diboson cross-sections were set to 5 % [36].
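The "half the envelope" PDF prescription quoted above can be written compactly; the per-set central values and uncertainties below are placeholders, not the analysis numbers.

```python
# Hypothetical eps_emu predictions (value, uncertainty) from the three PDF sets; placeholder numbers.
pdf_predictions = {
    "CT10 NLO": (0.00800, 0.00010),
    "MSTW2008 NLO": (0.00806, 0.00008),
    "NNPDF2.3 NLO": (0.00795, 0.00012),
}

upper = max(v + e for v, e in pdf_predictions.values())
lower = min(v - e for v, e in pdf_predictions.values())

central = 0.5 * (upper + lower)      # midpoint of the envelope
uncertainty = 0.5 * (upper - lower)  # half the envelope, PDF4LHC-style
print(f"eps_emu = {central:.5f} +- {uncertainty:.5f}")
```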
Lepton identification and measurement: The modelling of the electron and muon identification efficiencies, energy scales and resolutions (including the effects of pileup) were studied using Z → ee/μμ, J/ψ → ee/μμ and W → eν events in data and simulation, using the techniques described in Refs.[42,43,53].Small corrections were applied to the simulation to better model the performance seen in data, and the associated systematic uncertainties were propagated to the cross-section measurement. Lepton isolation: The efficiency of the lepton isolation requirements was measured directly in data, from the fraction of selected opposite-sign eμ events with one or two b-tags where either the electron or muon fails the isolation cut.The results were corrected for the contamination from misidentified leptons, estimated using the same-sign eμ samples as described in Sect.5, or by using the distributions of lepton impact parameter significance |d 0 |/σ d 0 , where d 0 is the distance of closest approach of the lepton track to the event primary vertex in the transverse plane, and σ d 0 its uncertainty.Consistent results were obtained from both methods, and showed that the baseline Powheg+Pythia simulation overestimates the efficiencies of the isolation requirements by about 0.5 % for both the electrons and muons.These corrections were applied to eμ , with uncertainties dominated by the limited sizes of the same-sign and high impact-parameter significance samples used for background estimation.Similar results were found from studies in Z → ee and Z → μμ events, after correcting the results for the larger average amount of hadronic activity near the leptons in tt → eμννbb events.Jet-related uncertainties: Although the efficiency to reconstruct and b-tag jets from tt events is extracted from the data, uncertainties in the jet energy scale, energy resolution and reconstruction efficiency affect the backgrounds estimated from simulation and the estimate of the tagging correlation C b .They also have a small effect on eμ via the lepton-jet ΔR separation cuts.The jet energy scale was varied in simulation according to the uncertainties derived from simulation and in-situ calibration measurements [47,54], using a model with 21 ( √ s = 7 TeV) or 22 ( √ s = 8 TeV) separate orthogonal uncertainty components which were then added in quadrature.The jet energy resolution was found to be well modelled by simulation [55], and remaining uncertainties were assessed by applying additional smearing, which reduces eμ .The calorimeter jet reconstruction efficiency was measured in data using track-based jets, and is also well described by the simulation; the impact of residual uncertainties was assessed by randomly discarding jets.The uncertainty associated with the jet vertex fraction requirement was assessed from studies of Z → ee/μμ+jets events.b -tagging uncertainties: The efficiency for b-tagging jets from tt events was extracted from the data via Eq.( 1), but simulation was used to predict the number of b-tagged jets and mistagged light-quark, gluon and charm jets in the W t single top and diboson backgrounds.The tagging correlation C b is also slightly sensitive to the efficiencies for tagging heavy-and light-flavour jets.The uncertainties in the simulation modelling of the b-tagging performance were assessed using studies of b-jets containing muons [50,56], jets containing D * + mesons [57] and inclusive jet events [58]. 
Misidentified leptons: The uncertainties on the number of events with misidentified leptons in the one and two b-tagged samples were derived from the statistical uncertainties on the numbers of same-sign lepton events, the systematic uncertainties on the opposite-to same-sign ratios R j , and the uncertainties on the numbers of prompt same-sign events, as discussed in detail in Sect.5.1.The overall uncertainties on the numbers of misidentified leptons vary from 30 to 50 %, dominated by the uncertainties on the ratios R j .Integrated luminosity: The uncertainty on the integrated luminosity of the √ s = 7 TeV dataset is 1.8 % [59].Using beam-separation scans performed in November 2012, the same methodology was applied to determine the √ s = 8 TeV luminosity scale, resulting in an uncertainty of 2.8 %.These uncertainties are dominated by effects specific to each dataset, and so are considered to be uncorrelated between the two centre-of-mass energies.The relative uncertainties on the cross-section measurements are slightly larger than those on the luminosity measurements because the W t single top and diboson backgrounds are evaluated from simulation, so are also sensitive to the assumed integrated luminosity.LHC beam energy: The LHC beam energy during the 2012 pp run was calibrated to be 0.30 ± 0.66 % smaller than the nominal value of 4 TeV per beam, using the revolution frequency difference of protons and lead ions during p+Pb runs in early 2013 [60].Since this calibration is compatible with the nominal √ s of 8 TeV, no correction was applied to the measured σ tt value.However, an uncertainty of 1.72 %, corresponding to the expected change in σ tt for a 0.66 % change in √ s is quoted separately on the final result.This uncertainty was calculated using top++ 2.0, assuming that the relative change of σ tt for a 0.66 % change in √ s is as predicted by the NNLLO+NNLL calculation.Following Ref. [60], the same relative uncertainty on the LHC beam energy is applied for the √ s = 7 TeV dataset, giving a slightly larger uncertainty of 1.79 % due to the steeper relative dependence of σ tt on √ s in this region.These uncertainties are much larger than those corresponding to the very small dependence of eμ on √ s, which changes by only 0.5 % between 7 and 8 TeV.Top quark mass: The simulation samples used in this analysis were generated with m t = 172.5 GeV, but the acceptance for tt and W t events, and the W t background cross-section itself, depend on the assumed m t value.Alternative samples generated with m t varied in the range 165-180 GeV were used to quantify these effects.The acceptance and background effects partially cancel, and the final dependence of the result on the assumed m t value was determined to be dσ tt /dm t = −0.28%/GeV.The result of the analysis is reported assuming a fixed top mass of 172.5 GeV, and the small dependence of the cross-section on the assumed mass is not included as a systematic uncertainty. As shown in Tables 3 and 4, the largest systematic uncertainties on σ tt come from tt modelling and PDFs, and knowledge of the integrated luminosities and LHC beam energy. 
Additional correlation studies The tagging correlation C b was determined from simulation to be 1.009 ± 0.002 ± 0.007 ( √ s = 7 TeV) and 1.007 ± 0.002 ± 0.006 ( √ s = 8 TeV), where the first uncertainty is due to limited sizes of the simulated samples, and the second is dominated by the comparison of predictions from different tt generators.Additional studies were carried out to probe the modelling of possible sources of correlation.One possible source is the production of additional bb or cc pairs in tt production, which tends to increase both C b and the number of events with three or more b-tagged jets, which are not used in the measurement of σ tt .The ratio R 32 of events with at least three b-tagged jets to events with at least two b-tagged jets was used to quantify this extra heavy-flavour production in data.It was measured to be R 32 = 2.7 ± 0.4 % ( √ s = 7 TeV) and 2.8 ± 0.2 % ( √ s = 8 TeV), where the uncertainties are statistical.These values are close to the Powheg + Pythia prediction of 2.4 ± 0.1 % (see Fig. 1), and well within the spread of R 32 values seen in the alternative simulation samples. Kinematic correlations between the two b-jets produced in the tt decay could also produce a positive tagging correlation, as the efficiency to reconstruct and tag b-jets is not uniform as a function of p T and η.For example, tt pairs produced with high invariant mass tend to give rise to two back-toback collimated top quark decay systems where both b-jets have higher than average p T , and longitudinal boosts of the tt system along the beamline give rise to η correlations between the two jets.These effects were probed by increasing the jet p T cut in steps from the default of 25 GeV up to 75 GeV; above about 50 GeV, the simulation predicts strong positive correlations of up to C b ≈ 1.2 for a 75 GeV p T cut.As shown for the √ s = 8 TeV dataset in Fig. 5, the cross-sections fitted in data after taking these correlations into account remain stable across the full p T cut range, suggesting that any such kinematic correlations are well modelled by the simulation.Similar results were seen at √ s = 7 TeV.The results were also found to be stable within the uncorrelated components of 123 the statistical and systematic uncertainties when tightening the jet and lepton η cuts, raising the lepton p T cut up to 55 GeV and changing the b-tagging working point between efficiencies of 60 % and 80 %.No additional uncertainties were assigned as a result of these studies. Results Combining the estimates of eμ and C b from simulation samples, the estimates of the background N shown in Table 1 and the data integrated luminosities, the tt crosssection was determined by solving Eq. ( 1) to be: where the four uncertainties arise from data statistics, experimental and theoretical systematic effects related to the analysis, knowledge of the integrated luminosity and of the LHC beam energy.The total uncertainties are 7.1 pb (3.9 %) at √ s = 7 TeV and 10.3 pb (4.3 %) at √ s = 8 TeV.A detailed breakdown of the different components is given in Table 3.The results are reported for a fixed top quark mass of m t = 172.5 GeV, and have a dependence on this assumed value of dσ tt /dm t = −0.28%/GeV.The product of jet reconstruction and b-tagging efficiencies b was measured to be 0.557± 0.009 at √ s = 7 TeV and 0.540 ± 0.006 at √ s = 8 TeV, in both cases consistent with the values in simulation. The results are shown graphically as a function of √ s in Fig. 
6, together with previous ATLAS measurements of σ tt at √ s = 7 TeV in the ee, μμ and eμ dilepton channels using a count of the number of events with two leptons and at least two jets in an 0.7 fb −1 dataset [61], and using a fit of jet multiplicities and missing transverse momentum in the eμ dilepton channel alone with the full 4.6 fb −1 dataset [62].The √ s = 7 TeV results are all consistent, but cannot be combined as they are not based on independent datasets.The measurements from this analysis at both centre-of-mass energies are consistent with the NNLO+NNLL QCD calculations discussed in Sect. 2. The √ s = 7 TeV result is 13 % higher than a previous measurement by the CMS collaboration [63], whilst the √ s = 8 TeV result is consistent with that from CMS [64]. From the present analysis, the ratio of cross-sections R tt = σ tt (8 TeV)/σ tt (7 TeV) was determined to be: R tt = 1.326 ± 0.024 ± 0.015 ± 0.049 ± 0.001 with uncertainties defined as above, adding in quadrature to a total of 0.056.The experimental systematic uncertainties (apart from the statistical components of the lepton isolation and misidentified lepton uncertainties, which were evaluated independently from data in each dataset) and the LHC beam √ s = 7 TeV using the ee, μμ and eμ channels [61] and using a fit to jet multiplicities and missing transverse momentum in the eμ channel [62].The uncertainties in √ s due to the LHC beam energy uncertainty are displayed as horizontal error bars, and the vertical error bars do not include the corresponding cross-section uncertainties.The three √ s = 7 TeV measurements are displaced horizontally slightly for clarity.The NNLO+NNLL prediction [6,7] described in Sect. 2 is also shown as a function of √ s, for fixed m t = 172.5 GeV and with the uncertainties from PDFs, α s and QCD scale choices indicated by the green band energy uncertainty are correlated between the two centre-ofmass energies.The luminosity uncertainties were taken to be uncorrelated between energies.The result is consistent with the QCD NNLO+NNLL predicted ratio of 1.430 ± 0.013 (see Sect. 2), which in addition to the quoted PDF, α s and QCD scale uncertainties varies by only ±0.001 for a ±1 GeV variation of m t . Fiducial cross-sections The preselection efficiency eμ can be written as the product of two terms eμ = A eμ G eμ , where the acceptance A eμ represents the fraction of tt events which have a true opposite-sign eμ pair from t → W → decays (including via W → τ → ), each with p T > 25 GeV and within |η| < 2.5, and G eμ represents the reconstruction efficiency, i.e. the probability that the two leptons are reconstructed and pass all the identification and isolation requirements.A fiducial cross-section σ fid tt can then be defined as σ fid tt = A eμ σ tt , and measured by replacing σ tt eμ with σ fid tt G eμ in Eq. ( 1), leaving the background terms unchanged.Measurement of the fiducial cross-section avoids the systematic uncertainties associated with A eμ , i.e. 
the extrapolation from the measured lepton phase space to the full phase space populated by inclusive tt production. In this analysis, these come mainly from knowledge of the PDFs and the QCD scale uncertainties. Since the analysis technique naturally corrects for the fraction of jets which are outside the kinematic acceptance through the fitted value of ε_b, no restrictions on jet kinematics are imposed in the definition of σ_tt^fid. In calculating A_eμ and G_eμ from the various tt simulation samples, the lepton four-momenta were taken after final-state radiation, and including the four-momenta of any photons within a cone of size ΔR = 0.1 around the lepton direction, excluding photons from hadron decays or produced in interactions with detector material. The values of A_eμ are about 1.4 % (including the tt → eμννbb branching ratio), and those of G_eμ about 55 %, at both centre-of-mass energies.
The measured fiducial cross-sections at √s = 7 TeV and √s = 8 TeV, for leptons with p_T > 25 GeV and |η| < 2.5, are shown in the first row of Table 5. The relative uncertainties are shown in the lower part of Table 3; the PDF uncertainties are substantially reduced compared to the inclusive cross-section measurement, and the QCD scale uncertainties are reduced to a negligible level. The tt modelling uncertainties, evaluated from the difference between Powheg + Pythia and MC@NLO + Herwig samples, increase slightly, though the differences are not significant given the sizes of the simulated samples. Overall, the analysis systematics on the fiducial cross-sections are 6-11 % smaller than those on the inclusive cross-section measurements. Simulation studies predict that 11.9 ± 0.1 % of tt events in the fiducial region have at least one lepton produced via W → τ → ℓ decay. The second row in Table 5 shows the fiducial cross-section measurements scaled down to remove this contribution. The third and fourth rows show the measurements scaled to a different lepton fiducial acceptance of p_T > 30 GeV and |η| < 2.4, a common phase space accessible to both the ATLAS and CMS experiments.
Top quark mass determination
The strong dependence of the theoretical prediction for σ_tt on m_t offers the possibility of interpreting measurements of σ_tt as measurements of m_t. The theoretical calculations use the pole mass m_t^pole, corresponding to the definition of the mass of a free particle, whereas the top quark mass measured through direct reconstruction of the top decay products [65-68] may differ from the pole mass by O(1 GeV) [69,70]. It is therefore interesting to compare the values of m_t determined from the two approaches, as explored previously by the D0 [71,72] and CMS [73] collaborations.
The dependence of the cross-section predictions (calculated as described in Sect. 2) on m_t^pole is shown in Fig. 7 at both √s = 7 TeV and √s = 8 TeV (the figure shows the central values, solid lines, and total uncertainties, dashed lines, for several PDF sets, with a band indicating the QCD scale uncertainty; the measurements of σ_tt are also shown, with their dependence on the assumed value of m_t through acceptance and background corrections parameterised using Eq. (2)). The calculations were fitted to the parameterisation proposed in Ref. [6], namely σ_tt(m_t^pole) = σ(m_t^ref) (m_t^ref/m_t^pole)^4 (1 + a_1 x + a_2 x^2), (2) with x = (m_t^pole − m_t^ref)/m_t^ref, where the parameterisation constant is m_t^ref = 172.5 GeV, and σ(m_t^ref), a_1 and a_2 are free parameters. This function was used to parameterise the dependence of σ_tt on m_t separately for each of the NNLO PDF sets CT10, MSTW and NNPDF2.3, together with their uncertainty envelopes.
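The sketch below evaluates an Eq. (2)-style parameterisation of the theory curve and the small experimental dependence on the assumed top mass (about −0.28 % per GeV). The coefficients a1 and a2 and the reference cross-sections are placeholders, since their fitted values are not quoted in the text; the pole mass could then be estimated by scanning m and maximising a likelihood built from the two curves.

```python
M_REF = 172.5  # GeV, parameterisation constant

def sigma_theory(m_pole, sigma_ref, a1, a2):
    """Eq. (2)-style form: sigma(m) = sigma(m_ref) * (m_ref/m)**4 * (1 + a1*x + a2*x**2),
    with x = (m - m_ref)/m_ref; sigma_ref, a1 and a2 come from a fit to the NNLO+NNLL points."""
    x = (m_pole - M_REF) / M_REF
    return sigma_ref * (M_REF / m_pole) ** 4 * (1.0 + a1 * x + a2 * x * x)

def sigma_measured(m_top, sigma_at_ref, slope=-0.0028):
    """Experimental dependence on the assumed top mass: roughly -0.28 % per GeV."""
    return sigma_at_ref * (1.0 + slope * (m_top - M_REF))

# Illustrative evaluation at a shifted mass, with placeholder sigma_ref and coefficients.
print(sigma_theory(174.0, sigma_ref=252.9, a1=-1.0, a2=0.5),
      sigma_measured(174.0, sigma_at_ref=242.0))
```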
Figure 7 also shows the small dependence of the experimental measurement of σ tt on the assumed value of m t , arising from variations in the acceptance and W t single top background, as discussed in Sect.6.This dependence was also parameterised using Eq. ( 2), giving a derivative of dσ tt /dm t = −0.28 ± 0.03 %/GeV at 172.5 GeV for both centre-of-mass energies, where the uncertainty is due to the limited size of the simulated samples.Here, m t represents the top quark mass used in the Monte Carlo generators, corresponding to that measured in direct reconstruction, rather than the pole mass.However, since this experimental dependence is small, differences between the two masses of up to 2 GeV have a negligible effect (<0.2 GeV) on the pole mass determination.A comparison of the theoretical and experimental curves shown in Fig. 7 therefore allows an unambiguous extraction of the top quark pole mass. The extraction is performed by maximising the following Bayesian likelihood as a function of the top quark pole mass Here, G(x|μ, ρ) represents a Gaussian probability density in the variable x with mean μ and standard deviation ρ. The first Gaussian term represents the experimental measurement σ tt with its dependence on m pole t and uncertainty ρ exp .The second Gaussian term represents the theoretical prediction given by Eq. ( 2) with its asymmetric uncertainty ρ ± theo obtained from the quadrature sum of PDF+α s and QCD scale uncertainties evaluated as discussed in Sect. 2. The likelihood in Eq. (3) was maximised separately for each PDF set and centre-of-mass energy to give the m pole t values shown in Table 6.A breakdown of the contributions to the total uncertainties is given for the CT10 PDF results in Table 7; it can be seen that the theoretical contributions are larger than those from the experimental measurement of σ tt .A single m pole t value was derived for each centre-of-mass energy by defining an asymmetric Gaussian theoretical probability density in Eq. ( 3) with mean equal to the CT10 prediction, and a ±1 standard deviation uncertainty envelope which encompasses the ±1 standard deviation uncertainties from each PDF set following the PDF4LHC prescription [8], giving: Considering only uncorrelated experimental uncertainties, the two values are consistent at the level of 1.7 standard deviations.The top pole mass was also extracted using a frequentist approach, evaluating the likelihood for each m pole t value as the Gaussian compatibility between the theoretically predicted and experimentally measured values, and fixing the theory uncertainties to those at m pole t = 172.5 GeV.The results differ from those of the Bayesian approach by at most 0.2 GeV.Finally, m pole t was extracted from the combined √ s = 7 TeV and √ s = 8 TeV dataset using the product of likelihoods (Eq.( 3)) for each centre-of-mass energy and accounting for correlations via nuisance parameters.The same set of experimental uncertainties was considered correlated as for the cross-section ratio measurement, and the uncertainty on σ theo tt was considered fully correlated between the two datasets.The resulting value using the envelope of all three considered PDF sets is m pole t = 172.9+2.5 −2.6 GeV and has only a slightly smaller uncertainty than the individual results at each centre-of-mass energy, due to the large correlations, particularly for the theoretical predictions.The results are shown in Fig. 
8, together with previous determinations using similar techniques from D0 [71,72] and CMS [73] (Fig. 8 compares the top quark pole mass values determined from this and previous cross-section measurements [71-73] with the average of top mass measurements from direct reconstruction [74]). All extracted values are consistent with the average of measurements from kinematic reconstruction of tt events of 173.34 ± 0.76 GeV [74], showing good compatibility of top quark masses extracted using very different techniques and assumptions.
Constraints on stop-pair production
Supersymmetry (SUSY) theories predict new bosonic partners for the Standard Model fermions and fermionic partners for the bosons. In the framework of a generic R-parity conserving minimal supersymmetric extension of the SM [75-79], SUSY particles are produced in pairs and the lightest supersymmetric particle is stable. If SUSY is realised in nature and responsible for the solution to the hierarchy problem, naturalness arguments suggest that the supersymmetric partners of the top quark (the top squarks, or stops) should have mass close to m_t in order to effectively cancel the top quark loop contributions to the Higgs mass [80,81]. In this scenario, the lighter top squark mass eigenstate t1 would be produced in pairs, and could decay via t1 → t χ_1^0, where χ_1^0, the lightest neutralino, is the lightest supersymmetric particle and is therefore stable. Stop-pair production could therefore give rise to tt χ_1^0 χ_1^0 intermediate states, appearing like tt production with additional missing transverse momentum carried away by the escaping neutralinos. The predicted cross-sections at √s = 8 TeV are about 40 pb for m_t1 = 175 GeV, falling to 20 pb for 200 GeV. If the top squark mass m_t1 is smaller than about 200 GeV, such events would look very similar to SM QCD tt production, making traditional searches exploiting kinematic differences very difficult, but producing a small excess in the measured tt cross-section, as discussed e.g. in Refs. [82,83].
The potential stop-pair signal yield was studied for top squark masses in the range 175-225 GeV and neutralino masses in the range 1 GeV < m_χ01 < m_t1 − m_t, using simulated samples generated with Herwig++ [84] with the CTEQ6L1 PDFs [32], and NLO+NLL production cross-sections [85-87]. The mixing matrices for the top squarks and the neutralinos were chosen such that the top quark produced in the t1 → t χ01 decay has a right-handed polarisation in 95 % of the decays. Due to the slightly more central |η| distribution of the leptons from the subsequent t → Wq, W → ℓν decay, the preselection efficiency ε_eμ for these events is typically 10-20 % higher than for SM QCD tt, increasing with m_t1. However, the fraction of preselected events with one or two b-tagged jets is very similar to the SM case. The effect of a small admixture of stop-pair production in addition to the SM tt production is therefore to increase the measured cross-section by R_t1t1 σ_t1t1, where R_t1t1 is the ratio of ε_eμ values for stop-pair and SM tt production, and σ_t1t1 is the stop-pair production cross-section.

Limits were set on stop-pair production by fitting the effective production cross-section R_t1t1 σ_t1t1, multiplied by a signal strength μ, to the difference between the measured cross-sections (σ_tt) and the theoretically predicted SM QCD production cross-sections (σ_tt^theo). The two datasets were fitted simultaneously, assuming values of σ_tt^theo = 177.3 +11.5 −12.0 pb for √s = 7 TeV and 252.9 +15.3 −16.3 pb for √s = 8 TeV, including the uncertainty due to a ±1 GeV variation in the top quark mass. The limits were determined using a profile likelihood ratio in the asymptotic limit [88], using nuisance parameters to account for correlated theoretical and experimental uncertainties. The observed and expected limits on μ at the 95 % confidence level (CL) were extracted using the CLs prescription [89] and are shown in Fig. 9.

Fig. 9 Expected and observed limits at 95 % CL on the signal strength μ as a function of m_t1, for pair-produced top squarks t1 decaying with 100 % branching ratio via t1 → t χ01 to predominantly right-handed top quarks, assuming m_χ01 = 1 GeV. The black dotted line shows the expected limit with ±1σ contours, taking into account all uncertainties except the theoretical cross-section uncertainties on the signal. The red solid line shows the observed limit, with dotted lines indicating the changes as the nominal signal cross-section is scaled up and down by its theoretical uncertainty.

Due to the rapidly decreasing stop-pair production cross-section with increasing m_t1, the analysis is most sensitive below 180 GeV. Adopting the convention of reducing the estimated SUSY production cross-section by one standard deviation of its theoretical uncertainty (15 %, coming from PDF and QCD scale uncertainties [90]), stop masses between the top mass threshold and 177 GeV are excluded, assuming a 100 % branching ratio for t1 → t χ01 and m_χ01 = 1 GeV. The limits from considering the √s = 7 TeV and √s = 8 TeV datasets separately are only slightly weaker, due to the large correlations in the systematic uncertainties between beam energies, particularly for the theoretical predictions. At each energy, they correspond to excluded stop-pair production cross-sections of 25-27 pb at 95 % CL. The combined cross-section limits depend only slightly on the neutralino mass, becoming, e.g., about 3 % weaker at m_t1 = 200 GeV for m_χ01 = 20 GeV.
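To get a feeling for the scale of these limits, a much-simplified Gaussian construction (not the profile-likelihood/CLs machinery actually used, and ignoring correlations and the mass dependence of the acceptance) can be built from the 8 TeV numbers quoted above:

```python
import numpy as np

# Measured 8 TeV cross-section and NNLO+NNLL SM prediction, with their
# (roughly symmetrised) uncertainties, in pb.
sigma_meas, err_meas = 242.4, 10.3
sigma_sm, err_sm = 252.9, 16.3

# Room left for an extra contribution Delta = mu * R * sigma_stop on top of
# the SM prediction, constrained by the measured minus predicted difference.
diff = sigma_meas - sigma_sm
err = np.hypot(err_meas, err_sm)

# One-sided 95% CL upper limit in the Gaussian approximation.
upper95 = max(diff + 1.645 * err, 0.0)
print(f"allowed extra cross-section: about {upper95:.0f} pb")
```

This crude estimate lands in the same ballpark as the 25-27 pb per-energy cross-section limits quoted above.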
However, they depend more strongly on the assumed top quark polarisation; in a scenario with m_t1 = 175 GeV and m_χ01 = 1 GeV, and squark mixing matrices chosen such that the top quarks are produced with full left-handed polarisation, the limits are 4 % weaker than in the predominantly right-handed case, rising to 14 % weaker for m_t1 = 200 GeV. In this scenario, top squarks with masses from m_t to 175 GeV can be excluded. Although the analysis has some sensitivity to three-body top squark decays of the form t1 → bW χ01 for m_t1 < m_t, the b-jets become softer than those from SM tt production, affecting the determination of ε_b. Therefore, no limits are set for scenarios with m_t1 < m_t.

Conclusions

The inclusive tt production cross-section has been measured at the LHC using the full ATLAS 2011-2012 pp collision data sample of 4.6 fb−1 at √s = 7 TeV and 20.3 fb−1 at √s = 8 TeV, in the dilepton tt → eμννbb decay channel. The numbers of opposite-sign eμ events with one and two b-tagged jets were counted, allowing a simultaneous determination of the tt cross-section σ_tt and the probability to reconstruct and b-tag a jet from a tt decay. Assuming a top quark mass of m_t = 172.5 GeV, the results are σ_tt = 182.9 ± 3.1 ± 4.2 ± 3.6 ± 3.3 pb (√s = 7 TeV) and σ_tt = 242.4 ± 1.7 ± 5.5 ± 7.5 ± 4.2 pb (√s = 8 TeV), where the four uncertainties arise from data statistics, experimental and theoretical systematic effects, knowledge of the integrated luminosity, and of the LHC beam energy, giving total uncertainties of 7.1 pb (3.9 %) and 10.3 pb (4.3 %) at √s = 7 TeV and √s = 8 TeV. The dependence of the results on the assumed value of m_t is dσ_tt/dm_t = −0.28 %/GeV, and the associated uncertainty is not included in the totals given above. The results are consistent with recent NNLO+NNLL QCD calculations, and have slightly smaller uncertainties than the theoretical predictions. The ratio of the two cross-sections, and measurements in fiducial ranges corresponding to the experimental acceptance, have also been reported.

The measured tt cross-sections have been used to determine the top quark pole mass via the dependence of the predicted cross-section on m_t^pole, giving a value of m_t^pole = 172.9 +2.5 −2.6 GeV, compatible with the mass measured from kinematic reconstruction of tt events.

The results have also been used to search for pair-produced supersymmetric top squarks decaying to top quarks and light neutralinos. Assuming a 100 % branching ratio for the decay t1 → t χ01, and the production of predominantly right-handed top quarks, top squark masses between the top quark mass and 177 GeV are excluded at 95 % CL.

The preselection efficiency ε_eμ and tagging correlation C_b were taken from tt event simulation; the relative dependences of the fitted cross-section on the background contributions N_1^bkg and N_2^bkg are (dσ_tt/dN_1^bkg)/(σ_tt/N_1^bkg) = −0.12 and (dσ_tt/dN_2^bkg)/(σ_tt/N_2^bkg) = −0.004. The fitted cross-sections are therefore most sensitive to the systematic uncertainties on N_1^bkg, whilst for the chosen b-tagging working point, the measurements of N_2 serve mainly to constrain ε_b. As discussed in Sect. 6.1, consistent results were also obtained at different b-tagging efficiency working points that induce greater sensitivity to the background estimate in the two b-tag sample.

Fig. 1 Distributions of the number of b-tagged jets in preselected opposite-sign eμ events in (a) √s = 7 TeV and (b) √s = 8 TeV data. The data are shown compared to the expectation from simulation, broken down into contributions from tt, Wt single top, Z+jets, dibosons, and events with misidentified electrons or muons, normalised to the same integrated luminosity as the data. The lower parts of the figure show the ratios of simulation to data, using various tt signal samples generated with Powheg+Pythia6 (PY), MC@NLO+Herwig (HW) and Alpgen+Herwig, with the cyan band indicating the statistical uncertainty.

Fig. 2 Distributions of (a) the number of jets, (b) the transverse momentum pT of the b-tagged jets, (c) the |η| of the electron, (d) the pT of the electron, (e) the |η| of the muon and (f) the pT of the muon, in events with an opposite-sign eμ pair and at least one b-tagged jet. The √s = 7 TeV data are compared to the expectation from simulation, broken down ...

Fig. 4 Distributions of electron and muon |η| and pT in same-sign eμ events at √s = 8 TeV with at least one b-tagged jet. The simulation prediction is normalised to the same integrated luminosity as the data, and broken down into contributions where both leptons are prompt, or ...

Fig. 5 Measured tt cross-section at √s = 8 TeV as a function of the b-tagged jet pT cut. The error bars show the uncorrelated part of the statistical uncertainty with respect to the baseline measurement with jet pT > 25 GeV.

Fig. 6 Measurements of the tt cross-section at √s = 7 TeV and √s = 8 TeV from this analysis (eμ b-tag) together with previous ATLAS results at √s = 7 TeV using the ee, μμ and eμ channels [61] and using a fit to jet multiplicities and missing transverse momentum in the eμ channel [62]. The uncertainties in √s due to the LHC beam energy uncertainty are displayed as horizontal error bars, and the vertical error bars do not include the corresponding cross-section uncertainties. The three √s = 7 TeV measurements are displaced horizontally slightly for clarity. The NNLO+NNLL prediction [6,7] described in Sect. 2 is also shown as a function of √s, for fixed m_t = 172.5 GeV and with the uncertainties from PDFs, α_s and QCD scale choices indicated by the green band.

Fig. 7 Predicted NNLO+NNLL tt production cross-sections at √s = 7 TeV and √s = 8 TeV as a function of m_t^pole.

Table 3 Detailed breakdown of the symmetrised relative statistical, systematic and total uncertainties on the measurements of the tt production cross-section σ_tt at √s = 7 TeV and √s = 8 TeV. Uncertainties quoted as '0.00' are smaller than 0.005, whilst '-' indicates the corresponding uncertainty is not applicable. The uncertainties on ε_eμ and C_b are also shown, with their relative signs indicated where relevant; they contribute with opposite signs to the uncertainties on σ_tt, which also include uncertainties from estimates of the background terms. The lower part of the table gives the systematic uncertainties that are different for the measurement of the fiducial cross-section σ_tt^fid, together with the total analysis systematic and total uncertainties on σ_tt^fid.

... assessed by comparing the baseline prediction from Alpgen+Herwig with that of Sherpa [40], including massive b and c quarks, and found to be about 20 %. The background from 125 GeV SM Higgs production in the gluon fusion, vector-boson fusion, and WH and ZH associated production modes, with H → WW and H → ττ, was evaluated to be smaller than the diboson modelling uncertainties, and was neglected. Z+jets extrapolation: the uncertainties on the extrapolation of the Z+jets background from Z → ee/μμ to Z → ττ events result from statistical uncertainties, from comparing the results from ee and μμ, which have different background compositions, and from considering the dependence of the scale factors on Z boson pT.

Table 4 Summary of the relative statistical, systematic and total uncertainties on the measurements of the tt production cross-section σ_tt at ...

Table 5 Fiducial cross-section measurement results at √s = 7 TeV and √s = 8 TeV, for different requirements on the minimum lepton pT and maximum lepton |η|, and with or without the inclusion of leptons from W → τ → ℓ decays. In each case, the first uncertainty is statistical, the second due to analysis systematic effects, the third due to the integrated luminosity and the fourth due to the LHC beam energy.

Table 6 Measurements of the top quark pole mass determined from the ...
15,358
2014-01-01T00:00:00.000
[ "Physics" ]
Hydrodynamic Performance Improvement of Double-Row Floating Breakwaters by Changing the Cross-Sectional Geometry

This paper aims to improve the hydrodynamic performance of double-row floating breakwaters (FBW) by changing the cross-sectional geometry at high wave periods. The ANSYS-AQWA software, a potential-based boundary element method (BEM), is employed for the present calculations. Rectangular moored pontoons in single- and double-row arrangements are selected, and the results for the wave transmission coefficient and response amplitude operator (RAO) are presented and compared. The numerical results showed good agreement with experimental data at different wavelengths, wave heights, and distances between the double-row FBWs. Then, the performance of FBWs with five shapes (rectangular, π-shaped, plus-shaped, triangular-shaped, and box-shaped) in terms of the wave transmission coefficient, RAO, and mooring line tension is presented and compared. The results showed that the plus-shaped FBW performs better in reducing wave transmission than the other shapes. In waves with long periods, the performance of the π-shaped, triangular-shaped, and box-shaped FBWs is reduced, and the rectangular FBW loses its efficiency. Overall, the plus-shaped FBW has preferable performance regarding RAO response, mooring tension, and wave transmission.

Introduction

Breakwaters are structures designed to protect coastal installations from the danger of waves and their impact. There are two types of breakwaters: floating and fixed. Although fixed breakwaters always provide higher protection performance than floating breakwaters (FBWs), the FBW is an affordable solution that can be used effectively in calm coastal conditions. In recent years, utilizing FBWs has received much attention. In cases of deep water, poor geotechnical conditions, severe sediment conditions, and high seabed slope, this type of breakwater is the best option. In the wave-structure interaction, the wave field around the structure is divided into three parts (reflected wave, damped wave due to turbulence, and transmitted wave). The amount of transmitted waves determines the performance of the FBW. A structure that can reflect more waves has better performance. In the analysis of FBWs, one of the essential parameters commonly analyzed is the wave transmission coefficient. This parameter is the ratio of the transmitted wave height to the incident wave height, which determines the FBW performance [1]. Several structural and hydrodynamic parameters affect the wave transmission coefficient and have been studied by many researchers. McCartney examined four types of floating breakwater and concluded that the moored pontoon breakwater had superior performance [2]. Sannasiraj et al. used a two-dimensional finite element model to study the FBW and concluded that the arrangement of the mooring does not affect the float performance [3]. Lee et al. proposed a method to obtain the response of a floating pontoon quay. They showed that the pontoon's movements were proportional to its size, draft, and mooring characteristics [4]. Wang and Sun made a new experimental model of a porous floating breakwater. This model was fabricated with large numbers of diamond-shaped blocks to reduce transmitted waves. They concluded that the porous floating breakwater could reduce the incident wave height [5]. Pena et al.
experimented with several models to calculate wave transmission coefficients and concluded that the width of the pontoon is a useful parameter for determining breakwater performance [6]. He et al. introduced a type of FBW with pneumatic chambers. They showed that adding pneumatic chambers to the FBW would improve the performance of the system [7]. Martin et al. provided a computational fluid dynamics (CFD) numerical model to analyze the effect of the mooring system on the FBW [8]. Cho analyzed a rectangular FBW with vertical porous plates and concluded that suitably chosen porosity plates help reduce the transmission coefficients [9]. Peng et al. investigated wave interactions with moored submerged FBWs using a numerical model [10]. All of these studies were done on single-row FBWs, and they showed that the width of the FBW affects its performance. Some studies have used double-row breakwaters to increase the performance of the structure. Ji et al. studied the hydrodynamic behavior of a double-row pontoon floating breakwater using physical and numerical models. They found that the new FBW has a better performance for high-period and large-amplitude waves [1]. Ji et al. did some experiments on single- and double-row breakwaters and concluded that, in general, the double-row breakwater system has better performance [24]. Ji et al., in an experimental study, analyzed the hydrodynamic behavior of a double-row rectangular pontoon FBW and concluded that double-row FBWs significantly reduce the transmission coefficients [25]. Hitherto, much research has been done on rectangular FBWs. Moreover, the excellent performance of the rectangular FBW has been proven for wave periods of less than 5 seconds [14,26]. The performance of FBWs at high wave periods is a problem that has been inadequately investigated. Researchers have also concluded that double-row FBWs are efficient in reducing the wave transmission coefficient [1,24,25]. However, using double-row breakwaters increases the costs, which undermines the cost-effectiveness of FBWs. The FBW shape has a high impact on its performance, since it determines the added mass, radiation damping, length, width, draft, and mass parameters. Therefore, the shape of the FBW can change its performance. This study investigates the hydrodynamic performance of double-row moored FBWs with different shapes at low wave periods compared with high wave periods using a numerical model. In fact, an attempt has been made to improve FBW performance at higher wave periods by changing its shape. In Section 2, the governing equations and the numerical method used in this study are described. In Section 3, the numerical model is validated against an experimental model under various conditions in terms of the wave transmission coefficients and RAO response motions. In Section 4, five different shapes of FBWs are modeled under a specific wave with different periods. Then their performance in terms of wave transmission coefficient, RAO responses, and mooring tension is presented and compared. Finally, the results of this study are presented in Section 5.

Theoretical Formulation

In analyzing hydrodynamic problems, the fluid is usually assumed to be Newtonian and incompressible. Such an assumption is acceptable for water. The fluid flow is then governed by a set of elliptic partial differential equations known as the Navier-Stokes (N-S) equations.
Due to the large dimensions of the structure, the viscosity of the fluid is negligible, so water is assumed to be inviscid everywhere. Such an assumption, together with the assumption of incompressibility, results in an ideal fluid, and the N-S equations reduce to the Euler equations, in which all viscous stresses are eliminated. Assuming that the flow is irrotational, the governing equations reduce to a linear partial differential equation called the Laplace equation. Such a flow is known as potential flow. The boundary element method (BEM) has the advantage of converting a domain integration problem into a surface integration problem, which may improve computational efficiency. However, BEM's application is most prevalent in solving the Laplace equation, where the volume-surface transformation, ensured by Green's theorem, is complete [27]. Laplace equation calculations have provided acceptable results for wave-structure interaction problems under the assumptions of incompressible fluid and irrotational flow [28]. Here, the purpose is the analysis of the effect of the wave environmental force on pontoon FBWs. In analyzing the force of water waves on marine structures that are large in proportion to the wavelength, it is necessary to consider the wave-structure interaction. The configuration of a floating pontoon breakwater interacting with a linear wave is shown in Figure 1, where the diffraction and radiation problems have been applied. In this study, the ANSYS-AQWA commercial software has been used for the hydrodynamic analysis of floating structures in the time and frequency domains, and the method and technique used by this application to solve problems are presented.

Governing Equations. In order to describe the fluid flow field around a floating structure, the velocity potential is defined as [29] Φ(X→, t) = A φ(X→) e^(−iωt), where A is the incident wave amplitude, ω is the wave frequency, t is the time, and X→ = (x, y, z) is the location relative to the fixed reference axes (FRA). Here, using the usual notation for floating rigid-body motions, three rotational and three translational motions of the body centre of gravity are incited by an incident wave with unit amplitude. The total potential φ(X→) can be considered as a sum of three components, incident wave (φ_I), diffracted wave (φ_D), and radiated wave (φ_R), all of which satisfy the Laplace equation; mathematically, φ(X→) = φ_I + φ_D + Σ_j x_j φ_Rj, where φ_I is the first-order incident wave potential with unit wave amplitude, φ_D is the corresponding diffraction wave potential, and φ_Rj is the radiation wave potential due to the jth motion with unit motion amplitude. The velocity potential function is Φ(X→, t), with time-independent term φ(X→). According to linear hydrodynamic theory for an incompressible, inviscid fluid, the irrotational fluid flow is described by the following conditions: (i) the Laplace equation in the fluid domain (Λ) [30]; (ii) the linear free surface condition (s_f) on z = 0; (iii) the body surface condition on the wetted hull; (iv) the seabed surface condition (s_z) at z = −h; and (v) the far-field radiation condition (s_∞). In this study, as described in the Introduction, the ANSYS-AQWA software is employed to solve the velocity potential, based on the potential-based BEM.
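For reference, the standard linearised potential-flow boundary-value problem implied by the list above can be written out as follows; these are generic textbook forms consistent with the stated conditions, not a verbatim copy of the paper's equations:

```latex
\nabla^{2}\varphi = 0 \quad \text{in the fluid domain } \Lambda,\\
\frac{\partial \varphi}{\partial z} - \frac{\omega^{2}}{g}\,\varphi = 0 \quad \text{on the free surface } s_f\ (z = 0),\\
\frac{\partial \varphi}{\partial n} \ \text{prescribed on the wetted hull surface } S_b,\\
\frac{\partial \varphi}{\partial z} = 0 \quad \text{on the seabed } s_z\ (z = -h),\\
\lim_{R \to \infty} \sqrt{kR}\left(\frac{\partial \varphi}{\partial R} - \mathrm{i}k\,\varphi\right) = 0 \quad \text{in the far field } (s_\infty).
```

On the hull, the radiation potentials satisfy ∂φ_Rj/∂n = −iω n_j, while the combined incident and diffracted potential has zero normal velocity.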
In addition to the boundary conditions mentioned in the previous section, a further condition involving a source distribution is satisfied in the fluid domain [31], where X→ ∈ Λ, ξ→ = (ξ, η, ζ) is the location of a source on the FBW wetted surface, and δ(X→ − ξ→) is the Dirac delta function. Based on this, Green's function can be written in terms of J_0, the Bessel function of the first kind of order zero, where k = 2π/L is the wavenumber, ω is the wave frequency, L is the wavelength, and g is the gravitational acceleration. Here, the velocity potential of the radiation and diffraction waves is defined as a Fredholm integral equation of the second kind via Green's theorem, and the fluid potential is expressed as an integral over the source distribution for X→ ∈ Λ ∪ S_b. In equation (14), using the hull surface boundary condition given by equation (6), the source strength over the mean wetted hull surface is determined.

Equation of Motion and RAOs. The solutions obtained for the diffraction and radiation problems can be combined with the equation of motion of the floating FBW system to analyze the dynamic responses of the structural system in both the time and frequency domains. In the frequency-domain equation of motion, M_a' and M_s are the total added mass and total structural mass matrices, respectively; the coefficient C' is the hydrodynamic damping matrix; K_a and K_hys' are the additional structural stiffness and assembled hydrostatic stiffness matrices, respectively; and F_jm' represents the total Froude-Krylov and diffraction forces and moments, where m corresponds to the structure and j pertains to the motion modes. In the time-domain equation of motion, M is the added mass in the mass matrix and C is the hydrodynamic damping in the damping matrix, both of which are frequency-dependent, and K is the total stiffness matrix. Here, for a general external force F(t), the equation of motion in the frequency domain cannot be converted directly into the time-domain equation. Therefore, by employing a convolution integral form, the equation of motion can be written [32] in terms of A_∞, the added mass matrix at infinite frequency; c, the damping matrix, including the contribution of the radiation damping; R, the velocity impulse function matrix; and K, the total stiffness matrix. Alternatively, the acceleration impulse function matrix can be used in the equation of motion; it can be determined from B(ω), the hydrodynamic damping matrix, and A(ω), the added mass matrix. By inserting the first- and second-order wave loads into equation (19), the equation of motion is obtained, where K is the total stiffness matrix including the mooring stiffness and the linear hydrostatics, F_t(t) is the mooring and articulation force, F^(1)(t) is the first-order wave excitation force and moment, and F^(2)(t) is the second-order wave excitation force.

The response amplitude operator (RAO) describes the motion of a floating structure in six degrees of freedom (surge, sway, heave, roll, pitch, and yaw) due to the hydrodynamic wave force. RAOs are utilized as input data for calculations to determine the displacements, accelerations, and velocities at any given location on a marine floating structure.
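Written out explicitly, the frequency-domain and convolution (time-domain) forms of the equation of motion described above take the standard shapes below; this is a generic restatement using the symbols defined in the text, not a verbatim copy of the paper's equations (16)-(19):

```latex
\left[-\omega^{2}\,(M_s + M_a') - \mathrm{i}\omega\,C' + (K_{hys}' + K_a)\right] X(\omega) = F_{jm}'(\omega),\\
(M + A_{\infty})\,\ddot{x}(t) + c\,\dot{x}(t) + \int_{0}^{t} R(t-\tau)\,\dot{x}(\tau)\,\mathrm{d}\tau + K\,x(t) = F(t).
```

The RAO at each frequency then follows by dividing the resulting response amplitude by the incident wave amplitude (or by the wave slope for the rotational modes), as defined next.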
In general, the RAO is calculated as the ratio of the response amplitude of the FBW (X_j) to the wave amplitude (A_i) for translational motions, and as the ratio of the response amplitude of the FBW to the wave slope (α_i) for rotational motions, where α_i is the wave slope, A_i is the wave amplitude, and X_j is the response amplitude of the FBW in rotational (θ_(j−3)) and displacement (u_j) modes. ANSYS-AQWA analyzes linear algebraic equations to determine the harmonic response of the body to regular waves. These response characteristics are commonly referred to as RAOs and are dependent on wave amplitude.

Mooring System. In order to analyze the dynamics of the cable motion, many factors should be considered, such as the effects of cable mass, drag forces, inline elastic tension, and bending moment. The forces applied to the cable vary with time and, generally, the cables behave nonlinearly. The simulation of cable dynamics requires discretizing the cable along its length and assembling the mass and applied forces. Each mooring line is discretized as a series of Morison-type elements subjected to various external forces, as shown in Figure 2. In the general equations for the force and moment acting on the cable, R→ is the position vector of the first node of the cable element, T→ is the tension force vector at the first node of the element, M→ is the bending moment vector at the first node of the element, V→ is the shear force vector at the first node of the element, F→_h represents the external hydrodynamic loading vector per unit length, q→ is the distributed moment loading per unit length, m is the structural mass per unit length, w→ is the element weight per unit length, D_e is the diameter of the element, and Δs_e is the length of the element. The bending moment and tension are related to the bending stiffness and the axial stiffness of the cable material: M″ is the bending moment of the cable, T″ is the tension of the cable, ε is the axial strain of the element, EA is the axial stiffness of the cable, and EI is the bending stiffness of the cable.

Wave Transmission and Reflection Coefficients. The radiation of wave energy from the FBW causes the wave to pass through the structure. This energy transfer occurs in three ways, that is, waves passing over the structure, waves passing under the structure, and waves created by the motion of the structure, as shown in Figure 3. FBWs are designed to reduce wave transmission, so, as explained in the Introduction, the primary and most effective parameter in determining the performance of FBWs is the wave transmission coefficient. The total wave energy per unit length of the FBW is directly proportional to the square of the wave height [33], so that H_i^2 = H_t^2 + H_r^2 (25), where H_i is the incident wave height, H_t is the transmitted wave height, and H_r is the reflected wave height. The transmission coefficient is defined by C_t = H_t/H_i, while the reflection coefficient is defined by C_r = H_r/H_i. Hence, by substituting the defined parameters into equation (25), we can write C_t^2 + C_r^2 = 1 (26). In a real situation, due to the presence of viscous dissipation, equation (26) can be written as C_t^2 + C_r^2 + C_d^2 = 1 (27), where C_d is the dissipation coefficient accounting for viscous effects and the resulting energy losses, such as vortex shedding, friction, and wave breaking. In this study, the AQWA-GS module of ANSYS-AQWA is used to calculate the wave transmission coefficient.
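As a minimal post-processing sketch of these coefficients (the amplitude values are hypothetical stand-ins for values probed from the computed wave field; this is not ANSYS-AQWA code):

```python
import math

# Hypothetical wave amplitudes, in metres, read off the computed wave field.
A_i = 0.50   # incident amplitude
A_t = 0.22   # transmitted amplitude, probed behind the rear pontoon
A_r = 0.31   # reflected amplitude, probed in front of the structure

C_t = A_t / A_i                                    # transmission coefficient
C_r = A_r / A_i                                    # reflection coefficient
C_d = math.sqrt(max(1.0 - C_t**2 - C_r**2, 0.0))   # dissipation, from equation (27)
print(f"Ct = {C_t:.2f}, Cr = {C_r:.2f}, Cd = {C_d:.2f}")
```

The same ratio of probed amplitudes is what the AQWA-GS post-processing described next reduces to.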
In this module, the results of the calculations can be displayed in different ways. After applying a wave load with specified characteristics, the wave amplitude can be calculated at different points of the domain. Then, by calculating the ratio of the wave amplitude behind the rear FBW to the incident wave amplitude, the wave transmission coefficient is obtained as C_t = A_t/A_i = H_t/H_i, where A_t is the transmitted wave amplitude, H_t is the transmitted wave height, A_i is the incident wave amplitude, and H_i is the incident wave height.

Validation of the Numerical Model

In this section, the numerical model is verified against an experimental model. The experimental model selected for the validation is the study of Ji et al., in which experiments were performed on single- and double-row breakwaters to determine their performance [24] (Figure 5). In Figures 6-8, the results of the numerical and experimental analyses are presented. Figures 6 and 7 show the wave transmission coefficients (C_t) versus L/B and H/B for different S/B. In these results, L is the wavelength, B is the width of the FBW, H is the wave height, and S is the distance between the FBWs. As shown in these figures, the numerical results are in good agreement with the experimental data. The effect of H/B on C_t is not significant, while L/B does affect C_t. Figure 8 shows the surge, heave, and pitch RAOs of the double-row FBWs and the single-row FBW against H/B for different S/B. It should be mentioned that the pitch RAO is defined in degrees per metre. For all cases, the wave period is 1.1 s, and the wave heights are 0.1 m and 0.15 m. As observed in the results, there is relatively good agreement between the numerical results and the experimental data. The wave height does not significantly affect the wave transmission, while an increase in wavelength causes a significant rise in the wave transmission coefficient. Regarding the RAO, increasing the ratio H/B reduces all of the surge, heave, and pitch motions. The effect of S/B on the heave RAO is minimal, but the surge and pitch RAOs can be affected. It is pointed out that the double-row FBW is better than the single-row FBW regarding wave transmission. The reason is that the turbulence caused by the interaction between the FBWs and the waves diffracts more wave energy. Another reason is that the FBW can be considered as a roam pool.

Five Different Shapes of Double-Row FBWs. Five different shapes of double-row FBWs are selected for checking the hydrodynamic performance in terms of wave transmission, as well as the RAO of three motions in the time and frequency domains. The shapes were chosen based on the rectangular shape: without increasing the volume compared to the rectangular FBW (in fact with less volume), the draft of the FBW was increased in different ways. Table 2 shows the main dimensions of the five shapes, the moments of inertia, the mooring properties, and the environmental constants. The box-shaped FBW is more massive than the others and also has a larger moment of inertia, while the rectangular shape has less mass and a smaller moment of inertia. The main dimensions of all shapes (length, width, draft, and volume) are the same except for the rectangular shape. In fact, the shapes are created by modifying the rectangle with an increase in draft and a decrease in volume to improve the performance of the FBWs. Figure 9 shows the five different shapes of the FBWs; dimensions are also given for all shapes in the same figure.
The mooring properties, which are equal for all shapes, are presented in Table 2. Given that the front FBW is designed to increase the performance of the rear FBW, all calculations of the wave transmission coefficient, RAO, and mooring forces are applied to the rear FBW.

Wave Transmission Coefficient. As stated in the Introduction, the wave transmission coefficient (C_t) is an essential parameter of FBW hydrodynamic performance: a lower value of C_t indicates better performance. There have been many studies on FBWs at low wave periods, and an increase in the wave period suddenly causes a reduction in FBW performance. This section examines the effect of the FBW geometry on its performance at high periods. The wave transmission coefficient values for all shapes at periods of 4 to 16 s are shown in Figure 10. In that figure, C_t values are compared at different wave periods and wave heights for all configuration types. The results show that the wave height has a small effect on C_t. As shown in these figures, three zones are indicated (I, II, and III). Zone I shows the FBW performance at low wave periods (less than 4.6 s); all of the FBWs perform well, with low wave transmission (ordinary performance). Zone II shows the inefficiency of the FBWs: in this area, C_t is greater than one, which means that the transmitted wave is at least as large as the incident wave, and the FBW is of no use (loss of usability). In this zone, in addition to the incident wave passing over the FBW, some extra wave energy due to the structural motions is sometimes transmitted, which has an adverse effect. Zone III represents acceptable performance of the FBWs at high wave periods; achieving excellent FBW performance in this range of periods was the aim of this study (great performance). The rectangular FBW can be used for wave periods of up to 4 s and does not work at higher periods. For the rectangular FBW, the significant increase of the wave transmission coefficient at the 5 s period means that the transmitted wave height becomes larger than the incident wave height after the incident wave hits the FBW. The π-shaped, box-shaped, plus-shaped, and triangular FBWs give better performance than the rectangular FBW, and they have almost the same performance as one another; however, the plus-shaped FBW shows the most stable behaviour. In order to illustrate the wave pattern around the FBW, a wave with 1 m amplitude and periods of 4 and 10 s is applied to the FBWs. The 4 and 10 s periods were selected to show the wave amplitude variations after hitting the FBW at both low and high wave periods. Figures 11-13 show the wave pattern contours around all five shapes; the contours show the amplitudes of the waves around the FBW. As shown in these figures, to check the performance of the structure, a point 20 m behind the rear FBW (point C) was selected, and the wave amplitude there was used to calculate C_t. In Figure 11, the rectangular FBW contours show that, at the period of 4 s, the FBW works well, and the wave amplitude behind the rear FBW (the transmitted wave amplitude) is significantly reduced compared to the incident wave amplitude. But at the period of 10 s, as shown in Figure 12, the contours show that the incident wave amplitude (1 m) and the transmitted wave amplitude are equal, and the FBW has lost its performance.
Figure 13 shows the wave amplitude contours around the π-shaped, box-shaped, plus-shaped, and triangular-shaped FBWs for the two wave periods of 4 s and 10 s. In this paper, in order to reduce the volume of results, these contours are not shown in as much detail as in Figures 11 and 12, but the results can easily be understood by generalising from those figures. As shown in Figure 13, these FBWs have maintained their performance at the 10 s period. The results of the rectangular FBW show that the RAO increases significantly at the period of 5 s. For heave motion, the increase in RAO reduces slightly but then follows a steady process. For pitch motion, this increase at the period of 5 s reduces until it reaches the desired process. In the wave transmission analysis, the results of the rectangular FBW showed a significant increase in wave transmission at the period of 5 s. Also, at higher wave periods the wave transmission is high, and the rectangular FBW loses its performance. The reason for the increase in wave transmission in the rectangular FBW could be the increase in FBW oscillations shown in Figures 15 and 16; in effect, the wave energy is increased by the floating breakwater motion. Given the equal conditions for all FBWs, this can be attributed to the geometry and displacement of the FBWs: the geometry and displacement change the inertial term (added mass plus structural mass) of the equation of motion of the FBWs relative to each other. The frequency-domain results show that the plus-shaped FBW has a stable response in all motions studied for all periods. The results also show that, at the period of 4 s, the triangular FBW in all three motion responses and the box-shaped FBW in pitch have larger amplitudes than the rectangular FBW. Some irregular motions are observed for the π-shaped, triangular, and box-shaped FBWs at periods of 8 and 16 s, which may be due to instability at some periods. Based on the results in both the frequency and time domains, the plus-shaped FBW has a lower amplitude (RAO) and a more efficient and stable response at all periods. Figure 20 shows the cable tension force histories for the different shapes of FBWs at three periods (4, 8, and 16 s). In this study, the mooring tension was calculated for the first cable of the rear FBW (Figure 2). The analyses are presented for a duration of 100 s with a time step of 0.001 s, and the incident wave height was 1 m. The results show that the variation of geometry and the increase in wave period do not have a significant effect on the tension of the cables. At the period of 4 s, the cable tension of the rectangular FBW is higher than those of the other FBWs. Over the whole time, tension force oscillations were observed for all FBWs except the plus-shaped FBW, which showed good stability.

Conclusion

In this study, a numerical model was used to analyze the hydrodynamic performance of moored double-row FBWs with different shapes (rectangular, plus-shaped, π-shaped, triangular, and box-shaped). First, the numerical results for the wave transmission coefficient and RAO responses were compared with experimental data. Then, five shapes of FBWs were selected to analyze the wave transmission coefficient, RAO, and mooring line tension.
Based on the numerical results, the following conclusions can be drawn:
(i) The wave transmission coefficient and RAO response of the rectangular shape are compared with available experimental data, and the results showed good agreement between them under various wavelengths, wave heights, and distances of the double-row FBW.
(ii) Among the five FBW configurations, the π-shaped, box-shaped, and plus-shaped FBWs have better performance regarding the wave transmission coefficient at high wave periods. Although the rectangular FBW has a larger volume, it has an unfavorable performance.
(iii) Comparisons of motion responses showed that the rectangular FBW has a significant RAO response at all periods, while the plus-shaped FBW has less RAO.
(iv) The results of the mooring tension showed that the variations of geometry and the increase in the wave period do not have a significant effect on the tension of the cables, and the plus-shaped FBW has excellent stability in cable tension.
(v) In general, the plus-shaped FBW has better performance regarding RAO response, mooring tension, and wave transmission.

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that they have no conflicts of interest.
6,608
2021-06-12T00:00:00.000
[ "Engineering", "Environmental Science" ]
THE RISK OF DEVELOPMENT OF LITHUANIAN DERIVATIVES MARKETS Derivative financial instruments play a very important role in financial markets, but they are seen as rather contradictory and their impact on financial markets and the stability of these markets has not been comprehensively examined. Therefore, the aim of this article is to systematise the potential risks of derivatives in the context of the past global financial crisis, and the recent situation in Lithuania. In particular, growing international tension and deteriorating economic situation, make it necessary to re-analyse the recent crisis, its causes and consequences. The 2007–2008 global financial crisis revealed the challenges and risks of derivatives and showed the tremendous impact that their imprudent use may have on the stability of a financial system. The Lithuanian economy recently joined the euro, but its macroeconomic fundamentals show certain risks. Infrastructures of the derivatives market, liquidity and an adequate supervisory framework are necessary to maintain stability. Introduction Over the last decades, the global securities market has advanced considerably, and new financial instruments have emerged thus reducing the dependence of countries on the banking sector and also meeting the need for financial diversification (Hull, 2012;Blanco & Wehrheim, 2017;Carroll et al., 2017). The development of the derivatives market has been particularly prominent (Brigham & Davies, 2016;Geyer-Klingeberg et al., 2018). In order to distribute risk among countries, derivatives are used extensively and increasingly in risk management (Kosowski & Neftci, 2015;Bardoscia et al., 2019;Sakurai & Kurosaki, 2020). In 2007, at the onset of the U.S. financial crisis, which in 2008 gained enormous momentum not only in the USA, but spread around the world directly affecting Europe and indirectly spreading to emerging markets (Kazi et al., 2013;Lel, 2014;Mayordomo et al., 2014). The crisis highlighted the risks of derivatives and the ways of manifestation of such risks (Obstfeld & Rogoff, 2009;Hentschel & Smith, 2020). It is important not only for banks, but also for the economy as a whole to consider how the growing use of derivatives affects the stability of financial markets (Gródek-Szostak et al., 2019;Sarveswara Reddy & Sathish, 2020), what were the causes and consequences of the 2007-2008 global financial crisis (Bank for International Settlements [BIS], 2014; Oldani, 2008;Foster & Magdoff, 2009;Jacopo et al., 2009;Borger, 2020). Scientific literature offers a diverse analysis of derivatives which is nevertheless insufficient not only to comprehensively consider the positive impact of these instruments on financial markets, but also to reveal the potential threats and risks of these instruments and how these risks may manifest themselves (Hoa et al., 2013;Vo et al., 2019a). Therefore, the impact of derivatives on financial markets and associated risks remain a topical research issue. The aim of this paper is to analyse and systematise the advantages of derivatives in the context of recent decades, to analyse the dynamics of derivatives in Lithuania during 2004-2016, as well as to forecast their future trends. The object of the research is derivatives and related risks, their development (progress). Research methods. The paper employs the methods of systemic analysis of scientific literature, statistical analysis, logical comparative analysis and generalisation. 
The structure of the paper consists of an introduction, five explanatory parts and conclusions. The first part provides an overview of the development of the Lithuanian economy. In the second part, the analysis of derivatives and its structure is performed. The third part presents the riskiness of the derivatives market. The next section presents research on the situation in Lithuanian derivative markets. The last section presents a forecast of derivatives transactions and currency pairs. Overview of the development of the Lithuanian economy In order to overview the development of the Lithuanian economy from the perspective of growing international tension and deteriorating economic situation, this paper analyses several key economic indicators revealing the context of the progress of derivatives (Burns & Tobin, 2016;Lietuvos bankas [LB], 2016, 2019bBurns et al., 2019). In this way, it can be stated that global economic activity has recently remained at quite high levels, but its development in different regions is becoming less uniform (Vo et al., 2019b). Foreign trade development declined in developed countries, namely, in the euro area, Japan, and some other countries. It is in this group of countries that in 2017 economic activity jumped the most thus boosting global economic development, however most macroeconomic indicators in these countries have recently become less strong, with slower growth in manufacturing, imports and exports and decreasing confidence indicators (LB, 2016(LB, , 2018(LB, , 2019a. Uncertainty over the prospects of international trade is becoming increasingly important for such trade (Barron & Hultén, 2011;Vo et al., 2019b). The direct effect of the introduced trade restrictions is limited, as these restrictions apply only to a relatively small part of global trade. A much greater impact both on trade flows and economic activity can be attributed to the risk of increasing trade tensions and growing distance from achievements in the area of free trade. Tighter restrictions on trade would increase costs for businesses, reduce the purchasing power of the population, which could affect household consumption, investment, and labour market indicators. A slower growth of demand in trading partners affects Lithuanian exports ( Figure 1). The exports of goods of Lithuanian origin are increasing at slower pace than last year. Such a slowdown in growth is mostly attributed to a weaker growth in demand in EU countries. The growth of re-exports has almost completely waned. Russian imports, having increased considerably last year, significantly boosted Lithuania's re-exports to this country, while this year, as the growth of Russian imports slowed down, re-exports almost stopped increasing. Inflation, as a long-term increase in the general level of prices, has a major influence on the processes taking place in the country. As prices rise, the purchasing power of money decreases. It is argued that inflation rates can be boosted by the rate of GDP growth, however higher rates of inflation can lead to economic decline (Wang, 2017). The overall annual inflation rate remains on a downward trend. Fluctuations in inflation are mainly influenced by prices linked with global markets of raw materials. With the rapid growth of the global economy, rising demand has a stimulating effect on oil prices, which is also maintained by supply constraints on the countries that produce this raw material. 
Rising fuel prices currently increase overall inflation rates more than prices of other commodity and service groups. Food prices are also pushing up inflation rates, though less than predicted. As stocks have been accumulated and supply is sufficient, most prices of food raw materials, other than grain crops, are falling on world markets. For this reason, food prices are increasing slower also for consumers. Net inflation, including prices of services and industrial goods, has also fallen slightly. Disregarding the mentioned price effects, net inflation remains fairly stable as it continues to be driven by rising labour costs and domestic demand. An overview of Lithuania's GDP during 2008-2018 (see Figure 3) shows that constant fluctuations of GDP. In 2008, it amounted to 47.85 billion U.S. dollars, then dropped in 2010 to 37.12 billion U.S. dollars, however later started to rise again and in 2014 reached 48.52 billion U.S. dollars (LB, 2019a). The macroeconomic indicators under analysis influence the Lithuanian market, including changes in derivatives, their development and the forms and tendencies of their risk manifestations. After reviewing the development of the Lithuanian economy in the context of growing international tensions and increasingly complicated economic situation, the author proceeds to examine the development of derivatives. What are derivatives? Derivatives are spreading rapidly around the world due to globalisation and financial integration (David, 2009;Bae et al., 2017;Inekwe, 2018;Vo et al., 2019aVo et al., , 2019b. Especially for young growing markets, it is a matter of importance to identify these instruments and the risks associated with them (Bartram et al., 2009;Foster & Magdoff, 2009;Hentschel & Smith, 2020). It is therefore important to systematise and distinguish the most important forms of risk manifestations in relation to derivatives in order to identify and properly manage such risks (Vo et al., 2019a;Sakurai & Kurosaki, 2020). Derivatives can be defined as instruments for transferring financial risk to a third party when the title to the underlying is not transferable (Lietuvos Respublikos Seimas [LRS], 2007; Bezzina & Grima, 2012;Bae et al., 2017). Typically, derivatives improve the distribution of risk within a financial system. There are two ways to do this: firstly, derivatives make risk management more effective and flexible, especially in banks, and secondly, derivatives help to more effectively distribute individual risks and reduce the overall economic risk associated with them (Bingham & Rüdiger, 2013;Bae et al., 2017;Geyer-Klingeberg et al., 2018). Derivatives may be used not only for hedging purposes but also for speculative or arbitration purposes Bingham & Rüdiger, 2013;Bae et al., 2017). An analysis of derivatives shows that their use is usually referred to as an advantage in improving certain factors: -hedging. These are instruments for banks and other institutions to protect themselves against risks related to certain circumstances (Crotty, 2009;Bingham & Rüdiger, 2013;Brigham & Davies, 2016;Hentschel & Smith, 2020). The costs of hedging are lower compared to syndication of loans. Derivative financial instruments may have various modifications related to different risk profiles. If such techniques are widely used, they are beneficial for the entire system; -liquidity. Transactions in derivatives increase the liquidity of the banking sector. 
As bank risk exposure is limited (and passed on to other parties, such as insurance companies or pension funds), banks can lend more funds to different enterprises (Jankensgard, 2013;Wang, 2017;Höing & Kunstein, 2018;Bardoscia et al., 2019); -stability of the financial sector. By enhancing risk spreading both in the national economy and globally, derivatives improve the stability of not only banks, but also of the financial system itself Vo et al., 2019b). Risk can be passed from high-risk sectors on to those sectors that can take it, thus distributing it among hedge funds, investment funds or insurance companies. In this way, economic shocks, such as economic slowdown or crises in certain sectors, may be easier to survive (Bingham & Rüdiger, 2013;Braendle, 2018). -market information on risk. The value of derivative financial instruments performs a valuable information function. Their valuation provides better and timely information about an enterprise's financial position compared to credit ratings. Although credit ratings are most widespread as the indicators of stability and reliability published by credit rating agencies, and despite their considerable importance for markets, ratings undoubtedly have weaknesses (Melvin & Taylor, 2009;Murphy, 2013;Swedbank, 2017;LB, 2018;SEB bankas, 2018;Sarveswara Reddy & Sathish, 2020). However, beside advantages, the use and development of derivatives can cause certain problems. The following shortcomings are highlighted: -systemic risk in the derivatives market. Many investors, in particular hedge funds, are hedged by means of derivative financial instruments (Rutkauskas et al., 2008;Bardoscia et al., 2019;Sakurai & Kurosaki, 2020). Relatively minor changes in this market can cause huge problems throughout the system, in particular, by causing liquidity problems in financial markets. Although derivatives may reduce systemic risk, the opposite can also happen (Reddy et al., 2014;Garskaite-Milvydiene & Burksaitiene, 2016;Geyer-Klingeberg et al., 2018). This is what happened in 2008, when due to the high value of derivatives the U.S. crisis spread to and struck international financial markets; -fairly high market concentration. When there is a high degree of concentration in the market, this constitutes an obstacle to the economically optimal distribution of risk, even if it does not directly threaten the stability of financial markets. The high degree of concentration in the derivatives market is very common for the U.S., which means that there are few players on the market, but they are large. If one of these large players went bankrupt, this would lead to difficulties also for other market participants, and at the same time transaction costs would increase (Melvin & Taylor, 2009;McKibbin & Stoecke, 2010;Marshall et al., 2013;Hentschel & Smith, 2020); -price distortions. Although there are no explicit indications that derivatives may be systemically mispriced, the lack of experience of new participants entering the market may lead to such assumptions. Systemic mispricing may lead to misallocation of resources (Gay et al., 2011;Gródek-Szostak et al., 2019). The main danger is that market participants may underestimate the actual risk and assume more risk than would be desirable for the whole economy, which would lead to misallocation of resources among market participants. Mispricing can also give wrong information to other market participants about events (Naiker et al., 2013;Wang, 2017;Su et al., 2018); -reduced role of banking supervision. 
Counterparties to derivatives transactions must assume the role of bank control, as appropriate, in relation to the risks involved. The hedge funds selling derivative financial instruments may act as active controllers of the enterprises which are counterparties to the derivatives transactions (Begg, 2009;Bingham & Rüdiger, 2013;Li & Marinc, 2014;Quaglia, 2013;Braendle, 2018); -insufficient regulation of the derivatives market. The derivatives market is not very transparent. Derivatives themselves are rather non-regulated, documentation practices need to be improved. As a result, the derivatives market is not sufficiently clear about the risk assumed and the nature and source of this risk. Such uncertainty has negative consequences. The main players in this market can influence the regulatory rules being developed, which may harm the interests of other countries (Oldani, 2018). Although standardisation has reduced transaction costs, legislative interventions in this area still need to be improved, and there remain documentation shortcomings (Donohoe, 2015;Müller et al., 2015;Braendle, 2018;Nedelchev, 2018;Hentschel & Smith, 2020). Like most or even all financial assets, derivatives are also risky. Investors must assess the risk they take and at the same time identify the risks arising from the derivatives that they want to invest in. The risk of investing in derivatives differs from the usual risk of investing directly in the assets underlying the derivatives (Carruthers, 2013;Sakurai & Kurosaki, 2020). The key difference is that investing in assets can, in the worst-case scenario, result in the loss of all the invested assets, while investing in derivatives can lead to both the loss of invested assets and the taking of additional financial obligations. The following key risks associated with derivatives are distinguished (Silvo et al., 2012;Akbar et al., 2013;Thapa et al., 2016;Parlapiano et al., 2017;Shil & Das, 2017): -risk of a change in value. The value of derivatives depends both on the price of the underlying asset and residual maturity. Therefore, it has been found that a change in the price of a derivative would be greater than a change in the price of the underlying asset if the price of the underlying asset changed. This is due to the so-called leverage effect. The biggest changes in the prices of derivatives occur when there is little time left to maturity (Geyer-Klingeberg et al., 2018;Su et al., 2018); -risk of losing more than has been invested. Some derivatives are, due to their contractual terms, classified as instruments with unlimited risk (future, forward, other complex derivatives) that, in the event of a failure, may create additional obligations for the buyer (Aysun & Guldi, 2011;Silvo et al., 2012;Hentschel & Smith, 2020). According to risk exposure, derivatives are classified as high-risk instruments, when invested assets are at risk, and very high-risk instruments, where the risk is higher than the invested assets. Riskiness of the derivatives market The recent situation, viewed from the perspective of growing international tension, makes it necessary to re-examine the past crisis, its causes and consequences (Reddy et al., 2014;Garskaite-Milvydiene & Burksaitiene, 2016). It is recalled that the 2007-2008 global financial crisis was distinguished by its rapid spread and strength multiplied by the use of derivatives Chang et al., 2018). The scale and speed of this crisis were considerable for a number of reasons. 
Firstly, these asset backed securities have the market values that fell instantly in the wake of the crisis (Mayordomo et al., 2014;Müller et al., 2015). Secondly, the emergence of this crisis was caused by a sharp increase in financial leverage, both at the level of households and that of financial institutions, which was further strengthened by the rapid growth of credit derivatives, although these measures were specifically designed to hedge the risk of such leverage increase. These reasons led to a rapid decline in asset values, financial leverage and asset sales volumes (Melvin & Taylor, 2009;McKibbin & Stoeckel, 2010). Due to globalisation and financial integration, the crisis quickly spread outside the USA both among countries and financial institutions, namely, banks, pension funds, insurance companies, etc. (Kazi et al., 2013;Lel, 2014). Yet another reason was related to credit ratings (Donohoe, 2015). Too much trust was put in credit agencies' ratings. In the case of derivative securities, a large share of second-rate and risky assets was converted into AAA-rated assets, which seemed to be a safe investment. In this way, a huge risk was assumed. This led to significant changes in the credit rating industry. The increased competition arising in the credit rating industry could solve certain problems (Melvin & Taylor, 2009;McKibbin & Stoeckel, 2010;Bardoscia, et al., 2019). Thus, derivatives had a rather negative impact on the stability of the financial system as a whole, although derivatives should help to maintain the stability of the financial system under certain conditions. Firstly, risk must be carefully assessed and estimated; secondly, risk must be properly managed; thirdly, systemic liquidity must be ensured (Hoa et al., 2013;Reddy et al., 2014;Eilifsen & Quick, 2018;Inekwe, 2018). Thus, the global financial crisis has highlighted the shortcomings of the market for conventional instruments and derivatives. As it has become apparent that there is a lack of sufficient information, sufficient risk monitoring, sufficient liquidity in financial markets, and market infrastructure is weak and the whole system is dynamic (where the uniform behaviour of individual market participants causes a wave of change across the system), there has emerged the need to reform this market with a view to reducing the riskiness of the market for conventional instruments and derivatives (Reddy et al., 2014;Garskaite-Milvydiene & Burksaitiene, 2016;Su et al., 2018). In order to eliminate the shortcomings, firstly, attempts were made to ensure transparency in the derivatives market and the intervention of certain government institutions was needed in order to regulate this market for financial instruments, and secondly, market infrastructure had to be improved (Murphy, 2013;Reddy et al., 2014;Müller et al., 2015). The aim was for the majority of the instruments to be traded on stock exchanges. This helped to control the risks associated with counterparties as well as operational risk (Iba & Aranha, 2012;Kosowski & Neftci, 2015). Market participants had to register derivatives transactions, and this information also had to be made available to other market participants. Moreover, information on historical prices of instruments in certain segments of this market had to be provided centrally (Müller et al., 2015;Vo et al., 2019b;Sarveswara Reddy & Sathish, 2020). 
Thus, in order to identify and properly manage the risks of the derivatives market (McKibbin & Stoeckel, 2010;Inekwe, 2018;Bardoscia et al., 2019;Hentschel & Smith, 2020), as these instruments are spreading into new markets that are not yet well-developed, and in addressing issues in existing markets, an analysis of the mentioned potential prudential measures and ways to address market problems has been undertaken following the financial crisis and facing a potential new crisis. Analysis of the situation in Lithuanian derivative markets In 2016, the Bank of Lithuania once again participated in preparing a survey of global foreign exchange and derivatives markets, which is drawn up every three years (Swedbank, 2017;LB, 2016, 2018;SEB bankas, 2018). It is prepared by the Bank for International Settlements (BIS, 2018a, 2018b) together with the central banks and monetary authorities of 52 countries. The Bank of Lithuania has participated in the preparation of such surveys since 2004, with the banks and branches of foreign banks actively operating in local and international foreign exchange and derivatives markets being selected for this purpose (LRS, 2007;LB, 2016, 2018). Since 1996, the Bank for International Settlements has periodically provided financial markets with statistical information on the derivatives market, its size and structure. The global survey conducted in 2016 featured data from more than 1,200 key market participants worldwide concerning their conventional foreign exchange transactions (spot, forwards and swaps) and OTC derivatives activity in April (BIS, 2018a, 2018b). The survey conducted in Lithuania covered six banks and branches of foreign banks whose share in the country's foreign exchange and derivatives markets makes up 97% (LB, 2016, 2018, 2019a). Compared to the findings of the previous survey, the Lithuanian foreign exchange and derivatives market has shrunk by more than half over three years. However, it should be noted that this year, Lithuania participated in the survey as a member of the euro area. In Lithuania, the turnover in the foreign exchange market in April 2016 amounted to 4.9 billion U.S. dollars (the turnover attributable to transactions per business day averaged 0.2 billion U.S. dollars). FX swaps accounted for 57% and spot transactions for 41% of the monthly turnover (Table 1) (LB, 2016, 2018;BIS, 2018a, 2018b). Summarising the data contained in Table 1 and highlighting the key transactions, Figure 4 shows the structure of certain transactions (LB, 2016, 2018;BIS, 2018a, 2018b). It demonstrates that since 2004, the percentage of spot transactions (futures) in the overall transaction structure has decreased, and only from 2013 onwards has a slight upward trend been observed. Looking at the period under review, it is clear that since 2004, FX swaps have grown in the overall structure of transactions. Data confirm that the Lithuanian derivatives market is becoming increasingly concentrated in the most important Lithuanian banks (LB, 2018). Foreign exchange market participants in Lithuania mostly entered into transactions with other banks participating in the survey (63%), the majority of which were non-resident banks (67%). Transactions with other financial institutions accounted for 4% and non-financial customers for 33% (Figure 5) (LB, 2016, 2018, 2019b;BIS, 2018b). 
Moreover, Figure 6 summarises the data of Table 1 and identifies the main groups of counterparties (LB, 2016, 2018;BIS, 2018b). It shows that from 2004 until 2016, transactions with cross-border dealers prevailed. In recent years, there has been an upward trend in transactions with local dealers. Exchange rate risk and the value of both Lithuanian and European enterprises are hedged using derivatives. Most transactions in the Lithuanian foreign exchange market take place in euro and U.S. dollars. An upward trend in the value of the prevailing euro-dollar pair can be observed (Table 2) (LB, 2016, 2018;BIS, 2018a, 2018b). It is likely that having considered the subsequent three-year situation, i.e. after processing data for 2019, this trend will be preserved. In summarising the data provided in Table 2, the situation is more vividly shown by Figure 7, which presents the structure of the Lithuanian foreign exchange market (LB, 2016, 2018;BIS, 2018a, 2018b). Figure 7 shows that the euro and the U.S. dollar are the main currencies traded on the market, and the euro-dollar pair transactions have prevailed over the past few years (LB, 2016, 2018;BIS, 2018a, 2018b). The latest data from 2016 show that the Lithuanian OTC derivatives (interest rate) market was rather modest, just as previously. The turnover of interest rate contracts in the market amounted to 389.5 million U.S. dollars (Table 3) (LB, 2016, 2018;BIS, 2018a, 2018b). It is likely that having considered the subsequent three-year situation, i.e. after processing data for 2019, this situation will look different. The Bank for International Settlements publishes on its website (LB, 2018; BIS, 2018a, 2018b) preliminary data of triennial surveys of global foreign exchange and derivatives markets. The purpose of the BIS surveys is to obtain comprehensive and internationally comparable statistical information on the size and structure of the foreign exchange and OTC derivatives markets (LRS, 2007;BIS, 2018a, 2018b). It can be claimed that this is an exclusive and unique, all-inclusive and reliable survey which since 1996 has provided financial markets with key guidance on the derivatives market (BIS, 2018a). Forecasting of derivatives transactions and currency pairs An analysis and assessment of the situation in derivatives markets reveals that the focus is on foreign exchange transactions as well as currency pairs. According to data of Table 1 and Figure 1, spot transactions (futures) were steadily decreasing from 2004 until 2016 in value terms, however as of 2013 a rise in the foreign exchange market as a whole in terms of percentage was observed. There was a sharp drop in forwards during the period from 2004 to 2016, although modest growth was observed in 2016. Foreign exchange swaps fluctuated sharply during 2004-2016, with their value and percentage in the foreign exchange market as a whole surging in 2010. However, a downward trend in both value and percentage terms was seen subsequently. In order to examine in more detail the dynamics, over recent decades, and expected trends of derivative financial instruments, a forecast of foreign exchange transactions was carried out using the weighted moving average method and various functions (exponential, linear, logarithmic, polynomial, power, etc.). The forecasts of foreign exchange transactions using the weighted moving average method (Figure 8) reveal that spot transactions (futures) and FX swaps account for the major part of the foreign exchange market. 
The forecasts of these transactions over a few more fixed-duration (three-year) periods ahead show that FX swaps prevail, though there is also a substantial volume of spot transactions (futures). Trends in these transactions do not project any major changes when forecasted using the weighted moving average method. Slight fluctuations are to be noted. As spot transactions (futures) stand out in terms of their value and percentage, their forecasting was carried out by applying various functions, such as exponential, linear, logarithmic, polynomial, power, etc. Upon analysing the obtained results and graphs according to various functions, the function with the highest coefficient of determination, and thus the correlation coefficient, i.e., R² = 0.8899 (Figure 9), was selected for forecasting purposes. An analysis of the trends in spot transactions forecasted by applying a polynomial function predicts a sharp increase in the percentage of these transactions in the future. The forecasts using the weighted moving average method also project an upward trend in the future, though less marked. FX swaps also stand out in terms of their value and percentage among all transactions under study (Table 1, Figure 1), therefore their forecasting was carried out using various functions, such as exponential, linear, logarithmic, polynomial, power, etc. Upon analysing the obtained results and graphs according to various functions, the function with the highest coefficient of determination, and thus the correlation coefficient, i.e., R² = 0.9076 (Figure 10), was selected for forecasting purposes. An analysis of trends in FX swaps forecasted by applying a polynomial function foresees a sharp drop in the percentage of these transactions in the future. The forecasts using the weighted moving average method also project a downward trend in the future, though less marked. In order to examine in more detail the dynamics, over recent decades, and expected trends of currency pairs (Table 2, Figure 7), currency pair forecasting was carried out using the weighted moving average method and various functions (exponential, linear, logarithmic, polynomial, power, etc.). The forecast of currency pairs using the weighted moving average method (Figure 11) shows that the LT/EUR and USD/EUR pairs account for the largest share of the currency pair market as a whole. The forecast of these pairs over a few fixed-duration (three-year) periods ahead reveals that the USD/EUR pair prevails, however other currency pairs also have substantial volumes. Trends in respect of these pairs, forecasted using the weighted moving average method, do not project significant changes in the future. Slight fluctuations are to be noted. The LT/EUR pair was no longer in use as of 2015, hence a sharp rise of the USD/EUR pair could be observed during 2015-2016, with these trends expected to stabilise later (LB, 2016, 2019b;BIS, 2018a, 2018b). As the USD/EUR pair stands out in terms of value and percentage, its forecasting using various functions, such as exponential, linear, logarithmic, polynomial, power, etc., was carried out. Upon analysing the obtained results and graphs according to various functions, the function with the highest coefficient of determination, and thus the correlation coefficient, i.e., R² = 0.7982 (Figure 12), was selected for forecasting purposes. The trends in the USD/EUR pair forecasted by applying a polynomial function show a significant increase in the percentage of this pair in the future. 
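To make the forecasting procedure used in this section more concrete, the short sketch below shows, under assumed illustrative values (the weights, the example series and the extrapolation year are not taken from the paper), how a weighted moving average forecast and a polynomial trend with its coefficient of determination R² can be computed:

```python
import numpy as np

# Illustrative triennial shares of spot transactions (%); the values are assumptions for demonstration.
periods = np.array([2004, 2007, 2010, 2013, 2016], dtype=float)
spot_share = np.array([60.0, 45.0, 30.0, 35.0, 41.0])

def weighted_moving_average_forecast(series, weights=(0.2, 0.3, 0.5)):
    """One-step-ahead forecast in which the most recent observations receive the largest weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize the weights
    return float(np.dot(series[-len(w):], w))

next_share_wma = weighted_moving_average_forecast(spot_share)

# Fit a second-degree polynomial trend and compute its coefficient of determination R².
coeffs = np.polyfit(periods, spot_share, deg=2)
fitted = np.polyval(coeffs, periods)
ss_res = np.sum((spot_share - fitted) ** 2)
ss_tot = np.sum((spot_share - spot_share.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

# Extrapolate the polynomial trend to the next three-year period.
next_share_poly = np.polyval(coeffs, 2019.0)
print(next_share_wma, r_squared, next_share_poly)
```

In practice, several candidate functions (exponential, linear, logarithmic, polynomial, power) would be fitted in the same way and, as is done in this paper, the one with the highest R² kept for forecasting.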
The forecast carried out according to the weighted moving average method also projects a future upward trend, though less marked. Among all currency pairs, the pair EUR/other currencies could be distinguished due to its value and percentage (Table 2, Figure 4), therefore it was forecasted using various functions, such as exponential, linear, logarithmic, polynomial, power, etc. Upon analysing the obtained results and graphs according to various functions, the function with the highest coefficient of determination, and thus the correlation coefficient, i.e., R 2 = 0.3158 (Figure 13), was selected for forecasting purposes. An analysis of trends in respect of this currency pair forecasted with a polynomial function projects a substantial decline in the percentage of these transactions in the future. The forecast using the weighted moving average method also projects a downward trend in the future, though less marked. In order to examine in more detail the dynamics, over recent decades, and expected trends of derivatives and currency pairs, foreign exchange transactions and currency pairs were forecasted using the weighted moving average method and various functions (exponential, linear, logarithmic, polynomial, power, etc.). Foreign exchange transaction forecasting using the weighted moving average method and according to various functions shows that in the foreseen period, currency swaps will prevail, though there is also a significant volume of spot transactions (futures). Trends in these transactions do not project major changes. The forecasting of currency pairs using the weighted moving average method and various functions allows to predict that in the foreseen period, the USD/EUR pair will prevail, while other currency pairs also demonstrate substantial volumes. The trends observed in respect of these pairs do not project any major changes in the future. Conclusions It can be claimed that derivatives, if used properly, allow for effective management of various risks faced by business entities. Hedging through the use of derivatives enables businesses to reduce risk and distribute risk among individual market participants. The derivatives can be used to manage not only currency, interest rate, commodity, and credit risks, but also the overall risk of emergence of any adverse effects. The recent situation, viewed from the perspective of growing international tension, makes it necessary to re-examine the past crisis, its causes and consequences. It is recalled that the 2007-2008 global financial crisis revealed the challenges and risks of derivative financial instruments and demonstrated the tremendous impact that their imprudent use can have on the stability of a financial system. Derivatives, which were supposed to assist financial institutions as regards hedging and diversification of risk, became one of the causes of the crisis, and as a result of the globalisation and integration of the financial market the crisis that had begun in the USA grew into a global financial crisis. As an implication for the present, the past financial crisis showed the need to implement a certain package of measures to address the issues emerging in this market. These measures include regulation of the derivatives market and standardisation of the instruments themselves and credit rating activities, as well as improvement of the market infrastructure itself. 
Moreover, the crisis also revealed the shortcomings of derivatives in terms of insufficient risk monitoring, absence of market regulation, lack of standardisation, systemic market risk, high market concentration, price distortions and emergence of inadequate credit ratings. The global financial crisis highlighted these issues in the derivatives market and the riskiness of the market in these instruments as well as their dangerous impact on the overall stability of a financial system. However, it would be unreasonable to abandon the instruments that filled the niche emerging in present-day markets, and use should be made of the specific features of these instruments in the context of the development of financial markets. The forecasting of derivative financial instruments and currency pairs using the weighted moving average method and various functions (exponential, linear, logarithmic, polynomial, power, etc.) has yielded certain results concerning a foreseen future period. Foreign exchange transaction forecasting shows that currency swaps will prevail in the future, but there is also a significant volume of spot transactions (futures). Currency pair forecasting reveals that the USD/EUR pair will prevail in the future, however other currency pairs also demonstrate substantial volumes. In the derivatives market, it is necessary to implement certain measures to improve the infrastructure of the derivatives market, improve liquidity assistance in financial markets, review credit rating mechanisms and methodologies, and create an adequate supervisory framework. A rapidly developing derivatives market creates prerequisites for a better distribution of risk within a financial system by enhancing the stability of the financial system. Derivatives can make risk management more efficient and flexible, especially in banks, and can also result in a more effective distribution of individual risks and reduction of the overall economic risk associated with them. In the absence of an adequate supervisory framework, it is important to ensure the means of prudential treatment of derivatives: stricter regulation of investment banking activities; establishment of a regulatory and supervisory mechanism for the financial institutions using financial instruments; disclosure of information on supervision in different financial markets; assessment of the impact of new financial instruments on a financial system; ensuring of the transparency of methodologies and independence of assessment. The paper includes analysis of the risks for the underlying risks stemming from the derivatives market. Limitations and difficulties were encountered in the research. It has been difficult to find the necessary data related to new financial instruments and their impact on the financial system. Information on derivatives is provided every three years (the last will be provided in 2021). The possibilities for future research should be greater and wider. Future research is expected to include analysis about derivative pricing, their growth rates compared to the real economy, or the derivatives risks on the financial and the banking system.
7,950.6
2021-02-03T00:00:00.000
[ "Economics" ]
MoS2-Cu/CuO@graphene Heterogeneous Photocatalysis for Enhanced Photocatalytic Degradation of MB from Water The industrial revolution resulted in the contamination of natural water resources. Therefore, it is necessary to save and recover the natural water resources. In this regard, polymer-based composites have attracted the scientific community for their application in wastewater treatment. Herein, molybdenum disulfide composites with a mixed phase of copper, copper oxide and graphene (MoS2-Cu/CuO@GN) were synthesized through the hydrothermal method. Methylene blue (MB) was degraded by around 93.8% within 30 min in the presence of MoS2-Cu/CuO@GN under visible light. The degradation efficiency was further enhanced to 98.5% with the addition of H2O2 as a catalyst. The photocatalytic degradation efficiencies of pure MoS2, MoS2-Cu/CuO and MoS2-Cu/CuO@GN were also investigated under the same experimental conditions. The structural analysis confirms the presence of the Cu/CuO dual phase in MoS2. The charge recombination ratio and band gap of MoS2-Cu/CuO@GN were also investigated in comparison to pure MoS2 and MoS2-Cu/CuO. The chemical states of C1s, O1s, Mo3d and Cu2p3 were also analyzed to explore the possible interactions among the elements present. The surface morphology confirms the attachment of Cu/CuO and GN to MoS2. Introduction Environmental and energy remediation are two major issues for human beings due to rapid industrialization [1]. The rapid increase in industrialization activities has resulted in the contamination of natural water resources. Organic dyes and various kinds of heavy metals are being discharged into water reservoirs on a daily basis from different industrial activities [2]. These contaminants in water are a severe threat to human health and the ecosystem. In this regard, various materials and methods have been deployed to remove the contamination from water, such as absorption, membrane filtration and photocatalysis [3]. Among these, the photocatalytic process is considered an efficient and cost-effective way to remove or neutralize hazardous material from water [4]. Therefore, various materials such as carbon-based materials, metals and metal oxides and, more importantly, polymers are attractive candidates to remove pollutants through the photocatalytic process [5]. Polymers are among the most versatile materials for wastewater treatment through photocatalysts, membranes and the absorption process owing to their chemical stability/versatility, ease of functionalization, high specific surface area, etc. [6,7]. In this context, the two-dimensional layered structure of molybdenum disulfide (MoS2) has attracted the scientific community because of its extraordinary structure and properties, such as strong oxidizing activity, non-toxicity in nature and a low band gap (1.8 eV) that can be further tuned with quantum confinement effects [8]. This low band gap of MoS2 is beneficial for absorbing light photons in the visible region, which is highly desirable to enhance the photocatalytic process [9,10]. However, the interaction of Mo-S can engender unsaturated atoms at the crystal edge, which may hinder the photocatalytic activity of MoS2 [11]. Therefore, the doping of some metals or metal oxides can enhance the photocatalytic activity of MoS2. 
In this context, p-type semiconductor materials such as copper oxide (CuO) have gained attention due to their photoconductive nature, which helps enhance the photocatalytic activity of n-type MoS2 by reducing the charge recombination ratio [12]. Moreover, CuO doping can enhance the absorption of light photons, which results in the augmentation of catalytic activity [13]. Further, the mixed phase of Cu/CuO supports the formation of a heterojunction, which reduces the recombination of charge carriers and can thus enhance the photocatalytic activity [14]. However, an excess amount of CuO in the composites can provide recombination centers for charge carriers, which will reduce the photocatalytic activity [15]. Therefore, the optimal amount of CuO is also important to enhance photocatalytic activity. Charge carrier separation can be further improved by the addition of graphene (GN), which enhances charge separation and absorbs more visible light, consequently enhancing the photocatalytic activity [16]. Moreover, GN can also create some defects while forming composites with metals and metal oxides. These defects can capture pollutants during the photocatalytic process [17], which makes it attractive for wastewater treatment. Further, its high specific surface area (2650 m²/g), π orbitals, π-π interactions and functional groups make it an attractive dopant with metals, metal oxides and polymers for synthesizing photocatalysts [18]. Previous studies show that there are various reports on MoS2 with GN, CuO and other metal oxides as catalysts [12,19,20]. However, low catalytic performance is still a major issue for these binary composites. Further, there are few reports on the role of metals and metal oxides in the photocatalytic activity of MoS2. Therefore, in this study, the combination of MoS2 with GN and Cu/CuO mixed phases was synthesized through an in-situ hydrothermal process. The role of Cu/CuO and GN in enhancing the photocatalytic activity of MoS2 was explored. Moreover, the changes in functional groups (through XPS) and in optical and structural properties were also studied. Synthesis of MoS2-Cu/CuO@GN MoS2 was synthesized by a hydrothermal methodology using MoO3 and thiourea as precursors. In a typical process, 0.1150 g of MoO3 and 0.2664 g of thiourea were taken in 80 mL of water and the system was kept under stirring for 30 min. Thereafter, the whole reaction mixture was transferred to a 100 mL Teflon-lined hydrothermal reactor and subsequently heated at 200 °C for 24 h. The black precipitate of MoS2 thus obtained was separated by centrifugation, washed with an excess of water and ethanol, dried at 80 °C for 12 h and subsequently stored in a desiccator for further experiments. For MoS2-Cu/CuO@GN, first, the binary composite MoS2-Cu/CuO was prepared, and further coating of GN over it resulted in MoS2-Cu/CuO@GN. The GO (stock solution of 10 mg/mL) and Cu/CuO nanoparticles were prepared separately. The synthesis of GO can be seen elsewhere [21]. The Cu/CuO nanoparticles were synthesized by the reduction of copper (II) sulfate in the presence of CTAB surfactant. In a typical process, 0.1 M copper (II) sulfate solution was prepared in 100 mL of water, 0.25 g of CTAB was added to it, and the whole system was kept under stirring. In another beaker, 50 mL of 0.2 M ascorbic acid solution was prepared. 
In the second step, the solution of ascorbic acid was slowly added to the copper (II) sulfate solution and, subsequently, 30 mL of 1 M sodium hydroxide solution was also added. The whole system was heated to 80 °C for 2 h, and a dark reddish-brown color confirmed the formation of Cu/CuO. The Cu/CuO thus prepared was separated by centrifugation, washed with an excess of water and ethanol and subsequently dried at room temperature [22]. The MoS2-Cu/CuO was prepared by mixing 1 g of MoS2 and 0.1 g of Cu/CuO in 50 mL of ethanol; the mixture was placed in an ultrasonic bath for 1 h, followed by stirring on a hot plate until the complete evaporation of the ethanol. Further, the fabrication of the ternary MoS2-Cu/CuO@GN was done by mixing 10 mL of GO with 1 g of MoS2-Cu/CuO, and the whole mixture was heated at 400 °C for 3 h for the complete reduction of GO into GN, subsequently giving MoS2-Cu/CuO@GN. The ratios of MoS2, Cu/CuO and GN were, respectively, 87.1%, 8.71% and 4.19%. Photodegradation Measurement MB was selected as a model pollutant to assess the catalytic performance of MoS2, MoS2-Cu/CuO and MoS2-Cu/CuO@GN. In this regard, 25 mg of catalyst (optimized against the initial concentration of MoS2 and pH, Figure S1) was added to a 20 ppm aqueous solution of MB. First, the MB solution containing the catalyst was kept in the dark under vigorous stirring for 30 min to achieve adsorption-desorption equilibrium. Afterwards, the solution was irradiated for 30 min with a 2 W visible light source placed at a distance of 12 cm from the MB solution. The intensity was approximately 11.06 watt/meter. During this irradiation, a certain amount (5 mL) of solution was taken to estimate the degradation of MB by measuring the spectrum with a UV-Visible spectrometer. The MB degradation ability of the prepared photocatalysts was calculated by applying the following relation [23]: degradation (%) = (C0 − Ct)/C0 × 100, where C0 represents the initial concentration of MB while Ct symbolizes the remaining MB concentration after each interval of 10 min. Once the photocatalytic efficiency was calculated, the following relation was used to calculate the reaction rate constant during the degradation process [24]. Characterizations The structural and surface compositional analyses of MoS2, MoS2-Cu/CuO and MoS2-Cu/CuO@GN were performed, respectively, with X-ray diffraction (Ultima IV, Rigaku, Tokyo, Japan) and X-ray photoelectron spectroscopy (PHI-Versa Probe II, Chanhassen, USA). The pass energies of 187.85 eV and 47.46.95 eV were used, respectively, to acquire the survey and narrow scan modes. Surface morphology was investigated by field emission scanning electron microscopy (JSM7600-F, Jeol, Tokyo, Japan). A spectrometer (DR 6000, Hach, Loveland, USA) was used to measure the absorption of MB, while the charge recombination ratio was investigated with a photoluminescence spectrometer (Shimadzu RF 5301PC, Kyoto, Japan). Structural Analysis The diffraction peaks corresponding to the tenorite phase of CuO (JCPD # 00-001-1117) were observed, while the diffraction peaks at 2θ = 43.43, 50.56 and 74.20 are the representation of copper (Cu), as revealed in JCPD # 01-085-1326. Therefore, the XRD analysis confirms the presence of Cu/CuO with MoS2. Moreover, after the addition of GN (MoS2-Cu/CuO@GN), no additional peak was observed. The non-observable diffraction peak of GN is attributed to its exfoliated nature [25]. Further, the functional groups of GN can interact with MoS2 and Cu/CuO, which may lead to variations in the structural properties without changing the preferred orientation of the diffraction planes [26]. 
This could be noticed in our diffraction analysis (Table 1), which shows the change in the crystal grain size and also revealed the successful interaction of GN functional groups with MoS2 and Cu/CuO. The Scherrer relation was used to estimate the crystal size of MoS2, MoS2-Cu/CuO and MoS2-Cu/CuO@GN [27]. The crystal grain size of MoS2 was around 2.37 nm, while it increased to 13.05 and 18.27 nm, respectively, for MoS2-Cu/CuO and MoS2-Cu/CuO@GN. This increment in the crystal grain size has a direct relation to the enhancement of the photocatalytic activity of the material, as reported previously [28]. It could also be noticed (Section 3.5) that MoS2-Cu/CuO@GN enhanced the photocatalytic activity in comparison to MoS2 and MoS2-Cu/CuO. The interaction of Cu/CuO and GN can further lead to a change in the dislocation density of MoS2, which can be calculated by the following equation [29]. The dislocation density was calculated to be around 4.43 × 10⁻¹, 1.81 × 10⁻² and 1.96 × 10⁻², respectively, for MoS2, MoS2-Cu/CuO and MoS2-Cu/CuO@GN. This variation in the dislocation density further leads to a change in the lattice strain of MoS2 after the addition of Cu/CuO and GN. This change was calculated by the following relation [30]. The lattice strain was around 2.01 × 10⁻², 4.00 × 10⁻³ and 3.79 × 10⁻³, respectively, for MoS2, MoS2-Cu/CuO and MoS2-Cu/CuO@GN. In summary, the structural analysis (Table 1) showed the change in the lattice parameters of MoS2 after the addition of Cu/CuO and GN. However, this addition does not lead to a change in the diffraction orientation or phase of MoS2. 
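The equations referred to above for the crystal size, dislocation density and lattice strain are not reproduced in the text; the standard forms commonly used for these quantities (assumed here to be the ones applied) are:

```latex
% Scherrer crystallite size, dislocation density and lattice (micro)strain.
% K ~ 0.9 is the shape factor, \lambda the X-ray wavelength,
% \beta the peak FWHM in radians and \theta the Bragg angle.
D = \frac{K\lambda}{\beta\cos\theta}, \qquad
\delta = \frac{1}{D^{2}}, \qquad
\varepsilon = \frac{\beta}{4\tan\theta}
```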
Optical Properties The surface oxygen defects and charge recombination in the photocatalytic material can affect the catalytic efficiency, which can be estimated from the PL spectra. Moreover, the peak intensity of the PL spectra is attributed to the charge recombination during charge propagation from the valence to the conduction band. The PL spectra (Figure 2a) of MoS2, MoS2-Cu/CuO and MoS2-Cu/CuO@GN were recorded from 350 to 650 nm with a 320 nm excitation wavelength. The PL intensity of MoS2 is higher in comparison to MoS2-Cu/CuO and MoS2-Cu/CuO@GN, which revealed the higher recombination rate of the charge carriers in MoS2. However, the intensity of MoS2 was reduced after the addition of Cu/CuO (i.e., MoS2-Cu/CuO), which indicated the role of Cu/CuO in charge separation and transfer at the heterojunction of MoS2-Cu/CuO [31]. Moreover, the decrease in the PL intensity of MoS2-Cu/CuO was also associated with the chemisorption of oxygen over the surface of the catalyst, which resulted in enhanced charge separation [32]. The PL intensity was further reduced for MoS2-Cu/CuO@GN, and this lessening is attributed to the interface between GN and MoS2-Cu/CuO. Moreover, functional groups attached to the basal planes of GN provided attractive sites to enhance the charge carrier movement by reducing the charge recombination, which ultimately enhanced the catalytic activity of MoS2-Cu/CuO@GN [33]. The change in the recombination of the charge carriers can change the band gap of MoS2, which ultimately affects the photocatalytic activity. Therefore, the band gap of MoS2, MoS2-Cu/CuO and MoS2-Cu/CuO@GN was estimated by applying the Kubelka-Munk relation [34]. The band gap of MoS2 (Figure 2b) was approximately 1.7 eV, which is consistent with the previous literature [35]. The band gap of MoS2 reduced to 1.60 eV after the addition of Cu/CuO (MoS2-Cu/CuO). This reduction in the band gap is attributed to the absorption ability of Cu/CuO in the visible region, which results in the reduction of the MoS2-Cu/CuO band gap [36]. The band gap of MoS2-Cu/CuO@GN was approximately 1.50 eV, which highlights the role of GN in reducing the band gap. Generally, GN has an sp2 band and other oxygen functional groups in the form of epoxy (C-O-C) and hydroxyl (OH), which enhance the charge carrier mobility and the absorption of light in GN-based composites [37]. 
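For reference, the Kubelka-Munk treatment mentioned above is usually combined with a Tauc-type plot to read off the band gap; a commonly used form (assumed here, since the equation itself is not shown in the text) is:

```latex
% Kubelka-Munk function of the diffuse reflectance R and the Tauc relation;
% n = 2 for a direct allowed transition, n = 1/2 for an indirect one.
F(R) = \frac{(1-R)^{2}}{2R}, \qquad
\bigl(F(R)\,h\nu\bigr)^{n} = A\,\bigl(h\nu - E_{g}\bigr)
```

The band gap E_g is then obtained by extrapolating the linear region of the plot to the photon-energy axis.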
Table 2 shows the detected atomic percentage of each element detected in the prepared catalysts. The C1s spectra are presented in Figure 3b [41,42]. However, the Mo, MoS2, MoS3 and MoO3 contributions were changed with the addition of Cu/CuO and GN, as revealed in Figure 4b-d. This change in the metal and oxide interaction can lead to changes in the photocatalytic activity of the catalyst [43]. The O1s spectra of MoS2 (Figure 5a) revealed the appearance of two peaks at approximately 530.9 and 532.5 eV, which are attributed to the oxidation states of O1s (O−) and OH [41,44]. However, a small shift in the contributions of O− (87.08 to 87.55%) and OH (12.92 to 12.45%) was seen in the Cu/CuO counterpart (Figure 5b), which could be due to possible interactions among the constituent elements. The addition of GN to MoS2-Cu/CuO resulted in an additional peak (Figure 5c) at approximately 529.02 eV, which represents C-O bonding with a contribution of 15.41% [41]. Surface Morphology FESEM images of MoS2 (Figure 6a) show the stacked petal-like structure of MoS2, which is also consistent with previous reports [45]. The surface morphology of MoS2-Cu/CuO (Figure 6b) shows the appearance of some clusters of nanoparticles, in addition to the stacked petal-like structure of MoS2, which are the Cu/CuO nanoparticles. Further, with the addition of GN (Figure 6c), flakes were observed with clustered MoS2, which is due to the possible interactions of GN with MoS2 and Cu/CuO [46]. Photocatalytic Activity The photocatalytic activity of the prepared MoS2, MoS2-Cu/CuO and MoS2-Cu/CuO@GN was tested against the degradation of MB. The UV absorption peak of MB appeared at 665 nm and was monitored for the purpose of degradation (Figure 7a). The enhanced surface area (Figure S2) also provides ample active sites to capture the dye molecules. Moreover, GN provided support in electron transport and lessened the recombination of the charge carriers, which resulted in the enhancement of the catalytic activity of MoS2-Cu/CuO@GN [48]. This change in the movement of the charge carriers may affect the reaction rate constant (k), which is shown in Figure 7f. 
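As context for the rate constants reported next, the sketch below illustrates how a degradation efficiency and an apparent rate constant are typically extracted from the time-resolved dye concentration; it assumes pseudo-first-order kinetics and uses made-up concentration ratios, so it only illustrates the calculation and does not reproduce the authors' data.

```python
import numpy as np

# Hypothetical normalized MB concentrations C_t/C_0 sampled every 10 min (illustrative values only).
time_min = np.array([0.0, 10.0, 20.0, 30.0])
c_ratio = np.array([1.00, 0.55, 0.25, 0.06])      # C_t / C_0

# Degradation efficiency after 30 min: (C_0 - C_t) / C_0 * 100.
efficiency = (1.0 - c_ratio[-1]) * 100.0

# Apparent pseudo-first-order rate constant k from ln(C_0/C_t) = k * t (least-squares slope).
k_app = np.polyfit(time_min, np.log(1.0 / c_ratio), deg=1)[0]

print(f"degradation = {efficiency:.1f} %, k = {k_app:.3f} min^-1")
```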
The order of the reaction rate constants was MoS2-Cu/CuO@GN > MoS2-Cu/CuO > MoS2, with values, respectively, of 9.26 × 10⁻², 6.69 × 10⁻² and 5.01 × 10⁻². Effect of Hydrogen Peroxide The catalytic properties of the material can be tuned by adjusting the reaction parameters, such as by introducing the scavenging of free radicals. In this regard, H2O2 is commonly used for the production of reactive oxygen species, such as hydroxyl radicals and superoxide, during the photocatalytic reaction. These generated radicals can react with MB to enhance the photocatalytic efficiency. Therefore, we decided to investigate the photocatalytic activity of MoS2-Cu/CuO@GN at different amounts of H2O2, i.e., 0, 2, 4, 6 and 8 mL. Figure 8d-f show the change in the reaction kinetics of MoS2-Cu/CuO@GN with the addition of H2O2. The degradation efficiency of MoS2-Cu/CuO@GN was approximately 93.8% in the absence of H2O2, having a rate constant of 0.092 min⁻¹. However, the efficiency increased to 98.5% at 4% of H2O2, with the highest rate constant of 0.141 min⁻¹. This revealed the generation of the maximum amount of free radicals that interact with MB during the degradation process. However, the catalytic efficiency of MoS2-Cu/CuO@GN was reduced to 96.5% and 93.1%, respectively, for 6 and 8 mL of H2O2. This showed that the free radicals react with H2O2 rather than MB, which ultimately reduced the photocatalytic efficiency [49]. Reusability of MoS2-Cu/CuO@GN The reusability of photocatalysts matters for their potential application. Therefore, the cyclic reusability of MoS2-Cu/CuO@GN was tested. Five consecutive cyclic photocatalytic experiments were performed. A total of 25 mg of MoS2-Cu/CuO@GN was added to 20 ppm of MB solution having 4% of H2O2. The sample was separated through a centrifuge (3000 rpm) after completing each cycle. The cyclic results (Figure 9) revealed that the photocatalytic efficiency remained at 96.3% after five consecutive cycles under the same experimental conditions. Conclusions In conclusion, the addition of Cu/CuO and GN accelerated the photocatalytic activity of MoS2 under certain optimized experimental conditions. 
This enhanced the photocatalytic activity of MoS 2 -Cu/CuO@GN, attributed to the change in the charge carrier movement, the alteration in the recombination ratio, the band gap, the interaction of the present element and the structural properties. The band gap of MoS 2 reduced to 1.5 eV from 1.7 eV. Moreover, Cu/CuO and GN supports the chemisorption absorption of the oxygen over the surface of the catalyst by providing the more active sites, which resulted in the enhanced photocatalytic activity. The structural analysis revealed the increase in the grain size of MoS 2 (2.37 nm) with the addition of Cu/CuO (13.05 nm) and GN (18.27 nm) without changing the preferred crystal orientation. The XPS analysis confirms the variation in the C-C, C-OH, C=O and OH functional groups of MoS 2 with the addition of Cu/CuO and GN, which is attributed to the enhanced photocatalytic activity. In short, this study revealed the potential use of polymer-based nanocomposites with metal, metal oxides and graphene for wastewater treatment through a facile photocatalytic process.
5,706.4
2022-08-01T00:00:00.000
[ "Engineering" ]
Product Demand Forecasting in E-commerce Based on Nonlinear Autoregressive Neural Network With the rapid growth of the e-commerce business scale, to meet customers' demand for efficient order processing, it is of great significance to establish an order management mechanism capable of responding quickly by accurately predicting product demand. This study used real e-commerce order demand data and established a nonlinear autoregressive neural network (NAR) model after pre-processing methods including down-sampling and data set partition to effectively forecast the demand of products in the next 13 weeks. Compared with the Prophet time series prediction framework, NAR had better generalization ability, and the prediction time was reduced by 18.54%. Finally, we summarized the two methods' characteristics and gave instructions on applying our model in real scenarios. After being deployed in actual demand management, the trained artificial neural network provides a scientific reference for the data-driven e-commerce decision-making process and brings new advantages over other companies, achieving the rational allocation of resources. Introduction With the arrival of e-commerce and big data, the powerful business flow has promoted the vigorous development of the logistics business. More and more enterprises have begun to make full use of operational history data and focus on predicting future e-commerce orders (Song et al., 2016). The implementation of future demand and order forecasting allows enterprises to respond quickly to customer needs and cope with the changing market environment, and the resulting data is an essential basis for the future development planning of enterprises (Chen et al., 2019). E-commerce goods are characterized by rich categories, which brings challenges to enterprises' warehousing space management, and passive order management cannot meet customers' high response needs. When enterprises accurately predict the order demand of the e-commerce warehouse, they can arrange the relevant resources beforehand to realize the proper planning of commodity locations. Effective demand prediction is of great significance to the management of the e-commerce warehouse (Zhang et al., 2020). Many scholars have made comparative studies and case studies on the methods of product demand prediction. Mezzogori and Zammori (2019) discussed applying a deep learning architecture in the demand prediction of fashion products. The first layer of the framework predicts the total order of a specific customer, and the second layer of the network structure foresees the demand of a given product based on real-time sales data. This forecasting method had advantages over fashion companies' existing marketing strategies after a decade of sales analysis. Huber and Stuckenschmidt (2020) focused on the order demand of bread chain stores on calendric special dates, transformed the prediction problem into supervised machine learning, and evaluated methods such as artificial neural networks and gradient boosting trees. Finally, the conclusion was that the machine learning models had superior performance, and the classification-based optimization prediction method was superior to the regression method. Lee et al. 
(2012) probed into ways to predict the demand of products newly introduced to the market, conducting consumer surveys to estimate product trends, combining with the Bayes' rule to carry out demand forecast.They made fair use of 23 quarterly data of South Korea's broadband Internet services market for empirical research.In the end, they put forward the model of the solution was better than the other benchmark model.Xu and Chan (2019) used big data and machine learning methods to forecast medical equipment demand, collected the required data from search engines and companies, and established a univariate equipment demand prediction method.They introduced big data into the prediction model and improved the accuracy of it.Tarallo et al. (2019) proposed a exploratory research of machine learning methods in the sales demand prediction of perishable products with short shelf life.The results showed that machine learning methods exceeded the accuracy level of traditional statistical techniques, and the demand prediction of fastmoving consumer goods could improve the inventory balance in the supply chain and increase the profits of enterprises. As consumers purchase items on ecommerce platforms, they will leave a large number of comments.Since the comments contain their purchase intention, these reviews have a meaningful impact on product sales.Yuan et al. ( 2017) obtained many product reviews from social networks, mined consumers' emotions towards products, and used sentiment analysis and other quantitative features to predict product sales demand in the next period.The demand prediction combined with sentiment analysis was more accurate than that of quantitative features alone through a case study.Shih and Lin (2019) combined LSTM and consumer sentiment analysis to forecast shortterm sales and adjusted the emotional evaluation weights to improve prediction accuracy further.The proposed method with shortterm demand for commodities sales accuracy was satisfactory and realized prediction using the least amount of transaction data.Fan et al. (2017) made use of online reviews and historical sales data to forecast product demand.A naive Bayes algorithm was applied to extract sentiment index from reviews and integrated it into the Bass model's imitation coefficient to improve the prediction accuracy.Compared with the standard Bass model, the method combined with sentiment analysis had a better performance. The models used in the existing research is relatively complex.Using machine learning methods to predict the time series data in this paper will involve a complex feature construction process.It is necessary to split the time series data into multiple time windows according to a fixed length of time and then construct each time window's features.Furthermore, the selection of different machine learning methods requires continuous attempts, and finally, we may find the prediction method suitable for data rules.The prediction methods combined with consumer sentiment analysis need to obtain text through web crawler techniques and carry out many data cleaning and natural language processing analyses.The procedure of data acquisition and preprocessing is relatively complicated. 
Many adjustable parameters enable an artificial neural network to have the potential for improvement, and the neural network has a strong ability of fault tolerance and robustness to noisy data. Therefore, this article selected the NAR neural network to establish a time series forecasting model, and we searched for the most suitable number of hidden layer neurons by experiment. Our data source for empirical research comes from Kaggle, consisting of real-life warehouse operating records. After necessary preprocessing, we feed the data into the model to test the accuracy. Finally, we compared our model with Prophet, a time series prediction framework, on accuracy and time. NAR neural network The artificial neural network is a network that simulates the human brain's nervous system and realizes specific functions. The network system is built based on the connection structure between the neurons of the brain (Kumar et al., 2020). Plenty of neurons that mimic the synaptic connections between neurons in a biological system give artificial neural networks more advantages than general mathematical models. All neurons participate in the whole system's information processing, and the final output is obtained by the interaction and mutual feedback between neurons. This characteristic makes the neural network model robust (Wu et al., 2020). Concurrently, partial errors in the network will only reduce the network's adaptability and will not cause significant errors in the network. The NAR model uses itself as the regression variable and describes the random variable at a particular time using a linear combination of several time variables during the observation period. The following formula expresses this model's basic structure in general form: y(n + 1) = a0 + a1·y(n) + a2·y(n − 1) + ... + ad·y(n − d + 1) + e(n), where e(n) represents the white noise in the data collection process. Meanwhile, according to this formula, it can be judged that the observed value y(n + 1) at a specific moment has correlations with the previous value y(n). The NAR neural network model adopted in this paper can be expressed as the following formula: y(t) = f(y(t − 1), y(t − 2), ..., y(t − d)), where y(t) indicates the considered value of y at time t, d represents the time delay and f signifies the conversion function. The NAR neural network learns potential patterns from the data input into the network and minimizes the difference between the final output and the actual record through a continuous iterative fitting process. The neural network with a feedback mechanism transmits the error variation during iteration until the optimal prediction accuracy is achieved. Model construction The neural network looks for rules through fitting and training on task-related data and continually updates the model to fit a reasonable result. The ultimate purpose is to make the model maintain a good prediction effect when deployed in the real environment. Partitioning the data set will effectively improve the model's generalization ability, and a model with higher generalization ability will have a lower error when predicting future data. This paper divided the data set into three parts: training set, verification set, and test set. Using a training set to train the model and then the validation set will minimize the overfitting phenomenon in the training process. The test set's error denotes the model's generalization error when dealing with the real scene's prediction. 
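To illustrate the data organization implied by the NAR formulation and the three-way partition just described, the following minimal sketch (illustrative only; the delay d, the synthetic series and the 70%:25%:5% ratios borrowed from the empirical setup later in the paper are assumptions) builds lagged input-target pairs and splits them chronologically:

```python
import numpy as np

def make_lagged_samples(series, d):
    """Build (X, y) pairs in which X holds the d previous values and y is the next value."""
    X = np.array([series[i - d:i] for i in range(d, len(series))])
    y = np.array(series[d:])
    return X, y

def chronological_split(X, y, ratios=(0.70, 0.25, 0.05)):
    """Split samples in time order into training, verification and test sets."""
    n = len(y)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return ((X[:n_train], y[:n_train]),
            (X[n_train:n_train + n_val], y[n_train:n_train + n_val]),
            (X[n_train + n_val:], y[n_train + n_val:]))

# Example with a synthetic weekly demand series and a time delay of d = 4 weeks.
weeks = np.arange(261)
demand = 100 + 50 * np.sin(weeks / 8.0)
X, y = make_lagged_samples(demand, d=4)
train, val, test = chronological_split(X, y)
```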
Aiming to improve the accuracy of the prediction results and accelerate the convergence rate, and considering the interval limit imposed by the neural network activation function on the output data, it is important to normalize the data (Han and Wang, 2020). This paper used the maximum-minimum standardization method to normalize the data, and the formula involved is as follows: x′ = (x − xmin)/(xmax − xmin), where x is the data to be normalized, xmin and xmax are the minimum value and the maximum value in the data sequence, respectively, and x′ indicates the resulting data sequence after normalization. Inverse normalization processing is also needed to make the final prediction data fit the real level. The diagram below shows how the transfer function completes the procedure (Fig. 2: Tansig function). Inside the network, xt is the network's data input; the neural network's hidden layer obtains the output Nj of each neuron according to the data input, the connection weights wtj, the thresholds bj, and the activation function f between the neurons in the layer. When the data signal is transmitted to the output layer, the neural network carries out a linear operation according to the hidden layer result Nj, and the linear function of the layer eventually completes the calculation and outputs the final result y = Σj wj·Nj + p, where wj signifies the connection weight between the j-th neuron in the hidden layer and the neuron in the output layer and p represents the neuron threshold in the output layer. Since the prediction model's final predictive variable comprises one output variable, we set the number of nodes in the output layer as one. Generally speaking, a network containing many middle layers will not obtain an expected result. An excessively complex setting of the middle layers will often magnify the noise information in the model's training process, eventually leading to overfitting and reducing the model's generalization ability in real applications. Meanwhile, if we set too few intermediate layers, underfitting will be caused, and the final prediction effect cannot meet the accuracy requirements. The training requirements can already be satisfied if the intermediate layer is set as one, and appropriately increasing the number of intermediate layer nodes will improve the network accuracy (Bandyopadhyay and Chattopadhyay, 2007). This paper set one intermediate layer and further optimized the network structure by adjusting the number of neurons. When the network could not meet the requirements, the number of hidden layer neurons was adjusted, and we found the best number of neuron nodes in the hidden layer by experiment. We trained the neural network in the form of offline learning. In the process of training, the network processes samples in batches. After the neural network gets all the samples' data for training, the network weights and thresholds are updated at once. The training algorithms for the neural network include the Levenberg-Marquardt (LM) algorithm, the Bayesian regularization method and the elastic gradient descent method. The training algorithms, with their unique characteristics, will bring different results to the model. Because the LM algorithm has fast training speed and small error characteristics (Azar, 2013), we chose the LM algorithm as the preferred one. 
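A compact sketch of the computations described in this subsection is given below: min-max normalization with its inverse, the tansig transfer function, and a single forward pass through one hidden layer and a linear output node. The dimensions and weights are arbitrary placeholders, and the LM training itself is not reproduced.

```python
import numpy as np

def minmax_normalize(x):
    """Scale data to [0, 1]; the min/max are kept so predictions can be inverse-transformed later."""
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min), x_min, x_max

def minmax_inverse(x_norm, x_min, x_max):
    return x_norm * (x_max - x_min) + x_min

def tansig(n):
    """Hyperbolic-tangent sigmoid transfer function."""
    return 2.0 / (1.0 + np.exp(-2.0 * n)) - 1.0

def nar_forward(x_lags, W_hidden, b_hidden, w_out, p_out):
    """Hidden layer: N_j = tansig(sum_t w_tj * x_t + b_j); output layer: y = sum_j w_j * N_j + p."""
    N = tansig(W_hidden @ x_lags + b_hidden)
    return float(w_out @ N + p_out)

# Toy example: 4 lagged inputs, 5 hidden neurons, 1 output node (random placeholder weights).
rng = np.random.default_rng(0)
x_lags = rng.random(4)
W_hidden, b_hidden = rng.normal(size=(5, 4)), rng.normal(size=5)
w_out, p_out = rng.normal(size=5), float(rng.normal())
y_hat = nar_forward(x_lags, W_hidden, b_hidden, w_out, p_out)
```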
Model evaluation index
To objectively evaluate the prediction effect and performance of the model at each stage of forecasting the goods' time series, this paper uses three evaluation indexes: the error autocorrelation coefficient, the mean square error, and the root mean square error.

Assuming the calculated error series is stationary, the error autocorrelation coefficient measures, at the specified confidence level, the autocorrelation within the series of errors between the model's predicted values and the actual values. In statistics it is defined as

ρ_k = Σ_{t=k+1}^{T} (e_t − ē)(e_{t−k} − ē) / Σ_{t=1}^{T} (e_t − ē)²,

where T is the length of the time series, e_t is the prediction error at time t, and ē is the mean error; the numerator is the covariance between the error series at lags t and t − k, and the denominator is the variance of the error series.

The root mean square error is defined as

RMSE = sqrt( (1/T) Σ_{t=1}^{T} (ŷ_t − y_t)² ),

where ŷ represents the model's predicted value and y the real value: the squared errors are summed, averaged, and the square root is taken. The smaller the value, the higher the prediction accuracy, and vice versa.

The mean square error (MSE) mainly evaluates the prediction effect on the validation set and is also the quantity used in the network's loss function: during iterative training the network keeps minimizing this loss and stops when the best prediction effect is found. The MSE is simply the square of the RMSE:

MSE = (1/T) Σ_{t=1}^{T} (ŷ_t − y_t)².

3 Empirical study and result analysis
Data selection and preprocessing
The data came from Kaggle, a well-known data science competition platform. The data set contains the historical product demand of a manufacturing company with a global footprint: 33 product categories and 2172 SKUs distributed across 4 warehouses, consistent with the characteristics of e-commerce goods with rich categories and extensive sources. We preprocessed the data, sorted out the time series demand data of each product, and conducted an example study based on these time series.

The data set contains unrecorded product demand values due to manual statistical errors and records not made on time. We retained the items with relatively complete records and deleted the goods with many missing values. One of the goods with relatively complete records was taken as an example to model and predict order demand; the modeling process for the other goods is the same.

Data resampling converts a data sequence from its original frequency to another rate. Downsampling converts records from a high frequency to a low frequency (Shekhawat and Meinsma, 2015) and involves aggregation. Ideally, product demand would be recorded daily, but in actual warehouse operation staff usually do not record an item when it has zero demand. Such unrecorded data complicates model building, so we used downsampling to convert the data to weekly records. As can be seen from Fig. 3, the original data set was relatively dense; after resampling, as Fig. 4 demonstrates, the weekly demand is aggregated, which virtually eliminates the influence of missing values when the demand for a good is zero. Preparing the data and exploring a suitable model is also more feasible with the smaller data set, and a week is a complete cycle for the warehousing operation itself.
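A minimal sketch of the weekly downsampling just described and of the three evaluation indexes defined above is given below. The column names `Date` and `Order_Demand` are assumptions about the layout of the Kaggle file, not guaranteed.

```python
import numpy as np
import pandas as pd

def to_weekly(df: pd.DataFrame) -> pd.Series:
    """Downsample daily demand records to weekly totals (zero-demand days are simply absent)."""
    s = (df.assign(Date=pd.to_datetime(df["Date"]))
           .set_index("Date")["Order_Demand"]
           .astype(float))
    return s.resample("W").sum()  # one aggregated value per week

def rmse(y_true, y_pred) -> float:
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

def mse(y_true, y_pred) -> float:
    return rmse(y_true, y_pred) ** 2  # MSE is the square of RMSE

def error_autocorrelation(errors, max_lag=20) -> np.ndarray:
    """Autocorrelation of the prediction-error series for lags 1..max_lag."""
    e = np.asarray(errors, float) - np.mean(errors)
    var = np.sum(e ** 2)
    return np.array([np.sum(e[k:] * e[:-k]) / var for k in range(1, max_lag + 1)])
```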
Original order demand data contains noise caused by unreasonable recording and measurement methods and by the influence of abrupt factors (Ma et al., 2018). This noise makes the model difficult and costly to fit. In this paper we therefore took the logarithm of the resampled data (Fig. 4: resampled product data distribution). The transformation is

ts′ = log(ts),

where ts is the raw data and ts′ is the data after the logarithm is taken. Fig. 5 shows that after the logarithmic transformation the data concentrate within a narrow range and fluctuate smoothly, whereas the original data fluctuate considerably. To train, validate, and test the model's prediction accuracy, the product's data were divided into a training set, a validation set, and a test set in the ratio 70%:25%:5%. As shown in Fig. 6, the blue line represents the training set, with 183 samples; the orange line represents the validation set, containing 65 records; and the green line at the end indicates the test set used to measure the model's accuracy, with 13 records of the item's demand.

Parameter optimization
In designing the network, too few neurons in the hidden layer lead to underfitting. Conversely, when the network has too much information-processing capacity because of too many neurons, the limited information in the training set cannot satisfy all the hidden neurons, which leads to overfitting. Even when the training data contain enough information, too many neurons increase the model's training time and make the desired effect hard to achieve (Adil et al., 2020). Choosing an appropriate number of hidden-layer neurons therefore plays an important role in establishing a robust neural network.

This paper started with five neurons and gradually increased the number. For each setting, the model was run ten times and the final prediction performance was taken as the average RMSE; the number of neurons giving the lowest test-set root mean square error becomes the optimal parameter. The experimental results are visualized in Fig. 7. As the number of hidden-layer neurons increases, the model performs well on the training set, but the test-set RMSE generally increases; that is, overfitting appears. The ultimate goal of training is for the model to keep good generalization ability in actual predictions, yet with more hidden neurons the generalization ability generally decreases. Therefore, for the data used in this paper, the optimal number of hidden-layer neurons is five.

Predicted results
We set the number of hidden-layer neurons to five and the output layer to one node. The LM algorithm carried out the optimization iterations inside the network, and the sample data were fed into the network with these preset parameters to fit and predict the e-commerce product demand for the next 13 weeks.
As depicted in Fig. 8, the error on the validation set reached its minimum at the end of the tenth epoch. It can be concluded from the training results that the model fully reflects the temporal correlation between the values of the variable and is capable of fitting the historical data effectively. The overall prediction was close to the actual recorded values, and the root mean square error of this training run was 0.5865.

The error autocorrelation analysis examines the correlation structure of the differences between observed and predicted values. Observing the error autocorrelation results shows whether the model has fully captured the expected trend, seasonality, and randomness; the degree of error correlation gives an idea of the model's prediction performance. Ideally, the model's error at one time point does not affect the prediction errors at other time points, and the network simply extends the prediction according to the regularities of the time series itself. If the overall performance is unsatisfactory because of one unreasonable prediction, the model needs adjustment. It can be seen from Fig. 9 that the correlation of prediction errors among the prediction points was nonlinear and the degree of correlation was small; this characteristic indicates that the errors behave largely like uncorrelated noise, so the model has captured the usable structure of the series.

NAR vs. Prophet
Prophet uses a decomposable time series model consisting of trend, seasonal, and holiday components:

y(t) = g(t) + s(t) + h(t) + ε_t,

where g(t) is the trend function representing non-periodic change, s(t) indicates the cyclical change, h(t) represents the influence of holidays on the target data, and the error term ε_t captures the particular factors that the model cannot predict; this error term is assumed to be normally distributed (Taylor and Letham, 2018).

We optimized Prophet's hyperparameters using cross-validation and searched for the parameter combination giving the best prediction. The growth trend was set to the logistic model, with the upper limit specified as the maximum historical value, so the predicted demand saturates as it approaches this value. In the same way as for the NAR model, the relatively stable, downsampled, log-transformed data were fed into the Prophet model, which produced its final result after training and fitting.

Conclusion
Taking the demand forecast of one product's time series as an example, we used the NAR model to construct an accurate prediction of e-commerce product demand. Compared with Prophet, a time series prediction framework, the nonlinear autoregressive neural network uses less time and achieves higher accuracy. The main conclusions of this study are as follows.

First, it is vital to choose an appropriate model for the research setting. The NAR model has many parameters that can be adjusted, such as the network topology and the initial values of weights and thresholds, leaving ample room for customized parameter settings (Melin et al., 2020). Prophet is a time series prediction framework released by Facebook; it can deal with outliers and partial missing values in a time series and predicts future trends almost automatically. As the results above show, however, the Prophet framework performed poorly on the data used in this paper, while the NAR neural network achieved a satisfactory prediction effect; the Prophet model was simply not suited to this data scenario.
Secondly, an accurate product demand forecast model is of great significance to the development of enterprises. The contribution of this paper lies in the fact that, through accurate prediction of future product demand, the warehouse management department can foresee the upcoming freight demand for the next quarter, so that equipment and human resources in the warehouse can be allocated in advance, resources are used rationally, and a green and sustainable development strategy is supported. Traditional warehousing management is too passive to achieve the same results; applying the forecasting model in the real working environment makes warehousing operations proactive and promotes lean management (Poll et al., 2018).

Advice and suggestions
When the models and methods used in this study are integrated into enterprise warehouse management practice, the data analysis and processing pipeline allows future product demand to be predicted as product records are updated, approaching a real-time prediction effect. Enterprises then need to adjust and optimize resources dynamically according to the forecasting results in order to reduce costs and improve operating performance.

For goods expected to be in high demand during the forecast period, the e-commerce warehouse department should optimize storage space using the ABC classification rule and place these goods in positions that are easy to pick, significantly reducing picking time and cost (Li et al., 2016). Unreasonable management of warehoused goods causes the warehousing department to lose benefits, whereas more detailed data recording and accurate demand forecasting reduce cost and increase efficiency. The old practice of arranging storage space for e-commerce goods according to the subjective experience of warehouse personnel no longer meets the needs of today's customers.

The logistics and warehousing business must value its historical operation data, which is of great significance for future management and decision-making (Li et al., 2018). Companies need to standardize the data recording process: high-quality, fine-grained data usually make the forecasting model more robust, while unreasonable demand records make forecasting difficult. They also need to set standards for data collection and make the most of the data collected. Scientific, data-driven warehouse management decisions give enterprises a distinct competitive advantage and enable sustainable management.

Prospect of improvements
In the future, there is still much room for improvement in e-commerce product demand prediction.
* First, the model's performance at different data scales needs to be tested, and its parameters tuned, on small, moderate, and large data sets.
* Secondly, different e-commerce goods are connected in a variety of ways. Considering only the development of a single product's time series is not comprehensive enough; a direction for optimization is to add correlation analysis of multiple products to the prediction process and then use a multivariate time series prediction model for demand forecasting (Nguyen et al., 2020).
* Thirdly, richer demand prediction models should be introduced, the prediction effects of more methods compared, and directions for improving the model summarized.
* Finally, the evaluation of forecast results can be improved into a multi-index fusion scheme, weighting each index according to specific business needs and combining the weighted indexes (Banerjee et al., 2017).

Funding
This research was supported by Anhui Social Science Planning General Projects (No. AHSKY2020D09) and the Anhui Virtual Simulation Experimental Teaching Project (No. 2019xfxm44).

Declaration of no conflict of interest
The authors declare that there are no known financial interests or personal relationships that could have affected the research work described in this paper.

References
Da Chun Wu, Babak Bahrami Asl, Ali Razban, and Jie Chen. Air compressor load forecasting using artificial neural network. Expert Systems with Applications, 2020.
Shuojiang Xu and Hing Kai Chan. Forecasting medical device demand with online search queries: A big data and machine learning approach. 2019.
Hui Yuan, Wei Xu, Qian Li, and Raymond Lau. Topic sentiment mining for sales performance prediction in e-commerce. Annals of Operations Research, 2017.
Bo Zhang, Runhua Tan, and Cheng Jian Lin. Forecasting of e-commerce transaction volume using a hybrid of extreme learning machine and improved moth-flame optimization algorithm. Applied Intelligence, pages 1-14, 2020.

Figure captions: Fig. 1 NAR neural network structure. Fig. 3 Distribution diagram of original demand data. Fig. 5 The original data and logarithmic results. Fig. 6 Diagram of data set partition. Fig. 7 RMSE under different numbers of neurons in one hidden layer. Fig. 8 Iterative variation diagram of the MSE evaluation index. Fig. 9 Analysis diagram of the error autocorrelation coefficient.
Fig. 10 demonstrates Prophet's prediction result. A red vertical dotted line separates the predicted values from the training data to show the difference. The black dots in the figure represent the original discrete points of the time series, and the dark blue line represents the fitted values; the light blue shaded area indicates the upper and lower bounds of Prophet's prediction. The Prophet model showed overfitting during training, and the RMSE of its predicted values was 0.7305.
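For reference, a minimal sketch of the Prophet configuration described above (logistic growth with the cap set to the historical maximum, forecasting 13 weekly periods) could look as follows. The exact hyperparameters the authors used are not given, so the seasonality settings and column handling here are assumptions.

```python
import pandas as pd
from prophet import Prophet  # pip install prophet

def fit_prophet(weekly_log_demand: pd.Series, horizon_weeks: int = 13) -> pd.DataFrame:
    """Fit Prophet with logistic growth, capping demand at the historical maximum."""
    df = weekly_log_demand.rename("y").rename_axis("ds").reset_index()
    cap = df["y"].max()  # saturation level = maximum historical value
    df["cap"] = cap
    df["floor"] = 0.0

    model = Prophet(growth="logistic", weekly_seasonality=False, yearly_seasonality=True)
    model.fit(df)

    future = model.make_future_dataframe(periods=horizon_weeks, freq="W")
    future["cap"] = cap
    future["floor"] = 0.0
    return model.predict(future)[["ds", "yhat", "yhat_lower", "yhat_upper"]]
```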
6,351.8
2021-07-06T00:00:00.000
[ "Computer Science" ]
A national survey of digital health company experiences with electronic health record application programming interfaces Abstract Objectives This study sought to capture current digital health company experiences integrating with electronic health records (EHRs), given new federally regulated standards-based application programming interface (API) policies. Materials and methods We developed and fielded a survey among companies that develop solutions enabling human interaction with an EHR API. The survey was developed by the University of California San Francisco in collaboration with the Office of the National Coordinator for Health Information Technology, the California Health Care Foundation, and ScaleHealth. The instrument contained questions pertaining to experiences with API integrations, barriers faced during API integrations, and API-relevant policy efforts. Results About 73% of companies reported current or previous use of a standards-based EHR API in production. About 57% of respondents indicated using both standards-based and proprietary APIs to integrate with an EHR, and 24% worked about equally with both APIs. Most companies reported use of the Fast Healthcare Interoperability Resources standard. Companies reported that standards-based APIs required on average less burden than proprietary APIs to establish and maintain. However, companies face barriers to adopting standards-based APIs, including high fees, lack of realistic clinical testing data, and lack of data elements of interest or value. Discussion The industry is moving toward the use of standardized APIs to streamline data exchange, with a majority of digital health companies using standards-based APIs to integrate with EHRs. However, barriers persist. Conclusion A large portion of digital health companies use standards-based APIs to interoperate with EHRs. Continuing to improve the resources for digital health companies to find, test, connect, and use these APIs “without special effort” will be crucial to ensure future technology robustness and durability. Background and significance Over the past decade, and increasingly over the past few years, electronic health record (EHR) developers have implemented application programming interfaces (APIs) in response to the need to open their systems to third-party applications.2][3] Standardsbased APIs harmonize connections across different EHRs and facilitate third-party software integration, thereby improving interoperability by enabling streamlined and secure data exchange. 4The progress of these efforts and maturity of APIs set the stage for federal regulations, implementing provisions of the 21st Century Cures Act, that made standards-based APIs the default method for third-party applications to access and exchange patient electronic health information from EHRs certified to the criteria and standards adopted by the US Department of Health and Human Services (HHS). 5In particular, these regulations, finalized in 2020, adopted the Health Level Seven (HL7) Fast Healthcare Interoperability Resources (FHIR) data exchange standard to enable thirdparty app developers to connect to certified EHRs. 6Certified health IT developers were required to implement these APIs by 2022. 
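To make the standards-based integration pattern concrete, the sketch below shows what a minimal FHIR read against a certified EHR's API could look like. The base URL, token, and patient ID are placeholders, and real integrations obtain authorization through SMART on FHIR OAuth 2.0 flows rather than a hard-coded token; this is an illustration of the access pattern, not an endpoint documented in the study.

```python
import requests

# Hypothetical FHIR R4 endpoint exposed by a certified EHR (placeholder URL).
FHIR_BASE = "https://ehr.example.com/fhir/r4"
ACCESS_TOKEN = "example-oauth-token"  # in practice, obtained via a SMART on FHIR OAuth 2.0 flow

def get_patient(patient_id: str) -> dict:
    """Fetch a single Patient resource from the EHR's standards-based API."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Accept": "application/fhir+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    patient = get_patient("12345")  # placeholder patient ID
    print(patient.get("resourceType"), patient.get("id"))
```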
While the intent of these efforts-to improve interoperability-is clear, to what extent and for what use cases these standards-based APIs succeed in doing so is less clear.Historically, a 2016 survey of digital health companies found that a substantial number had attempted integrations with EHRs but encountered barriers, including a lack of developer support from EHR vendors, overall difficulty partnering with EHRs, and high associated costs. 7A follow-up survey in 2018 found progress in companies' abilities to integrate with EHRs through APIs, though challenges still remained. 8Other studies have examined the availability of certain technologies integrated with EHRs (ie, capturing what was successfully integrated) and the overall robustness and durability of individual EHR company's resources for third-party developers. 4,9However, both prior digital health company surveys took place before the 2022 implementation deadline and included responses from less than 100 companies that had ever integrated their technology with an EHR.We therefore undertook an updated survey of these companies to capture the early impact of these regulations.We specifically sought to assess 3 dimensions.First, it is important to evaluate the use of standards-based versus proprietary EHR APIs to get a snapshot of national progress toward streamlined health data exchange between EHRs and third-party applications.Second, understanding company experiences integrating with specific EHR vendors (eg, Epic, Cerner) as well as the total number of vendors provides insight into the extent of interoperability of digital health company products.Third, it is critical to understand enablers of and barriers to EHR integration to inform ongoing policy and industry efforts to advance APIs and EHR integration. Objective This study sought to capture current digital health company experiences integrating with EHRs, now that new federally regulated standards-based API policies are in place and being implemented by EHR vendors.The survey covered company experience with EHR API integration, barriers to EHR integrations, and API policy and advancement efforts to ensure a robust perspective from digital health companies who are the primary consumers of these EHR APIs.These perspectives directly inform both policymakers and industry stakeholders on how to deliver next-generation technology solutions to health care providers and consumers.In particular, results will serve to guide the Department of Health and Human Services (HHS) on where ongoing policymaking may be needed to fulfill the intent of the 21st Century Cures Act.Results will also serve to guide EHR vendors and third-party software companies on the prevalence of ecosystem pain points that could lend themselves to private-sector solutions. Sample data sources A list of digital health companies to survey was compiled from a variety of data sources.The majority of companies (n ¼ 605) came from a data scraping methodology developed by Barker & Johnson, which pulled company data from public app galleries for EHR-integrated solutions available from 1uphealth, Allscripts, athenahealth, CMS Bluebutton, CARIN Alliance, Cerner Corporation, eMDs, Epic Systems Corporation, Greenway, NextGen, and SMART. 
9Scraped data included the company name, the number of app galleries in which a company was found; the number of unique apps, names of apps, and functional app categories associated with a company; the targeted users of the company's technology, and the company's webpage.Since this method only identified companies that had been successful in integrating at least 1 app with an EHR or EHR-associated platform, we sought to capture a broader set of companies that may have attempted EHR integrations but have not been successful.We supplemented the preliminary list by pulling companies from: (1) a 2020 CB Insights Report titled "The digital health startups transforming the future of healthcare," (n ¼ 20), 10 (2) an analysis of relied upon software reported through the ONC Health IT Certification Program (n ¼ 9), and (3) members of a national expert advisory board convened to support this project (n ¼ 110) (see Table S1 for the list of members). Inclusion criteria Once we developed the list of companies across these 4 sources, we sought to limit it to those that develop solutions that enable human interaction with an API, such as providerfacing apps that access clinical data, either alone or in combination with non-clinical data, as well as patient-facing apps that access clinical or non-clinical data.These criteria exclude companies that solely make solutions that do not enable human interaction with an API, such as external databases or networks that connect to EHRs, apps that enable integration between 2 EHR systems, and provider-facing apps that do not access clinical data-given that these use cases are not the focus of federal regulations and face a different set of challenges.We also sought to exclude companies that make solutions that do not connect to an EHR (primarily those sourced from the CB Insights Report), as well as EHR vendors themselves. To apply our inclusion criteria, we leveraged the app categories from the data scraping methodology.Companies and apps that were categorized as "clinical use" or "patient care" were included, while companies and apps that were categorized as "administrative" only were excluded.Companies and apps that were categorized as "patient engagement" were manually reviewed to determine inclusion.Manual review primarily involved accessing the app developer's website or reviewing marketing materials obtained from the online marketplace or gallery to learn more about the app and its intended use.If it was determined that an app's patient engagement function allowed access to patient records and clinical data, the company was included.For the remaining companies-those that did not have information on their app category, either because they had missing data or were not sourced using the scraping methodology-we first relied on data from the Apple and Google app stores to identify the app's category.Among apps that could be found in the Apple or Google app stores, those categorized as "medical" were included in our sample, while those categorized as "health and fitness" were excluded.Apps that could not be found in the Apple or Google app stores were manually reviewed by evaluating the marketing materials on the app developer's website to determine if they met inclusion criteria.This resulted in a final sampling frame of 704 companies. 
Survey development To capture the current state of progress and challenges that digital health companies face when integrating tools with EHRs, we developed and fielded a survey.The survey instrument was developed by the University of California San Francisco (UCSF) in collaboration with ONC, the California Health Care Foundation, and ScaleHealth (a healthcare solutions marketplace).It was refined based on feedback from the expert advisory board.The survey had 3 sections: (1) Experiences with API integrations, (2) barriers faced during API integrations, and (3) API-relevant policy efforts.The survey was pilot tested with 5 companies and then refined based on feedback.The final instrument can be found in the Supplementary Material. Survey administration Contact information for a target respondent at each company was sourced by ScaleHealth.The survey was distributed via the survey software Qualtrics and was fielded from June to November 2022. To maximize the response rate, we employed a variety of outreach strategies.These included individual emails not only from UCSF but also from ScaleHealth, our expert advisors, and together.Health to target companies with whom they had existing relationships.We also posted the survey link and information to a variety of message boards, online forums, and listservs (which resulted in capturing 9 additional companies not in our original sampling frame that met inclusion criteria), increasing our total sample to 713.These boards, forums, and listservs included Health Tech Nerds, the American Medical Association Innovation Network, HIMSS Accelerate, ScaleHealth email listservs, the Society of Physician Entrepreneurs LinkedIn group, and the CARIN Alliance email listserv.Lastly, we printed business cards with a QR code link to the survey and distributed them to companies at the 2022 HLTH Conference.We followed-up with nonrespondents up to 15 times over the course of survey administration.Incentives to participate in the survey included listing participating companies on public and peer-reviewed reports, providing a copy of the reports to respondents, and inviting respondents to a special session hosted by ONC during which the results and insights from the findings will be shared. Analysis We conducted a set of descriptive analyses based on survey responses.First, we assessed the organizational demographics of the sample, including company relationship with protected heath information (eg, healthcare provider or other covered entity), primary application domain(s), and 2 proxies for size/ maturity: company development stage and number of full time equivalent (FTE) staff working on products that integrate with commercial EHRs. Our first set of analyses sought to capture use of standardsbased versus proprietary APIs.We used survey questions that captured company status of integrations with EHRs via proprietary APIs, standards-based APIs, and third-party integration service (eg, Redox).For each integration type, companies were given the following response options: "Yes, in production (currently or previously)," "Yes, in process but not in production," "Yes, but stopped (incomplete)," or "No." 
We then measured the relationship between the use of standards-based and proprietary APIs by calculating the percent of companies that use 1 type only (standards-based or proprietary), both types, and neither type.We also examined the relative use of proprietary and standards-based EHR APIs for companies that reported using both types by measuring the percent of respondents that reported using each API predominantly, mostly, or equally.Finally, within each of the groups, we calculated the percent of companies that reported using FHIR at all and the percent that used FHIR "extensively" to assess differences between companies' use of FHIR in their apps across types of EHR API integrations.As FHIR represents the leading industry data standard for RESTful API-based data exchange, it is important to measure how companies' adoption and use of the standard associates with the types of APIs they used to integrate with EHRs. Our second set of analyses focused on experiences integrating with specific EHR vendors (eg, Epic, Cerner) as well as the total number of vendors.Through these analyses, we sought to assess the share of companies that integrate with specific EHRs and how adoption of standards-based APIs varies across companies that integrate with 1 or more EHR vendors.Specifically, we calculated the percent of companies that had a successful integration or 1 underway with an EHR.We then stratified the use of FHIR by the number of vendors with which a company integrated (1 vendor, 2-3 vendors, 4þ vendors) and calculated the percent of companies that reported using FHIR at all and the percent that reported using FHIR "extensively" to assess whether companies integrating with more than 1 EHR had higher rates of FHIR use.The core impetus for standardizing API-based exchange is to facilitate app and software integrations across multiple EHRs.We evaluated FHIR use this way because it is important to understand whether FHIR adoption by companies in their products correlates with the number of EHRs with which they integrate. Our third set of analyses focused on enablers and barriers.First, we calculated the percent of companies that endorsed different dimensions of APIs as "moderately critical" or "critical to a great extent" to the company's ability to work successfully with EHR APIs.These listed dimensions on the survey included: technical performance, breadth of data elements, and cost.We then calculated the top 10 most prevalent barriers reported by companies as "substantial" barriers to integration from a closed list of 20 barriers.We also compared the effort to establish and maintain proprietary and standards-based APIs to show how reported barriers may differently impact companies' abilities to establish versus maintain EHR integrations.Finally, we examined open-ended responses to the questions of (1) high-priority clinical data types for future federally regulated availability and (2) future directions for policy efforts in promoting or enforcing access to data.We performed a text analysis of the free-text responses and report the 5 most common responses (grouped by key terms and themes) for each of the questions. 
Sample sizes for each measure varied based on item nonresponse and skip logic (eg, if a company had no API-based EHR integrations, the survey programming logic had them skip many questions on the survey). Missing data were excluded from reported percentages. We conducted a non-response bias analysis to compare company characteristics between respondents and non-respondents. We did not apply non-response weighting to the reported statistics.

Results
Of the 713 digital health companies on our final list, 125 companies completed the survey and 16 were considered sufficient partial completers (defined as completing through the questions on effort/resources to establish and maintain integrations with EHR vendors), for a response rate of 20%. A summary of respondent characteristics is included in Table 1. (Table 1 notes: groups are not mutually exclusive; denominators differ due to survey question skip logic, as some characteristics were collected only from respondents who reported an "in production" or "in process" integration with a commercial EHR.)

Use of standards-based and proprietary APIs
Respondents reported using standards-based APIs to integrate their technologies with EHRs at high levels. Overall, 73% of companies reported current or previous use of a standards-based API in production, and another 13% reported having a standards-based API integration in process (Figure 1). The second most frequently reported method for integration with EHRs was proprietary APIs, which 68% of companies reported as having currently or previously in production. About 30% of respondents indicated currently using or having previously used a third-party integration service in production. It was more common for companies to integrate their solutions using the EHR APIs directly than through a third-party integrator.

A majority of respondents (57%) indicated using both standards-based and proprietary APIs to integrate with an EHR (Figure 2). Overall, 85% of companies reported supporting the FHIR standard as part of their application, with 61% using the standard extensively. Reported use of the FHIR standard was much higher among companies that used a standards-based EHR API (either alone or alongside a proprietary EHR API) compared to those that did not: 82% of companies using standards-based EHR APIs only and 79% of companies using standards-based EHR APIs alongside proprietary APIs reported use of FHIR in their products, with 89% and 75% of those FHIR-using companies, respectively, reporting extensive use of the standard. Conversely, fewer companies that did not use standards-based EHR APIs used the FHIR standard: about 67% of companies only using proprietary APIs to integrate with an EHR, and 52% of companies using neither API type, reported use of FHIR, with 50% of those FHIR-using companies reporting extensive use of the standard.

We found that 24% of companies worked about equally with both standards-based and proprietary APIs and 44% mostly or predominantly used standards-based APIs (Figure 3).
EHR vendors
Companies reported successful integrations most frequently with market-leading EHRs, including Epic (64%), athenahealth (37%), and Cerner (36%). An additional 18% (Epic), 13% (athenahealth), and 24% (Cerner) of companies reported that API-based integration efforts were underway (Figure 4). About 92% of companies had integrations underway with at least 1 EHR and 78% had integrations underway with 2 or more EHRs. Companies that worked with more than 1 EHR vendor more frequently reported extensive use of the FHIR standard (Figure 5). Specifically, among companies that worked with more than 1 EHR vendor, 73% reported extensive use of FHIR, compared to 27% of companies working with just 1 EHR vendor. About 47% of companies with integrations with just 1 EHR vendor reported using FHIR in a limited way, and 27% reported no use of the FHIR standard. The percent of companies that reported no use of the FHIR standard was just 9% for companies with integrations with 2-3 EHR vendors and 5% for companies with integrations with 4 or more vendors.

Enablers and barriers
Several dimensions were identified by most respondents as critical for a company's ability to work successfully with APIs (Table 2: percent of digital health company respondents that indicated dimensions were "moderately critical" and "critical to a great extent" for a company's ability to work successfully with EHR APIs; N = 141). Technical performance (61%), breadth of data elements (60%), cost (56%), and quality documentation (51%) were reported most frequently as dimensions that were critical "to a great extent" for successful work with APIs, followed by EHR vendor support (50%) and effort to implement (45%).

About 28% of companies rated standards-based APIs as very good on the dimensions critical for a company to be able to work successfully with an API; this was a larger percent than for proprietary APIs (25%), but a smaller percent than for API-based third-party integration (40%).

Barriers pose challenges to digital health company use of EHR APIs. Among companies that reported using APIs, 47% reported high fees associated with accessing an EHR API as a substantial barrier (Figure 6). The next most common challenges included a lack of realistic clinical testing data (41%), access to data elements of interest or value through APIs (40%), availability of standards-based APIs from the EHR vendor (38%), and standardized data elements (35%).

Efforts to establish and maintain proprietary and standards-based APIs differed substantially. Companies reported that standards-based APIs required on average less burden than proprietary APIs to establish and maintain, with 52% and 21% of companies reporting that substantial effort is required for the establishment and maintenance, respectively, of proprietary APIs, and just 40% and 13% reporting substantial effort for the establishment and maintenance of standards-based APIs.

Digital health company respondents provided open-ended responses regarding high-priority clinical data types for future federally regulated availability via EHR APIs, as well as future opportunities for policy efforts to promote or enforce access to data. This is summarized in Table 3.
In brief, respondents indicated interest in federally regulated availability (through EHR APIs) of social determinants of health (SDoH) and demographic data, genomic testing results, prescription and administered medication lists, clinical notes, and claims data. In addition to expanded data element availability, companies frequently highlighted the need for cost controls on EHR integration, as well as enforcement and incentivization of EHR vendor adherence to API standards.

Non-response bias analysis
Given the survey's relatively low response rate (20%), we assessed non-response bias and found a few statistically significant differences between respondents and non-respondents. However, the observed, small-magnitude differences are unlikely to bias the representativeness of our results (Appendix SA2).

Discussion
This study sought to capture current digital health company experiences integrating with EHRs, now that new federally regulated standards-based API policies are in place and being implemented by EHR vendors. Our analysis focused on 3 domains: the use of standards-based and proprietary EHR APIs, integrations across EHR vendors, and enablers of and barriers to integrating with EHR APIs. Our results reveal that the majority of respondents use standards-based APIs to integrate with EHRs and support use of the HL7 FHIR standard in their products, likely facilitating their use of standards-based APIs. Although nearly the same number of companies reported use of proprietary EHR APIs, more companies reported predominantly or mostly using standards-based versus proprietary APIs, signaling that both API types were needed to successfully integrate, but that standards-based APIs were more integral. Taken together, this suggests that the field is making important progress in moving toward use of APIs that streamline data exchange through a common language, but that a notable portion of digital health companies still rely to some extent on non-standards-based APIs.

Substantial barriers such as high fees, lack of realistic clinical testing data, and lack of data elements of interest or value indicate that progress has not been without friction. This is further supported by the significant difference we found in companies' reported efforts to establish and maintain EHR API integrations, where efforts to establish were more than twice as burdensome. Companies' recommendations for improving the current state of integration included that federal policy should promote more access through cost controls, testing and validation, and an expanded set of data elements available through APIs, which directly address these barriers. Further private-sector support and federal policy are needed to ensure APIs are available to reduce barriers to entry and nurture competition "without special effort." In particular, the results signal an opportunity for industry and ONC to consider and gain input on other high-value use cases not currently adopted in the United States Core Data for Interoperability standard and standards-based APIs. Government and industry efforts, through pilots, standards accelerators, and standards development work groups, can help further standardize the data elements that can be accessed using standards-based APIs [11]. ONC also accepts and uses public feedback and complaints on real-world certified health IT use and barriers through the ONC Health IT Feedback portal to inform agency actions [12].

Reported barriers related to the uneven availability of APIs and access across different EHRs could lead more digital health companies to focus their integration efforts and customer recruitment on a subset of EHR vendors who provide more robust developer support and a wider availability of data elements beyond the floor set by federal requirements. The percent of companies that integrate with each EHR vendor, however, aligns with the EHR market share we calculated across office-based sites and acute care hospitals derived from recent public data sources [13,14]. Even though the EHR marketplace skews toward a few predominant market leaders, it is important to ensure the market remains competitive and that the burgeoning app ecosystem is built across all technologies (not just a few leaders). High rates of FHIR use among respondents, especially among companies working with multiple EHR vendors, suggest that FHIR-based APIs are successful in supporting apps developed with the intention to scale across multiple EHRs.

Limitations
The sample and respondents may not comprise a representative sample of digital health companies or of all companies that are actively integrating with and using EHR APIs. Nonetheless, our methodology of basing the sample primarily on a list of companies pulled from public app galleries maintained by EHR vendors and other organizations, and evolving and modifying that list based on technical expert input, resulted in a comprehensive list that, to our knowledge, exists nowhere else. Our market research found no other representative list or sampling frame for this study, so novel methods and expert insights were needed to derive a sample of companies knowledgeable and experienced enough to answer the survey's technical questions.

Our study was also limited primarily to commercial users of EHR APIs and did not include perspectives from clinicians, academic medical center researchers, and other EHR data users, who have research and business cases for using the APIs to connect and integrate their technologies and applications with the EHR. Their perspectives are no less important but were determined to be out of scope for this study.

Conclusion
This study used a novel survey and sampling methodology to derive a robust sample of digital health companies and glean novel, national insights into companies' experiences using EHR APIs and how industry and federal policy can continue to shape the healthcare technology ecosystem. We found that a high proportion of digital health companies use standards-based APIs to interoperate with EHRs and support standards as part of their product base. The results show that an iterative and inclusive approach that incorporates industry feedback (not just from EHR vendors, but from the digital health and app developer community, too) can help push the technical and functional properties of standards-based APIs forward and in step with developer needs. Continuing to improve the resources for digital health companies to find, test, connect, and use these APIs "without special effort" will be crucial to ensure the technology is robust and durable into the future.

Figure captions: Figure 1. Digital health company status of integrations with EHRs (N = 141). Figure 2. Digital health company use of APIs and the FHIR standard (N = 141). Figure 3. The extent to which digital health companies report currently working with proprietary versus standards-based EHR APIs (N = 141). Figure 5. Digital health company respondent use of the FHIR standard, stratified by the number of EHR vendors with which their apps are integrated (N = 141). Table 1. Characteristics of digital health company survey respondents.
6,088.6
2024-01-27T00:00:00.000
[ "Computer Science", "Medicine" ]
Design of an efficient medium for heterologous protein production in Yarrowia lipolytica: case of human interferon alpha 2b Background The non conventional yeast Yarrowia lipolytica has aroused a strong industrial interest for heterologous protein production. However most of the studies describing recombinant protein production by this yeast rely on the use of complex media, such media are not convenient for large scale production particularly for products intended for pharmaceutical applications. In addition medium composition can also affect the production yield. Hence it is necessary to design an efficient medium for therapeutic protein expression by this host. Results Five different media, including four minimal media and a complex medium, were assessed in shake flasks for the production of human interferon alpha 2b (hIFN α2b) by Y. lipolytica under the control of POX2 promoter inducible with oleic acid. The chemically defined medium SM4 formulated by Invitrogen for Pichia pastoris growth was the most suitable. Using statistical experimental design this medium was further optimized. The selected minimal medium consisting in SM4 supplemented with 10 mg/l FeCl3, 1 g/l glutamate, 5 ml/l PTM1 (Pichia Trace Metals) solution and a vitamin solution composed of myo-inositol, thiamin and biotin was called GNY medium. Compared to shake flask, bioreactor culture in GNY medium resulted in 416-fold increase of hIFN α2b production and 2-fold increase of the biological activity. Furthermore, SM4 enrichment with 5 ml/l PTM1 solution contributed to protect hIFN α2b against the degradation by the 28 kDa protease identified by zymography gel in culture supernatant. The screening of the inhibitory effect of the trace elements present in PTM1 solution on the activity of this protease was achieved using a Box-Behnken design. Statistical data analysis showed that FeCl3 and MnSO4 had the most inhibitory effect. Conclusion We have designed an efficient medium for large scale production of heterologous proteins by Y. lipolytica. The optimized medium GNY is suitable for the production of hIFN α2b with the advantage that no complex nitrogen sources with non-defined composition were required. Background The production level of heterologous proteins greatly depends on the characteristics of the host cell, the recombinant protein to be expressed, the promoter used and most importantly on the composition of the medium, showing that production can be limited at any level. Yarrowia lipolytica is a dimorphic ascomycete that naturally secretes several enzymes. In recent years it has attracted the attention of researchers as a model organism in dimorphism and secretion pathway studies [1,2]. Furthermore, developments in genetic engineering and molecular biology make the non conventional yeast Y. lipolytica as one of the most promising hosts for efficient heterologous protein expression [3][4][5]. Although extensive data on Y. lipolytica cultivation was reported in the literature, these reports mostly describe the production of citric acid, single cell proteins or cognate proteins like lipases, and may not be always fully adapted to recombinant protein production [5][6][7]. Furthermore, studies on the expression of heterologous gene in this yeast rely on the use of complex media and shake-flask cultures. Nevertheless this kind of media shows numerous short-comings such as a non defined composition, a high batch-to-batch variability and a high cost. 
On the other hand, the majority of publications in the field of recombinant proteins production by Y. lipolytica report the use of the constitutive promoter hp4d [8] which can be problematic when the product being expressed is toxic to the host [4] or the use of XPR2 promoter which requires high levels of peptones in the culture medium for its full induction [9]. However the use of non-defined ingredients such as peptones is not suitable to industrial processes; for these reasons it is not only important to maximize the yield of the heterologous protein but it is essential to obtain a consistent product under the most controlled culture conditions [10]. Medium composition for Y. lipolytica has not been extensively studied compared to heterologous production by the yeast Pichia pastoris. Thus careful consideration of medium selection is advisable when optimizing the production of heterologous proteins particularly for pharmaceutical application, owing to its critical impact on the economy and the feasibility of the process. In addition to conventional methods based on single factor variation used for medium optimization, statistical experimental design methodology was developed and applied for the design of new media or the screening of nutrient supplements. It is an efficient tool to identify interactions between the parameters tested, resulting therefore in a great reduction of time and cost [11,12]. We described in a previous work human interferon α2b production in Y. lipolytica under the control of a strong inducible promoter acyl-co-enzyme A oxidase (POX2), repressed by glucose and glycerol, and induced by fatty acids and alcanes. We showed that low expression of foreign genes in Y. lipolytica could be attributed to several factors: the genetic design of the construct such as codon bias optimization, the use of an appropriate signal peptide and an adequate translation initiation codon environment [13]. However, besides these factors, medium composition could also affect the yield of recombinant proteins production. It was reported that the productivity of lipase by Y. lipolytica is affected by the presence of tryptone in the culture medium and the presence of inducers [6,12,14]. In the current study, we first assessed the effect of four different minimal media as well as organic nitrogen substrates on cell growth and human interferon alpha 2b (hIFN α2b) production in a recombinant strain of Y. lipolytica. The selected medium was then further optimized using statistical experimental design approach. Cell growth and hIFN α2b production in a 5-l bioreactor in the optimal medium using oleic acid as a carbon source and inducer, were then described. Results and Discussion To produce hIFN α2b in Y. lipolytica and to target its production into the culture medium, we had previously expressed this gene under the control of POX2 promoter inducible by oleic acid. Different sequences of the signal secretion signal of Y. lipolytica extracellular lipase encoded by the LIP2 gene were tested [13]. Best results were obtained with the strain JMY1852 expressing hIFN α2b with the preLIP2 signal peptide. For media optimization studies, we first isolated a prototroph derivative of JMY1652 by its transformation with a fragment carrying the LEU2 gene as described in the Material and Methods section giving rise to the strain JMY1852p which can be grown in minimal medium. 
Selection of an appropriate synthetic medium Classical media Besides classical rich medium like YPD, several defined media have been previously used for Y. lipolytica cultivation such as the medium SM1 formulated by Olssen and Johnson [15], the medium (SM2) proposed by Gordilo and coworkers [16], as well as SM3 medium described by Nicaud et al. [4]. To select the best medium for hIFN α2b production by JMY1852p strain, these media as well as the SM4 medium (designed for heterologous production in Pichia pastoris) [17] were assessed in shake-flask cultures ( Figure 1). After 72 h of cultivation, culture supernatants were collected, concentrated and subjected to western blot analysis using a monoclonal antibody directed towards hIFN α2b; bands at approximately 19 kDa were detected in all media except in the chemically defined medium SM3 which was then considered as unsuitable for hIFN α2b production and was withdrawn from the study (Figure 1b). The expression levels achieved in SM1 and SM2 media were very low compared to that observed in the complex medium (CM). No hIFN α2b production was detected in cell lysates from culture pellets of hIFN α2b producing strain in all tested media (data not shown). However, a small band below the predicted size of hIFN α2b protein was observed in the supernatants of the CM and SM4 media (Figure 1b) which may be in favor of degradation. To check this hypothesis we performed another western blot using a polyclonal antibody directed towards hIFN α2b. Two bands were detected at approximately 19 kDa and 14 kDa with a large amount for the second band in the case of SM1 and CM media but not for SM4 medium (Figure 1c). Hence, it seems that proteolysis affects significantly the target protein produced in these media. Optical density (OD), pH and morphology changes during cultures were monitored to determine the effects of varying the culture medium. Table 1 shows data obtained for the different cultures. Comparing the chemical elemental composition of the different cultures media ( Table 2) important differences can be observed. Nevertheless cell growth in SM1 medium showed a similar trend to that observed in SM2 medium, for the first 24 hours of culture. The maximal specific growth rate (μ) of the recombinant strain was equal to 0.18 h -1 in SM1 and SM2 and both media exhibited a similar biological activity which was around 2-fold lower than the level obtained in CM medium ( Table 2). The complex medium (CM) showed a faster growth (μ = 0.22 h -1 ), a maximal biomass level of 40.1 g/l and the highest amount of hIFN α2b. However, its performance remains still lower in terms of productivity compared to the SM4 medium (0.32 × 10 -2 mg hIFN α2b/g biomass vs 0.84 × 10 -2 mg/g) ( Table 1). Influence of nitrogen source on biomass and hIFN a2b production One of the most important parameter in media formulation is the nitrogen source. Various mineral and nitrogen compounds were evaluated as an enhancing factor of Y. lipolytica growth and hIFN α2b production. We had particularly studied the substitution of urea by ammonium sulfate for SM1 medium and the enrichment of SM1 and SM2 media with casaminoacids, tryptone and yeast extract. SM1 and SM2 media without any supplement were used as control. As shown in Figure 2, substitution of urea by ammonium sulfate in SM1 increased greatly the biomass level whereas a slight enhancement was noticed for hIFN α2b production. 
This result differs significantly from other studies, which showed repression of extracellular lipase production by ammonium salts in Y. lipolytica [6,18]. Valuable improvements of hIFN α2b production and biomass level were observed upon addition of organic compounds to the basal salt medium SM1 compared to the control. For the synthetic medium SM2, cultures containing tryptone and casamino acids showed a higher biomass level compared to the control containing an inorganic nitrogen source. However, neither tryptone nor casamino acids were able to generate a high level of hIFN α2b production. Moreover, in contrast to cell growth, the production level of hIFN α2b was enhanced by yeast extract (YE) addition; in comparison to the control, a nearly 10-fold increase was observed. These observations suggest that YE, a complex mixture of amino acids and peptides [19], provides an alternative source of vitamins and oligo-elements for Y. lipolytica. Since these media are prone to fluctuations and are referred to as semi-defined media, their use is not considered cost-effective and they are not recommended for large-scale production of heterologous proteins [10].

The Invitrogen medium
The synthetic medium SM4 was formulated by Invitrogen Corporation (Carlsbad, CA, USA) to provide an appropriate chemical and nutritional environment for Pichia pastoris growth. To our knowledge, there are no previous reports describing the use of this medium for protein expression by other hosts, especially by Y. lipolytica. Surprisingly, the expression level of the recombinant strain was dramatically improved when the strain was grown in SM4 medium; a strong band at the expected size was observed (Figure 1b). Quantification of the signal density of the hIFN α2b band using the ImageJ software revealed that the signal intensities were approximately 9- and 40-fold higher than those obtained in SM1 and SM2 media, respectively, and similar to that reached in the CM medium. Table 1 shows that the yield Y (hIFN α2b/X) was the highest in this medium, 2.7-fold higher than in CM. Furthermore, the biological activity of hIFN α2b in SM4 medium was the highest compared to the other minimal media. SM4 appeared to be the best medium for efficient hIFN α2b production in Y. lipolytica, combining high productivity and activity with a low level of proteolysis (Figures 1b & 1c). As a low-cost, defined medium, SM4 is well suited for large-scale production. This result emphasizes the importance of evaluating various cultivation media for the production of heterologous proteins.

Morphology and lipid accumulation
Depending on nutritional and environmental conditions, Y. lipolytica grows in two distinct cellular forms (mycelial and yeast-like) [2,3,20]. Morphological analysis of the Y. lipolytica recombinant strain growing in the different media revealed that in CM medium the strain grew as a mixture of yeast-like and short mycelial cells; yeast cells were predominant (about 80%) and pseudohyphae represented only 20%. In SM1 as well as SM4 media, more than 90% of the cells remained in the yeast form, whereas higher amounts of mycelium were obtained in SM2 medium. Conditions that induce the dimorphic transition are extremely variable. These include the presence of specific compounds in the culture media and pH levels; neutral or alkaline pHs induce filamentation while acid pH favors the yeast form [1]. It has been suggested that in Y.
Morphology and lipid accumulation

Depending on nutritional and environmental conditions, Y. lipolytica grows in two distinct cellular forms (mycelial and yeast-like) [2,3,20]. Morphological analysis of the Y. lipolytica recombinant strain growing in the different media revealed that in CM medium the strain grew as a mixture of yeast-like and short mycelial cells; yeast cells were predominant (about 80%) and pseudohyphae represented only 20%. In SM1 as well as SM4 media, more than 90% of the cells remained in the yeast form. However, higher amounts of mycelium were obtained in SM2 medium. Conditions that induce the dimorphic transition are extremely variable. These include the presence of specific compounds in the culture media and pH levels; neutral or alkaline pH induces filamentation, while acidic pH favors the yeast form [1]. It has been suggested that in Y. lipolytica, pH affects the formation of hyphae indirectly by modulating the availability and/or utilization of transportable sources of nitrogen. Strains without a functional alkaline extracellular proteinase (AEP), an enzyme providing transportable organic sources of carbon and nitrogen to cells growing on proteinaceous substrates (which is the case for our strain), did not respond to changes in pH in complex medium [20]. As shown in Figures 1 and 3, there is a relation between hIFN α2b production and cell morphology: hIFN α2b expression was observed only when the yeast form was predominant. This corroborates the data reported by Madzak et al. [8], who found that higher amounts of laccase activity expressed in Y. lipolytica were obtained in media in which mycelium formation was lacking. In addition, protein secretion in Y. lipolytica could be linked to dimorphism, since several genes described in the secretion pathway are also implicated in the morphological transition [8]. Besides morphology, cell size was notably affected by the amount of lipid bodies accumulated. As shown in Figure 3, lipid accumulation by the recombinant strain is medium dependent. Mlikova et al. [21] showed that structural changes on the surface of Y. lipolytica cells grown on oleic acid result in the formation of protrusions that enable the yeast to take up hydrophobic compounds from the medium. The uptake of oleic acid was very efficient in the SM4 medium compared to the other media; large obese cells with discernible lipid bodies appeared in the cells grown in this medium. In this case, oleic acid was not only used as an inducer of hIFN α2b production but was also stored in lipid bodies. It is worth mentioning that none of the media were depleted of nitrogen. Lipid storage in yeasts could be prevented using genetic engineering tools. Four yeast genes, ARE1, ARE2, DGA1 and LRO1, were found to contribute to triacylglycerol synthesis and lipid storage in Saccharomyces cerevisiae [22]. Sandager et al. [22] conducted a series of gene disruptions in Saccharomyces cerevisiae and showed that the quadruple-disrupted strain lost the capacity to accumulate lipids. They also demonstrated that neither lipid storage nor lipid bodies were essential for growth. Further research, involving other strains and constructs, is needed to provide deeper insights into the metabolic pathways of oleic acid in Y. lipolytica used for the production of heterologous proteins under the control of the POX2 promoter.

Adaptation of SM4 medium for optimal cultivation of Y. lipolytica

In the much-cited SM2 medium of Gordillo et al. [16], no oligo-elements except Fe3+ were used. This suggests that Fe3+ plays a key role in Y. lipolytica growth and metabolism. Furthermore, we showed that only the addition of YE, which contains trace elements and vitamins at concentrations 2- to 2000-fold higher than other organic sources, significantly increased hIFN α2b production [23]. To further optimize the SM4 medium composition, we assessed the effect of the following factors on hIFN α2b expression: FeCl3, a vitamins solution enriched with myo-inositol and thiamin, and a trace-elements solution. Nitrogen sources, namely ammonium sulfate and glutamate, were also evaluated. An L8 Plackett-Burman experimental matrix was applied to determine the optimum conditions for hIFN α2b production by the Y. lipolytica recombinant strain. The layout of the experiments carried out is given in Table 3. Growth rates were comparable for all media conditions tested (data not shown).
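For readers unfamiliar with this type of screening design, the sketch below shows a generic two-level L8 Plackett-Burman matrix and how a main effect could be estimated for each factor; the actual run layout and factor levels are those of Table 3, and the response values used here are hypothetical.

```python
import numpy as np

# Generic L8 Plackett-Burman design: rows are runs, columns are factors,
# +1 = high level, -1 = low level (cyclic construction; Table 3 may assign
# factors to columns differently).
L8 = np.array([
    [ 1,  1,  1, -1,  1, -1, -1],
    [-1,  1,  1,  1, -1,  1, -1],
    [-1, -1,  1,  1,  1, -1,  1],
    [ 1, -1, -1,  1,  1,  1, -1],
    [-1,  1, -1, -1,  1,  1,  1],
    [ 1, -1,  1, -1, -1,  1,  1],
    [ 1,  1, -1,  1, -1, -1,  1],
    [-1, -1, -1, -1, -1, -1, -1],
])

def main_effects(design, responses):
    """Main effect of each factor: mean response at the high level minus
    mean response at the low level."""
    responses = np.asarray(responses, dtype=float)
    return np.array([responses[col == 1].mean() - responses[col == -1].mean()
                     for col in design.T])

# Hypothetical hIFN alpha-2b yields for the eight runs (arbitrary units)
y = [12, 30, 8, 25, 14, 28, 10, 22]
print(main_effects(L8, y))
```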
As shown in Figure 4, among the factors studied, ammonium sulfate had the strongest negative influence on hIFN α2b production. On the other hand, glutamate seemed to have a positive impact on production. Glutamate is an essential precursor of protein and nucleotide synthesis and an important substrate for energy metabolism; it has also been reported as a stimulating compound for aerobic glycolysis and acetyl-CoA carboxylase activation [24]. Addition of thiamin, a cofactor of the pyruvate dehydrogenase complex and of α-ketoglutarate dehydrogenase [25,26], and of myo-inositol as well as the PTM1 solution (the trace-elements solution described in Table 2 for the SM4 medium) resulted in an increase of the hIFN α2b yield. This suggests that vitamins and some trace elements are involved in the activity of basic enzymes responsible for oleic acid uptake and/or metabolism by Y. lipolytica cells. Our results are in line with those reported by Boze et al. [23], who found that supplementation of a basal medium with seven vitamins and two trace elements enhanced growth and recombinant protein production in Pichia pastoris. On the other hand, our study revealed that FeCl3 has a positive effect on protein expression. Indeed, DNA microarray analysis of S. cerevisiae cells grown under iron excess or iron starvation reveals a decrease in mRNA levels for many metabolic pathway proteins, such as those involved in mitochondrial respiration and in heme and biotin biosynthesis, in iron-depleted cells [27,28]. Transcripts coding for iron-sulfur proteins involved in the synthesis of leucine and glutamate are also diminished. The effect of olive oil, methyl oleate and oleic acid as inducer sources of hIFN α2b production in SM4 medium was evaluated. Contrary to other strains, in which production was enhanced by olive oil and strongly inhibited by oleic acid [12], the production pattern here was very similar for all the inducers tested; a slight improvement was obtained with oleic acid (data not shown). The highest hIFN α2b yield was achieved in the minimal medium detailed in experiment N°3 (Table 3). Such a medium gave a hIFN α2b level around 2-fold higher than SM4. This medium, composed of SM4 as a basal medium supplemented with 10 mg/l FeCl3, 1 g/l glutamate, 5 ml/l PTM1 solution and the vitamin mixture, was called GNY medium.

Figure 4. Improvement of SM4 medium composition using an L8 Plackett-Burman design matrix. Shake-flask cultures were carried out with 25 ml of SM4 culture medium at 28°C and 180 rpm. FeCl3, the vitamins solution enriched with myo-inositol and thiamin, the trace-elements solution, ammonium sulfate and glutamate were used as described in Table 3. After Western blot analysis, the effect of each component on hIFN α2b production was analyzed by Modde 6.0 software. Results are mean values of three independent experiments.

Effect of PTM1 solution

Many types of yeast require one or more micronutrients for proper propagation [11,29]. The effect of these elements on Y. lipolytica growth has not been widely investigated. In this study, the influence of the PTM1 solution on cell growth and hIFN α2b production was assessed; two concentrations, 2 ml/l and 5 ml/l, were tested and added to the basal salt medium SM4. In these experiments the recombinant strain was cultivated in a 5-l bioreactor in batch mode, using oleic acid as carbon source and inducer. Growth rates were similar in the two culture conditions; maximal growth was reached at 24 h of culture, followed by an immediate severe decrease of biomass, and no stationary phase was observed (Figure 5Ia). This phenomenon could be explained by an excessive accumulation of lipids in the cells, promoting cell lysis under high agitation. By contrast, the rate of hIFN α2b production was significantly enhanced at the highest PTM1 concentration.
Figure 5Ib shows that production was reduced by up to 80% when PTM1 was supplemented at 2 ml/l compared with the medium containing 5 ml/l of PTM1 solution, especially after 48 h of growth, which corresponds to the beginning of the decline phase and the release of proteases due to cell lysis. To investigate whether the differences in hIFN α2b expression could be explained by protease degradation, culture supernatants were subjected to zymographic analysis. Several substrates, such as casein, gelatin and bovine serum albumin, were tested; a 28 kDa protease with casein specificity was identified at pH 5 in the two culture conditions. The colorimetric method with azocasein, a chromogenic derivative of casein, showed no proteolytic activity at the start of the cultures; proteases appeared at 20 h and their concentrations increased with increasing cell concentration, and towards the end of the cultures a drastic increase of the target protein level was observed. Therefore, the PTM1 solution appears to protect hIFN α2b against this protease (data not shown). The effect of inhibitor supplementation on protease activity was studied; zymographies of the samples showed no activity with pepstatin at 10 μM. Surprisingly, complete inhibition of the proteolytic activity occurred upon addition of PTM1 at 5 ml/l to the enzymatic reaction (Figure 5 II). This result explains the low degradation in SM4 medium compared with the data obtained in the other media (Figure 1c). The effect observed for the PTM1 solution has not been reported previously and could offer a way to alleviate some proteolysis problems. Proteolytic activity was measured by the colorimetric azocasein method, using the factor combinations given in Table 4, and the experimental data were statistically analyzed by Modde 6.0 software to identify the effect of each factor and interaction studied. As shown in Figure 6, among all the trace elements tested, FeCl3 and MnSO4 had the strongest inhibitory effect on the proteases, followed by KI, CuSO4 and Na2MoO4. Other elements, such as CoCl2 and ZnCl2, had a less pronounced effect, whereas H3BO3 acted in the opposite way. Furthermore, FeCl3 without MnSO4, or MnSO4 without FeCl3, showed an inhibition of proteolysis. Similar data were observed for the H3BO3×ZnCl2, CoCl2×FeCl3, CoCl2×KI, ZnCl2×MnSO4, FeCl3×KI, KI×MnSO4, KI×CuSO4, MnSO4×CuSO4 and FeCl3×Na2MoO4 interactions. Other interactions showed a weaker inhibitory effect (Figure 6). These data indicate the positive effect of some mineral ions on the improvement of heterologous protein expression by the non-conventional yeast Y. lipolytica. A literature review shows that these ions can have an activating or inhibiting role in protein production [11,29]. Zhang et al. [29] reported that bivalent cations such as Mn2+ and Mg2+ increased the production of extrasucrase by Escherichia coli, whereas Zn2+, Fe2+ and Cu2+ had an opposite effect. Nevertheless, the enhancement of protein production via proteolytic inhibition, as described in this study, has not been reported previously.
Bioreactor culture in GNY medium

The effect of the newly formulated GNY medium on hIFN α2b production by the Y. lipolytica recombinant strain, under the control of the oleic acid-inducible POX2 promoter, was investigated in a batch bioreactor culture under the controlled conditions described in the Materials and Methods section. A two-phase cultivation was conducted: the first phase was cell growth on glucose as carbon source, and the second was the induction of hIFN α2b biosynthesis with oleic acid. The kinetics of biomass and hIFN α2b expression were monitored over 120 h of culture on 2% oleic acid. After 24 h of culture on 20 g/l glucose, the biomass reached 20 g DW/l, and 35 g/l once oleic acid was used as the inducer. Culture supernatants were analyzed by Western blot under reducing conditions; hIFN α2b production was initiated after 2 h of induction. The maximum yield of hIFN α2b was 50 mg/l, with a biological activity of 2.1 × 10^7 IU/mg. Compared with the shake-flask procedure, cultivation in the bioreactor did not significantly enhance the biomass yield; however, it generated a drastically higher expression level. An over 416-fold increase in hIFN α2b concentration and a 2-fold enhancement of the biological activity were obtained. This is the first study describing an increase of this magnitude; only an 8- to 20-fold increase in heterologous protein expression has been reported in Y. lipolytica when scaling up cultures from shake flask to bioreactor [5,8]. However, the increase in the biological activity of hIFN α2b was not proportional to the increase in the production level. The biological activity of hIFN α2b can be influenced by several factors, such as post-translational modifications; any factor that interferes with or favors the binding of this cytokine to its receptor impacts its bioactivity [30]. Therefore, further structural studies are needed to give deeper insights into this result.

Conclusions

Conventional methods as well as statistical experimental designs were used in this study to select and optimize a minimal defined medium for heterologous protein expression in Y. lipolytica. The optimized GNY medium is suitable for the production of hIFN α2b by Y. lipolytica JMY1852p, with the advantage that no complex nitrogen sources of undefined composition are required. The nutritional composition of the culture medium, especially its trace elements, plays an important role in the improvement of protein production. GNY appears as an attractive medium for heterologous production in Y. lipolytica. Besides hIFN α2b production, the expression of other therapeutic proteins by this host cultivated in GNY medium is currently under investigation. Promising results could be expected from a more complete optimization strategy of the growth and induction conditions in the bioreactor.

Microorganism

The auxotrophic, recombinant strain of Y. lipolytica Po1d (JMY 1852) with the genotype [MATA, leu2-270, ura3-302, xpr2-322, pox2-preLip2-IFNop-URA3], producing heterologous hIFN α2b, was constructed previously [13]. The synthetic gene, optimized for its codon bias, was expressed under the control of the POX2 promoter, which is induced by oleic acid. In this study, a prototrophic derivative was obtained by transformation with a SalI fragment carrying the LEU2 gene. This prototrophic strain was called JMY 1852p and was used throughout the present investigation.

Chemicals

Chemicals were purchased from Sigma-Aldrich (St. Louis, MO, USA), except oleic acid, which was provided by Prolab (Quebec, Canada).
Thiamin was from Merck (Darmstadt, Germany) and myo-inositol was from Calbiochem (La Jolla, CA, USA).

Media

The recombinant strain was isolated on YPD-agar (Yeast Peptone Dextrose) medium (20 g/l glucose; 10 g/l yeast extract; 10 g/l peptone and 20 g/l agar) and was grown in the complex rich medium Y1T2D1O2 or in defined media. The Y1T2D1O2 medium, consisting of 10 g/l yeast extract, 20 g/l bactotryptone, 20 g/l glucose and 2% (w/v) oleic acid, was buffered with 50 mM sodium phosphate buffer, pH 6.8. The compositions of the four mineral media used in this study are described in Table 2. Media pHs were adjusted to the required values with NaOH (5 N) or ammonia prior to sterilization. The trace-elements and vitamins solutions, sterilized by filtration, were added to the culture media as described in the text. Oleic acid was added to these media to a final concentration of 20 g/l. The stock solution (20% oleic acid, 0.5% Tween 20) was subjected to sonication for 2 min.

Culture conditions

For all experiments, pre-inocula were grown on YPD medium. Cells in mid-exponential growth (16 h at 28°C and 180 rpm) were centrifuged, washed twice with 50 mM phosphate buffer, pH 6.8, and used to inoculate the culture at an initial optical density at 600 nm (OD600) of 0.4. All cultures were performed at least in duplicate. Shake-flask cultures were carried out in 250 ml baffled Erlenmeyer flasks with 25 ml of culture medium and incubated at 28°C and 180 rpm. Samples were taken at various time intervals to monitor cell growth and the hIFN α2b level. For purification purposes, cultures were dispensed in 250 ml volumes into 2-l baffled shake flasks. To study the effect of the organic nitrogen source, media were enriched with either 10 g/l of tryptone, 5 g/l of yeast extract or 5 g/l of casamino acids. Bioreactor cultures were carried out in a 5-l bioreactor (Infors, Switzerland) with a working volume of 2 l. After sterilization at 121°C for 30 min, the medium was inoculated with 200 ml of pre-culture at an initial OD600 of 0.3. The culture was performed at 28°C, with an aeration rate of 1.5 vvm and an agitation speed of 600 rpm. Samples for the determination of production and cell dry weight were withdrawn at 2 h intervals. The oleic acid concentration was estimated by the colorimetric method based on a sulfophospho-vanillin reaction described by Frings and Dunn (1970) [31]. Biomass was monitored either by measuring optical density (OD600) or by dry weight (DW) determination. One unit of OD was found to be equivalent to 0.3 g/l DW. When cells were grown on media containing oleic acid, samples were extracted with 2/5 (v/v) of a propanol/butanol solution prior to optical density determination.

Determination of protease activity

Colorimetric method

Protease activity was determined by the colorimetric method using azocasein as a substrate. Culture supernatant (10 μl) was mixed with 10 μl of a 2.5% azocasein solution and 70 μl of 0.1 M phosphate-citrate buffer, pH 5, then incubated at 28°C for 1 h. The reaction was stopped by addition of 350 μl of a 10% TCA (trichloroacetic acid) solution. Samples were centrifuged at 13,000 rpm for 10 min, and the absorbance of the supernatant was read at 440 nm against the blank. One unit of protease activity was defined as the amount of enzyme required for an increase in absorbance of 0.01 per hour.
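The activity unit defined above, and the OD-to-dry-weight equivalence, translate into straightforward conversions; a small sketch follows (the per-millilitre normalisation of the activity is an assumption, since the assay uses 10 μl of supernatant):

```python
def protease_units_per_ml(delta_a440, incubation_h=1.0, sample_volume_ml=0.010):
    """Protease activity, where 1 U corresponds to an absorbance increase of
    0.01 per hour; the result is expressed per ml of culture supernatant
    (per-ml normalisation is an assumption)."""
    units_in_assay = (delta_a440 / incubation_h) / 0.01
    return units_in_assay / sample_volume_ml

def dry_weight_g_per_l(od600):
    """Biomass estimate using the reported equivalence 1 OD600 unit = 0.3 g/l DW."""
    return 0.3 * od600

print(protease_units_per_ml(delta_a440=0.045))  # hypothetical absorbance increase
print(dry_weight_g_per_l(od600=25.0))           # 7.5 g/l dry weight
```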
Zymographies

For zymographic analysis, 15% separating gels were mixed with 5 mg casein; samples were treated with three-fold concentrated sample buffer. Gels were run at a constant current of 100 mV. Afterwards the gels were rinsed three times with 2.5% (v/v) Triton X-100 and incubated overnight in 50 mM acetate buffer, pH 5, with or without inhibitors or metals (CaCl2 5 mM, ZnCl2 1 μM, PTM1) [32]. Gels were stained with Coomassie blue and then destained until the transparent zones caused by proteolytic digestion of the protein substrate in the gel were visible against a blue background.

SDS-PAGE and Western blot analysis

Sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) was performed in 15% polyacrylamide gels under denaturing conditions as described by Laemmli [33]. Proteins secreted into the medium and collected after 72 h of culture were concentrated by Microcon devices (Millipore, Bedford, MA, USA). After separation, proteins were stained with Coomassie brilliant blue R-250. The prestained broad-range protein marker RPN756 (GE Healthcare, Uppsala, Sweden) was used for the estimation of protein molecular sizes. For Western blot analysis, 10 μl of concentrated supernatant was separated by SDS-PAGE and transferred onto nitrocellulose membranes (Millipore, Bedford, MA, USA) by electroblotting. The membranes were blocked with PBS containing 5% skimmed milk and 0.1% Tween 20 overnight at 4°C. Membranes were incubated for 1 hour with an anti-human IFN-α monoclonal antibody produced in-house and diluted 1/500, followed by incubation with a goat anti-mouse IgG peroxidase-conjugated monoclonal antibody (Sigma, St Louis, USA) diluted 1/5000, or with a polyclonal antibody, also produced in-house, diluted 1/200. The immunoreactive protein was visualized by ECL (GE Healthcare, Uppsala, Sweden). Western blots were scanned and analyzed with the ImageJ software (ImageJ 1.42) to determine the area of each band. hIFN α2b was quantified using a calibration curve established with purified hIFN α2b produced in Pichia pastoris.
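The calibration-curve quantification mentioned above is essentially a linear fit of band area against the concentration of the purified standard; a minimal sketch with hypothetical calibration points:

```python
import numpy as np

def fit_calibration(standard_areas, standard_conc_mg_per_l):
    """Linear calibration (concentration vs band area) from the purified
    hIFN alpha-2b standards."""
    slope, intercept = np.polyfit(standard_areas, standard_conc_mg_per_l, 1)
    return slope, intercept

def quantify(sample_area, slope, intercept):
    """Estimate the hIFN alpha-2b concentration (mg/l) of a sample band."""
    return slope * sample_area + intercept

# Hypothetical calibration points (ImageJ band area vs mg/l of standard)
areas = [500, 1500, 3000, 6000]
concs = [5.0, 15.0, 30.0, 60.0]
m, b = fit_calibration(areas, concs)
print(quantify(5000, m, b))
```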
Fluorescence microscopy

To visualize lipid bodies, LipidTOX™ Green neutral lipid stain (2.5 mg/ml in ethanol, Invitrogen) was added to a cell suspension with an OD600 of 5. Microscopy was performed with an AXIO Imager.M1 fluorescence microscope (Zeiss, Le Pecq, France) at 495 nm with a 100× oil immersion objective. AxioVision Rel. 4.6 software was used for recording the images.

Biological activity

The biological activity of the hIFN α2b preparation was determined by a gene reporter assay as described by Meager [34]. Briefly, a HEK (human embryonic kidney) 293P cell line stably transfected with an IFN-inducible promoter sequence (ISRE, interferon-stimulated response element) linked to the SEAP (secreted alkaline phosphatase) gene increases expression of the reporter gene product in direct relation to the dose of human IFNα to which it is exposed; the readout is a measure of this product's enzymatic action. The hIFN α2b reference standard (code: 95/566) was kindly provided by Dr Meager (NIBSC, United Kingdom).

Experimental design

The software Modde 6.0 (Umetrics, Sweden) was used in this study for the design of the experiments and the statistical analysis of the data. This approach was previously applied to other optimization studies in our laboratory [35]. To identify the significant factors that affect the response (hIFN α2b yield, proteolytic activity), normalized coefficients are calculated by the software. To make the coefficients comparable when responses have different ranges, they are normalized, that is, divided by the standard deviation of their respective response. The overview plot displays the coefficient values for the response as bar graphs and therefore shows how the factors affect the response. Coefficient values higher than zero indicate that the factor/interaction studied has a positive effect on the response; the higher the coefficient, the more important the contribution of the factor/interaction. Conversely, negative values show that the response is impaired by the factor/interaction studied.
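The normalisation step described for the Modde overview plot can be expressed compactly; the sketch below divides each raw coefficient by the standard deviation of its response, using hypothetical values:

```python
import numpy as np

def normalized_coefficients(raw_coefficients, response_values):
    """Divide each factor's raw coefficient by the standard deviation of the
    response so that coefficients for responses with different ranges become
    comparable on one overview plot."""
    scale = np.std(response_values, ddof=1)
    return {factor: coef / scale for factor, coef in raw_coefficients.items()}

# Hypothetical raw coefficients for the hIFN alpha-2b yield response
coefs = {"glutamate": 4.2, "ammonium sulfate": -5.1, "FeCl3": 2.3, "PTM1": 1.8}
yields = [12, 30, 8, 25, 14, 28, 10, 22]   # hypothetical responses of the 8 runs
print(normalized_coefficients(coefs, yields))
```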
7,814.8
2011-05-20T00:00:00.000
[ "Biology" ]
Prediction of re-oxidation behaviour of ultra-low carbon steel by different slag series

A kinetic model was developed using FactSage Macro Processing to simulate the re-oxidation of ultra-low carbon steel by different oxidising slags. The calculated results show good agreement with experimental laboratory thermal simulation data. Therefore, the model can be used to predict the behaviour of the slag-metal-inclusion system during the re-oxidation of liquid steel, and it can provide prediction and guidance for an accurate secondary oxidation control process. During the slag re-oxidation process, when the oxygen in the steel is supersaturated and the slag oxidisability is low, stick-like and dendritic Al2O3 inclusions easily form in the steel. As the (FeO) content in the slag increases, oxygen transfer from slag to steel becomes evident and the inclusion size increases, showing cluster and spherical shapes. In addition, supersaturated oxygen in the steel easily forms unstable Al2O3-TiOx inclusions with [Ti]. As the composition of the liquid steel becomes uniform, the Al2O3-TiOx inclusions decompose and disappear, forming stable Al2O3 and TiO2 inclusions. The number of inclusions can be reduced by increasing the basicity and the CaO-to-Al2O3 ratio of the initial slag.

With the development of computer and big data technology, modelling has become a low-cost and efficient way to study complex metallurgical reaction processes. Complex metallurgical reactions that previously could not be explained or analyzed can now be expressed with big data and computational models and then analyzed and discussed. Computational models have proven more practical and accurate than traditional metallurgical theory in predicting and analyzing complex metallurgical reaction processes. Model prediction has greatly improved the understanding of complex metallurgical reactions and is of great practical significance for researchers and operators seeking to optimize process parameters and improve production efficiency. Recently, some scholars [13][14][15][16][17] have proposed kinetic models, based on thermodynamic calculations and fluid dynamics equations, that can successfully predict changes in slag, liquid steel, and inclusions at the slag-metal interface. However, these models also have problems, such as decreased prediction accuracy and incomplete coverage of actual production requirements, that accompany changes in thermodynamic data and complex slag systems [18][19][20]. This study uses ultra-low carbon steel as the research object and develops a kinetic model based on laboratory thermal simulation data combined with FactSage 7.2 Macro Processing software. The model can more intuitively describe the re-oxidation of liquid steel by slags with different oxidisabilities and more accurately predict the evolution of slag, liquid steel, and inclusions during the slag-metal interface reaction. It can also help formulate more accurate and reasonable control technology for liquid steel re-oxidation.

Experimental methods and procedures

Materials. Ultra-low carbon steel was produced at a steel mill. When the steel left the RH station, five samples of approximately 100 g each were taken from the same ladle. The chemical composition of the samples is shown in Table 1. In this experiment, CaO, SiO2, FeO, and MnO were used to simulate the main components of the ladle top slag. Five groups of slag were prepared separately.
Their mass fractions are shown in Table 2. These slag systems were prepared and mixed, then placed in a corundum crucible and calcined at 1000 °C for pre-melting before use in the experiments.

Experimental process. In this study, the experimental slags I-IV had a constant basicity of 3.0 and an MnO content of 1%, while the slag series IV and V had the same oxidisability but different basicities. The re-oxidation behaviour of the slag towards the liquid steel and the evolution of slag, liquid steel, and inclusions during the slag-metal reaction were studied by changing the slag oxidisability (a single-variable change in FeO content) and the slag basicity. These changes provide thermodynamic experimental data for the establishment of a kinetic model. A schematic diagram of the experimental device is shown in Fig. 1. A silicon-molybdenum furnace was used to provide a stable heat source for the whole experiment. Ar gas was injected at the bottom of the furnace throughout the experiment. The top of the furnace was sealed with a lid, and Ar gas was introduced into the whole furnace.

Sample preparation. The steel samples used for test analysis are shown in Fig. 2. The central part of each steel sample was processed into a ϕ5 × 40 mm steel rod. The total oxygen (T.[O]) content in the steel after the reaction was measured with an EMGA-620 oxygen-nitrogen analyser, and the remaining elements were analysed by atomic emission spectrometry. Steel cakes of ϕ40 × 15 mm were cut from the bottom of the reacted steel samples of each group, and the inclusions were analysed using an ASPEX PSEM Explore scanning electron microscope, which can detect the number, size, and composition of inclusions in a specified area in addition to performing conventional electron microscope functions. The composition, size, and area information for each inclusion was analysed using the AZtecSteel analysis software. The slag composition at each stage of the experiment was obtained by X-ray fluorescence spectrometry. The detection area for all steel sample inclusions in this study was approximately 100 mm2. The typical characteristic inclusions in the scanning area were statistically analysed, and inclusions of the same type, shape, and size were not counted repeatedly.

Results and Discussion

Kinetic prediction of the slag-steel-inclusion reaction during the slag re-oxidation process. The re-oxidation of liquid steel by the ladle slag mainly involves the chemical reactions and processes shown in Fig. 3. During the oxidation reaction, R1 and R3 determine the rate and direction of the slag-metal interface reaction. When the diffusion of [O] in the steel is the limiting step, oxygen diffuses from the slag into the liquid steel. At this time, the oxidation reaction R4 between oxygen and deoxidising elements such as [Al]s and [Ti] occurs inside the liquid steel. When the mass transfer of (SiO2), (FeO) and (MnO) in the slag phase is the limiting step, the oxidation reaction, that is, reaction R2, occurs at the slag-metal interface. When the chemical potentials of oxygen in the slag and metal phases are close, the oxidation reaction is simultaneously restricted by the mass transfer of [O] in the steel and of (SiO2), (FeO) and (MnO) in the slag; the slag-metal interface is then at equilibrium, and no obvious re-oxidation occurs. The accuracy and validity of the assumed conditions were checked and corrected against the test data from the steel and slag samples.
To facilitate the establishment and calculation of the model, the thermodynamic experiment and the kinetic model require the following assumptions: (1) In the thermal simulation experiment, when the slag is added to the steel to participate in the reaction, the slag reaction step size is Δt = 5% of m_slag [21]. (2) The slag-metal interface reaction can reach an equilibrium state. Table 3 lists the sample numbers for the different slags and reaction times (I, I-5, I-10, I-15, I-20, I-25, I-30, and correspondingly for slag series II, III and IV). In the FactSage software, the Macro Processing module was invoked to create an equilibrium reaction program. All input data for the model are based on the data from this thermal simulation experiment, and reference data are cited where appropriate. The output data were organised into data lists or Microsoft Excel tables, which were processed and plotted into the corresponding charts at a later stage [22]. All the major chemical reactions in Fig. 3 were implemented as small programs in the Macro Processing code. All thermodynamic reaction equilibria were calculated using the FactPS, FToxid, and FTmisc databases in the FactSage software under adiabatic conditions [23].

Model validation. To verify the accuracy of the model, the data adopted in this study were mainly based on the thermal simulation experiment, and a few references were cited. The basic conditions and data used in the reaction of slag I with ultra-low carbon steel were randomly selected as the input data for the kinetic model. The actual experimental values were compared with the output data calculated by the model, and the results are shown in Fig. 4. The inner diameter of the zirconia crucible used in the experiment was 60 mm and its height was 80 mm; thus, the reaction area of the interface is 0.0028 m2. The densities of slag and steel used in the calculation were 2500 kg/m3 and 7200 kg/m3, respectively. The mass transfer coefficient of the steel in this kinetic model was estimated by fitting the measured concentration changes of [Al]s, [Ti], and [Si] in the steel, with repeated modifications carried out to obtain the best fitting parameters. Figure 4 shows that when the estimated mass transfer coefficient of the steel is 1.25 × 10^-5 m3/s, the calculated results of this kinetic model show good consistency with the experimental values under the same conditions, which indicates that this mass transfer coefficient value meets the calculation needs of the existing model. The total oxygen (T.[O]) content in steel includes the dissolved oxygen in the liquid steel and the oxygen in the inclusions. When the slag oxidises the liquid steel, the chemical potentials of the dissolved oxygen in the steel and of the oxides in the slag tend to gradually balance, and the flotation removal rate of the inclusions also has a significant impact on the oxygen held in the inclusions and hence on the T.[O] content in the steel. Therefore, it can be considered that the kinetic model can meet actual production needs and provide prediction and guidance for the formulation of an accurate re-oxidation control process.
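Two of the fixed quantities used in the model follow from simple geometry and from assumption (1); a short check in Python (the 30 g slag mass corresponds to the amount used in the model calculations described below):

```python
import math

def interface_area_m2(inner_diameter_m):
    """Slag-metal reaction interface area, taken as the crucible cross-section."""
    return math.pi * (inner_diameter_m / 2.0) ** 2

def slag_step_mass_kg(total_slag_kg, fraction=0.05):
    """Mass of slag equilibrated per calculation step (assumption (1): 5% of the slag mass)."""
    return fraction * total_slag_kg

print(f"{interface_area_m2(0.060):.4f} m^2")  # ~0.0028 m^2 for the 60 mm crucible
print(slag_step_mass_kg(0.030))               # 1.5 g per step for a 30 g slag addition
```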
Re-oxidation behaviour of ultra-low carbon steel by slags of different oxidisability. To predict the re-oxidation behaviour of ultra-low carbon steel with slags of differing oxidisability, the oxidation process was simulated by FactSage Macro Processing. For the materials used in the model calculations, the steel was 100 g and each of the five groups of oxidising slags was 30 g. The specific compositions of the steel and slag systems are shown in Table 1 and Table 2, respectively. The calculation temperature was 1853 K. When the slag interacts with ultra-low carbon steel, reactions (1-7) may occur at the slag-metal interface and inside the liquid steel. When the (FeO) content in the slag is low (Fig. 6a), the model calculation shows that the oxygen that generates the inclusions mainly comes from the self-dissociation reaction of (SiO2) in the slag, and the [Si] content in the steel increases significantly. In addition, because the (FeO) content in the slag is low, its activity is also low; therefore, under the influence of the slag interfacial tension, there is no large-scale self-dissociation reaction to transfer oxygen into the steel [20]. It can thus be inferred that the oxidising effect of the slag on the liquid steel is mainly governed by (SiO2) rather than by the (FeO) in the slag. At this point, reactions (1) and (3) occur at the slag-metal interface. As the (FeO) content in the slag increases to 2%, the total number of inclusions in the steel increases slightly, as shown in Fig. 6b. With a further increase in slag oxidisability, the [O] content in the steel increases sharply, the oxygen transfer from slag to liquid steel becomes obvious, and the total inclusions in the steel increase correspondingly. Figure 6c indicates that when the reaction has proceeded for approximately 25 minutes, TiO2 inclusions appear in the steel and Al2O3 inclusions decrease. During the whole reaction process, the oxidation of [Al]s in the steel occurred before that of [Ti] in the early stage of the reaction, and a large number of Al2O3 inclusions were formed; this is mainly because the affinity of aluminium for oxygen is much greater than that of titanium. As the reaction proceeded, the [Al]s content in the steel decreased in the later stages, and some oxygen immediately oxidised [Ti] to form TiO2 inclusions. According to the test results from the steel samples after the reaction, there was no significant change in the [Si] content of the steel. Therefore, it can be inferred that the oxygen in the steel mainly comes from the decomposition reaction of (FeO) in the slag; moreover, this decomposition reaction inhibits the self-dissociation reaction of (SiO2) in the slag. The oxidisability of the slag is thus mainly reflected in its (FeO) content and is not directly related to its (SiO2) content. At this point, reaction (2) is among the main reactions occurring. From Fig. 6c, it can be seen that the total inclusions decrease throughout the whole reaction process. This is mainly because the Al2O3 inclusions generated in the early stage of the reaction pass into the slag, which improves the w(CaO)/w(Al2O3) of the slag, a condition conducive to improving the slag's ability to absorb inclusions. Figures 6d,e show the re-oxidation behaviour of liquid steel by slags with the same oxidisability but different basicities. Comparing the calculation results in Fig. 6e,d, it can be seen that the slag basicity has a great influence on the inclusions in the steel. According to Fig. 6e, TiO2 and Ti2O3 inclusions began to appear in the steel at approximately 25 min, in the late stage of the reaction. The analysis shows that this is due to the increase in basicity. Meanwhile, the Al2O3 formed in the early stage of the reaction also increases the w(CaO)/w(Al2O3) of the slag.
These two aspects are conducive to inhibiting the premature oxidation of [Ti] in the steel and avoiding the premature formation of Ti inclusions. An Al2O3-TiOx solid solution phase appears in the steel at a later stage of the reaction. This is because the [Al]s content in the steel decreases in the later reaction stage, so [Ti] may combine with the high concentration of oxygen, forming a solid solution phase with TiOx as the core and Al2O3 as the outer layer, which is consistent with the results in ref. 24. During the solidification of the liquid steel, the Al2O3-TiOx solid solution phase has no time to float up and be removed, so it remains in the steel as inclusions. Such inclusions are unstable and can be removed by raising the temperature of the liquid steel and increasing the duration of ladle bottom blowing. The main reactions (2), (3), (4), (5), (6), and (7) occur at the slag-metal interface and inside the liquid steel. A comparison of Fig. 6d,e shows that increasing the slag basicity reduces the total inclusions in the steel and improves the purity of the liquid steel. Figure 7 shows the stability phase diagram of Al-Ti-O inclusions formed in steel at 1853 K, calculated using the FactPS, FToxid, and FTmisc databases in FactSage.

Evolution of inclusions in steel during the slag re-oxidation process. The steel samples from all stages of the experiment were examined and analysed by Aspex. To reflect the differences in the evolution of the characteristic inclusions in the steel, three groups of steel samples, corresponding to different slag oxidisabilities and basicities and different time points, were selected for comparative analysis, as shown in Fig. 8. The observations are consistent with the conclusions predicted by the model. The morphology of the inclusions shows that Al2O3-TiOx inclusions generally have a spherical shape with TiOx as the nucleation centre and a size of 4-6 μm, with the edges covered by Al2O3. According to research by Park [26] and Doo [27], when Ti-Fe alloy is added to steel, it leads to locally high [Ti] or [O] concentrations, and Al2O3-TiOx inclusions easily form in the regions of locally high [Ti] concentration. In this study, the [Ti] distribution in the steel after melting with the ultra-low carbon steel was relatively uniform, and there was no region of locally high [Ti] concentration. Therefore, the formation of Al2O3-TiOx inclusions was caused by the highly oxidising slag rather than by local oxidation of [Ti]. After bottom blowing and standing, the composition of the liquid steel became uniform, the Al2O3-TiOx inclusions decomposed and disappeared, and stable Al2O3 and TiO2 inclusions formed. These experimental results are consistent with those predicted by the model.

Conclusions

(1) The kinetic model developed with the Macro Processing module of the FactSage software can predict the re-oxidation behaviour of ultra-low carbon steel by slags with different oxidisabilities. The calculated results of the model show good agreement with the experimental data from the laboratory thermal simulation. This model can predict and guide an accurate secondary oxidation control process for actual production. (2) During the slag re-oxidation process, when w(FeO) = 1%, the oxygen that generates inclusions is mainly derived from the self-dissociation reaction of (SiO2) in the slag. With increasing (FeO) content, the oxygen transfer from slag to steel becomes marked.
The oxygen in the steel then mainly comes from the decomposition reaction of (FeO) in the slag; moreover, the decomposition reaction of (FeO) suppresses the self-dissociation reaction of (SiO2) in the slag. In addition, the oxidation of [Ti] in the steel can be suppressed, and the number of inclusions reduced, by increasing the basicity and the CaO-to-Al2O3 ratio of the initial slag. (3) When the oxygen in the steel is supersaturated and the slag oxidisability is low, stick-like and dendritic Al2O3 inclusions form easily. As the slag oxidisability increases, the size of the inclusions increases and they present cluster and spherical shapes. In addition, the supersaturated oxygen in the steel easily forms unstable Al2O3-TiOx inclusions with [Ti]. As the composition of the liquid steel becomes uniform, the Al2O3-TiOx inclusions decompose and disappear, forming stable Al2O3 and TiO2 inclusions.
4,233.6
2020-06-10T00:00:00.000
[ "Materials Science" ]
Modeling and analyzing the propagation of COVID-19 in Wuhan based on game theory: quarantine or not?

The isolation strategy and the quarantine strategy played a crucial role in the prevention of Corona Virus Disease 2019 in China. This paper establishes a two-layer network model that couples epidemic spreading with the behavior of individuals based on game theory. We calculated the basic reproduction number of the infectious disease and analyzed the existence and stability of the positive equilibrium point of the behavioral dynamic model. Through simulation, we adjusted the behavior parameters to fit the actual data and then analyzed the sensitivity of each parameter of the system. A contradiction between the national strategy and individual behavior was found in the simulation process. The simulation results show that increasing people's awareness can accelerate changes in behavior, and improving the efficiency of working at home can reduce the relative loss of isolation, all of which can reduce the severity of the infectious disease.

Introduction

The recent outbreak of coronavirus disease 2019 in many countries has aroused great attention from governments and researchers [1-3]. Under strong government measures, the spread of COVID-19 in China was controlled. Looking back at the entire outbreak, human behavior played an important role in containing the spread of the epidemic [4,5]. Comparisons between countries show that wearing masks, isolation and other non-pharmaceutical measures are effective in curbing the epidemic [6,7]. There have been many papers on COVID-19 [8]. Oliver N et al. [9] summarized the important role of different types of mobile phone data during a pandemic: such data can be used to study population estimates and mobility information to better understand trends in COVID-19, and they can help determine the effectiveness of the different measures implemented to contain the spread of COVID-19. Studies have shown that isolation is an effective control measure for COVID-19. Chinazzi M et al. [10] used a disease transmission model to evaluate the impact of travel restrictions on the domestic and international transmission of COVID-19; the results show that both domestic and international travel restrictions delayed the COVID-19 pandemic. Kraemer M U G et al. [11] used mobility data from Wuhan and case data to elucidate the role of case importation in the transmission of COVID-19 and to assess the impact of control measures; the results show that the control measures implemented in China substantially mitigated the spread of COVID-19. Giordano G et al. [12] proposed a SIDARTHE model to predict the course of the epidemic and help develop effective control strategies in Italy; the results suggest combining social distancing with extensive testing and contact tracing to contain the COVID-19 pandemic. These studies used mobile phone data, travel data, and case data to assess the effectiveness of total lockdowns of regions and countries during the COVID-19 epidemic. In addition to travel restrictions, isolation, face masks [6], and eye protection are also key measures to prevent the transmission of COVID-19 [13]. Muhammad Altaf Khan et al.
[14] formulated a model for the dynamics of COVID-19 with quarantine and isolation, in which isolation is one of the states (Q). A novel model in line with the current epidemic process and control measures was proposed in [15], in which quarantined susceptible (Sq) and quarantined suspected individuals (B) are considered; the quarantined suspected compartment consists of exposed infectious individuals identified through contact tracing and individuals with common fever needing clinical medication. Shilei Zhao et al. [16] developed a Susceptible, Un-quarantined infected, Quarantined infected, Confirmed infected (SUQC) model to characterize the dynamics of COVID-19; the results show that the quarantine and control measures are effective in preventing the spread of COVID-19. In these papers, isolation is represented as a state that changes at a fixed rate, which reduces the randomness of human behavior. Shi Zhao et al. [17] developed a compartmental epidemic model, based on the classic SEIR model and on behavioral imitation through a game-theoretical decision-making process, to study and project the dynamics of the COVID-19 outbreak in Wuhan, China; the results show that increasing the sensitivity to take infection prevention actions and the effectiveness of infection prevention measures are likely to mitigate the COVID-19 outbreak in Wuhan. The application of game theory to models coupling epidemic disease and human behavior has been studied for a long time [18]. Zhang et al. [19] proposed an evolutionary epidemic model coupled with human behaviors, in which individuals have three strategies: vaccination, self-protection and laissez-faire. They found that increasing the success rate of self-protection does not necessarily reduce the epidemic size or improve the system payoff. Arefin et al. [20] built a mean-field vaccination game scheme to analyze the effect of an imperfect vaccine on two-strain epidemic spreading, taking into account the vaccination behavior of individuals. Lim et al. [21] experimentally investigated the vaccination choices of people in the context of a nonlinear public goods game. Ning et al. [22] studied the epidemic transmission process using an evolutionary game model on complex networks. In this article, we propose a model that couples human behavior and epidemics, in which human behavior is described by game theory. During the entire COVID-19 epidemic, the WHO collected various case data and organized national strategies, providing assistance for future research on the disease. The domestic epidemic has ended, but the international epidemic continues. Similar epidemics may occur in the future, so the study of defense strategies is of great research significance.

Results

According to the COVID-19 case data, we first fit the model parameters, use the fitted parameters as the baseline values, and then analyze the sensitivity of single and multiple parameters. By analyzing the sensitivity of the parameters and their impact on the epidemic and on human behavior, we give a number of suggestions for defending against COVID-19. Using the baseline parameters, we also replaced the scale-free network with a homogeneously mixed network, simulated the model of this paper, and found that the network structure plays a great role in the defense against epidemics.

Parameter estimation

The Chinese Center for Disease Control and Prevention collected key data during the epidemic, including new confirmed cases, new deaths, new cured and discharged cases, cumulative confirmed cases, etc.
We subtract the cumulative cured cases and the deaths due to illness from the cumulative confirmed cases, and use the result to fit the value of I(t). The total population of Wuhan is more than 10 million, and it is very difficult to construct a network of that size; therefore, we use population proportions to express the dynamics of the epidemic. Based on this, we established a scale-free network with 1,000 nodes. When constructing the scale-free network, there are 3 initial nodes; each newly added node is connected to the existing network by two edges. The average degree of the generated network is 3.99. Θ is then calculated from the generated network. On January 23, Wuhan blocked all traffic. On January 30, the WHO declared the COVID-19 epidemic a public health emergency of international concern. Due to the serious shortage of medical staff and of nucleic acid testing and treatment capacity in the early stage of the epidemic, there were late reporting, under-reporting and false reporting, so there was a gap between the early case reports and the real situation [23]. We select the data from February 2 to April 15 to estimate the parameters of the model; after April 15, the epidemic had been brought under control and there were no more new cases. The incubation period of COVID-19 is 5.1 days on average, it takes approximately 7 days for asymptomatic individuals to recover, and the average hospital stay for an infection is 14 days [6]. Based on this, the values of α, γ1, and γ2 are 0.2, 0.1429, and 0.0714, respectively. According to reference [6], we set the infection rate to 0.583. The values of these four parameters depend on the transmission characteristics of the COVID-19 virus. Appropriate values were chosen for the other parameters, and the model was used to fit the actual infection cases. The fitting result is shown in Figure 1. By fitting the proportion of infected persons to the actual number of confirmed diagnoses, we determined that the value of ε is 0.3, the value of q1 is 0.005, and the value of δ is 60. These parameter values are defined as the baseline values. Next we analyze the impact of the parameters on the spread of the epidemic. Table 1 summarizes the meaning and value of each parameter in this article.
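As an illustration of the contact-network construction and of the quantity Θ described above, the following sketch uses networkx's Barabási-Albert generator as a stand-in for the growth procedure (3 initial nodes, 2 new edges per node) and computes Θ from the resulting degree distribution; the latent proportions used at the end are hypothetical.

```python
import networkx as nx
import numpy as np

# Scale-free contact network with 1,000 nodes, each new node attached by 2 edges
G = nx.barabasi_albert_graph(n=1000, m=2, seed=1)
degrees = np.array([d for _, d in G.degree()])
print(degrees.mean())   # close to the reported average degree of ~3.99

def theta(E_by_degree, degrees):
    """Theta = sum_k k p(k) E_k / <k>: probability that a randomly chosen edge
    points to a latent (infectious) node."""
    k_values, counts = np.unique(degrees, return_counts=True)
    p_k = counts / counts.sum()
    E_k = np.array([E_by_degree.get(int(k), 0.0) for k in k_values])
    return float(np.sum(k_values * p_k * E_k) / degrees.mean())

# Example: 1% of the nodes in every degree class are latent
print(theta({int(k): 0.01 for k in set(degrees)}, degrees))   # ~0.01
```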
Sensitivity analysis of parameters

Among the parameters in this article, α and γ1 are not controllable; γ2 can be improved by improving medical standards and increasing medical staff, but that is beyond the scope of this article. The other parameters can be controlled through prevention and control strategies. Among the controllable parameters, β is directly related to the epidemic, so we first analyze the impact of the infection rate β on the epidemic and on human behavior. Based on the baseline values of the other parameters, by changing β we obtain the corresponding time series of the proportion of infected persons and of the proportion of quarantined persons, as shown in Figure 2. From Figure 2, we can see that as β increases, the peak of I also increases and X increases faster. As the infection rate increases, the number of infected people increases and the epidemic covers a wider area; we need to reduce the possibility of infection by reducing contacts. From Figure 2 we also find that as the epidemic becomes more serious, the speed at which people take quarantine measures increases. δ represents the adjusting factor for the speed of human behavior change, q1 represents the loss of a quarantined person relative to an infected person, and ε represents people's sensitivity to the difference in payoffs. δ, q1 and ε are all direct factors affecting human behavior, which in turn affects the spread of the epidemic. The analysis of their impact on the epidemic and on human behavior is shown in Figure 3. The upper three graphs of Figure 3 show the time series of the proportion of infected persons under the influence of the three parameters, and the three graphs below show the corresponding time series of the proportion of quarantined people. We found that as δ increases, the peak of I decreases and X increases faster, but the situations for the parameters q1 and ε are just the opposite. As the relative loss of quarantined people increases, from an economic perspective, the willingness of completely rational people to quarantine decreases, and the spread of the epidemic becomes relatively serious. δ directly affects the rate of change of human behavior and thereby controls the spread of the epidemic; we can speed up the rate of change of human behavior by improving people's safety awareness. Therefore, improving people's safety awareness and reducing the relative loss of quarantine have a great effect on suppressing the epidemic. Corresponding measures to raise awareness include Internet and broadcast campaigns on the hazards of epidemics and how to defend against them, transparency of epidemic-related data, school education on epidemic-related knowledge, etc. Corresponding measures to reduce the loss of quarantine include working and teaching online, helping people pursue their various interests at home, etc. Interestingly, as people's sensitivity to the difference in payoffs increases, the peak of the epidemic increases and the rate of change in human behavior slows down. Through analysis, we found the reason for this phenomenon: because the country already strictly controlled people during the epidemic, even an unquarantined person had a very small probability of being infected, so the loss from not being quarantined was very small; as a result, the more rational people are, the less they will voluntarily quarantine. This phenomenon reflects the contradiction between national policy and individually rational responses: increasing people's sensitivity to payoffs cannot make them quarantine more voluntarily under strict national policies. The above analysis shows the influence of the parameters on the epidemic and on human behavior over the entire time interval, but only for a few parameter values. Next, we analyze the influence of the parameters on the epidemic peak, the time to the peak, and human behavior through heat maps. β directly affects the spread of the epidemic, which in turn affects the dynamics of human behavior, while the other parameters affect the spread of the epidemic by changing the dynamics of human behavior; therefore, we discuss the parameters in two groups, according to whether β is included. Figure 4 shows the impact of β combined with the other parameters on the epidemic and on human behavior. From the previous simulations, we can see that the larger the peak of I, the more serious the epidemic; therefore, we use the peak of I to denote the severity of the epidemic. The first column of Figure 4 describes the change of the peak value of I with the change of β and the other parameters. The earlier the infection peak is reached, the more timely and effective the control strategy.
We use T to indicate the time when the proportion of infected people reaches its peak. The second column of Figure 4 shows the time to the peak value of the epidemic as each pair of parameters changes. It can be seen from the previous simulations that the proportion of people who choose the quarantine strategy increases monotonically after 15 days, so we choose the value of X at 20 days to measure the impact of the parameters on human behavior. The last column of Figure 4 shows the effect of each pair of parameters on the proportion of people who choose the quarantine strategy; the larger the value of X(20), the faster human behavior changes. From the first column of Figure 4, we can see that the influence of the parameters on the peak value of I is monotonic, which is consistent with the previous simulation results. At the same time, we can see that q1 has the greatest impact on the peak value. As can be seen from the last row of Figure 4, the effects of β and ε on the epidemic and on human behavior are monotonic: as β and ε increase, the epidemic becomes more and more serious, but as β increases and ε decreases, people take preventive measures faster. The time for I to reach its peak is consistent with human behavior: the faster human behavior changes, the sooner I reaches its peak. The effect of ε on the epidemic and on human behavior is caused by the contradiction between national control and individual payoff maximization. The first row of Figure 4 shows that the effect of q1 on human behavior is non-monotonic. When β is small, q1 has a great influence on the speed of human behavior change and also on the time at which I reaches its peak; the results of these two panels are relatively consistent. The impact of β and δ on the time at which I reaches its peak is a bit more complicated: when δ is very small, the result is more sensitive to β, and the time decreases monotonically with δ. We need to increase δ to improve the timeliness of the strategy. Having analyzed the impact of β, which is directly related to the epidemic, we next analyze the impact of the other parameters. Figure 5 shows the impact of δ, ε and q1 on the epidemic and on human behavior. Similarly, the first column of Figure 5 indicates the peak value of I as each pair of parameters changes, the second column indicates the time to reach the peak value, and the last column indicates the quarantine proportion on the 20th day. From the first column of Figure 5, it can be seen that the effects of the parameters on the peak proportion of infected persons are monotonically increasing, contrary to the previous simulation conclusion about δ. This result shows that ε and q1 change the impact of δ on the epidemic. It can be seen from the last two columns of Figure 5 that the time for the epidemic to reach its peak is consistent with the speed of behavior change; that is, the faster human behavior changes, the faster the epidemic reaches its peak. When q1 changes, the influence of the other parameters on human behavior is not monotonic. When q1 is larger, the speed of human behavior change increases as people's sensitivity to the payoff difference increases; that is, when the relative loss of quarantine is large, the contradiction between the national strategy and human rationality disappears.
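The three summary statistics used in the heat maps (peak of I, time T to the peak, and X evaluated at day 20) can be extracted from any simulated trajectory; a small sketch with synthetic curves:

```python
import numpy as np

def summary_metrics(t, I, X, behaviour_day=20.0):
    """Peak infected proportion, time T at which the peak is reached, and the
    quarantined proportion X at a fixed day (day 20 by default)."""
    peak_idx = int(np.argmax(I))
    return {"I_peak": float(I[peak_idx]),
            "T_peak": float(t[peak_idx]),
            "X_at_day": float(np.interp(behaviour_day, t, X))}

# Synthetic trajectories for illustration only
t = np.linspace(0, 60, 601)
I = 0.05 * np.exp(-((t - 30) / 10) ** 2)   # bell-shaped infection curve
X = 1.0 - np.exp(-t / 25)                  # monotonically increasing quarantine share
print(summary_metrics(t, I, X))
```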
In this paper, we analyze the impact of the network on the epidemic by simulating the epidemic on the scale-free network and the uniformly mixed network respectively. We first simulate system (3) on the homogeneous mixed network, and obtain the time series of the proportion of infected persons as shown in Figure 6a. We also simulate system (3) on the uniformly mixed network, and plot the time series of the epidemic, as shown in Figure 6b. To verify the accuracy of the results, we use cellular automata to simulate the epidemic on the homogeneously mixed network, as shown in Figure 6c. The monte carlo is used to control the spread of epidemics. It can be seen from figure above that under the same average degree, the proportion of infected people on a homogeneous 6/11 mixed network is much smaller than that on a scale-free network. From Figure 6a and 6b, we can conclude that preventing people from running around can effectively prevent the spread of epidemics. It can be seen from Figure 6b and 6c that the results of the model solution are consistent with the simulation results. Discussion In this article, we propose a coupled model of the epidemic of new coronary pneumonia and human behavior. The epidemic is a SEIR model based on a scale-free network. Unlike other models, the latent person is infectious, but the infected person is not infectious due to hospitalization and isolation treatment. Human behavior is a game model based on a homogeneous mixed network. According to disease status and whether isolated, people are divided into: healthy and quarantine, healthy without quarantine, and infection and quarantine. We calculated the basic reproduction number of the epidemic. We simulated and analyzed the impact of various parameters in the epidemic on the epidemic and human behavior. We also analyzed the impact of the Internet on the epidemic, and found that the epidemic on the scale-free network is more serious than the epidemic on the homogeneous mixed network. Therefore, we recommend that in the control strategy of the epidemic, focus on controlling people with a large number of contacts. Methods In this article, we propose a coupled model of epidemics and human behavior based on a two-layer network. COVID-19 can only spread through contact, so we simulate the spread of the epidemic on the actual contact network and assume that the actual contact network is a scale-free network. Whether people choose to self-quarantine at home is affected by information about epidemics on the Internet, including social network information and self-media networks, etc. There are different opinions on the Internet, but the information about epidemic data is consistent, so we assume that human behavior changes are based on a fully connected network. The network structure diagram is shown in Figure 7. The two-layer network has the same number of nodes, and there is a one-to-one correspondence between nodes. The spontaneous isolation of people influences the spread of epidemics, and the spread of epidemics also influences the change of human behavior. Nodes within a single-layer network affect each other, and nodes between two-layer networks also affect each other. The epidemic model In the models describing COVID-19, people were divided into multiple compartments, including susceptible, latent, symptomatically infected, asymptomatically infected, hospitalized, recovered, etc. In this article, we only divide people into susceptibility(S), latency(E), infection(I) and removal(R) compartments. 
The latent state includes all undetected individuals such as latent persons, asymptomatic infected persons, etc. Infected status refers to people who are infected and hospitalized. Due to the national importance and rapid action, there are no individuals who have been detected and have not been hospitalized after February. Removed compartments include those who recovered and those who died due to illness. New coronavirus is infectious during incubation period. Figure 8 shows a schematic diagram of the epidemic. In response to the outbreak of infectious diseases, China established a hospital within a few days and established many sheltered hospitals to receive mild patients. Due to the strong support of the state, on February 5th, the number of beds for receiving infected persons was sufficient. Due to the harmfulness of the disease and the high attention paid by the state and doctors, once an infected person is found, he/she will be immediately isolated and treated, that is, the confirmed infected persons are isolated and have no ability to spread the disease. Considering the population mobility of Wuhan, we use a scale-free network to model the epidemic. In the context described above, we will build the following epidemic model: Here Θ(t) = ∑ k kp(k)E k (t)/ k , is the probability that any edge points to the infected node. S k (t), E k (t), I k (t), R k (t) respectively represent the proportion of susceptible, latent, infected, and removing nodes in the node with degree k. S k (t) + E k (t) + I k (t) + R k (t) = 1. The distribution of node degrees in the network is p(k). β is the infection rate. α represents the rate of undiagnosed cases to confirmed cases. γ 1 represents the rate of recovery of undiagnosed individuals. γ 2 indicates the rate of recovery of the diagnosed and hospitalized individual. Model with quarantine In the epidemic prevention and control, travel restrictions are of great significance to reduce the number of infected people and control the spread of the virus. When an epidemic spreads, quarantine can protect not only yourself, but also those around you. In this article, we use game theory to model human spontaneous quarantine behavior. People will decide whether to change their behavior by comparing their own benefits with those of others. Assume that human behavior update strategy conforms to Fermi rule. Individual i randomly selects an individual, assuming individual j, compares the payoffs and change his/her strategy with the following probability: ε indicates the impact of payoff differences on individuals desire to change strategies. The larger the ε, the greater the impact of the payoff difference on the strategy choice. Human behavior are divided into quarantine (q) and refuse to quarantine (r). x is the proportion of people in quarantine. Due to the severity of the epidemic and the sequelae after treatment, it is assumed that the recovered individuals will be quarantined by themselves, and the infected individuals are forcibly quarantined, so the freely moving individuals are all susceptible or latent. Before the diagnosis, the latent individuals look no different from the susceptible individuals, so we represent both susceptible and latent people as healthy people, but there is a risk of infection. Quarantine of latent people can avoid infecting neighbors, and quarantine of uninfected people can avoid being infected. 
The total population is divided into three categories based on whether they are hospitalization and whether they are spontaneous quarantine: infected and quarantine (Iq), health and quarantine (Hq), health and refuse to quarantine (Hr). For the quarantine strategy, individuals will not only lose their freedom, but also no income. Suppose the relative loss of quarantine is q 1 and 0 < q 1 < 1. Accordingly, we assume that the loss of an infected person is 1. For those who are not quarantined, there is a risk of infection, so we assume that their benefits are proportional to the number of unquarantined lurkers. 8/11 The benefits are summarized as follows: in f ected and quarantine, −q 1 health and quarantine, health and re f use to quarantine. (2) Since E(t) represents undetected infected persons, the value of E(t) is unknown, and people can only evaluate their benefits by Estimating the proportion of latent, which is denoted by E. We use E(t) to denote the extent of the spread of infectious diseases in the network, Similarly, is the probability of a latent person who is not quarantined, that is, the probability of a susceptible person being infected. When a disease is prevalent, disease-related information spreads through various online platforms, so people receive the same information about the epidemic. Human behavior is based on epidemiologically related information, and we are going to model human behavior based on a homogeneous hybrid network. The disease state of an individual will not actively change, people can only choose whether to quarantine. People diagnosed with the infection will be forced to be quarantined in hospitals, so they cannot actively choose a strategy. In this evolutionary game model, an infectious individual represent individual in E-state, that is, latent persons and asymptomatic and undiagnosed infected persons. The quarantined individual learns the strategy of the unquarantined individual and changes his strategy with a certain probability, so that x decreases: The unquarantined individual learns the strategy of the quarantined individual and changes his strategy with a certain probability, so that x increases: Through the above two formulas, we can derive the evolutionary game equation of x as follows: Here δ represents the speed of behavior change relative to the speed of the epidemic. Spontaneous quarantine reduces epidemic infections and inhibits the spread of epidemics. Coupling the epidemic model and behavioral dynamics, the system will become: Here The quarantine strategy avoids infection by controlling contact between people, so x only affects the infection rate. The disease-free equilibrium (DFE) of the subsystem of degree k is E 0 k = (1, 0, 0, 0). The DFE of the system (1) is E 0 = (E 0 1 , E 0 2 , ...E 0 N ). The next generation matrix method is adopted to solve the basic regeneration number 24 . The non-negative matrix, F, of the infection terms and the non-singular M-matrix, V, of the transition terms, are given by The dynamic system of individual behavior is divided into two parts: one is the interaction between quarantined and uniquarantined individuals in the population of undetected infected persons and susceptible persons, dx dt = δ x(1 − x)(S(t) + E(t))(S(t) + E(t))(ω Hr→Hq − ω Hq→Hr ). 
and the other is the influence of isolated and hospitalized infected persons on the behavior of uniquarantined individuals, dx dt = δ x(1 − x)(S(t) + E(t))(I(t) + R(t))ω Hr→Iq , In the second part of the behavioral dynamics, hospitalized infected persons are forcibly isolated, so they cannot change their strategy, and ω Hr→Iq > 0, so the proportion of isolated individuals increases monotonically. Next, we analyze the second part of individual behavior dynamics. Obviously, 0 and 1 are the two solutions to the dynamical system.
6,458.2
2020-12-07T00:00:00.000
[ "Medicine", "Mathematics", "Environmental Science" ]
Characterization of the First Virulent Phage Infecting Oenococcus oeni, the Queen of the Cellars There has been little exploration of how phages contribute to the diversity of the bacterial community associated with winemaking and may impact fermentations and product quality. Prophages of Oenococcus oeni, the most common species of lactic acid bacteria (LAB) associated with malolactic fermentation of wine, have been described, but no data is available regarding phages of O. oeni with true virulent lifestyles. The current study reports on the incidence and characterization of the first group of virulent oenophages named Vinitor, isolated from the enological environment. Vinitor phages are morphologically very similar to siphoviruses infecting other LAB. Although widespread during winemaking, they are more abundant in musts than temperate oenophages. We obtained the complete genomic sequences of phages Vinitor162 and Vinitor27, isolated from white and red wines, respectively. The assembled genomes shared 97.6% nucleotide identity and belong to the same species. Coupled with phylogenetic analysis, our study revealed that the genomes of Vinitor phages are architecturally mosaics and represent unique combinations of modules amongst LAB infecting-phages. Our data also provide some clues to possible evolutionary connections between Vinitor and (pro)phages associated to epiphytic and insect-related bacteria. INTRODUCTION Lactic acid bacteria (LAB) are among the most important groups of microorganisms used in food fermentations (Bintsis, 2018). Sustained consideration has been given to their cognate bacteriophages which traditionally cause fermentation failure and product inconstancies. The largest and best documented problems originated by phage presence have been described in dairy industries, leading to substantial economic losses (Pujato et al., 2019). This has driven extensive diversity and ecological studies of LAB-infecting phages within dairy environments, resulting in some improvement of the phage-resistance of starter cultures McDonnell et al., 2018). The vast amount of information produced also yielded valuable insights into the mechanism which govern phage recognition and penetration into Gram positive bacteria (Dunne et al., 2018;Martínez et al., 2020), and provided many key advances that now define our understanding of the complex evolutionary relationships between tailed phages (Kupczok et al., 2018). Hence, data progressively put forward the notion that some LAB phages could inherit structural genetic modules from various phages of dairy as well as non-dairy species (Mikkonen and Alatossava, 1995;Labrie et al., 2008;Samson and Moineau, 2010;Szymczak et al., 2017;Philippe et al., 2020). Such connections between phages infecting different species can now be nicely captured by genome-level network analyses (Iranzo et al., 2016;Shapiro and Puntonti, 2018). Using this perspective, the analysis of phages which have no genes in common, can show the existence of a set of possible paths that can connect each of their genes in relatively few steps . Such a holistic approach is developing, and continued efforts are now needed to better characterize the globally connected phage gene pool and increase their connectivity. Adding more LAB phage sequences from phylogenetically related hosts to reference databases may be helpful and will require investigations of other types of fermented foods such as meat, cereals, vegetable and fruits (Kot et al., 2014;Chukeatirote et al., 2018). 
Wine-associated communities may represent promising candidates for such exploration, as they include various LAB genera belonging to the emended family Lactobacillaceae (Zheng et al., 2020) such as Leuconostoc, Fructobacillus, Oenococcus, and Weissella, whose infection by phages is so far poorly documented (Kot et al., 2014). The enological environment is complex and characterized by temporal succession of distinct communities of microorganisms, with highly interconnected networks of metabolic and ecological interactions with other niches such as plants and their rhizosphere, soils, insects, and humans (Morgan et al., 2017;Morrison-Whittle and Goddard, 2018). The hypothesis that wine-associated communities may turn out to represent a valuable source for hitherto undescribed phages is supported by the recent description of novel genera of phages infecting L. plantarum (Kyrkou et al., 2019(Kyrkou et al., , 2020, a LAB capable of performing the malolactic fermentation (MLF) in high pH wine (Krieger-Weber et al., 2020). The latter process, which reduces acidity and increases microbial stability, creates good quality grape wine, and is important for the aging of red wines and certain white wines. Current projects of our group focus on phage-host interaction in the other and uncontested queen of the cellar, namely Oenococcus oeni (Grandvalet, 2017). This LAB species is better adapted to the limiting conditions imposed by the wine matrix and performs the crucial role of MLF under regular winemaking conditions, especially in wines with a pH of below 3.5. Similar to other food fermentations, cases of stuck and sluggish fermentations are annually reported worldwide. They lead to depreciation of product quality, have negative economic impact and are difficult to manage in the wine industry. Commercial MLF starter cultures which mostly consist of strains from the O. oeni species have been selected. However, their use does not yet ensure a problem-free fermentation. Difficulties arise from a combination of factors, including the presence of phages infecting O. oeni (oenophages) (Malherbe et al., 2007). We have previously isolated and characterized various oenophages which originated from distinct wine fermentation samples, and a range of geographical locations and time points (Jaomanjaka et al., 2016;Philippe et al., 2017). Most corresponded to temperate or ex-temperate siphophages suggesting that bacterial strains are the main reservoir for oenophages (Jaomanjaka et al., 2016;Philippe et al., 2017). Accordingly, in silico analyses showed that O. oeni genomes are replete with integrated prophages (Bon et al., 2009;Borneman et al., 2012;Jaomanjaka et al., 2013). Noteworthy, lysogeny was shown to be prevalent among MLF commercial cultures and further studies are now needed to better undertand lysis-lysogeny decision strategies in wine. Intringuinly, a small set of oenophages collected in 2015 drew our attention as they produced clear plaques and did not exhibit several key genomic features which are globally applicable to previously described temperate oenophages (Philippe et al., 2017;Chaïb et al., 2019). In the current study, whole-genome sequencing of two phages of the group, further named Vinitor, confirmed their virulent nature, reinforcing the notion that phage predation of O. oeni occurs during wine making. Despite absence of homology at the DNA level with other fully sequenced phages, several Vinitor-encoded putative proteins showed sequence similarity to proteins of dairy LAB phages. 
Our data also provide somes clues to possible evolutionary connections between Vinitor and (pro)phages associated to epiphytic and insect-related bacteria. Propagation of Vinitor Phages Lysates of Vinitor162 or Vinitor27 were produced on O. oeni IOEB-SARCO 277 which does not contain endogenous phage and is sensitive to all oenophages so far isolated in our laboratory (Chaïb et al., 2019). Phage amplification was carried out in MRS broth supplemented with MgSO 4 (3.75 g/L) and CaCl 2 (2.375 g/L) (MRS ) with a multiplicity of infection of 0.03. After 3 days, the lysed culture was centrifuged and the supernatant was filtered through a 0.2 µm polyethersulfone (PES) membrane filter (Jaomanjaka et al., 2013). The lysate was titered on the same host using the classical double-layer plating technique. MRS agar plates were placed in unsealed plastic bags and incubation was carried out at 25 • C for 4-7 days to allow plaque formation. Each lysate was subsequently typed using four PCR tests that distinguish the Int A , Int B , Int C , and Int D groups among oenophages, based on their integrase (int) sequence (Jaomanjaka et al., 2013;Philippe et al., 2017). Alternately, phage particles were replaced with 0.5 µL of template DNA (50 ng). A Biorad i-Cycler was used for the amplification reactions, which were achieved in a 25 µL volume using the Bio-Rad Taq PCR Master Mix kit and 0.2 µmol/L of each primer. The phage DNA released by heat lysing of particles (10 3 PFU per reaction) served as template for PCR. All oligonucleotides were purchased from Eurofins MWG-Operon (Munich, Germany). Incidence of Vinitor Phages During the 2015 Vintage We explored the presence of oenophages along 23 red and white wine fermentations in 7 wineries during the 2015 vintage (Supplementary Table 1). Most were sampled at different steps of the process (must, alcoholic fermentation (AF), MLF and postblending) yielding a set of 127 samples. Two distinct protocols were used as previously described (Philippe et al., 2017). Briefly, in protocol 1, samples were centrifuged (5,000 g, 10 min) and filtered on 0.2 µm membrane filters made of polyether sulfone. In protocol 2, samples were 10-fold diluted in MRS liquid medium supplemented with 0.1 mg/mL pimaricin, incubated for 5 days at 25 • C, centrifuged and filtered as described above. All samples were stored at 4 • C. Phages present in the samples were titered using the classical double-layer plating technique as described above. Following incubation, single plaques were picked, suspended in 0.5 mL of sterile MRS medium and stored at 4 • C. They were propagated on the same strain and reisolated two more times to ensure the purity of the phage lysates. To obtain high-titers phage lysates, 10 confluent lysis plates were prepared. Soft agars were collected, centrifuged, and the supernatant was filtered. Phage titers ranging from 10 8 to 10 9 PFU/mL were obtained. Phages recovered from the vintage 2015 were tested using the integrase PCR tests. The lysates with no amplification signals were considered as putative Vinitor phages. They were subsequently typed using primer couples designed in five genes from Vinitor162 which specify the Terminase large sub-unit, Major Capsid protein, Tail length tape-measure, Putative Tal and phage replisome organizer ( Table 1). Primer design was achieved by using the eprimer3 0.4.0 and Oligo analyser 1.0.3 software. Host Spectra The propagation of phage Vinitor162 was tested on a panel of O. oeni strains and other LAB ( Table 2 and Supplementary Table 2). 
The resistance level of bacterial strains to phages was expressed using the efficiency of plating (EOP) ratio. The EOP was defined as the ratio between PFU/mL obtained on each putative resistant strain and PFU/mL obtained on the strain initially used for the phage propagation (O. oeni IOEB S277). Resistant strains were represented by EOP values < 1. Genome Sequencing The phage lysate was concentrated by ultracentrifugation and double-stranded DNA was extracted as described previously (Chaïb et al., 2020). Whole-genome sequencing was performed at the Genome-Transcriptome facility of Bordeaux 1 . DNA libraries were prepared using the Nextera XT DNA library preparation kit (Illumina, San Diego, CA). Genomic DNA was sequenced using an Illumina MiSeq using 2 × 250 bp pairedend libraries. Reads were assembled using SPAdes (Bankevich et al., 2012) with default parameters (read correction and assembler). The assembly of the whole-genome sequences was verified using HindIII-restriction profiles of phage DNAs and gel electrophoresis. The full genome of O. oeni phage Vinitor162 and Vinitor27 were deposited in GenBank under the accession numbers MF939898 and MT859305. Genomic Comparisons and Phylogenetic Analyses The in silico-translated protein sequences were used as queries to search for sequence homologs in the non-redundant protein database at the National Centre for Biotechnology Information using BLASTP (Altschul et al., 1997) with an upper threshold E-value of 1 × 10 −3 . Searches for distant homologs were performed using HHpred (Söding et al., 2005) against different protein databases, including PFAM (Database of Protein Families), PDB (Protein Data Bank), CDD (Conserved Domains Database), and COG (Clusters of Orthologous Groups), which are accessible via the HHpred website. Searches against the CDD database at NCBI were also performed using CD-search (Marchler-Bauer and Bryant, 2004). Vinitor162 Is a True Virulent Oenophage To date, limited studies of O. oeni phages have been undertaken despite the commercial relevance of this bacterial species during winemaking. The few available studies have mostly focused on the characterization of mitomycin C-induced temperate oenophages, showing a high incidence of lysogeny in the species (Jaomanjaka et al., 2013). The lack of isolated virulent (nontemperate) oenophages, although intringuing, was interpreted as an indication of an insufficiently sampled environment. As from 2013, we decided to explore the whole oenological reservoir for this specific phage hunt. Our strategy was to sample all wine-types and essential steps (must, AF, MLF, aging) of the winemaking process and to achieve a faster processing of the samples as possible. As a first result, a set of 31 putative virulent phages of O. oeni were succesfully isolated from the 2014 vintage (Philippe et al., 2017). Briefly, these phages exhibited unusual genomic characteristics, of which the lack of the integrase and endolysin genes that are the signature of temperate oenophages. They were initially called unk (for unknown) and renamed Vinitor (latin name for winemaker) in our laboratory to reflect their origin. Vinitor162 phage isolated from white wine (Sauternes, France) was arbitrarily chosen for further characterization. It was propagated on O. oeni IOEBS277 to titres of 10 8 to 10 9 PFU/mL. Both the clear phenotype and small size of the plaques observed earlier were confirmed (Figure 1). 
Concentrated phages were recovered as previously reported (Chaïb et al., 2020) and TEM observations showed that phage Vinitor162 belongs to the Siphoviridae family, with an icosahedral head (55 ± 3 nm in diameter, n = 20), and a non-contractile tail (205 ± 8 nm, n = 20) with an extended unique thin tail fiber at its extremity (Figure 2). To determine the host range of Vinitor162, 22 representative O. oeni strains were chosen from the CRBO collection, based on their assignment to one of the three main phylogroups (A, B, and C) described in the species (Lorentzen and Lucas, 2019) and their distinct prophage contents ( Table 2). The host range test indicated that Vinitor162 is able to efficiently infect 12 different O. oeni strains, all from phylogroup A, including some commercial MLF starters. Intermediate EOPs (10 −2 to 10 −5 ) were observed against 4 strains (AWRIB429, IOEB0608, IOEBCine, IOEB9517), suggesting impaired adsorption or the presence of a resistance mechanism against Vinitor162. Eight out of the 12 sensitive strains were mono-or poly-lysogens and resident prophages belonged to so far identified Int A , Int B , Int C or Int D groups (Jaomanjaka et al., 2013). Those temperate phages therefore did not exhibit immunity to superinfection by phage Vinitor162 (Table 2). Last, Vinitor162 could not form plaques on any of the tested strains belonging to other species of the genera Oenococcus, Pediococcus and Lactiplantibacillus, Lactobacillus, Lacticaseibacillus, and Limosilactobacillus isolated from the enological environment and beyond ( Genomic Characteristics of Phage Vinitor162 The phage genome was sequenced using the Illumina MiSeq platform, resulting in a complete genome sequence with a length of 36,299 bp, and an average coverage of 2869 X. The G+C content was 36.14%, which is close to that of the bacterial host (37.9%). No matches to any other entity were found by BLAST searching at the nucleotide level. The genome had 51 predicted orfs and contained no tRNA genes. The smallest gene preceded by an adequate ribosome binding site (RBS) would encode a protein of only 49 amino acids (gp34). The sizes of the remaining gene Frontiers in Microbiology | www.frontiersin.org products varied from 55 (gp32) to 2,170 amino acids (gp18). All orfs, but orf 25, were positioned on the same DNA strand. The BLAST program was used to compare the protein sequences from phage Vinitor162 to sequence databases, and the salient genome characteristics are outlined in Supplementary Table 3, with a detailed list of top BLAST identities. A large proportion of the predicted gene products (31 of 51, 61%) showed no obvious predicted biological function. The majority of them (18 of 31) had no matches in the databases, reinforcing the idea that Vinitor162 presents high levels of divergence from known oenophage genomes, and confirming that the LAB phage pool is still largely unexplored. Orphans were not randomly distributed on the genome and a total of 14 orfs were located in two specific regions (orf 19-orf 22; orf 34-orf 47) (Figure 3). Protein homology was detected for 33 of the 51 deduced proteins and identity levels were ranging from 29 to 95%. Of note, a vast majority of homologs (n = 29) were clearly present in prophages (usually unannotated), suggesting their role as a possible sequence reservoir. Half of these Best BLAST hits (BBH) results (n = 15; 29.4% of total ORFs) were related to prophages/phages infecting species of the Oenococcus genus (O. oeni and O. 
sicerae) or closely related taxa such as Weisella sp., and lactiplantibacilli such as L. plantarum, which tend to share similar ecological niches with O. oeni. Architecture of the Genome of Vinitor162 The architecture of the phage genome is shown in Figure 3. We assigned the bp +1 to the first base pair of the predicted small terminase subunit gene, so that its map can be easily compared to other phages of LAB. Of the 51 predicted genes, functions could only be predicted for 20 of the putative proteins they encode, and the majority of these were phage structural proteins (see below). The phage did not encode recognizable integrase, excisionase or repressor genes confirming that Vinitor phages have an exclusively lytic life cycle. Despite the peculiarity of its genome sequence and high number of orphans, phage Vinitor162 shared synteny with other reported temperate/ex-temperate oenophages in terms of genome organization, with the DNA packaging module followed by the structural module, the lysis module and the replication module (Jaomanjaka et al., 2013;Chaïb et al., 2019). Packaging Mechanism and Genome Extremities We did not obtain experimental evidence of the presence of cohesive genomic extremities in Vinitor162 during our previous work (Philippe et al., 2017). Analysis of the raw Illumina reads by PhageTerm (Garneau et al., 2017) also ruled out their presence in the phage genome, but no packaging mechanism could be further assigned to phage Vinitor162. This situation has been also reported for other phages such as the Clostridium phage phiCD11 (Garneau et al., 2017) and the Streptococcus Str01 and Str03 phages (Harhala et al., 2018) and could be a result of Nextera library preparation method. The reconstructed phylogeny based on amino acid sequence of phage terminase large subunits (TerL) was recently shown to produce clusters associated with types of genome terminus and encapsidation mechanism (Clokie and Kropinski, 2009). We analyzed a data set of TerL sequences from phage Vinitor, as well as representatives of the inducible prophages belonging to the Int A , Int B , and Int D groups in O. oeni, and other phages of LAB present in public databases ( Figure 4A). TerL sequences segregated phages into 2 well-supported clusters. We first observed a joint branching of the cos-containing oenophages, namely the Int A and Int B oenophages (Gindreau et al., 1997; FIGURE 3 | Architecture of the oenophages Vinitor162 and Vinitor27 genomes. The predicted functions of the Vinitor genes are indicated below the maps. The presumptive modules are colored. The gray vertical blocks between phage sequences indicate regions of shared homology according to BLASTn, and the degree of nucleotide identity is indicated by the intensity of gray. ORFs in white characters have small differences in their nucleic sequences and those in red are specific to each Vinitor phage. The figure was generated with Easyfig. Jaomanjaka et al., 2013Jaomanjaka et al., , 2016 with some well characterized LAB phages such as Ldl1 from L. delbrueckii (Casey et al., 2015) and phage 7201 infecting S. thermophilus (Stanley et al., 1997), which have similar genome extremities. In contrast, both the Int D prophage and virulent phage Vinitor belong to a second cluster, which harbors many terminally redundant and circularly permuted phages due to headful packaging (pac phages) (Figure 4A). Of note, the temperate Int D prophage is predicted to be a pac-type by Phage Term (data not shown). 
It can therefore be inferred that phage Vinitor may have headful packaging. The Morphogenesis Module The genes immediately downstream of orf2 encode a set of structural proteins which are presumed to be involved in phage head assembly. gp3 encodes the portal protein, which assembles in a multi-subunit annular structure at one corner of the icosahedral shell, where it serves as the entrance and exit door for the viral DNA, as the site for head assembly, and the attachment site for the tail (Dedeo et al., 2019). BLASTP searches showed significant sequence homology with the portal protein of uncharacterized prophages found in L. sakei, and various phages infecting pathogenic Streptococcus species such as S. pneumoniae (Croucher et al., 2016) and S. gordonii (Javan phages) (36% identity and e-values of 1e-100) (Rezaei Javan et al., 2019; Supplementary Table 3). The module also comprised other putative structural proteins (gp5 to gp13) of which gp7 may represent the capsid scaffolding protein, gp8 the major capsid protein (MCP), gp9 the connector and gp13 the major tail protein. The structural components of the bacteriophage tail resembled the complex system found in a majority of characterized LAB phages, comprising a triad of proteins (gp16, tape measure protein, TMP; gp17, distal tail protein, Dit and gp18, tail associated protein, Tal) which allow the recognition and binding to host cell surface-located polysaccharides, or more rarely proteins . Gp16 was assigned to the TMP, which determines the length of the tail and participates in the infection mechanism. The length of the siphophage tail is directly proportional to the size of the TMP, where one amino acid of the TMP equals 0.145 nm of tail length (Mahony et al., 2016). Using this equation, the length of the tail in Vinitor162 was expected to be 185 nm (TMP, 1,274 amino acids), which correlated well with our estimation from the electron micrographs (Figure 2). Downstream, the first gene encountered in Vinitor162 was that of the distal tail protein (Dit), gp17. Hexameric Dit proteins are conserved in Siphoviridae infecting LAB and are attached to the last tail hexamer MTP (major tail protein) (Dunne et al., 2018). Of note, Dit from Vinitor162 is 243 aa long and falls within the range of reported sizes for classical-length Dits (∼260-300 amino-acids) (Dieterle et al., 2017;Hayes et al., 2018). On its other side, the Dit protein is known to bind to a trimeric protein named Tal (tail associated lysin). The latter belongs to the vision-associated peptidoglycan hydrolases (VAPGHs) and their main function is the peptidoglycan layer degradation, to facilitate phage adsorption. Recent progress in LAB phage research illustrates that Tal can represent the most complex and heterogeneous elements of the virions (Lavelle et al., 2018). As for Vinitor162, we found that the Tal protein (gp18) was very long (2,170 aa), comparable to the length of Tals of cos and pac-containing phages from Streptococcus thermophilus (now Moineauvirus and Brüssowvirus) genera (Lavelle et al., 2020). This orf was indeed the longest of the Vinitor162 genome. Other Predicted Proteins The gene product of orf24 has the hallmarks of an endolysin and contains a muramidase domain. It is highly homologous to several LAB phage endolysins, of which that of a prophage of O. sicerae, a species found in ciders (Cousin et al., 2019). 
Further sequence analysis by HHpred revealed that a large part of gp23 (residues 1-109) was predicted as a holin domain associated to phage LL-H (27% identity). Hence, as usually observed amongst siphophage genomes, the holin gene directly precedes the endolysin gene. Downstream the lysis module, the phage genome contained a large region of about 23 short orfs (orf25-orf47) encoding 12 proteins with no significant hits to any sequences in GenBank. The proteins deduced from 6 orfs had homologs in O. oeni or O. sicerae, but no function or conserved domain was identified. Only four proteins had significant homology to proteins involved in replication: gp27, a replication protein; gp30, a putative phage replisome organizer and gp33, a YopX-like protein found in the replication module of a variety of phages. Corresponding BBH were associated with the Enterococcus phage phiEf11 (29% identity) and prophage sequences from W. fabalis (50% identity) and O. oeni (74% identity). Of note, gp31 was highly related to a protein associated with the ex-temperate oenophage OE33PA (61% identity). Strikingly, orfs 48-49-50 reassembled 3 of the 4 moron genes also found upstream of terS in the LAB temperate phage P335 and transcribed autonomously in the Lac. lactis host (Labrie et al., 2008). The moron sequences were not found in the other characterized lactophages of the P335 group. Yet homologous sequences were detected in the genomes of Ent. faecalis, S. pyogenes, and L. innocua prophages. Our data broaden the network of phages sharing this specific module of genes, and also confirms the hypothesis of a specific role in providing fitness factors to phages. Diversity in the Vinitor Group and Relevance of Members During Winemaking In an attempt to assess the incidence and diversity of Vinitor phages in the enological environment, we performed an additional phage survey during the 2015 vintage. The study was not designed to be an epidemiologically representative survey of phage distributions, so prevalences are not meaningful, but some observations can be made. We explored the presence of oenophages along in 127 samples, and 64 pure lysates were produced (Supplementary Table 1). Based on PCR tests with intspecific primers, a total of 58 newly isolated phages were classified in one of the four distinct groups of temperate oenophages (Supplementary Table 1). The 6 remaining phages which did not contain any of the identifiable sequences conserved in the integrase of oenophages formed clear plaques on the sensitive host IOEBS277. Presence of presumptive virulent oenophages during two successive years suggests that such phages are components of the phage community associated to winemaking. Next, we designed novel Vinitor-specific PCR assays to further explore the diversity of the set of virulent phages isolated during the 2014 (n = 31) and 2015 (n = 6) vintages. They targeted five genes in the genome of Vinitor162: terL (orf 2), mcp (orf 8), tmp (orf 16), tal (orf 18), and rep (orf 27) sequences ( Table 1). A total of 35 phage DNAs were tested positive and yielded the expected amplicons in the five PCR assays. A single phage, namely Vinitor27, isolated from a red wine (Merlot) exhibited slightly different results as no amplicon were produced with the primers designed in the rep gene, suggesting variability in this region of the chromosome. The genome of Vinitor27 was sequenced and revealed little variation in genome size and number of orfs. It was 35,279 bp in length, with 48 predicted genes (Figure 2). 
Both Vinitor phage genomes shared a 97.6% identity at the nucleotide level and should therefore be considered as the same species. Genomic synteny tests with Easyfig visualized the previously manifested homology among the two Vinitor phages and provided evidence for their conserved genome architecture, with 47 common genes. Of note, the poorly characterized replication module was affected by some insertion/deletion events, explaining the differences observed earlier in the PCR-based assays. As a consequence, four small orphans in the replication module (orf31, orf43, orf 45, orf 46) were found only in Vinitor162, while orf35 (also an orphan gene) was specific to Vinitor27 (Figure 2). An interesting question was to assess whether virulent phages have specific temporal patterns of abundance during winemaking compared to temperate oenophages. We reexamined the 83 positive samples for the presence of phages which were collected during the 2014 and 2015 vintages. Vinitor phages were prevalent in all types of wines (20 phages were isolated from red wines and 16 from dry and sweet white wines). We next analyzed the abundance of the distinct groups of oenophages at the different steps of the winemaking process (must, AF, MLF and aging) ( Table 3). Ten out of 16 samples of musts contained a Vinitor phage (62.5%). The frequencies were reduced in samples collected during AF (21.7%), MLF (22.2%) and aging (50%). The results obtained with «aging» samples should, however, be treated with caution as only 8 samples were analyzed. In comparison, about 95% of the phage-containing samples originating from AF and MLF contained temperate oenophages, while their frequency was reduced to 37.5% in must samples (Table 3). Last, Vinitor27 was isolated from a sampling time serie, allowing us to catch a glimpse of its persistence during the fermentation of Merlot grapes (Entre-Deux Mers, France). The virulent phage was detected in must (2 × 10 2 PFU/mL) and not in any further steps (AF, MLF). Of note, Int A oenophages were detected later, during MLF. Taken together, these preliminary data suggest that a temporal succession of distinct oenophages occurs throughout the winemaking process. Vinitor phages may be transiently active during the early stage of winemaking when grapes are crushed, while temperate phages are more prevalent in subsequent steps, probably upon excision from indigenous lysogenic strains of O. oeni. Topology Studies Predict a Peculiar Adhesion Apparatus of Vinitor Phages With the aim of getting some structural information on the adhesion device of Vinitor phages, we performed HHpred (Söding et al., 2005) analysis of the orfs comprised between the TMP and the Lysin (TMP+8). HHpred of Dit (TMP+1) confirmed that it is a classical Dit, with the topology of phage SPP1 Dit (PDB ID 2X8K, probability 100%, Veesler et al., 2010). Tal (TMP+2) analysis yielded several hits (Supplementary Figure 1). Contrary to most cases, the classical N-terminal structural domain found in all Tals is identified by HHpred not at the N-terminus, but between residues 156 and 505 (PDB ID 3GS9, Listerial phage protein). Preceeding it, the domain comprised between amino-acids 1 and 155 in Vinitor162 is identified as Carbohydrate Binding Module 1 (CBM), a BppA (base plate protein) domain of lactococcal phage Tuc2009, a module widely spread in siphophages infecting Gram-positive hosts (PDB ID 5E7T, Legrand et al., 2016). 
Noteworthy, Tal sequences of both Vinitor phages differ largely within the 141 first residues corresponding to this predicted module (Figure 3 and Supplementary Figure 1) while they share ∼98% identity between residues 142 and 2,170. This is suggesting that the domain represents a variable region amongst Vinitor phages. The biological significance of this difference is currently unclear. Following residue 505, three CBMs (CBMs 2-4) were identified by HHpred at positions 784-929, 1,182-1,329, and 1,640-1,788, separated by linkers, along the extended Tal (Supplementary Figures 2, 3). Finally, a fifth CBM domain (CBM5) was identified at the C-terminus (amino-acids 2,038-2,167), corresponding to the Receptor Binding Protein (RBP) of Salmonella phage vB_senMS16 (Supplementary Figures 2, 3). Our analysis returned very limited data for the five proteins encoded by the orfs between the tal and the lysin (tmp+8) genes, which usually may comprise the RBP, as well as baseplate ancillary proteins, which help maintaining the RBPs or providing additional binding domains. However, TMHMM (Käll et al., 2004) analysis reports that out of these five proteins, two contain one trans-membrane helix (TMP+5 and TMP+7) and one contains four trans-membrane helices (TMP+4). These three proteins are obvious candidates for a holin function. Based on these HHpred reports, we established a topological model of Vinitor phages adhesion device ( Figure 5). All 5 putative CBMs may participate in host adhesion. As we failed to identify any other CBM in the phage using HHpred, it is very likely that the tail extension analyzed here harbors the receptor binding modules and constitutes therefore the bona fide RBP. This is in contrast with the reports of the simultaneous presence of a Tal extension carrying CBMs and a RBP in Moineauvirus and Brussowvirus S. thermophilus phages (Lavelle et al., 2020). The fact that our current data are reflecting some diverging structure of the phage tail tip between Vinitor and other phages of LAB is not so surprising and is probably linked to the complex and specific outer structures of their bacterial hosts. Oenophage Phylogenies The phylogenetic tree comparing the TerL proteins confirmed that Vinitor phages are only remotely related to other oenophages. They grouped with the dairy phages 5093 from S. thermophilus and LL-H from L. delbrueckii (Mikkonen and Alatossava, 1995), as well as (pro)phages collected from different sources such as phig1e from L. plantarum isolated from plant materials (Kodaira et al., 1997) and C. intestini (47% identity), a LAB isolated from the bumble bee gut (Praet et al., 2015). To better understand the evolutionary trajectory of the Vinitor phages, we built a second phylogenetic tree based on the MCP sequences ( Figure 4B). This confirmed that the virulent oenophages are more related to the pac-than the cos-containing phages of O. oeni. Consistent with earlier observations was the clustering of Vinitor phages, LL-H from L. delbrueckii, a prophage from C. intestini and a short list of other LAB phages in a sister group. Phage LL-H and these latter phages shared the common feature to represent reference members of rare groups of phages/prophages infecting their cognate host species: Lj928 (L. johnsonii) (Ventura et al., 2004), 1358 (Lac. lactis) (Dupuis and Moineau, 2010) and P35 (Listeria monocytogenes) (Hodgson, 2000). 
These peculiar phages have been proposed to result from illegitimate recombination between dairy as well as non-dairy phages infecting distinct bacterial species. This has probably taken place in environments containing multiple bacterial genera and species, such as plants. Accordingly, phage P35 from L. monocytogenes has been isolated from silage (Hodgson, 2000). Along with this, the importance of living and decomposing plants as habitats for many LAB species is well demonstrated (Yu et al., 2020), and it is likely that epiphyte communities contribute to the emergence of novel LAB phages with shuffled gene organization. Genome mosaicism varies depending on the host, lifestyle, and genetic constitution of phages. Accordingly, Vinitor phages are probably evolving differently due to their specific environment eventhough the group shares common ancestors with other LAB phages. An interesting point worthy of further investigation is that grapes and grapevine are associated to many insects, including pest insects. The phylogenetic trees constructed using the whole nucleic acid sequences as well as the MCP and TerL proteins indicate that Vinitor phages and a prophage in C. intestini are more closely related to each other than to any other sequenced LAB phages (Figure 4). Of note, the closest homologs to gp7 (capsid scaffolding protein) and gp13 (major tail proteins) are also found associated with recently acquired and uncharacterized sequences from metagenomic investigations of insects. Taken together our analyses may provide somes clues to the possible origin of Vinitor phages, resulting from interactions between plant-and insect-related bacteria and their phages. These data also bring us closer to an explanation of the uniqueness of Vinitor phages, which may result from the current under-representation of phage proteins from insect-associated microbial communities in databases. Future exploration of this niche will probably assist phage classification of Vinitor phages, as the VICTOR analysis currently suggests that the group is distantly related to the LAB phages which have been recently classified into novel sub-families and genera in the ICTV database ( Figure 4C). CONCLUSION Oenococcus phages are understudied compared to other industrially relevant LAB phages. Our current study described a new phage species called Vinitor, which represents the first virulent phage in the O. oeni species. Vinitor phages differ markedly from other oenophages and LAB phages, in terms of their very low level of DNA sequence identity and the number of unique genes. Reconstruction of its evolutionary history is currently difficult. However, some structural similarities exist in the capsid and endolysin/holin proteins of other LAB phages, and also in the moron region, which suggest common ancestors for these LAB phages. In contrast, Vinitor phages have some specificities linked to their host, lifestyle and niche. They infect their bacterial host through highly specific tail tips determinants probably adapted to recognize specifically receptors at the surface of O. oeni that are still unknown. In addition, a few putative DNA replication elements show homologous ORFs in prophages infecting related species or genera (Oenococcus-Weissella), as well as a number of unidentified functions which seem phage-specific. This would suggest that some steps in the common mechanisms for the recruitment of the host replication machinery would be conserved among phages infecting members of the Oenococcus and Weissella genera. 
Furthermore, our study now urges for phage structural analysis and host cell wall biochemical studies in the genera Oenococcus, Leuconostoc and Weisella sp., to better understand phage-host interactions in these important group of food-associated LAB. Further comprehensive study of how Vinitor phages can persist and possibly shape the indigenous population of O. oeni present on grapes will be of great value for understanding the early steps of spontaneous MLF fermentation. On the other hand, presence of virulent phages may bias the outcome of bacterial enrichment cultures, explaining the repeated difficulty in the isolation and cultivation of O. oeni strains from grapes (Lorentzen and Lucas, 2019). DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/ Supplementary Material. AUTHOR CONTRIBUTIONS CP, AC, and FJ carried out the main body of research. FJ and CP collected the samples, isolated the phages, and performed the genome sequencing. CP, FJ, and OC performed bioinformatics analysis. PL contributed the bioinformatics analysis. AC performed isolates characterization with EM and host spectrum. JS contributed to the phage host spectrum experiments. CC ran the adhesion topology analysis. CP, AC, and CL wrote and edited the manuscript. CL supervised the work progress. All authors contributed to the article and approved the submitted version. FUNDING Support for this project was provided by the French ANR (grant ANR-14-20CE-0007).
8,631
2021-01-13T00:00:00.000
[ "Biology", "Agricultural And Food Sciences" ]
Localization of CO2 Leakage from a Circular Hole on a Flat-Surface Structure Using a Circular Acoustic Emission Sensor Array Leak localization is essential for the safety and maintenance of storage vessels. This study proposes a novel circular acoustic emission sensor array to realize the continuous CO2 leak localization from a circular hole on the surface of a large storage vessel in a carbon capture and storage system. Advantages of the proposed array are analyzed and compared with the common sparse arrays. Experiments were carried out on a laboratory-scale stainless steel plate and leak signals were obtained from a circular hole in the center of this flat-surface structure. In order to reduce the influence of the ambient noise and dispersion of the acoustic wave on the localization accuracy, ensemble empirical mode decomposition is deployed to extract the useful leak signal. The time differences between the signals from the adjacent sensors in the array are calculated through correlation signal processing before estimating the corresponding distance differences between the sensors. A hyperbolic positioning algorithm is used to identify the location of the circular leak hole. Results show that the circular sensor array has very good directivity toward the circular leak hole. Furthermore, an optimized method is proposed by changing the position of the circular sensor array on the flat-surface structure or adding another circular sensor array to identify the direction of the circular leak hole. Experiential results obtained on a 100 cm × 100 cm stainless steel plate demonstrate that the full-scale error in the leak localization is within 0.6%. Introduction Storage vessels and containers are widely used in a range of industries. For instance, in the carbon capture and storage (CCS) process, storage vessels are used to store and transport the captured CO 2 [1]. Accidental leaks from CO 2 storage vessels may compromise the stability and safety of the CCS system and cause serious environmental pollution and financial damage [2]. Therefore, it is crucial to identify and locate any accidental leak rapidly when it occurs. At present, CO 2 transportation can be performed by several ways such as pipelines, ships, motor carriers, and railways [3]. The current research of CO 2 leak detection in the transportation process has mainly focused on pipelines [4,5]. There has been very little reported research on the leak detection for CO 2 storage vessels. The practical storage vessel used in the CO 2 transportation process is commonly a spherical or cylindrical tank, so it is, in fact, a three-dimensional structure. The shell of the vessel is very large, so if a local area of the shell is to be analyzed and studied for leakage detection, this area of interest can be treated as a flat-surface structure. The flat-surface structure belongs to a two-dimensional (2D) construction, which is different from the one-dimensional long pipelines [6]. Therefore, the leak detection methods including sensor arrangement and localization algorithms are different from those for long pipelines. The detection methods that may be considered for 2D leakage detection include tracer, concentration alarm, and infrared imaging, etc. [7]. The tracer and concentration alarm methods depend on the diffusion of CO 2 , so it is time-consuming. Infrared imaging is a potential method but it is not cost-effective. For most leak detection applications, many infrared cameras are needed in order to cover a large area from multiple directions. 
The acoustic emission (AE) method has advantages of rapid response, compact sensor structure, low capital cost [8] and, thus, has a high potential for the leak detection and localization of CO 2 on a flat-surface structure. The AE method needs a number of sensors to obtain as much information as possible from the detection area. In previous studies, several sensor array configurations have been proposed for the localization of a damage source in a 2D space [9]. The damage source was usually generated by pencil-lead breaks, impact of a foreign object, fiber or metal breakage, matrix cracking [10]. Although these damage sources were not generated by leakage and the sensors used were not always of the AE type, the array configurations provide useful reference in this study. It is well known that, at least three sensors are required to locate a damage source in a 2D structure. In the traditional damage localization method based on time-of-flight triangulation at multiple receiving points, the crossing point or zone is the location of the damage source [11]. For a regular square plate, a sensor array with four sensors arranged on the four corners is a common layout [12]. In practical applications, the crossing zone may be large and, thus, increase the localization error because the distance between the sensors is large. For a large flat-surface structure, the resolution of source localization will increase with the number of sensors used. Niri et al. [13] proposed a source localization model based on a sparse array with a set of piezoelectric sensors. The distance between sensors in the sparse array was large in order to cover the whole structure. It is worth noting that most sensor arrays for the localization of a damage source in a 2D space are sparse or loose types, and the damage source is burst-type [14]. For the burst-type signal, it can be separated in the time-domain waveforms because of sharp rising and descending edges. Thus, it is relative easy to determine the time difference between different sensors through threshold-based procedures, peak detection techniques and, more robustly, cross-correlation methods. However, the signal generated by leakage is a continuous-type and it cannot be separated in the time-domain waveforms because of the difficulty to identify the first wavefront. Therefore, the localization of the continuous-type leakage is more challenging than the burst-type damage source to some extent. There has been very little reported research on continuous leak localization on storage vessels based on the AE method [15,16], especially for CO 2 leakage detection. This paper proposes a novel circular AE sensor array to realize the continuous CO 2 leak localization from a flat-surface structure on a storage vessel. The localization performance of the proposed sensor array in combination with ensemble empirical mode decomposition (EEMD), cross-correlation and hyperbolic positioning algorithms are investigated in this study. The practical leak hole on a structure is of all shapes and sizes, but the circular hole is a typical and fundamental case, as long as the size of the hole is "regular" (circular, roundish, etc.) and significantly smaller than the size of the structure. Sensing Arrangement The key parameters of the circular array are the number of sensors, the angle between the adjacent sensors, and the diameter of the circle, as shown in Figure 1. 
In comparison with sparse sensor arrays, the circular arrangement has the advantages of a compact layout, omnidirectional sensitivity, and similar signal attenuations and dispersions between the different sensors in the array, which are beneficial to correlation signal analysis. In addition, the distance difference between any two sensors in the array with reference to the leak hole is no greater than the diameter of the circle, regardless of the location of the leak hole. Only when the leak hole is on a diameter, such as the one connecting Sensors i and j in Figure 1, is the distance difference between the two sensors equal to the diameter of the circle. For all other cases, the distance difference is smaller than the diameter. Therefore, this restricted condition can be used as a threshold to assess the correlation results. Leak Localization Principle Assume two sensors are installed on a plate; a coordinate system can then be established where the middle point of these two sensors is the origin and the connecting line of these two sensors is the horizontal x-axis. Suppose the spacing between the two sensors is 2c; then the coordinates of Sensors 1 and 2 are (−c, 0) and (c, 0), respectively, as shown in Figure 2. If the leak hole is located at the point P(x, y), the following equations can be derived based on the fundamental principle that a wave takes the minimum energy path to travel between two points [3], where PF1 and PF2 are the distances from the leak hole to Sensors 1 and 2, respectively, v is the speed of the AE wave, t1 and t2 are the arrival times at Sensors 1 and 2, and Δt is the time difference between t1 and t2. According to the geometrical relationship, the leak hole (point P) is on a hyperbolic curve when Equation (4) is satisfied.
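The equations themselves did not survive the text extraction. A plausible reconstruction from the geometry and the definitions above is given below; the exact form and numbering are inferred rather than quoted from the original.

```latex
\begin{align}
PF_1 &= \sqrt{(x + c)^2 + y^2} = v\,t_1 \tag{1}\\
PF_2 &= \sqrt{(x - c)^2 + y^2} = v\,t_2 \tag{2}\\
PF_1 - PF_2 &= v\,(t_1 - t_2) = v\,\Delta t \tag{3}\\
\lvert PF_1 - PF_2 \rvert &= \lvert v\,\Delta t \rvert < 2c \tag{4}
\end{align}
```

Under Equation (4), the locus of points with a fixed distance difference to the two sensors is one branch of a hyperbola with the sensors as foci, which is what the hyperbolic positioning algorithm exploits.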
Therefore, the leak hole can be accurately located through the intersection of the hyperbolic curves when the number of sensors is more than three. Ensemble Empirical Mode Decomposition The original signal collected by the sensor array contains a lot of noise. The dispersion phenomenon will also distort the signals as they propagate along the flat-surface structure. Therefore, it is not accurate, or sometimes even not feasible, to locate the leak hole by directly cross-correlating the signals. In order to solve this problem, an appropriate denoising method or a mode decomposition algorithm should be applied. EEMD is an improved algorithm based on empirical mode decomposition (EMD) and overcomes some drawbacks of EMD, such as the mode mixing problem [17]. It is an effective approach to processing non-linear and non-stationary signals [18,19]. In comparison with other signal decomposition techniques, EEMD is an adaptive signal processing method which does not need prior information about the signal to be processed. In view of the non-linear and non-stationary characteristics of the leak signal, EEMD is a suitable method to decompose the signals in both the time and frequency domains. EEMD is usually realized by decomposing the signal into a series of intrinsic mode functions (IMFs). The computational process of EEMD is described as follows [20]: (1) Add a Gaussian white noise signal ω(t) to the original signal x(t) to obtain a synthesized signal X(t). (2) Decompose the synthesized signal using EMD into IMFs ci(t), as shown in Figure 3. (3) Repeat Steps (1) and (2) N times, but add a different white Gaussian noise realization each time. The residue of the added white noise should satisfy the statistical rule given in [21], where N is the number of calculations, ε is the root mean square (RMS) amplitude of the added noise, and εn is the difference between the original data and the reconstructed data. (4) Compute the ensemble means of the corresponding IMFs as the final result. Figure 3. Flowchart of EMD.
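A minimal sketch of the ensemble-averaging procedure in Steps (1)-(4) is shown below. It assumes a plain EMD routine emd(signal, max_imfs) is available (for example, from the PyEMD package); that routine, the noise amplitude, and the ensemble size are illustrative assumptions. The statistical rule referenced above is commonly stated for EEMD as εn = ε/√N.

```python
import numpy as np

def eemd(x, emd, n_ensembles=100, noise_std=0.2, max_imfs=8):
    """Ensemble Empirical Mode Decomposition (sketch).

    x           : 1-D signal to decompose
    emd         : callable returning an array of IMFs, shape (n_imfs, len(x))
    n_ensembles : number of noise realizations (Step 3)
    noise_std   : amplitude of the added white noise, relative to std(x)
    """
    x = np.asarray(x, dtype=float)
    imf_sum = np.zeros((max_imfs, x.size))
    counts = np.zeros(max_imfs)

    for _ in range(n_ensembles):
        # Step (1): add a fresh white Gaussian noise realization
        noisy = x + noise_std * np.std(x) * np.random.randn(x.size)
        # Step (2): decompose the synthesized signal with plain EMD
        imfs = np.atleast_2d(emd(noisy, max_imfs))
        k = min(len(imfs), max_imfs)
        imf_sum[:k] += imfs[:k]
        counts[:k] += 1

    # Step (4): ensemble means of the corresponding IMFs
    valid = counts > 0
    return imf_sum[valid] / counts[valid, None]
```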
Cross-Correlation The cross-correlation method is widely used for estimating the time delay in many research fields and has shown very good performance. In this paper, the time difference is estimated through the cross-correlation computation of [22], where xk and yk denote the two leak signals from the two AE sensors and N is the length of the signal. The time difference corresponds to the location of the dominant peak in the correlation function Rxy(m), whilst the peak value is the correlation coefficient representing the similarity of the two signals. Experimental Setup Laboratory-scale experiments were carried out on a 316L stainless steel plate with dimensions of 100 cm × 100 cm × 0.2 cm. A continuous leak of CO2 was created at a pressure of 2 bar from a hole of 2 mm in diameter in the center of the plate. An array with six identical high-frequency AE sensors (RS-2A, Softland Co., Ltd., Beijing, China) was mounted in a circular form on the plate using vacuum grease couplant. The angle between the adjacent sensors is 60° and the diameter of the circle is 10 cm. The sensor arrangement and the frequency response characteristics of all high-frequency AE sensors are shown in Figure 4. The consistency of the AE sensors is quite high, especially in the frequency band 150-200 kHz. The main technical specifications of the AE sensors used are shown in Table 1. The acoustic signals were pre-amplified using AE amplifiers with a bandwidth of 10 kHz-1 MHz and a gain of 40 dB to boost the signal and reduce the effects of noise and interference. A holographic AE signal recorder (DS-8A, Softland Co., Ltd., Beijing, China) was used to acquire the waveforms at a sampling rate of 3 MHz. The A/D conversion resolution and input range of the signal recorder are 16 bits and ±10 V, respectively. The experimental set-up and sensor arrangement are shown in Figure 5. Characteristics of the AE Leak Signal AE leak signals from the six AE sensors show very similar characteristics in view of the fact that they are mounted close to each other and are used to detect the same leak source. Taking the signal from Sensor 1 (Figure 4a) as an example, the time domain waveform and corresponding frequency spectrum are plotted in Figure 6.
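As a sketch of the time-delay estimation described in the Cross-Correlation subsection above, the snippet below locates the dominant peak of the correlation between two leak signals; the 3 MHz sampling rate matches the signal recorder, while the function name and the example signals are illustrative.

```python
import numpy as np

def time_delay(x, y, fs=3e6):
    """Estimate the delay of y relative to x (in seconds) via cross-correlation.

    The delay corresponds to the lag of the dominant peak of Rxy(m); the peak
    value itself indicates how similar the two leak signals are.
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    rxy = np.correlate(y, x, mode="full")        # lags -(len(x)-1) .. len(y)-1
    lags = np.arange(-(len(x) - 1), len(y))
    return lags[np.argmax(rxy)] / fs

# Example: y is x delayed by 50 samples, so the estimate is ~50/fs seconds
x = np.random.randn(10000)
y = np.roll(x, 50)
print(time_delay(x, y))
```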
It can be seen from Figure 6 that the signal is continuous in the time domain and has a wide spectral range of 10-300 kHz. The signal contains frequency components in three main regions, with one in the high-frequency band (150-200 kHz) and the other two in the low-frequency band (10-50 kHz). Since the high-frequency region is not adversely affected by the common ambient noise, the signal in this region is utilized for the localization of the leak hole in this study. The original signal is decomposed using EEMD as discussed in Section 2. Figure 7 shows the EEMD decomposition results of the original signal from Sensor 1: Figure 7a shows the decomposed time domain signal waveforms and Figure 7b the corresponding frequency spectra. It can be seen that seven IMF components are generated. IMF1 has the highest frequency components while the other IMF components contain lower frequency components. However, the energy of IMF1 is relatively low. Therefore, IMF2 is extracted to identify the location of the leak hole by comprehensively considering the frequency and energy of the signal.
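One way to automate this IMF choice is sketched below: pick the IMF with the most energy inside the 150-200 kHz band identified above. The selection rule is an assumption made for illustration, not the authors' exact procedure.

```python
import numpy as np

def select_imf(imfs, fs=3e6, band=(150e3, 200e3)):
    """Pick the IMF with the most energy inside the target frequency band.

    imfs : array of shape (n_imfs, n_samples), e.g. the EEMD output
    fs   : sampling rate in Hz
    band : (low, high) frequency band of interest in Hz
    """
    freqs = np.fft.rfftfreq(imfs.shape[1], d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    spectra = np.abs(np.fft.rfft(imfs, axis=1)) ** 2
    band_energy = spectra[:, in_band].sum(axis=1)
    return int(np.argmax(band_energy))   # index of the selected IMF
```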
Leak Localization Results and Error Analysis The time difference between any pair of signals from the sensor array can be calculated through cross-correlation. The sensor array contains six sensing elements; therefore, there is a set of 15 cross-correlation results. If the speed of the AE signal is known, the distance difference can be calculated and then the leak hole located. The speed is found to be 4610 m/s, which was measured by conducting the Nielsen-Hsu pencil lead break test [23]. Table 2 shows the measured time difference and distance difference between the signal pairs. The implementation of leak localization consists of four key stages. In the first stage, EEMD is deployed to extract the useful signal from the noise. The characteristics of the leak signal are analyzed in both the time and frequency domains to select the proper frequency band. The second stage is to estimate the time differences between the sensor signals through correlation signal processing; the time difference between any pair of the sensor signals is calculated in this step. The third stage estimates the distance differences between the sensing elements from the measured time differences and the wave speed. It is worth noting that the distance difference must satisfy the restricted condition of the circular array and the hyperbolic curve as analyzed in Section 2. Finally, a hyperbolic positioning algorithm is used to locate the leak hole by finding the crossing points of the hyperbolic curves. It can be seen from Table 2 that the absolute error in the determination of the distance difference is no greater than 0.6 cm. This result indicates good cross-correlation performance of the AE sensor array in a circle. In addition, the results from sensor pairs 1 and 4, 2 and 3, 2 and 6, 3 and 5, and 5 and 6 cannot satisfy the condition of the hyperbolic curve as analyzed in Section 2; therefore, a total of ten hyperbolic curves are created. The leak localization results, arising from the hyperbolic positioning algorithm, are shown in Figure 8a. The crossing points of hyperbolic curves around the leak hole are seen in a zoomed-in version in Figure 8b. In theory, all hyperbolic curves should intersect at one point (i.e., the leak hole); however, in practice there is more than one crossing point, formed by two, three, or more curves, due to errors in measurement, as shown in Figure 8a. It can be seen from Figure 8b that three crossing points are formed by at least three curves around the leak hole and their coordinates are (−0.2 cm, −9.2 cm), (−2.2 cm, 2.6 cm), and (−2.2 cm, 10.2 cm), respectively. Among them, crossing Point 1 is formed by five curves while crossing Points 2 and 3 are each formed by three curves.
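The paper locates the leak graphically from the crossing points of the hyperbolic curves. An equivalent numerical formulation of stages two to four is sketched below: it applies the restricted condition (distance differences must not exceed the array diameter) and the measured wave speed of 4610 m/s, then solves for the source by nonlinear least squares. The function name and the sign convention for the time differences are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

V_CM_PER_S = 4610.0 * 100.0   # wave speed from the pencil-lead break test, in cm/s

def locate_leak(sensors, time_diffs, diameter_cm=10.0, x0=(0.0, 0.0)):
    """Locate the leak from pairwise time differences (sketch).

    sensors    : (n, 2) array of sensor coordinates in cm
    time_diffs : dict {(i, j): dt_ij in seconds}, with dt_ij = t_i - t_j
                 estimated from cross-correlation of the signals at i and j
    Pairs whose distance difference exceeds the array diameter violate the
    restricted condition of the circular array and are discarded, mirroring
    the ten-curve selection described in the text.
    """
    pairs, dds = [], []
    for (i, j), dt in time_diffs.items():
        dd = V_CM_PER_S * dt              # distance difference PF_i - PF_j, in cm
        if abs(dd) < diameter_cm:         # restricted condition
            pairs.append((i, j))
            dds.append(dd)

    def residuals(p):
        d = np.linalg.norm(sensors - p, axis=1)
        return [d[i] - d[j] - dd for (i, j), dd in zip(pairs, dds)]

    return least_squares(residuals, x0).x   # estimated (x, y) of the leak hole
```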
The rule to locate the leak hole is based on the fact that a crossing point has a higher probability of being the leak source if it is formed by more curves. In this study, the location of the leak hole is thus estimated using Equation (11), where (xi, yi) is the coordinate of the ith crossing point and ni is the number of crossing curves at the ith crossing point. The resulting coordinates of the leak hole in this example are (−1.3 cm, −0.7 cm). The absolute error in this localization is no greater than 2 cm on the 100 cm × 100 cm plate. It must be noted that the time difference measurement is crucial in the whole localization process and even a small error can corrupt the localization result. The time difference calculated through cross-correlation usually contains several peak values, and errors will be introduced if the wrong peak is selected. In order to enhance the stability and accuracy of the localization, an optimized method is proposed by changing the position of the circular AE sensor array on the flat-surface structure or adding another circular sensor array to identify the direction of the leak hole. It can be seen from Figure 8a that the curves are concentrated toward the direction of the leak hole, although some curves do not pass through the leak hole. This phenomenon shows another advantage of the circular sensor array, i.e., it has very good directivity. If the position of the sensor array is changed, or another array is added, a new direction toward the leak hole is obtained. Thus, the leak hole can be located from the two directions. The optimized sensor arrangement is shown in Figure 9 and the localization results using this optimized method are shown in Figure 10.
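Returning to Equation (11): the extracted text omits the equation itself, but the reported numbers are consistent with a centroid of the crossing points weighted by the number of curves meeting at each point. The sketch below, in which that weighted-centroid form is an inference rather than a quotation, reproduces the reported (−1.3 cm, −0.7 cm).

```python
import numpy as np

def weighted_crossing_centroid(points, counts):
    """Estimate the leak location as the centroid of the crossing points,
    weighted by the number of hyperbolic curves meeting at each point."""
    points = np.asarray(points, dtype=float)
    counts = np.asarray(counts, dtype=float)
    return (counts[:, None] * points).sum(axis=0) / counts.sum()

# Crossing points and curve counts reported in the text
points = [(-0.2, -9.2), (-2.2, 2.6), (-2.2, 10.2)]   # cm
counts = [5, 3, 3]
print(weighted_crossing_centroid(points, counts))    # approx. (-1.29, -0.69) cm
```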
Figure 10 shows that both sensor arrays can find the direction of the leak hole, which lies in the narrow crossing zone. This narrow crossing zone is shown more clearly in the upper right dashed box, a zoomed-in version. It can be seen from the zoomed-in version that the coordinates of the four corner points of the crossing zone are A (−1.5 cm, 2.0 cm), B (−1.1 cm, −1.5 cm), C (0.2 cm, −2.2 cm) and D (0.1 cm, 1.1 cm), respectively. This result suggests that the leak hole can be located even when some hyperbolic curves deviate from the actual leak hole. Moreover, it can be seen that the directivity of Group 2 is better than that of Group 1. This is because Group 2 is farther away from the leak hole, so the distance difference between any two sensors in the array with reference to the leak hole (|PF1 − PF2|) is smaller. Therefore, the opening angle of the hyperbolic curve is greater, the curve is more like a straight line (blue line in Figure 11), and the directivity of the sensor array is better. The final localization result using the optimized method is (−0.6 cm, −0.1 cm), obtained by averaging the coordinates of the four points of the narrow crossing area. In summary, the absolute error is 1.5 cm and the full-scale error is 0.6% (the full-scale error is defined as the absolute error normalized to the full length of the square plate).
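Averaging the four corner coordinates quoted above gives roughly (−0.6 cm, −0.2 cm), consistent with the reported (−0.6 cm, −0.1 cm) up to rounding of the corner values; the short sketch below simply performs that averaging.

```python
import numpy as np

corners = np.array([(-1.5, 2.0), (-1.1, -1.5), (0.2, -2.2), (0.1, 1.1)])  # cm
print(corners.mean(axis=0))   # approx. (-0.58, -0.15) cm
```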
For a large detection area, the accuracy of leak localization will decrease when reducing the number of sensors used. In order to study the accuracy of the system with one or more faulty sensors in a practical application [24,25], the localization results of six, five, four and three sensors are compared, respectively, as shown in Figure 12.
It can be seen from Figure 12 that the localization errors with six, five, four and three sensors are 1.5 cm, 1.7 cm, 3.5 cm and 13 cm, respectively. These localization results and errors are calculated according to the process described in Section 3.3. Thus, the localization accuracy can satisfy the requirements of most engineering applications when the number of sensors is more than four. In fact, the array can no longer be called a circular array, and the localization method is not suitable, if the number of sensors is less than four. It is believed that the proposed circular sensor array and localization method will show better performance if more sensors are used. However, this will require more computational and hardware costs. Conclusions In this study, a novel circular sensor array has been proposed to locate a CO2 leak hole on a flat-surface structure. Advantages of the proposed sensor array have been analyzed. The AE leak signals are decomposed into seven IMF components using EEMD, and the signal component IMF2, with high frequency and high energy, has been used to predict the location of the leak hole through estimation of the time differences and distance differences of the sensor array. A total of ten hyperbolic curves are generated, and the hyperbolic curves are concentrated toward the direction of the leak hole. There are three crossing points formed by at least three curves around the leak hole. A localization rule is defined based on the fact that a crossing point has a higher probability of being the leak source if it is formed by more curves. In order to enhance the stability and accuracy of the localization, an optimized method has been proposed by changing the position of the circular AE sensor array on the flat-surface structure or adding another circular sensor array to identify the direction of the leak hole. Experimental results demonstrate that the full-scale error in the leak localization is within 0.6% on a 100 cm × 100 cm stainless steel plate. Such accuracy in leak localization should meet the requirements of most practical applications.
9,241.4
2016-11-01T00:00:00.000
[ "Engineering" ]
Step-downs reduce workers' compensation payments to encourage return to work: are they effective? Objective To determine whether step-downs, which cut the rate of compensation paid to injured workers after they have been on benefits for several months, are effective as a return to work incentive. Methods We aggregated administrative claims data from seven Australian workers' compensation systems to calculate weekly scheme exit rates, a proxy for return to work. Jurisdictions were further subdivided into four injury subgroups: fractures, musculoskeletal, mental health and other trauma. The effect of step-downs on scheme exit was tested using a regression discontinuity design. Results were pooled into meta-analyses to calculate combined effects and the proportion of variance attributable to heterogeneity. Results The combined effect of step-downs was a 0.86 percentage point (95% CI −1.45 to −0.27) reduction in the exit rate, with significant heterogeneity between jurisdictions (I2=68%, p=0.003). Neither timing nor magnitude of step-downs was a significant moderator of effects. Within injury subgroups, only fractures had a significant combined effect (−0.84, 95% CI −1.61 to −0.07). Sensitivity analysis indicated potential effects within mental health and musculoskeletal conditions as well. Conclusions The results suggest some workers' compensation recipients anticipate step-downs and exit the system early to avoid the reduction in income. However, the effects were small and suggest step-downs have marginal practical significance. We conclude that step-downs are generally ineffective as a return to work policy initiative. Postprint link: https://www.medrxiv.org/content/10.1101/19012286 Introduction Step-downs reduce the rate of income replacement paid to injured workers after they have been on benefits for a period of several months. They are found in a number of workers' compensation systems around the world, including several in Europe (Andorra, Croatia, Slovakia and Sweden), Africa (Ethiopia, Republic of Congo, São Tomé and Príncipe and Zimbabwe), Asia (Indonesia, Laos, Singapore and Taiwan), Central America (Belize and Panama), the Middle East (Kuwait, Oman and Qatar), South America (Ecuador) 1 and one American state (Ohio). 2 Unique among these is Australia, where each of its nine major workers' compensation systems implements step-downs. 3 Step-downs are promoted as an incentive for claimants to return to work. [4][5][6] However, there is little direct empirical evidence to support this claim 7 8 and that which exists is generally inconclusive. 6 9 It also contrasts with the original purpose of step-downs when they were introduced across Australia in the 1980s and 1990s, which was to rein in the rising cost of employers' insurance premiums. 7 Nevertheless, evidence that more generous benefits increase time off work indicates that an incentivising effect is plausible. 4 We test whether step-downs increase the rate at which claimants exit workers' compensation, as well as the moderating effects of their timing and magnitude. Building on evidence that effects of benefit generosity vary by injury, 10 we also tested effects in claims for fractures, mental health conditions, musculoskeletal conditions and other trauma subgroups. Methods Study questions and analyses were preregistered with the Open Science Framework. 11 We reproduce the analytical approach here and note any deviations.
Step-downs in Australia Australia's six states, two territories and the Commonwealth government each have their own workers' compensation system for injured workers, which together cover 94% of the workforce. 12 Each scheme is cause based, meaning benefits are contingent on attribution of the condition, whether an acute injury or gradual onset disease (collectively referred to as 'injury' in this paper), to employment. 13 There are considerable differences in overarching policy settings, including whether the scheme allows common law claims, whether it is publicly or privately underwritten, and the generosity of benefits. 3 While each system employs step-downs, they vary in both timing and magnitude, as illustrated in table 1. Most of these systems have wage replacement caps that set a nominal maximum on what claimants may earn, and a few have minimums. In Queensland, claimants with an industrial agreement, which is a certified specification of industrial matters between employees and employers, are initially compensated at the greater of 85% of their Normal Weekly Earnings (NWE; based on individual preinjury earnings) or the industrial instrument, which at 26 weeks steps down to the greater of 75% NWE or 70% Queensland Ordinary Time Earnings (QOTE; based on state mean earnings). Claimants not under an industrial instrument are initially compensated at the greater of 85% NWE or 80% QOTE and step down to the greater of 75% NWE or 70% QOTE. In the Northern Territory, step-downs are the greater of: (1) 75% of weekly earnings up to a maximum nominal cap or (2) the lesser of a flat rate plus additional income for each dependent or 90% of NWE. In Western Australia, claimants with an industrial agreement are not subject to step-downs and are compensated at 100% of their regular earnings throughout the life of the claim. However, overtime, bonuses and allowances are compensated up to 13 weeks but not afterwards, 14 meaning workers who rely on these extra sources of income are effectively subject to step-downs, though of varying magnitudes. Step-down rates are higher in Tasmania and Comcare (the Commonwealth system) if the claimant is back at work in some form of partial capacity. 4 14 In these cases, the magnitudes of the initial and step-down compensation rates vary, though the timings remain the same. In Victoria and, to a lesser extent, New South Wales, claimants from unionised industries often have industrial awards and enterprise agreements that top up payments and can make up any gaps between compensation and preinjury earnings. 7 8 We were unable to account for these arrangements nor determine what proportion of the population was affected by them. Data Data were derived from the National Data Set for Compensation-based Statistics, an amalgamation of case-level administrative claims data from each system that is compiled by Safe Work Australia. 15 The preregistered inclusion criterion restricted eligibility to claims lodged since either July 2009 or the most recent change to step-down arrangements, whichever was later, up to June 2015. For instance, in July 2011, Tasmania altered step-down arrangements via legislative amendment. Only claims lodged afterwards were included in analyses. Post hoc, we added several other exclusions: ► Claims affected by minimum and maximum caps for weekly payments ► Claims lodged after June 2014 in South Australia to allow a 1-year buffer with the change in step-down arrangements implemented in July 2015.
► Claims exempted from New South Wales' 2012 legislative amendments, including several occupations (police, paramedics, firefighters and coalmine workers) and dust diseases. 3 Our outcome, the weekly scheme exit rate, was determined using cumulative compensated time off work. While scheme exit does not necessarily entail return to work, and cumulative compensated time off work underestimates the total actual duration, it is nevertheless considered the most accurate measure of time off work when using administrative data. 16 Several jurisdictions including Victoria and South Australia determine the application of step-downs by counting any calendar week in which there was compensated time loss as a full week, 6 17 18 whereas Comcare uses cumulative compensated time off work. 4 In the Victorian and South Australian systems, this means that for some claims, step-downs applied earlier than specified in our analyses. Analysis We calculated scheme exit rates by dividing the number of claims exiting the system each week by those in it at the start of that week. Injury subgroups included fractures, mental health conditions, musculoskeletal conditions and other trauma. Our preregistered categorisation separated back and neck from other musculoskeletal conditions, though we have since decided to keep them together as a better conceptual fit. Neurological conditions and all other conditions were excluded due to low numbers. Data were left-censored at 4 weeks to exclude residual effects of employer excess, which are the postinjury periods for which employers are responsible for compensation payments. Anecdotal reports suggest claims are less likely to persist only a day or two beyond the employer excess period, tending either to resolve before the employer excess period ends or to persist for a few days beyond that. In Australia, the longest employer excess periods are 10 working days/2 weeks in Victoria/South Australia. 3 We determined a priori that 4 weeks, while arbitrary, would be sufficient to remove any confounding due to this effect. Exit rates were calculated up to 2 years or 104 weeks. Effects were evaluated with a regression discontinuity design, a powerful quasi-experimental approach that compares outcomes on either side of an arbitrary cut-off. When individuals are unable to control which side of the cut-off they are on, regression discontinuity simulates a randomised controlled trial. 19 20 In this study, the assumption was inverted in that we evaluated whether claimants crossed this threshold. This means we cannot treat individuals on either side of the step-down cut-off as exposed or control groups and must interpret the results more cautiously. 21 We incorporated parametric polynomial estimators to account for non-linear patterns in exit rates, testing up to 10 polynomial terms with separate or same slopes, erring on the side of overfitting, 20 and selected best-fit models based on the Akaike Information Criterion. 19 Initially, we tested only separate slopes, but in several cases the fitted lines noticeably diverged from data points near the step-down cut-off. Testing same-slope models as well addressed these issues. Results are reported as the percentage point change to the exit rate. Coefficients and SEs were combined into random effects meta-analyses to determine combined effects and the proportion of variance attributable to heterogeneity. We tested the moderating effect of step-down timing and magnitudes using meta-regressions.
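A minimal sketch of this estimation, assuming a same-slope specification: weekly exit rates are regressed on polynomial terms of the centred running variable (week) plus a post-step-down indicator, with candidate polynomial orders compared by AIC. Variable names, the AIC formula and the simple least-squares fit are illustrative and may differ from the authors' exact implementation.

```python
import numpy as np

def fit_rdd(weeks, exit_rates, cutoff, max_order=10):
    """Fit same-slope polynomial RDD models of increasing order and
    return the discontinuity estimate from the best (lowest-AIC) model."""
    t = np.asarray(weeks, dtype=float) - cutoff      # centred running variable
    y = np.asarray(exit_rates, dtype=float)
    step = (t >= 0).astype(float)                    # post-step-down indicator

    best = None
    for order in range(1, max_order + 1):
        X = np.column_stack([np.ones_like(t), step] +
                            [t ** p for p in range(1, order + 1)])
        beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        n, k = len(y), X.shape[1]
        aic = n * np.log(np.sum(resid ** 2) / n) + 2 * k
        if best is None or aic < best[0]:
            best = (aic, beta[1], order)             # beta[1]: jump at the cut-off

    aic, discontinuity, order = best
    return discontinuity, order
```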
Exit rates within a few subgroups became unstable as the number of claims in the system diminished over time. To account for this, we excluded data points where the number of remaining claimants for the week was <500 and did not conduct analyses where there were <20 aggregated data points after the step-down. To illustrate the issue, data points in regression discontinuity plots are coloured black where included and grey where excluded. These exclusions were an ad hoc approach to an analytical problem that only became apparent as we examined the full dataset. As a result, neither Tasmania nor the Australian Capital Territory had sufficient data, and both were thus excluded from analyses. Results Data counts with crosstabulations for jurisdiction and injury type are summarised in table 2. In total, there were n=292 060 claim records in this study, the majority of which were musculoskeletal (n=176 297, 60%). The findings were first presented at the Actuaries Institute Injury and Disability Schemes Seminar in Canberra on 11 November 2019. Step-down impact on scheme exit rates Across jurisdictions, the combined effect of step-downs on exit rates was a reduction of 0.86 percentage points (95% CI −1.45 to −0.27). A significant, moderate proportion of the variance in effects was attributable to heterogeneity between jurisdictions (I2=68%, p=0.003). Within individual schemes, all significant effects were negative. Three of four significant effects were observed in jurisdictions with the earliest step-downs, occurring at 13 weeks: New South Wales (−1.65, 95% CI −3.25 to −0.06), Western Australia (−1.65, 95% CI −3.07 to −0.23) and South Australia (−2.24, 95% CI −3.38 to −1.10). Victoria also had a 13-week step-down, though the effect was non-significant (0.03, 95% CI −0.88 to 0.95). The only significant effect outside of 13 weeks was in Comcare, where step-downs occur at 45 weeks (−1.29, 95% CI −2.25 to −0.34). However, meta-regressions found that neither the timing (0.01, 95% CI −0.08 to 0.09) nor the magnitude (0.02, 95% CI −0.13 to 0.17) of step-downs significantly moderated the effect on exit rates. Results are summarised in figure 1, and regression discontinuities are plotted in figure 2. Sensitivity analysis: confounding from competing incentives We identified potential confounding from competing scheme incentives such as 10% insurance premium discounts in New South Wales for employers who return claimants to work within 13 weeks 33 and bonuses for claims agents in Victoria who keep the rate of claims reaching 13 weeks low. 34 Other such incentives may exist, though consultation with scheme representatives indicated this information is often confidential as a private arrangement between insurers and employers. We conducted sensitivity analyses on claimants unaffected by step-downs, who were identified based on preinjury wages and maximum and minimum wage replacement caps. Significant changes among these claims would be evidence of confounding. Only three jurisdictions (Victoria, Queensland and Western Australia) had sufficient data for this analysis. Effects were non-significant individually and combined (0.16, 95% CI −0.50 to 0.82). These results are summarised in online supplementary figure 1. Step-down impact by injury type Combined effects were significant only among fracture claims (−0.84, 95% CI −1.61 to −0.07). Heterogeneity between sites was non-significant (I2=25%, p=0.087).
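The pooled effects and I2 statistics reported here come from random-effects meta-analysis of the per-jurisdiction coefficients and standard errors. A standard DerSimonian-Laird sketch of that pooling (not necessarily the exact software routine the authors used) is:

```python
import numpy as np

def random_effects_meta(effects, ses):
    """DerSimonian-Laird random-effects pooling of per-jurisdiction
    effect estimates and their standard errors."""
    effects = np.asarray(effects, dtype=float)
    var = np.asarray(ses, dtype=float) ** 2

    w = 1.0 / var                                    # fixed-effect weights
    theta_fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - theta_fixed) ** 2)     # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance

    w_star = 1.0 / (var + tau2)                      # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0    # heterogeneity share
    return pooled, se_pooled, i2
```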
Meta-analyses by injury type are summarised in figure 3, and regression discontinuity plots are presented in online supplementary figures 2-5. Sensitivity analysis: step-down impact by injury type While combined effects were non-significant in mental health, musculoskeletal and other trauma claims, magnitudes were similar across all injury types (−0.50 to −1.45) with considerable overlap in confidence intervals. There were also indications that a single jurisdiction was responsible for attenuation to non-significance in some injuries, such as the lone positive effect among musculoskeletal conditions in the Northern Territory (1.00, 95% CI 0.04 to 1.96). We conducted 'leave one out' sensitivity analyses. 25 Interpretations of step-down effects on scheme exit rates The local effect of step-downs on scheme exit rates was negative. The first potential explanation is that step-downs reduce the likelihood of return to work. This seems implausible given its lack of theoretical coherence and evidence that greater benefit generosity is positively associated with claim duration. 10 35 The second interpretation is that step-downs have an anticipatory effect, where claimants leave the compensation system early to avoid reductions in income. As evidence for this interpretation, regression discontinuity plots suggest that where effects were statistically significant, scheme exit rates increased in the week prior to step-down. An alternative explanation posits that we mis-specified step-downs as occurring earlier than they actually do. This would be the result of our use of cumulative determinations of when step-downs apply, contrary to jurisdictions that use calendar determinations, leading to discrepancies. For instance, in Victoria and South Australia, a claimant who works 1 day in a 5-day workweek would be subjected to a step-down after 13 weeks. In our dataset, this would correspond to 13 days or 2.6 weeks of compensated time off work, and we would not count them as being affected by step-downs. However, we have identified several reasons to reject these discrepancies as the driver of the negative effect. For one, there were significant anticipatory effects in Comcare, where step-downs are determined by cumulative compensated time off work, 4 as in our determination. For another, divergent estimates would be attributable to failed return to work attempts and graduated/partial working arrangements. Such claimants have demonstrated positive action to return to work, and financial incentives may not provide a sufficient motivation to achieve sustained return to work. Additionally, claimants with graduated/partial working arrangements are less affected by step-downs since only the compensated portion of their wages is reduced. In Comcare, step-down magnitudes decrease for claimants with partial working arrangements. 4 We would also expect such exits to be more evenly distributed prior to step-downs. Instead, as noted above, the regression discontinuity plots suggest they are clustered in the week prior to step-down in a manner that deviates from the secular trend. This suggests these claimants are maximising payments under the higher initial rate of compensation. Our analytical approach, the regression discontinuity design, can only test local effects, that is, effects at the cut-off.
Evidence that greater benefit generosity increases time off work 10 35 suggests step-downs may still have longer term effects, even where there are no local effects. Plotted exit rate patterns generally indicate continuing logarithmic decay, particularly where local effects were non-significant. While this does not rule out longer term effects, it suggests they are at most relatively small. Heterogeneity of effects and potential causes A moderate proportion of the variance in effects was attributable to heterogeneity. While neither the timing nor the magnitude of step-downs was a significant moderator, there were only eight data points for the meta-regression, limiting statistical power. These factors may yet explain some of the differences in effect. For instance, most significant effects were observed among step-downs occurring at 13 weeks, the earliest timing. This aligns with employer and policymaker opinion that delaying step-downs diminishes their effectiveness. 4 5 However, the 45-week Comcare step-down, the latest tested in this study, also had a significant effect. This suggests unmeasured factors, such as the presence of organised unions who can warn claimants about impending step-downs, may modify step-down effects regardless of timing. Effects by type of injury There were significant combined effects in fracture claims and more tenuous evidence for effects in mental health and musculoskeletal condition claims. Fractures are generally considered less responsive to benefit generosity since they are more visible and easier to diagnose 35 with less variability in recovery time. 36 In other words, there is less discretionary time off work that may be influenced by benefits. Though contrary to expectations, the findings are not unprecedented. We previously found time off work among fracture claims sharply increased after Victoria raised the maximum wage replacement cap from 150% to 200% of average state earnings. 10 This may be explained by the subset of fracture claims exposed to step-downs. Online supplementary figure 2 illustrates that, unlike other injuries, fracture exit rates peak around 2 months postclaim, possibly reflecting the natural course of recovery. 36 Claims exceeding this peak will be more complex on average and may be more responsive to benefit generosity. 37 Mental health conditions are less visible and harder to diagnose, characteristics thought to increase sensitivity to benefit generosity. To our knowledge, our previous work is the only empirical investigation of how such claims respond to the rate of compensation, though we found no evidence of an effect. 10 However, the previous study examined the effect of initial rates of compensation, while here we measure a change in that rate. The psychological vulnerability of mental health claimants may mean the act of cutting benefits has a greater effect on scheme exit than variations in what they are paid from the start. Musculoskeletal conditions are similarly less visible and harder to diagnose, with a substantial body of literature demonstrating sensitivity to benefit generosity. 35 The findings for other trauma conditions were non-significant, though it would be premature to dismiss this as no effect given the combined point estimate was the largest in magnitude. Null results do not necessarily entail null effects. Statistical versus practical significance of findings While the findings were statistically significant, practical significance is less clear. For one, effects were fairly small.
At the system level, the largest effect was −2.24 in South Australia. At the injury level, the biggest effect was −5.28 among mental health claims in South Australia, though this and the other larger injury effect estimates had wide CIs. Nevertheless, if these are reflective of the maximum potential impact of step-downs, they remain marginal, and if they are indeed anticipatory, the effects may be short lived, with scheme exit rates returning to normal shortly after step-downs apply. Step-downs may have negative side effects on claimants. They have been linked to financial strain, 6 38 which could worsen outcomes or even delay scheme exit, particularly later in the process. 37 Furthermore, economically motivated return to work, such as that driven by compensation benefits, can increase the likelihood of reinjury. 39 Scheme exit does not necessarily entail return to work and may result in cost-shifting to other income replacement systems. 8 40 However, it seems unlikely that those who leave workers' compensation in response to step-downs would go elsewhere if the causal mechanism is financial pressure. Other government-provided incapacity benefits are less generous than workers' compensation. 3 40 Some claimants may retire, as this option generally entails less financial stress. 41 However, these inferences assume an informed, calculated and rational economic response to financial incentives. The cut in benefits may induce a negative psychological reaction in some claimants and lead to a scheme exit that is neither return to work nor an alternative that improves financial well-being. Meta-analyses suggested there was a moderate amount of heterogeneity between jurisdictions, which makes it difficult to make inferences about generalisability. However, the effects varied from small to approximately null, with a positive effect in a single subgroup (musculoskeletal conditions in the Northern Territory). The findings may be applicable to similarly cause-based, devolved workers' compensation systems in developed economies like Canada and the USA, or to other disability-based systems in developed countries, though it is unclear what the effects may be in underdeveloped settings. Strengths and limitations This study has several limitations, some of which we have already mentioned, including discrepancies in the determination of step-downs and inconsistent application of step-downs for some claimants. Regression discontinuity designs assume populations around the cut-off are unable to manipulate what condition they are exposed to. 19 Our study inverted this assumption, since claimants were reacting to the step-down cut-off rather than being allocated by it to separate conditions. The theoretical implications are unclear, though it may provide greater flexibility in interpretation. Rather than simulating a randomised controlled trial, as is the case when regression discontinuities meet certain assumptions, 20 we can interpret the findings more qualitatively. 21 Similar natural experiment designs like the interrupted time series also consider anticipatory effects. 42 However, this means we also lose some of the strength in making causal attributions that a simulated randomised controlled trial would provide. This study also has several strengths. We applied a robust quasi-experimental approach, the regression discontinuity design, to national workers' compensation data with population-level coverage.
There were sufficient data to investigate impact by jurisdiction and most injury subgroups, and meta-analysis increased the precision of estimates and provided evidence that effects varied by jurisdiction. Sensitivity analyses provided evidence that effects were not attributable to co-occurring incentives that may have confounded results. Conclusions The findings suggest that step-downs have an anticipatory effect, leading some workers' compensation recipients to leave the system early in anticipation of a reduction in income. However, the effects are small and probably short lived. Step-downs may still reduce costs to workers' compensation systems, which is a legitimate policy goal. However, our findings suggest step-downs have marginal practical significance and are generally ineffective as a return-to-work policy initiative. Correction notice This article has been corrected since it was published online to reflect the correct link to the data in the Data availability statement.
5,233.2
2019-11-19T00:00:00.000
[ "Economics", "Medicine" ]
Characterisation of marsupial PHLDA2 reveals eutherian specific acquisition of imprinting Background Genomic imprinting causes parent-of-origin specific gene expression by differential epigenetic modifications between two parental genomes. We previously reported that there is no evidence of genomic imprinting of CDKN1C in the KCNQ1 domain in the placenta of an Australian marsupial, the tammar wallaby (Macropus eugenii) whereas tammar IGF2 and H19, located adjacent to the KCNQ1 domain in eutherian mammals, are imprinted. We have now identified and characterised the marsupial orthologue of PHLDA2, another gene in the KCNQ1 domain (also known as IPL or TSSC3) that is imprinted in eutherians. In mice, Phlda2 is a dose-sensitive negative regulator of placental growth, as Cdkn1c is for embryonic growth. Results Tammar PHLDA2 is highly expressed in the yolk sac placenta compared to other fetal tissues, confirming a similar expression pattern to that of mouse Phlda2. However, tammar PHLDA2 is biallelically expressed in both the fetus and yolk sac placenta, so it is not imprinted. The lack of imprinting in tammar PHLDA2 suggests that the acquisition of genomic imprinting of the KCNQ1 domain in eutherian mammals, accompanied with gene dosage reduction, occurred after the split of the therian mammals into the marsupials and eutherians. Conclusions Our results confirm the idea that acquisition of genomic imprinting in the KCNQ1 domain occurred specifically in the eutherian lineage after the divergence of marsupials, even though imprinting of the adjacent IGF2-H19 domain arose before the marsupial-eutherian split. These data are consistent with the hypothesis that genomic imprinting of the KCNQ1 domain may have contributed to the evolution of more complex placentation in the eutherian lineage by reduction of the gene dosage of negative regulators for both embryonic and placental growth. Background Genomic imprinting produces monoallelic gene expression resulting from the parent-of-origin-dependent epigenetic modifications. Both DNA methylation and histone modifications are required to establish the paternal and maternal imprinting during development of the germ cells and to maintain it after fertilisation [1][2][3][4]. In humans and mice defects in some epigenetic modifiers or co-factors cause global disorders of genomic imprinting and of imprinted gene expression with early embryonic lethality, demonstrating that genomic imprinting is essential for mammalian development [5][6][7][8][9][10]. It is still unclear, however, why genomic imprinting has arisen in mammalian evolution, because adopting monoallelic gene expression means abandoning the merits of diploidy. In higher vertebrates, genomic imprinting has been found so far only in the viviparous therian mammals (eutherians and marsupials), but not in the egg-laying mammals, the monotremes [11,12]. Since only viviparous mammals have genomic imprinting, and many imprinted genes regulate fetal and placental growth, some authors have suggested that genomic imprinting is correlated with the evolution of mammalian viviparity [13][14][15][16][17]. It is therefore of great interest to compare genomic imprinting between eutherians and marsupials that diverged between 130 and 148 million years ago [18][19][20]. Most eutherians form a chorioallantoic (allantoic) placenta that is the site of highly efficient nutritional exchange between fetus and mother, allows lengthy intra-uterine growth, and in many cases supports the growth of a precocial young. 
In contrast, most marsupials depend on a relatively short-lived chorio-vitelline (yolk sac) placenta. Although often ignored, a yolk sac placenta is also present and functions for varying periods of time in all eutherian mammals [21][22][23]. Marsupials give birth to altricial young that are at a much earlier developmental stage than the neonates of most eutherians, but have developed a complex and advanced lactation system that supports further development and growth after birth, usually in a pouch [24,25]. Nearly 100 imprinted genes have been isolated in mice and humans. Imprinting has been studied in fourteen orthologues of these genes in marsupials but only 6 are imprinted [11,17,[26][27][28][29][30][31][32][33]. These are IGF2, IGF2R, PEG1/ MEST, PEG10, INS and H19, and are from 4 independent domains. We have previously reported that there is no evidence of genomic imprinting of CDKN1C (also known as p57 KIP2 ) in a marsupial, the tammar wallaby (Macropus eugenii) [28,34]. CDKN1C is located in the KCNQ1 domain mapped adjacent to the IGF2-H19 domain in eutherians and marsupials. Genomic imprinting of the IGF2-H19 domain is highly conserved between eutherians and tammars [33]. Although the imprinting regulatory mechanisms of the KCNQ1 and IGF2-H19 domains are known to be independent in mouse, the two domains are only 300 kb distant from each other and both contain several important genes that control fetal and placental growth. Therefore, to confirm whether the only gene in the domain that is not imprinted in the tammar is the CDKN1C gene, we examined the imprinting status of the orthologue of the PHLDA2 gene from the tammar wallaby KCNQ1 domain. PHLDA2 negatively controls growth of the chorioallantoic placenta in both human and mouse. In mice, deletion of Phlda2 causes placental overgrowth [35]. In contrast, biallelic expression of Phlda2, due to loss of imprinting, contributes to placental growth retardation and results in conceptuses with intrauterine growth restriction (IUGR) [36]. Furthermore, a single extra dose of Phlda2 has serious consequences for placental development, driving the loss of the junctional zone and reducing the amount of stored glycogen [37]. In humans, whilst there is silencing of PHLDA2 in complete hydatidiform moles [38], there is upregulation in placentae of fetuses with IUGR [39,40]; consistent with the results of genetic experiments in mice. Thus, the importance of gene dosage of PHLDA2 in eutherian placentation has been demonstrated by a number of studies. In this study, we characterise the orthologue of PHLDA2 in a marsupial, the tammar wallaby and examine its imprinting status in the chorio-vitelline placenta to clarify its possible contribution for the evolution of chorioallantoic placenta in the eutherian linage by dosage reduction consequent to acquisition of genomic imprinting. Characterisation of tammar PHLDA2 A 272 bp fragment was amplified by RT-PCR using a primer pair designed to a highly conserved sequence in the open reading frame (ORF) of the PHLDA2 gene among multiple species. Given the PCR product sequence was highly similar to PHLDA2 of other species, we next carried out 3' RACE to obtain 3' UTR sequence of tammar PHLDA2 using the same forward primer used to amplify the 272 bp fragment as the gene specific primer. The 3' UTR (477 bp) of tammar PHLDA2 consisted of a short intron (937 bp) similar to eutherian PHLDA2 ( Figure 1A). 
The expected genomic location of tammar PHLDA2 close to CDKN1C was confirmed by tammar BAC clone sequences in GenBank (NCBI). A 426 bp ORF encoding 142 amino acids was predicted with the supplemental sequence data from trace archive database (NCBI). Consistent with a previous comparison across vertebrates, that included fish, frog, chicken, mouse and human [41], the amino acid sequence of tammar PHLDA2 was also highly conserved within the PH (pleckstrin homology) domain, but there was lower conservation in the flanking sequences of both terminals ( Figure 1B). The PH domain in tammar PHLDA2 shares 78% amino acid sequence similarity with human, 67% with mouse, 73% with platypus and 77% with chicken PHLDA2 orthologues. Tissue specific expression pattern of tammar PHLDA2 As PHLDA2 is highly expressed in the yolk sac and placenta in human and mouse, we next analysed the expression pattern of tammar PHLDA2 in the yolk sac placenta as well as in several fetal tissues by quantitative PCR (QPCR). The marsupial yolk sac placenta consists of two regions, a bilaminar, avascular region and a trilaminar, vascular region. Both regions are the sites for fetal-maternal nutritional exchange while gases appear to be transferred principally via the vascular system of the trilaminar region [22,23,42,43]. The yolk sac placenta also synthesizes and stores nutrients required for fetal growth [22,23]. PHLDA2 mRNA expression in both bilaminar and trilaminar yolk sac was dramatically upregulated between day 24 to 26 of gestation (1-3 days before birth), although the relative expression level was lower in the bilaminar yolk sac ( Figure 2). A lower level of tammar PHLDA2 expression was also observed in several fetal tissues, as observed for Phlda2 in the mouse, but not the human [44]. Tammar PHLDA2 protein distribution in the yolk sac placenta To confirm tammar wallaby PHLDA2 protein expression and distribution in the yolk sac placenta, we carried out immunohistochemistry using a mouse monoclonal antibody raised against a partial recombinant human PHLDA2. The immunogen included aa 1-110, encompassing the whole PH domain. There is a high degree of similarity of amino acid sequences between human and tammar PHLDA2 over this region ( Figure 1B). Furthermore, we performed a genome-wide "TBLASTN" search for the published tammar genome sequence in the Ensembl database using the antigen peptide sequence for the query. It revealed the highest similarity of 76.5% for tammar PHLDA2 against whole sequence query (1-110/110 aa) as expected. The second highest hit was tammar PHLDA1, but this was aligned only partially (40-107/110 aa) with a much lower similarity of 51.5% for the aligned region. These data suggest that the immunostaining is positive for tammar PHLDA2 protein, although the possibility of some cross-reaction with PHLDA1 has not been completely excluded. Tammar PHLDA2 protein was present in both bilaminar and trilaminar regions in the yolk sac placenta, with strong immuno-staining in the cytoplasm of trophoblast cells of both parts of the yolk sac ( Figure 3A, B), despite the substantially lower mRNA relative expression level in the bilaminar yolk sac (Figure 2). Allelic expression analysis of tammar PHLDA2 Finally, we analysed allelic expression pattern of tammar PHLDA2 to determine whether it was imprinted. We searched for polymorphisms to allow us to distinguish between the two parental alleles. No exonic polymorphisms were found in any of the individuals (n = 18) tested. 
However, there was a length polymorphism in some individuals in the intron, characterised by the presence or absence of repeats in the 31 bp of intronic sequence ( Figure 4A). Therefore, allelic expression could be determined directly by RT-PCR amplifying the unspliced PHLDA2 transcript using a primer pair designed to amplify the length polymorphic site. All RNA samples were DNase I treated and the lack of detectable contamination by genomic DNA was confirmed by PCR using the templates without reverse transcription (data not shown). Hence all intronic fragments amplified by RT-PCR were derived from unspliced transcripts, not from genomic DNA. The genomic PCR products showed that all four individuals were heterozygous for the length polymorphism and that both alleles could be amplified equally ( Figure 4B). All samples tested had clear biallelic expression, demonstrating no evidence of genomic imprinting of tammar PHLDA2 ( Figure 4B). On the other hand, monoallelic expression of tammar IGF2 could be confirmed by amplification of the unspliced transcript using an intronic primer in the same way as the analysis of PHLDA2 ( Figure 4C). Discussion In this study, we identified and characterised the marsupial orthologue of PHLDA2. The amino acid sequence of tammar PHLDA2 showed the highest conservation within the PH domain and lower conservation in the flanking sequences at both terminals, suggesting an essential role for the PH domain in contrast to the flanking regions, consistent with previous reports [41]. There was a similarly high conservation of the amino acid sequences within the PH domain in the platypus, tammar and human PHLDA2, suggesting that this domain has a significant role with a similar function in marsupials and monotremes. The high level of mRNA expression in the trilaminar yolk sac and the protein localisation to the cytoplasm of trophoblast cells suggest that PHLDA2 functions in the tammar yolk sac placenta during pregnancy. However, although murine Phlda2 has the highest expression in the yolk sac [44], in mice with a disrupted Phlda2 gene the only abnormalities reported are in the chorioallantoic placenta [36]. Therefore, an ancestral role for PHLDA2 in the yolk sac might have been transferred to the chorioallantoic placenta during the evolution of the mouse. There was no evidence of genomic imprinting of tammar PHLDA2 in this study. The mouse Kcnq1 domain forms a large imprinted gene cluster including Phlda2, Slc22a18, Cdkn1c, Kcnq1, Ascl2 (also known as Mash2) and some placenta-specific imprinted genes. However, we now know that at least two genes, CDKN1C and PHLDA2, which are involved in embryonic and placental growth in eutherians, are not imprinted in this marsupial [28]. Considering that both genes are located in the middle of the domain and that all the imprinted genes in this domain are co-ordinately regulated by a single imprinting centre in the mouse, our data strongly suggest that the whole KCNQ1 domain lacks genomic imprinting in marsupials. Interestingly, the IGF2-H19 imprinted domain, located adjacent to the KCNQ1 domain, shares a highly conserved imprinting regulatory mechanism, complete with a differentially methylated region and associated miRNA, between eutherians and marsupials [33].
This study thus confirms that imprinting of the KCNQ1 domain evolved in the eutherian lineage after the divergence of marsupials, whereas that of the IGF2-H19 domain appeared before the marsupial-eutherian split, regardless of the close proximity of these two domains [34]. In the Kcnq1 domain of mice, while Cdkn1c is a negative regulator of embryonic growth [45], Phlda2 negatively controls placental growth [35][36][37] and acts as a true rheostat for placental growth [36]. Recently, using a single-copy transgenic mouse, Tunster et al. (2010) reported that Phlda2 regulates extraembryonic energy stores. Two-fold over-expression of Phlda2 caused a 60% loss of the spongiotrophoblast layer with a 25-35% reduction of glycogen storage. Since acquisition of genomic imprinting of PHLDA2 in the KCNQ1 domain by silencing of the paternal allele was accompanied by gene dosage reduction in eutherians, this might have affected the evolution of placental structure and/or energy stores. In laboratory mice that have two active copies of Phlda2, with the second copy provided by the BAC transgene, there was only a slight progressive slowing of embryonic growth [37]. However, a greater reduction of fetal growth might have been seen if the mice had had restricted food intake, as is often the case in the wild, so that limited nutrition would need to be partitioned between mother and fetuses. In this situation, reduced expression of PHLDA2 could have had a selective advantage through greater placental development. We hypothesise that acquisition of imprinting in the KCNQ1 domain in the ancestral line that gave rise to the eutherian mammals may have allowed the increased placental growth and extended gestation that characterise this group of mammals. Conclusions The high level of mRNA expression in the trilaminar yolk sac placenta and the protein localisation to the cytoplasm of trophoblast cells suggest that tammar PHLDA2 is functional in the placenta. The lack of imprinting of tammar PHLDA2 confirms an earlier conclusion that acquisition of genomic imprinting in the KCNQ1 domain occurred specifically in the eutherian lineage after the divergence of therian mammals into marsupials and eutherians, despite the fact that imprinting of the adjacent IGF2-H19 domain arose before the marsupial-eutherian split ( Figure 5). Thus genomic imprinting of the KCNQ1 domain might have contributed to the development of complex placentation and the lengthening of gestation in the eutherian lineage by reducing the gene dosage of negative regulators of both embryonic and placental growth. Animals and tissue collection Tammar wallabies of Kangaroo Island origin were maintained in our breeding colony in grassy, outdoor enclosures. Lucerne cubes, grass and water were provided ad libitum and supplemented with fresh vegetables. Fetuses and yolk sac placenta tissue were collected between days 21 and 26 of the 26.5-day gestation, as previously described [24,42]. Experimental procedures conformed to Australian National Health and Medical Research Council (2004) guidelines and were approved by the Animal Experimentation Ethics Committees of the University of Melbourne.
Amplification of tammar PHLDA2 sequence The following primer pair for the amplification of the 272 bp tammar PHLDA2 fragment was designed from the highly conserved region in the multi-species sequence alignment: 272 forward 5'-GCGAGGGCGAGCTGGAGAAGCG-3'; 272 reverse 5'-GATGGCCGCGTTCCAGCAGCTCT-3'. Thirty-five cycles of PCR amplification were carried out in a 25 μl total volume with 5-10 ng tammar cDNA from the yolk sac placenta using 0.5 U "TaKaRa Ex Taq Hot Start Version" (TaKaRa), 10 pmol of each primer and 5 nmol of each dNTP under the following cycle conditions: 96°C × 15 s, 60°C × 30 s and 72°C × 30 s. The PCR product was purified with "ExoSAP-IT" (GE) before sequencing. The 3' terminal of PHLDA2 mRNA was determined with the "3' RACE System for Rapid Amplification of cDNA Ends" (Invitrogen) using the same forward primer described above as the gene-specific primer. The intronic sequence was amplified by genomic PCR under the same conditions described above with 25 ng genomic DNA and the following primer pair: Exon1 forward 5'-CGACTTCCGCTGCCCCGACG-3'; Exon2 reverse 5'-AAGACAAGGTCCCCATCGAG-3'. Calculation of the amino acid sequence homology The percentage homology of the amino acid sequence in the PH domain between tammar and multiple species was calculated using the homology search program in the "GENETYX-MAC" software. Total RNA was extracted from the fresh frozen fetal tissues and yolk sac placentas using "TRI Reagent Solution" (Applied Biosystems) and reverse transcribed using the "SuperScript III First-Strand Synthesis System" (Invitrogen) with an Oligo(dT) primer. Immunohistochemistry Tissue sections (8 μm) were treated with 5% hydrogen peroxide in dH2O for 15 min to quench endogenous peroxidase activity. Slides were blocked in 10% normal goat serum in 0.1% BSA/TBS. Mouse monoclonal antibody raised against a partial recombinant human PHLDA2 (ABNOVA, H00007262-M01) was applied to sections at a 1:100 dilution at 4°C overnight. Antibody binding was detected with goat anti-mouse biotinylated secondary antibody (Dako) and amplified using the "Strept ABC Complex/HRP" (Dako). Antibody localisation was visualised using the "Liquid DAB+ Substrate-Chromogen System" (Dako). Tissues were counterstained with haematoxylin. Allelic expression analysis RNA was isolated using "ISOGEN" (Nippongene). Extracted RNA was then treated with DNase (RT grade; Nippongene) at room temperature for 1 hr. Reverse transcription was performed using the "SuperScript III First-Strand Synthesis System" (Invitrogen) with an Oligo(dT) primer. RT-PCR amplifications were carried out under the same conditions as described in the previous section for fetuses #1 and #2, and for 30 cycles with an annealing temperature of 68°C for fetus #3 and pouch young #1, using the following primer pair: Exon1 forward 5'-CGACTTCCGCTGCCCCGACG-3'; Intron reverse 5'-TAGAGACTCCAGGAGCTGGC-3'. Three percent agarose gels were used for the electrophoresis. For amplification of IGF2, the PCR conditions were the same as in the previous section except that the annealing temperature was 65°C and the primer pair was: Intron forward 5'-GACTCCACTTTCTTCCTTCCCTT-3'; Exon reverse 5'-AAAGCATGGCAGCCCACACT-3'. PCR products were purified with "ExoSAP-IT" (GE) before sequencing. Figure 5 Summary illustration. The branched black arrow represents the evolutionary divergence between marsupials and eutherians, which occurred at least 130-148 million years ago.
The broken red arrow represents the evolution of eutherian-type gestation, including the prolongation of intra-uterine development with a chorioallantoic placenta. The broken green arrow represents the evolution of the advanced and complex lactation system, one of the remarkable and specialised features of marsupials. The acquisition of genomic imprinting in the KCNQ1 domain, accompanied by gene dosage reduction of CDKN1C and PHLDA2, occurred only in the eutherian lineage, as did imprinting of the SNURF-SNRPN and DLK1-GTL2 domains [29,32]. On the other hand, imprinting of the IGF2-H19 domain, IGF2R, PEG1/MEST and PEG10 occurred before the divergence of marsupials [26][27][28]30,31,33]. This study and others provide evidence that imprinting arose at two critical time points during the evolution of mammals. For a third time point, whether marsupial-specific imprinting occurred or not is currently still unknown. Authors' contributions SS conceived and designed the research, carried out all the analyses and drafted the manuscript. MBR and GS collected the embryos and placentas. GS, TK-I, FI and MBR participated in the design and coordination of the study and edited the manuscript. All authors read and approved the final manuscript.
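The percentage homology reported for the PH domain was computed with the GENETYX-MAC homology search program; conceptually it reduces to a pairwise percent-identity (or similarity) count over an aligned region. A minimal sketch is shown below, using hypothetical aligned fragments rather than real PHLDA2 sequence, and counting exact matches only (a similarity score would additionally credit conservative substitutions).

```python
# Illustrative sketch of a pairwise percent-identity calculation over two
# pre-aligned amino acid sequences (gaps as '-'). Sequences are placeholders,
# not the actual tammar or human PHLDA2 PH-domain alignment.
def percent_identity(aln_a: str, aln_b: str) -> float:
    assert len(aln_a) == len(aln_b), "sequences must be aligned to equal length"
    compared = matches = 0
    for a, b in zip(aln_a, aln_b):
        if a == '-' or b == '-':        # skip gap positions
            continue
        compared += 1
        if a == b:
            matches += 1
    return 100.0 * matches / compared if compared else 0.0

# Hypothetical aligned fragments
tammar_ph = "MKVLRKGSE-LGVRA"
human_ph  = "MKVLKKGSEQLGVRA"
print(f"{percent_identity(tammar_ph, human_ph):.1f}% identity over aligned positions")
```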
4,582.2
2011-08-19T00:00:00.000
[ "Biology" ]
Efflux Pump AdeABC Assessment in Acinetobacter baumannii Strains Isolated in a Teaching Hospital Over the past twenty years, the worldwide clinical impact of Acinetobacter baumannii (A. baumannii) has demonstrated its etiopathogenetic relevance. During a previous retrospective study in a teaching hospital, between January 2011 and February 2015, we observed an increasing number of infections caused by A. baumannii associated with antibiotic multi-resistance. Tigecycline, the first member of the glycylcycline class, is an effective option for the treatment of such infections, even though, due to its increased clinical use, tigecycline-resistant isolates have recently emerged. In A. baumannii, several mechanisms are associated with decreased tigecycline susceptibility, among them expression of the efflux pump AdeABC and the presence of insertion sequences (IS) in the adeRS operon. Accordingly, we analyzed the adeB and adeS genes in 24 MDR A. baumannii clinical isolates, selected on the basis of their different tigecycline phenotypes. The study of the adeB and adeS genes was performed by an in-house polymerase chain reaction (PCR) and by the Sanger sequencing method. In accordance with the literature, the adeB and adeS genes were detected in all MDR A. baumannii isolates tested. Our attention therefore focused on two tigecycline-resistant clinical strains (ACI 2313 and ACI 1213), with MIC values >8. In particular, the ACI 2313 strain showed the presence of an IS in the adeS gene; adeS sequence analysis identified an ISAba1 insertion. Moreover, adeB gene expression was evaluated by an in-house SYBR Green I-based real-time RT-PCR. We found overexpression of the adeB gene in the ACI 2313 strain, consistent with the presence of the IS in the adeS gene, while the lack of adeB overexpression in ACI 1213, still resistant to tigecycline, could be due to different resistance mechanisms. Introduction Acinetobacter baumannii (A. baumannii) is an opportunistic pathogen that commonly causes nosocomial infections, such as pneumonia, bloodstream and urinary tract infections, particularly in the intensive care unit [1]. Multi drug resistant (MDR) A. baumannii isolates have been reported worldwide and their increasing prevalence has led to limited therapeutic choices [2]. Tigecycline, the first member of the glycylcycline class of antibacterial agents, remains an effective option for the treatment of these infections. However, due to its increased clinical use, tigecycline resistance has recently been emerging [3]. Several studies have indicated that tigecycline resistance in A. baumannii is associated with overexpression of the AdeABC efflux system [4,5]. A two-component system comprising AdeS and AdeR, a sensor kinase and a response regulator respectively, is responsible for modulating the AdeABC efflux pump [6]. Moreover, nucleotide/amino acid variations as well as the presence of insertion sequences (IS), such as ISAba1, in the adeRS operon have been related to overexpression of the adeABC efflux pump, decreasing A. baumannii susceptibility to tigecycline [7]. However, the exact mechanisms of resistance and the relationship between the level of expression of efflux pumps and the minimal inhibitory concentration (MIC, mg/liter) of tigecycline have not yet been clearly elucidated. Also, whether clinical isolates with resistance to tigecycline, originating from the same geographic locations, possess similar mechanisms of resistance is still unclear.
During a retrospective study in a teaching hospital, between January 2011 and February 2015, we evaluated the distribution and antibiotic resistance of A. baumannii strains isolated from patients admitted to four hospital units (medical units, surgical units, the cardiac intensive care unit and the intensive care unit). A. baumannii isolates were collected from several sites such as blood culture, bronchial aspirate, bronchoalveolar lavage, central venous catheter, urine, and bladder catheter tip. The data collected showed an increase in infections caused by A. baumannii associated with antibiotic multi-resistance (unpublished data). In particular, among 83 strains isolated in the last year, the percentages of MDR and pan drug resistant (PDR) A. baumannii were 75% and 13%, respectively. Objective Given the observed high frequency of multi drug resistant A. baumannii in our hospital, the aim of this study was to assess the efflux pump AdeABC in 24 MDR A. baumannii strains, selected on the basis of their different tigecycline phenotypes. Study Design Twenty-four clinical isolates of A. baumannii, collected at the "Mater Domini" University Hospital of Catanzaro, Southern Italy, from January 2013 to February 2015, were selected according to tigecycline phenotype (MIC from 0.5 to ≥8). Isolates were identified using the VITEK 2 system (bioMérieux) and by MALDI-TOF mass spectrometry (bioMérieux, France). A. baumannii ATCC 19606, an environmental A. baumannii strain previously isolated from a Mediterranean Sea water sample, and an A. haemolyticus clinical isolate were also included in our study as controls. For MALDI-TOF MS identification, bacterial cells from blood agar culture were processed according to the manufacturer's instructions. MALDI-TOF peaks were compared with reference spectra using the SARAMIS integrated database. Antimicrobial susceptibility to tigecycline was determined by the VITEK 2 system (AST-N201/AST-N203 cards) and United States Food and Drug Administration (FDA) breakpoint criteria for Enterobacteriaceae [8]. To perform gene sequencing, bacterial DNA was extracted using the UltraCleanTM Microbial DNA-MoBio Kit. The adeB and adeS genes were amplified by an in-house PCR with gene-specific primers. The ACI 4614, ACI 2313, ACI 1213, and ATCC 19606 isolates were also evaluated for adeB gene expression using an in-house developed SYBR Green I-based real-time RT-PCR. Total RNA was isolated with Trizol® Reagent (AmbionTM, Life Technologies). The High Capacity cDNA Reverse Transcription Kit (Applied BiosystemsTM) was used for cDNA synthesis. Real-time PCR was run in a LightCycler instrument (Roche Molecular Biochemicals, Indianapolis, IN) using the same primers described above. For each cDNA, the housekeeping gene and the target gene were assayed in triplicate. Amplicon specificity was confirmed by melting curve analysis, previously established (adeB melting temperature 85.28°C ± 0.5°C). The expression level of the housekeeping gene was used to normalize the abundance of the tested transcripts. The comparative threshold cycle (CT) method was used to determine the transcript fold changes in ACI 4614, ACI 2313 and ACI 1213 compared to ATCC 19606 [9]. Results The identification of the 24 strains was confirmed by two methods. First, isolates were identified by the VITEK 2 system. Since members of the Acinetobacter calcoaceticus-baumannii (ACB) complex are phenotypically indistinguishable when evaluated by biochemical characteristics, the clinical isolates were also identified by MALDI-TOF mass spectrometry. MALDI-TOF MS identified A.
baumannii with high reliability, finding peaks in the m/z 5747/5749 range [10]. The VITEK 2 system was also used to determine antibiotic susceptibilities. A MIC ≥8, corresponding to tigecycline resistance, was found in the ACI 1213 and ACI 2313 strains, while the other isolates showed MIC values ranging from 0.5 to 4. Following the in-house PCR, adeB ( Figure 1) and adeS (data not shown) were detected in all MDR A. baumannii clinical isolates as well as in the A. baumannii ATCC 19606 reference strain. Conversely, as expected, both the environmental A. baumannii strain and the A. haemolyticus clinical isolate lacked these genes. Sanger sequencing of adeB from the ACI 4614, ACI 1213, ACI 2313 and ATCC 19606 isolates was performed. Sequence analysis showed 99% homology with the adeB sequences of A. baumannii strains included in NCBI-BLAST. Additionally, in the ACI 2313 isolate, adeB showed a threefold higher relative expression (Figure 2A and 2B). Moreover, we sequenced the adeS gene from the same isolate (ACI 2313), which showed a distinct electrophoretic migration pattern, and from the ATCC 19606 strain as a control. Sequence analysis demonstrated that the tigecycline-resistant isolate ACI 2313 carried the ISAba1 insertion sequence. Discussion Several studies have reported the role of A. baumannii efflux pumps in resistance to clinically relevant antibiotics [11]. The AdeABC efflux pump has been well characterized; it is apparently not well expressed in wild-type strains [12] and contributes significantly to acquired multi drug resistance in clinical isolates worldwide, including resistance to tigecycline, increasingly reported since 2007 [13][14][15][16][17]. However, tigecycline is one of the few remaining therapeutic options for treating infections caused by MDR A. baumannii. A previous report [4] suggested that decreased susceptibility to tigecycline in the Acinetobacter calcoaceticus-Acinetobacter baumannii complex is associated with overexpression of the efflux pump AdeABC. Recently, overexpression of AdeABC was reported as the prevalent mechanism in tigecycline-resistant A. baumannii clinical isolates, and a linear relationship was found between adeB gene expression levels and tigecycline MICs [17]. Our data showed that, even though adeB was detected in all MDR A. baumannii strains tested, differences in adeB gene expression were found. Indeed, in the two clinical isolates sharing a MIC value >8 we observed substantial differences; in particular, only the ACI 2313 isolate showed a higher relative expression of adeB. The AdeABC efflux pump is regulated by a two-component system, the AdeS sensor kinase and the AdeR response regulator, encoded by the adeRS operon. It has been reported that overexpression of the AdeABC system is due to mutations in the adeRS operon, including the presence of insertion sequences such as ISAba1, one of the most frequent IS found in clinical isolates [6][7]. When we detected and analyzed adeS in our clinical isolates, the ACI 2313 strain showed a distinct electrophoretic migration pattern and its partial sequencing matched ISAba1. The analysis to cover the internal gap (around 400 bp) and to obtain the entire IS is still in progress. In conclusion, further examination of additional tigecycline-resistant A. baumannii clinical isolates spreading in our area is required. Understanding the mechanisms leading to tigecycline resistance, in particular in strains originating from the same geographic locations, will help to track the future prevalence of this resistance and to understand the evolution of A. baumannii pan drug resistance.
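The comparative threshold cycle method cited in the study design [9] expresses adeB transcript abundance as a 2^-ΔΔCt fold change relative to the housekeeping gene and the ATCC 19606 reference. A minimal sketch is given below with placeholder Ct values; these are not the study's measurements.

```python
# Hypothetical sketch of the comparative CT (2^-ddCt) calculation used to
# express adeB transcript fold change relative to the ATCC 19606 reference.
# Ct values below are placeholders, not measured data.
from statistics import mean

def fold_change(ct_target, ct_housekeeping, ct_target_ref, ct_housekeeping_ref):
    """Relative expression by the 2^-ddCt method (triplicate Ct values)."""
    d_ct_sample = mean(ct_target) - mean(ct_housekeeping)       # dCt of the isolate
    d_ct_ref = mean(ct_target_ref) - mean(ct_housekeeping_ref)  # dCt of the reference
    dd_ct = d_ct_sample - d_ct_ref
    return 2 ** (-dd_ct)

# Illustrative triplicates (adeB and housekeeping gene) for one isolate vs reference
print(fold_change(ct_target=[22.1, 22.3, 22.0],
                  ct_housekeeping=[18.0, 18.1, 17.9],
                  ct_target_ref=[24.0, 24.2, 23.9],
                  ct_housekeeping_ref=[18.2, 18.0, 18.1]))
```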
2,125.2
2016-07-29T00:00:00.000
[ "Biology", "Medicine" ]
Rheology of Crumb Rubber-Modified Warm Mix Asphalt (WMA) This study explores the impact of adding waste vehicular crumb rubber to the commercially available warm mix additives Sasobit® and Zycotherm® on modified asphalt binders’ physical and rheological properties. Various concentrations of crumb rubber (0%, 10%, 15%, and 20%) were introduced to asphalt binder samples with 2% and 4% Sasobit and 1.5% and 3% Zycotherm. The investigation employed conventional tests (penetration and softening point) and advanced mechanical characterization tests, including Superpave rotational viscosity (RV), Dynamic Shear Rheometer (DSR), DSR multi-stress creep recovery (MSCR), DSR linear amplitude sweep (LAS), and Bending Beam Rheometer (BBR). Traditional tests measured the asphalt consistency, while workability was assessed through the RV test. The results showed that the Zycotherm binders experienced a more significant penetration reduction than the Sasobit binders. Additionally, an increased crumb rubber content consistently elevated the softening point and rotational viscosity, enhancing the complex shear modulus (G*) values. Rubberized binders exhibited an improved rutting performance and low-temperature PG grades. Increasing the crumb rubber content enhanced fatigue life, with Z1.5CR20 and S2CR20 demonstrating the longest fatigue lives among the Zycotherm and Sasobit binders, respectively. Overall, Z1.5CR20 is recommended for colder climates, while S2CR20 is suitable for hot-climate applications based on extensive analysis. Introduction Hot mix asphalt (HMA) requires high temperature ranges for production, incurring high energy consumption with increased costs and carbon emissions [1].Increases in energy and materials costs and the impact of environmental conditions on asphalt pavements in the last two decades have prompted the industry to assess the use of sustainable materials to enhance the performance of asphalt binders through the use of polymers [1,2] and bio-ash [3].Other technologies that are currently being investigated are the use of recycled asphalt binders (RAB) [4], recycled asphalt pavements (RAP) [5], and warm mix asphalt (WMA) mixes.WMA technologies are used to lower the mixing temperature by 20 • C to 40 • C and improve the workability of asphalt binders to decrease the costs required to mix the aggregates with the binder using WMA additives [6].Chemical WMA additives differ in their modification mechanism; therefore, careful selection of the WMA additive is required.Overall, WMA technology usage has been proven to decrease environmental impact while enhancing the performance of asphalt pavements [7]. Organic WMA additives consist of wax, manufactured using the Fischer-Tropsch (FT) method, which is added to reduce the viscosity of the asphalt binder during mixing, thus reducing the temperature needed for the production process [7].One of the most well-known wax additives is Sasobit.Sasol Germany manufactures Sasobit, a natural organic wax produced from the Fischer-Tropsch method, which involves a process of natural gas liquefication [8].Sasobit's modifying mechanism relies on its melting point of 100 • C. 
When Sasobit is added to an asphalt binder heated above the wax's melting temperature, it liquefies; thus, the viscosity of the asphalt binder is reduced.Cooling the asphalt binder crystalizes the wax, which increases the stiffness of the asphalt binder at performance temperature ranges [9,10].The addition of Sasobit has been proven to increase the penetration and softening point while decreasing the viscosity of the asphalt binder properties [11][12][13].Since Sasobit crystallizes in the asphalt binder when cooled, its rheological properties are enhanced.Studies have shown that, when increasing the Sasobit content, the high-temperature performance grade (PG) increases [14,15], and the non-recoverable creep compliance (J nr ) decreases, i.e., increasing the traffic level at any given temperature grade [16][17][18].As for intermediate temperatures and fatigue resistance, Sasobit has shown the potential to enhance fatigue resistance when a lateral amplitude sweep test (LAS) is carried out [19,20].However, multiple studies have shown a slightly high susceptibility to low-temperature cracking, using a bending beam rheometer (BBR), of Sasobit-modified asphalt binders due to the increased stiffness of the binders [21,22]. Chemical WMA additives enhance the workability of asphalt binders at the microscopic level.These additives improve the adhesion between the aggregates and the asphalt binder, thus strengthening the interface [23,24].Unlike organic WMA additives, the chemical WMA impact on the physical and rheological properties of the asphalt binder is not significant [25][26][27].Studies have shown that WMA additives might reduce the asphalt binder's high-temperature PG and rutting resistance [28,29].One of the WMA additives that is currently being investigated is Zycotherm.Zycotherm, manufactured by Zydex ® , India, is a nano-based antistrip additive that improves the workability of an asphalt binder by acting as a water repellent, thus improving the adhesion between the aggregates and the asphalt binder.While Zycotherm addition to an asphalt binder does not significantly improve the asphalt's properties, Zycotherm has been shown to be potent in resisting moisture damage compared to other WMA additives [30][31][32]. Since WMA additives reduce viscosity and improve the workability of asphalt binders, WMA additives can be added to modifiers that increase the viscosity of asphalt binders, notably, crumb rubber modifiers (CRM).When crumb rubber is added to an asphalt binder, the rubber swells to 3-5 times its size, thus absorbing the maltenes in the asphalt binder, increasing the asphaltenes ratio [33].This increase in the asphaltenes ratio results in a better performance of asphalt binders, given that the mixing conditions, material quantity and quality, and mixing process (dry or wet) are optimized [33].Adding CRM to asphalt binders has been proven to enhance their physical and rheological properties and increase their resistance to rutting deformation, fatigue, and thermal cracking.While the CRMB's performance has been observed [34][35][36], the compatibility between the asphalt binder and CRM and workability is challenging due to phase separation.[37,38].To overcome the low workability of CRMB, CRM can be added to warm asphalt binders.The resulting mix performance can be further enhanced when incorporating CRM with WMA additives [39].WMA and CRM added to asphalt mixtures exhibited a decrease in moisture damage susceptibility [40,41]. 
Adding CRM to warm asphalt binders prepared using Sasobit has shown an improved high-temperature performance, low-temperature thermal cracking resistance, and fatigue cracking resistance while decreasing viscosity [42][43][44]. In addition, when adding WMA and CRM to asphalt mixtures, the rutting resistance improved [45] and moisture damage susceptibility decreased [46]. CRM added to a warm binder prepared with chemical WMA additives weakens its rheological performance [47,48]. Combining the additives with the Additives and Modification This study utilized two types of warm mix additives: Sasobit, a wax-based additive manufactured through the Fischer-Tropsch method from natural gas and obtained from Sasol in Johannesburg, South Africa, and Zycotherm, a chemical-based additive with nano-silane properties produced by Zydex Industries, Gujarat, India (Figure 1). To modify the asphalt binder, Sasobit was added at 2% and 4% by weight, while Zycotherm was added at 1.5% and 3%, based on the manufacturer's recommendations. The properties of these additives are presented in Table 2, General Properties of the Warm Mix Additives. The crumb rubber used in this study was sourced from Beeah Recycling Center/Beeah Waste Management Company in Sharjah, United Arab Emirates. It was obtained by shredding waste (discarded) vehicle tires. The sieve analysis results of the crumb rubber are presented in Table 3, Sieve Analysis of Crumb Rubber. Mixing Process The mixing process involved combining each warm mix additive (Sasobit and Zycotherm) with the original asphalt binder using a high-shear mixer (ROSS Model: +100 LSI). This mechanical mixing was performed at a high shear speed of 1000 rpm and a mixing temperature of 150 °C for 30 min. The Sasobit was mixed with the asphalt binder at two different contents: 2% and 4% by weight of the asphalt binder. Similarly, the Zycotherm was mixed at contents of 1.5% and 3% by weight of the asphalt binder. Furthermore, the warm mix asphalt binders were blended with crumb rubber using dispersion mixing geometry at a high shear speed of 2000 rpm and a mixing temperature of 170 °C for 60 min. Mixing the warm mix asphalt binder with the crumb rubber was carried out at three different crumb rubber contents: 10%, 15%, and 20% by weight of the asphalt binder. For convenience, throughout the paper, the asphalt mixes will be referred to using the abbreviations specified in Table 4. Experimental Plan and Test Methods The penetration test, ASTM D5 [50], examined the consistency of the asphalt binder by loading 100 g on a standard needle that penetrated the asphalt binder surface for 5 s while the sample was submerged in water at a temperature of 25 °C. The softening point, ASTM D36 [51], is the temperature at which a 3.5 g steel ball placed on top of a steel ring filled with the asphalt falls a 25 mm (1-in) distance, at a starting temperature of 4 ± 1 °C and a heating rate of 1 °C per minute.
Rotational Viscosity Test The rotational viscosity test, ASTM D4402 [52], was conducted for the asphalt binders at a standard test temperature of 135 °C, representing the average mixing and laydown temperature for Hot Mix Asphalt (HMA) according to the Superpave specifications. In the RV test, a cylindrical spindle with a specified diameter and effective length rotated inside a container filled with the asphalt material to an appropriate height at a standard speed of 20 rpm. The dynamic (rotational) viscosity was measured using the RV device. This viscosity was calculated by dividing the shear stress by the shear strain rate. The shear stress was determined by measuring the torque needed to maintain a constant rotational speed, while the shear strain rate was obtained from the rotational speed using established equations. The RV, penetration, and softening point results were used to calculate the mixing and compaction temperatures of the asphalt binders, as shown in Equation (1): log η = 10.5012 − 2.2601 log(PEN) + 0.00389 (log(PEN))² (1). Performance Grade (PG) Test The Performance Grade (PG) test, ASTM D6373 [53], measures the complex shear modulus (G*) value and the phase angle (δ) to obtain the rutting parameter (|G*|/sinδ) at multiple temperatures. The test was performed over a temperature range from 64 °C to 82 °C, at increments of 6 °C. In this test, a sample with a diameter of 25 mm was placed in a Discovery Hybrid Rheometer (DHR), TA Instruments, New Castle, DE, USA, conditioned for 10 min at the testing temperature, and subjected to a standard angular frequency of 10 rad/s (1.59 Hz) with the strain maintained at 12%. According to the Superpave specifications, the value of |G*|/sinδ should have a minimum value of 1.0 kPa for unaged asphalt binders and 2.2 kPa for asphalt binders short-term aged in the rolling thin-film oven (RTFO), manufactured by Games Cox, San Diego, CA, USA, to pass a given temperature. Multiple-Stress Creep Recovery (MSCR) Test The Multiple-Stress Creep Recovery (MSCR) test was performed according to ASTM D7405 [54]. The MSCR test utilizes the well-established creep and recovery test concept to assess the potential for permanent deformation (rutting) in asphalt binders. It offers a more accurate high-temperature specification for asphalt binders that indicates their rutting performance and is blind to modification. Using the DHR, a one-second creep load was applied to the asphalt binder sample. After the 1 s load was removed, the sample was allowed to recover (relax) for 9 s at zero load. The MSCR test started with the application of a low stress (0.1 kPa) for 10 creep/recovery cycles. Then, the stress was increased to 3.2 kPa and repeated for an additional 10 cycles. Therefore, the MSCR test measured the asphalt binder's recovery and non-recoverable strain compliance. The asphalt sample used in the MSCR test was short-term aged in the RTFO test and had a diameter of 25 mm. The test was conducted at the original asphalt binder's high-performance grade (PG) temperature (64 °C). The test data obtained were used to determine the recovered strain (γr), the non-recoverable compliance (Jnr), and the percent recovery (%R) at both stress levels for all tested asphalt binders.
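For orientation, the per-cycle MSCR quantities can be reduced as in the sketch below. This is a generic, ASTM D7405-style illustration with invented strain values; it is not the study's data or analysis code.

```python
# Hedged sketch of the per-cycle MSCR quantities: each cycle supplies the strain
# at the start, at the end of the 1 s creep load, and at the end of the 9 s
# recovery period. Strain values are placeholders.
def mscr_cycle(strain_start, strain_peak, strain_end, stress_kpa):
    eps_creep = strain_peak - strain_start          # strain accumulated under load
    eps_nonrec = strain_end - strain_start          # strain not recovered after 9 s
    recovery_pct = 100.0 * (eps_creep - eps_nonrec) / eps_creep
    jnr = eps_nonrec / stress_kpa                   # non-recoverable creep compliance, 1/kPa
    return recovery_pct, jnr

# Illustrative single cycle at the 3.2 kPa stress level
r, jnr = mscr_cycle(strain_start=0.00, strain_peak=0.80, strain_end=0.65, stress_kpa=3.2)
print(f"%R = {r:.1f}, Jnr = {jnr:.3f} 1/kPa")
# In the full test, %R and Jnr are averaged over the 10 cycles at 0.1 kPa
# and the 10 cycles at 3.2 kPa.
```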
Linear Amplitude Sweep (LAS) Test The Linear Amplitude Sweep (LAS) test, AASHTO T391 [55], is used to calculate the number of load cycles to fatigue failure. The test investigates fatigue resistance by utilizing cyclic loading to accelerate the damage. The rate of damage increase was used to determine the fatigue performance using predictive modelling techniques. The damage accumulation was calculated as per Equation (2), and the damage characteristic relation includes C, a regression coefficient. The binder performance parameters were calculated using Equation (4), where f is the loading frequency (10 Hz). Finally, Equation (5) was used to calculate the number of cycles to fatigue failure, where γmax is the maximum expected binder strain for a given pavement structure. The test was performed on asphalt binder samples of 8 mm in diameter after long-term aging in the pressure aging vessel (PAV). The LAS uses a two-step approach. The first step was to apply a shear frequency sweep to obtain the linear viscoelastic properties. The frequency sweep test was performed at a frequency of 10 Hz, linearly increasing the strain amplitudes from 0.1 to 30% over 3100 loading cycles (10 cycles per second) for a total time of 310 s. The two phases were performed in succession at an intermediate temperature, as specified in the Superpave system [(high PG temperature + low PG temperature)/2], to obtain the asphalt binder's undamaged and damaged material properties. The test was performed at multiple frequencies for 10 s intervals, whereas, in the current LAS test procedure, the calculation of test results was based on the utilization of the viscoelastic continuum damage (VECD) theory. The VECD theory combines the principles of viscoelasticity and continuum damage mechanics to provide a description of the material behavior. It involves constitutive models that represent the stress-strain relationship in the material, to predict how pavements will perform over time in terms of accumulation of damage [56]. An Excel sheet developed by the Modified Asphalt Research Center at the University of Wisconsin-Madison [57] was utilized to carry out the analysis. The number of cycles to fatigue failure (Nf) was the primary output of the LAS test.
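The bodies of Equations (2) to (5) are not reproduced above. For reference, the standard AASHTO T391/VECD formulation, which the cited spreadsheet implements, takes roughly the following form; this is a hedged reconstruction using the conventional symbols (α, C1, C2, Df, γmax), not a verbatim copy of the paper's equations.

```latex
% Hedged reconstruction of the usual LAS/VECD relations (AASHTO T391 form).
\begin{align}
  D(t) &\cong \sum_{i=1}^{N}\left[\pi\,\gamma_0^{2}\,(C_{i-1}-C_{i})\right]^{\frac{\alpha}{1+\alpha}}
           \left(t_{i}-t_{i-1}\right)^{\frac{1}{1+\alpha}}
  && \text{(damage accumulation)} \\
  C(D) &= C_{0}-C_{1}\,D^{\,C_{2}}
  && \text{(damage characteristic curve, } C_{0}=1\text{)} \\
  A &= \frac{f\,(D_{f})^{\,k}}{k\,\left(\pi\,C_{1}C_{2}\right)^{\alpha}},
  \qquad k = 1+(1-C_{2})\,\alpha, \qquad B = 2\alpha
  && \text{(performance parameters, } f = 10~\mathrm{Hz}\text{)} \\
  N_{f} &= A\,\left(\gamma_{\max}\right)^{-B}
  && \text{(cycles to fatigue failure)}
\end{align}
```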
Bending Beam Rheometer (BBR) Test The Bending Beam Rheometer (BBR) test, ASTM D6648 [58], was used in Superpave to test the Pressure Aging Vessel (PAV)-aged asphalt binder at low temperatures for thermal cracking. The BBR subjected an asphalt binder simple beam with dimensions of 6.25 × 12.5 × 127 mm to a constant creep load of 0.981 N (resulting from a 100 g mass) over 240 s (creep test). The BBR test simulated the asphalt binder stiffness after two hours of loading at the minimum HMA pavement design temperature. The BBR test investigated the stiffness and relaxation characteristics of the asphalt binder at low temperatures. From the BBR test, the deformation with time was recorded, and therefore the beam's creep stiffness was plotted against time. The measurements of the creep stiffness (S(t)) indicated the low-temperature cracking susceptibility of the asphalt binder and were related to the thermal stresses in an HMA pavement due to shrinking. The rate of change in the stiffness with time (the slope of the stiffness-time curve, or the m-value) at 60 s was also obtained. The m-value indicates the ability of the HMA pavement to relieve stresses at low temperatures. The Superpave specifications require in the BBR test a maximum value of 300 MPa for the creep stiffness and a minimum value of 0.300 for the m-value. The low testing temperature in the BBR was used to identify the low-temperature performance in the Superpave performance grading (PG) system. If a binder met the criteria at a certain testing temperature, this temperature was shifted down in the grading system by 10 degrees. Rolling Thin-Film Oven (RTFO) Test The Rolling Thin-Film Oven (RTFO) test, ASTM D2872 [59], simulated the short-term aging in asphalt binders during the mixing and laydown of HMA. Standard bottles of asphalt binder (35 g in each) were placed in a rack in the RTFO maintained at a temperature of 163 °C and subjected to air flow at a rate of 4 L/min. The rack rotated at a specified rate in a vertical plane. The test lasted for 85 min from the time the samples were placed in the oven (it was assumed that a period of 10 min was sufficient to allow the temperature to stabilize back to 163 °C after opening the oven door to place the samples). Pressure Aging Vessel (PAV) Test Long-term aging was simulated using the Pressure Aging Vessel (PAV), manufactured by ATS, Shreveport, LA, USA, in accordance with ASTM D6521 [60]. The PAV test simulated the long-term aging that occurs in asphalt binders during the service life of the HMA pavement. The RTFO residue was poured into PAV standard pans at 50 g each and the samples were placed inside the PAV. The PAV is an oven-pressure vessel combination that takes RTFO-aged samples and exposes them to a high air pressure (2070 kPa = 300 psi) and a high temperature (90 °C = 195 °F, 100 °C = 212 °F, or 110 °C = 230 °F), depending upon expected climatic conditions, for 20 h. In this study, the temperature used was 110 °C due to the very hot climate in the study area. A flowchart summarizing the test procedures is shown in Figure 2.
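The BBR reduction described above follows simple-beam theory. The sketch below assumes the standard 102 mm support span (an assumption, since the span is not stated here) and uses invented deflection values purely for illustration.

```python
# Hedged sketch of the BBR reduction: creep stiffness from simple-beam theory,
# S(t) = P*L^3 / (4*b*h^3*delta(t)), and the m-value as the local slope of
# log S vs log t around 60 s. Span and deflections are assumptions.
import math

P = 0.981          # N, constant creep load (100 g mass)
L = 0.102          # m, support span (assumed standard BBR span)
b = 0.0125         # m, beam width  (12.5 mm)
h = 0.00625        # m, beam depth  (6.25 mm)

def stiffness(deflection_m):
    return P * L**3 / (4 * b * h**3 * deflection_m)   # Pa

# Illustrative deflections (m) at the standard reporting times (s)
times = [8, 15, 30, 60, 120, 240]
defl  = [0.22e-3, 0.27e-3, 0.34e-3, 0.43e-3, 0.55e-3, 0.70e-3]
S = [stiffness(d) for d in defl]

# m-value: local slope of log10(S) vs log10(t) around 60 s
i = times.index(60)
m_value = abs((math.log10(S[i+1]) - math.log10(S[i-1])) /
              (math.log10(times[i+1]) - math.log10(times[i-1])))
print(f"S(60 s) = {S[i]/1e6:.0f} MPa, m-value = {m_value:.3f}")
# Superpave limits quoted above: S(60 s) <= 300 MPa and m-value >= 0.300.
```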
Warm Mix Asphalt Binders (Without Crumb Rubber) Sasobit and Zycotherm's impacts on the penetration, softening point, and viscosity of the asphalt binder were tested. The penetration results presented in Figure 3 show a 24% and 31% decrease for S2 and S4, respectively. Zycotherm, on the other hand, showed increases in penetration of 19% and 40%. This indicates an increase in the stiffness of the Sasobit-modified asphalt binder and a decrease in the stiffness of the Zycotherm-modified asphalt binder. The softening point values, Figure 4, showed a similar trend to the penetration. Sasobit significantly increased the softening point temperature by 26% and 72% for S2 and S4, respectively. Z1.5 did not change the softening point, while Z3 decreased the softening point slightly. The penetration decreased and the softening point increased when using Sasobit. This was due to the crystallization of Sasobit in the asphalt and the long hydrocarbon chains of Sasobit, thus increasing the stiffness and stability at intermediate temperatures. Previous studies have shown the ability of wax-based warm mix additives to decrease the penetration and increase the softening point, while chemical warm mix additives have shown a variable impact on the physical properties of asphalt binders, depending on the nature of the chemical additive used [11]. The rotational viscosity of the asphalt binders, Figure 5, at 135 °C decreased by 6% and 17% for S2
and S4, respectively. Z1.5 and Z3 recorded slightly increased viscosities, by 9% and 8%, respectively. At 165 °C, S2 maintained the viscosity while S4 decreased the RV by 12%. Z1.5 and Z3 increased the viscosity by 15% and 10%, respectively. Viscosity-temperature charts were used to calculate the mixing and compaction temperatures, as shown in Table 5. Based on the calculations, S2 and S4 lowered the mixing and compaction temperatures by 1 °C and 3 °C, respectively. Meanwhile, Z1.5 increased the mixing temperature by 3 °C and the compaction temperature by 2 °C. Increasing the Zycotherm content to 3% increased the mixing temperature by 1 °C. This slight increase with Zycotherm was due to the chemical change that the asphalt binder underwent with chemical additives, and another reason could be the high dosage used in this study. The decrease in viscosity when Sasobit was added was due to the melting of the Sasobit. Sasobit decreased the rotational viscosity of the asphalt binder in its liquid form due to the long hydrocarbon chains in the mix. The rotational viscosity results are supported by the findings of previous studies conducted on warm mix additives [14,17,47]. At 64 °C, the Sasobit-modified asphalt binders exhibited a doubling and quadrupling of the complex shear modulus value (|G*|) and stiffness for S2 and S4, respectively (Figure 6, Complex Shear Modulus, Phase Angle, and Rutting Parameter Results). In contrast, the Zycotherm modifier led to an 11% and 28% reduction in |G*| for Z1.5 and Z3, respectively. In terms of the phase angle (δ), which indicates the viscoelastic behavior of asphalt binders, all the warm mix asphalt binders demonstrated a reduction in δ, indicating that the binders were becoming more elastic. The most pronounced decrease in the phase angle was observed in the S2 and S4 asphalt binders, as shown in Figure 6. The S2 and S4 asphalt binders exhibited an increase in their Performance Grade (PG) from PG 64 to PG 70. At 64 °C, S2 and S4 displayed |G*|/sinδ values that were 46% and 66% higher, respectively, indicating a superior resistance to rutting, as depicted in Figure 6. On the other hand, Zycotherm did not enhance the PG grade, but reduced |G*|/sinδ by 11% and 28% for Z1.5 and Z3, respectively, compared to the control asphalt binder.
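The pass/fail logic behind the PG statements above can be illustrated as follows. The |G*| and phase-angle values in the sketch are placeholders, and a full grading would also check the RTFO-aged criterion (2.2 kPa) and the low-temperature BBR criteria described later.

```python
# Hedged sketch of reading a high-temperature grade from DSR sweeps: a binder
# passes a grade if |G*|/sin(delta) meets the Superpave minimum at that
# temperature (1.0 kPa unaged). Values below are invented for illustration.
import math

def rutting_parameter(g_star_kpa, delta_deg):
    return g_star_kpa / math.sin(math.radians(delta_deg))

def highest_passing_grade(results, limit_kpa):
    """results: {temperature_C: (|G*| in kPa, phase angle in degrees)}."""
    passing = [t for t, (g, d) in results.items()
               if rutting_parameter(g, d) >= limit_kpa]
    return max(passing) if passing else None

# Illustrative unaged sweep at the standard 6 C increments
unaged = {64: (1.9, 84.0), 70: (0.95, 86.0), 76: (0.48, 87.5)}
print("Unaged high PG:", highest_passing_grade(unaged, limit_kpa=1.0))  # -> 64
```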
The Sasobit-modified asphalt binders (S2 and S4) demonstrated a higher resistance to rutting deformation at 64 °C, attributed to their high |G*| and low phase angle. These findings agree with earlier studies that utilized Sasobit at 4% to enhance high-temperature performance, owing to the crystallization of Sasobit in the asphalt binder below its melting temperature [7,9,14,16]. Similarly, Zycotherm showed a minimal impact on the asphalt binder's rheological properties, confirming previous studies [29,30].
The MSCR test loads the binder beyond the linear viscoelastic range, which is more realistic in representing actual field situations where the binder is subjected to traffic loads. The MSCR test results presented in Table 6 reveal that the addition of Sasobit at 64 °C with a stress level of 3.2 kPa improved the creep recovery significantly. Specifically, with a 2% Sasobit content, the creep recovery increased from 0.4% to 1.77%, and with a 4% Sasobit content, the creep recovery increased to 5.11%. On the other hand, the use of Zycotherm did not show any enhancement in the recovery. S2 did not meet the MSCR requirements for PG 70, while S4, with its improved creep recovery performance, successfully fulfilled the MSCR criteria.
The BBR creep stiffness measurements serve as an indicator of the asphalt binder's susceptibility to low-temperature cracking; lower stiffness values are associated with a higher resistance to thermal (low-temperature) cracking. The low-temperature performance grade (PG) was determined using the Bending Beam
Rheometer (BBR) test results. The m-value results obtained from this test offer valuable insights into the resistance to thermal cracking, as illustrated in Table 7. Adding Sasobit to the asphalt binders reduced the low-temperature PG, reaching −16 and −10 for S2 and S4, respectively. Z1.5 exhibited a low-temperature performance similar to S2 (−16), while Z3 demonstrated comparable results to the control asphalt binder, with a PG of −22. The findings indicated a decrease in creep stiffness with Sasobit at both 2% and 4% contents, whereas Zycotherm increased the creep stiffness in the BBR test. Z3 exhibited a creep stiffness similar to that of the control asphalt binder.
The performance grade classification of each asphalt binder was established by considering all the results obtained from the PG, MSCR, and BBR tests, as presented in Table 8. The BBR results supported the findings of previous studies, confirming that Sasobit enhances high-temperature performance due to increased stiffness, while rendering the asphalt binder more susceptible to low-temperature cracking [22].
The intermediate temperatures were determined using the high and low PG temperatures for the asphalt binders. The intermediate temperature can be calculated as the sum of the high and low PG grades, divided by two, with four added to the result.
The viscoelastic continuum damage (VECD) theory was employed to analyze the data. The VECD theory combines the principles of viscoelasticity and continuum damage mechanics to provide a description of material behavior. It involves constitutive models that represent the stress–strain relationship in the material, to predict how pavements will perform over time in terms of the accumulation of damage [55].
Figure 7a illustrates the influences of the various additives on the number of cycles to fatigue failure at a 2.5% strain level. S2 showed a remarkable increase of 83% in the number of cycles to fatigue failure, while S4 increased it by more than threefold, resulting in a significant improvement in fatigue life. On the other hand, the Z1.5 and Z3 binders did not show any enhancement in the number of cycles to fatigue failure. Incorporating Sasobit reduced the aging effects of the asphalt binders, mainly by lowering the compaction and mixing temperatures, leading to an increase in fatigue life.
At 5% strain levels (Figure 7b), again, the influence of Sasobit was more pronounced than that of Zycotherm. S2, S4, and Z1.5 increased the number of cycles to fatigue failure by 34%, 86%, and 3%, respectively, while Z3 decreased it by 10%. Previous studies have indicated that Sasobit's effectiveness in reducing aging is attributed to lower M&C (mixing and compaction) temperatures of the asphalt binder, resulting in a reduced stiffness, whereas Zycotherm has a minimal effect [7,9,30].
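The intermediate-temperature rule described above can be written compactly as a formula; the PG 70-16 binder used in the worked example is purely illustrative and not one of the study's grades:

$$T_{\mathrm{int}} = \frac{T_{\mathrm{high}} + T_{\mathrm{low}}}{2} + 4, \qquad \text{e.g.}\quad \frac{70 + (-16)}{2} + 4 = 31\ ^{\circ}\mathrm{C}.$$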
Crumb Rubber-Modified Warm Mix Asphalt Binders
Traditional tests such as penetration, softening point, and rotational viscosity were performed to evaluate the influence of crumb rubber on warm mix asphalt binders. Advanced characterization tests were also conducted to further understand the impact of crumb rubber on the binders. Each rubber content was mixed with four warm mix additives.
The penetration test results for CR demonstrated a consistent reduction in penetration values, as depicted in Figure 8. The findings indicated that increasing the CR content decreased the penetration values of the asphalt binder. Specifically, with an increase in CR content from 10% to 15% to 20%, the penetration of S2CR decreased by 9%, 20%, and 28%, respectively. Similarly, the penetration of S4CR decreased by 21%, 24%, and 25%. In contrast, Z1.5CR and Z3CR experienced penetration reductions of 12%, 24%, and 29% and 16%, 32%, and 41%, respectively. This shows that the reduction in penetration for Sasobit binders was higher than the reduction observed for Zycotherm binders.
The mixing and compaction (M&C) temperatures, as shown in Table 9, were ca lated based on the penetration, softening point, and rotational viscosity results.The in sion of CR led to a substantial increase in the M&C temperature ranges.Compare results in other studies [36], increasing the CR content increased the mixing temperat by 15%, 24%, and 33% at CR contents of 10%, 15%, and 20%, respectively, for the S asphalt binder.For the S4CR binder, the percentages were 13%, 22%, and 27%, res tively.On the other hand, the Z1.5CR and Z3CR binders encountered increases in mi temperature of 10%, 21%, and 29% and 9%, 25%, and 33% at CR contents of 10%, 15%, 20%, respectively.Comparing the mixing temperatures between the additives, for As depicted in Figure 9, the softening point results reveal an overall increase in the CR-modified WMA binders compared to the WMA binders.Specifically, the Sasobit blends demonstrated significantly higher softening point temperatures than the Zycotherm blends. Polymers 2024, 16, x FOR PEER REVIEW 17 CRM, Zycotherm was found to have a much lower M&C compared to the Sasobit-m fied binders.This was also true for a higher CRM content.A higher Sasobit conten creased the (M&C) temperatures, while a higher Zycotherm content did not change temperatures as much.Increasing the CR content increased the softening points for the Sasobit and Zycotherm asphalt binders, however, this increase was more observable in the Sasobit binders than the Zycotherm binders.At CR contents of 10, 15, and 20%, the softening point exhibited percentage increases of 6, 10, and 15% for S2CR, respectively, and for S4CR, the percentage increases were 3, 5, and 7%, respectively.However, for Z1.5CR, the softening point increased by 10, 15, and 19% at CR contents of 10, 15, and 20%, correspondingly, and for Z3CR, the softening point experienced percentage increases of 11, 15, and 23% at CR contents of 10, 15, and 20%, respectively.This observation indicates that the rise in the softening point was directly linked to the increase in the crumb rubber content.Notably, the substantial increase in the softening point temperature was because the 20% CR-modified mix was considered to be rubberized, as it contained a high proportion of crumb rubber.The presence of rubber particles increased the friction with the balls and the rings when the modified binder was traveling down from its original position. Both WMA binders, through their ability to absorb the CR particles, enhanced the compatibility between the crumb rubber and the asphalt binders, thus promoting the stiffness of the asphalt binder [7,9].This explains the positive impact on the softening point results observed in the study. The viscosity of asphalt binders is crucial to the mixability and workability of asphalt binders with aggregates.The rotational viscosity results in Figure 9 At the four temperatures, the increase in the CR content increased the rotational viscosity for the Sasobit and Zycotherm asphalt binders, with the Sasobit binders having a lower viscosity at all temperatures.At the standard temperature (135 • C), the rotational viscosity for S2CR compared to the control binder (S2, CR = 0%) increased by 2, 4, and 7 times at CR contents of 10, 15, and 20%, respectively.For S4CR, similarly, the rotational viscosity experienced increases of 2, 3, and 4 times compared to the control binder (S4, CR = 0%) at CR contents of 10, 15, and 20%, respectively. 
A comparable trend emerged with the Zycotherm asphalt binders. In the case of Z1.5CR, the rotational viscosity was enhanced 2, 3, and 5 times at CR contents of 10, 15, and 20%, respectively. Meanwhile, for Z3CR, the rotational viscosity demonstrated corresponding rises of 2, 3, and 6 times at CR contents of 10, 15, and 20%, respectively.
The mixing and compaction (M&C) temperatures, as shown in Table 9, were calculated based on the penetration, softening point, and rotational viscosity results. The inclusion of CR led to a substantial increase in the M&C temperature ranges. Compared to results in other studies [36], increasing the CR content increased the mixing temperatures by 15%, 24%, and 33% at CR contents of 10%, 15%, and 20%, respectively, for the S2CR asphalt binder. For the S4CR binder, the percentages were 13%, 22%, and 27%, respectively. On the other hand, the Z1.5CR and Z3CR binders encountered increases in mixing temperature of 10%, 21%, and 29% and 9%, 25%, and 33% at CR contents of 10%, 15%, and 20%, respectively. Comparing the mixing temperatures between the additives, for 10% CRM, Zycotherm was found to have a much lower M&C temperature compared to the Sasobit-modified binders. This was also true for a higher CRM content. A higher Sasobit content increased the M&C temperatures, while a higher Zycotherm content did not change the temperatures as much. Despite the high temperatures, adding CR to the WMA binders resulted in a lower viscosity than the CR-modified hot mix asphalt binders, mainly due to the melting of the warm additives at high temperatures. The interaction between the warm mix additives and crumb rubber reduced the activation energy of the CR mix particles, leading to a viscosity reduction [33,34].
The rheological properties presented in Figure 10 for the CR-modified WMA binders demonstrate a substantial expected enhancement compared to the WMA binders. The complex shear modulus, phase angle, and rutting parameter showed significant improvements. Incorporating the CR into the WMA binders significantly boosted the G* value for both the Sasobit and Zycotherm binders. Specifically, S2CR10, S2CR15, and S2CR20 displayed two, three, and four times higher G* values, respectively, than the S2 asphalt binder (control binder) at all temperatures. Similarly, S4CR10, S4CR15, and S4CR20 approximately doubled the G* value of the S4 asphalt binder (control binder).
In the case of Zycotherm, the introduction of CR to Z1.5CR10, Z1.5CR15, and Z1.5CR20 displayed two, three, and four times increases in the G* value, respectively. As for the Z3CR10, Z3CR15, and Z3CR20 binders, they demonstrated a twofold, threefold, and fivefold increase in the G* value, respectively. However, when compared to the Sasobit binders, the Zycotherm binders exhibited significantly lower G* values. Moreover, increasing the Zycotherm content in the CR-modified WMA binders led to a decrease in the G* value.
Regarding the phase angle results, generally, the inclusion of CR slightly decreased the lag response (phase angle), especially at a 10% CR content, indicating a more elastic response. Nonetheless, in the case of asphalt binders containing 15% and 20% CR, an increase in CR content led to a partial reduction in the phase angle. Specifically, for S2CR and S4CR, the decreases at 64 °C amounted to 7% and 11%, and 7% and 7%, at CR contents of 15% and 20%, respectively. As for Z1.5CR and Z3CR, these percentages were 6% and 7%, and 5% and 7%, respectively, yielding a higher elastic response with a higher Zycotherm content.
As illustrated in Figure 11, the |G*|/sinδ results exhibited a decrease as the temperature rose. However, the addition of CR significantly enhanced the |G*|/sinδ values. Specifically, S2CR20 and S4CR20 increased the PG from 64 to 82, indicating an improved rutting resistance of the asphalt binder due to an increased stiffness. Similarly, incorporating CR into the Zycotherm asphalt binders showed a comparable trend in enhancing their high-temperature performance, although Zycotherm achieved a maximum PG of 76 with 20% CR, below the PG 82 reached with the Sasobit blends.
Table 10 presents the MSCR (Multiple-Stress Creep Recovery) test results for all asphalt blends incorporating CR. The influence of CR on the WMA binders was evaluated through the MSCR test at 64 °C. As anticipated, adding CR improved the rutting resistance and creep recovery of the asphalt binder at a stress level of 3.2 kPa.
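For orientation, the two MSCR quantities reported in Table 10 are obtained per creep-recovery cycle roughly as sketched below. The strain values are invented, and the averaging over the ten cycles required by the test protocol is omitted for brevity; this is a minimal sketch, not the study's data-reduction script.

```python
# Single MSCR creep-recovery cycle at the 3.2 kPa stress level.
# Shear strains are hypothetical, dimensionless values.
stress_kpa = 3.2
strain_start = 0.0      # strain at the start of the 1 s creep portion
strain_peak = 6.0       # strain at the end of the 1 s creep portion
strain_residual = 5.5   # strain remaining at the end of the 9 s recovery portion

creep_strain = strain_peak - strain_start
unrecovered_strain = strain_residual - strain_start

percent_recovery = 100.0 * (creep_strain - unrecovered_strain) / creep_strain
jnr = unrecovered_strain / stress_kpa  # non-recoverable creep compliance, 1/kPa

print(f"%R    = {percent_recovery:.1f} %")
print(f"Jnr3.2 = {jnr:.2f} 1/kPa")
```

A binder with high recovery and low Jnr3.2, as reported below for S2CR20, is the one expected to resist rutting best.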
Compared to the S2 asphalt binder, S2CR10, S2CR15, and S2CR20 demonstrated an enhanced creep recovery of the asphalt binder. While the S4CR10, S4CR15, and S4CR20 binders also exhibited an improved creep recovery, this enhancement was notably lower than that observed in the S2CR binders. Remarkably, the Z1.5CR and Z3CR binders, despite being lower in G* value, showed the most significant improvements in the creep recovery of the asphalt binder.
In comparing the rubberized WMA binders, the S2CR20 binder achieved the highest creep recovery, followed by S4CR20, Z1.5CR20, and Z3CR20. Additionally, an increase in the CR content enhanced the creep recovery for all CR-modified WMA binders. This suggests that an increase in the stiffness of the asphalt binder positively impacted the creep recovery capability of the asphalt binder.
The rutting performance, as indicated by the non-recoverable creep compliance parameter at a stress level of 3.2 kPa (Jnr3.2), was evaluated for the various modified asphalt binders. In the case of the S2CR10, S2CR15, and S2CR20 binders, the Jnr3.2 parameter exhibited reductions of 68%, 77%, and 91%, respectively, in comparison to the control binder S2. Similarly, for the S4CR10, S4CR15, and S4CR20 binders, the Jnr3.2 parameter displayed decreases of 63%, 65%, and 80%, respectively, compared to the control binder S4. For Z1.5CR10, Z1.5CR15, and Z1.5CR20, the decreases were 70%, 71%, and 88% in the Jnr3.2 parameter relative to the Z1.5 control binder. Likewise, the Z3CR10, Z3CR15, and Z3CR20 binders experienced reductions of 53%, 75%, and 87%, respectively, in the Jnr3.2 parameter compared to the Z3 control binder.
Turning to the low-temperature behavior, as the CR content increased, the S2CR asphalt binders exhibited a decrease in their creep stiffness compared to the control asphalt binder S2 (CR = 0%), whereas the S4CR asphalt binders displayed a slightly different trend, with creep stiffness values approximately similar to the control asphalt binder S4 (CR = 0%). In particular, the S2CR10, S2CR15, and S2CR20 binders showed significantly lower stiffnesses, by 11%, 43%, and 43%, respectively, compared to the S2 control binder, while the creep stiffness values of the S4CR10, S4CR15, and S4CR20 binders were similar to the S4 control binder at −6 °C.
Regarding the Z1.5CR and Z3CR asphalt binders, notable creep stiffness reductions were observed compared to the control binders, the Z1.5 and Z3 asphalt binders. Specifically, the Z3CR10, Z3CR15, and Z3CR20 asphalt binders displayed lower creep stiffness values, reduced by 13%, 11%, and 12%, respectively, compared to the Z3 asphalt binder. On the other hand, the Z1.5CR asphalt binder showed a decrease in creep stiffness of 13% at a CR content of 10%. The lack of enhancement in creep stiffness observed in Z1.5CR15 and Z1.5CR20 indicates that the 10% CR content is likely the optimal value for achieving the desirable creep stiffness in this group of CR-modified WMA asphalt binders.
The traffic levels can be determined from AASHTO [M332] based on the Jnr3.2 values in Table 10. Across all CR WMA binders, the pavement's capacity to endure traffic showed an enhancement as the CR content increased from 0% (control binder) to 10%, 15%, and 20%. To illustrate, considering the S2CR binder, its traffic level improved from Standard (S), corresponding to the control binder with CR at 0%, to Extremely Heavy (E) when the CR content reached 20%. A parallel pattern can be observed for the S4CR binder, with its traffic level improving from Heavy (H) to "E". Conversely, in the cases of the Z1.5CR and Z3CR binders, the traffic levels advanced from "S" to "E" and from "S" to Very Heavy (V), respectively. S2CR20 demonstrated the best rutting resistance, displaying the highest creep recovery and the least non-recoverable creep compliance (Jnr3.2). An increase in Sasobit content positively impacted the asphalt binder's high-temperature performance, as the interaction between Sasobit and CR enhanced its rutting performance, in line with previous studies [17,40,42]. Moreover, the use of Sasobit is preferred over chemical warm mix additives [42].
The Jnr3.2 and %R values were graphed against the CR content for all the WMA binders, as presented in Figures 12 and 13. The exponential model was the most suitable for describing the relationship between Jnr3.2 and %R at 64 °C and the CR content. As the CR content increased, there was an exponential decrease in the Jnr3.2 value and a simultaneous exponential increase in the creep recovery (%R). This indicates that adding CR significantly improved the rutting resistance of the WMA binders. Nevertheless, no clear trend emerged when the Jnr3.2 or %R values at the high PG temperature were plotted against the CR content. This discrepancy can be attributed to the fact that these values were obtained at different temperatures for each asphalt binder, based on its high PG temperature. The addition of the rubber enhanced the binder's resistance to creep due to the internal friction of the solid rubber particles. The recovery of the rubber-modified warm mix binders was improved by the gel-like behavior of the rubber particles interacting with the WMA binders.
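A minimal sketch of how such an exponential trend can be fitted is shown below, using scipy's curve_fit on made-up (CR content, Jnr3.2) pairs rather than the measurements behind Figures 12 and 13; the same form, with the sign of the exponent reversed, can be fitted to %R.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical non-recoverable creep compliance Jnr3.2 (1/kPa) versus CR content (%).
cr_content = np.array([0.0, 10.0, 15.0, 20.0])
jnr = np.array([3.8, 1.2, 0.9, 0.35])  # placeholder values, not the paper's data

# Exponential decay model: Jnr(CR) = a * exp(b * CR), with b expected to be negative.
def exp_model(x, a, b):
    return a * np.exp(b * x)

params, _ = curve_fit(exp_model, cr_content, jnr, p0=(jnr[0], -0.1))
a_fit, b_fit = params
predicted = exp_model(cr_content, a_fit, b_fit)
r_squared = 1.0 - np.sum((jnr - predicted) ** 2) / np.sum((jnr - jnr.mean()) ** 2)
print(f"Jnr3.2 ~ {a_fit:.2f} * exp({b_fit:.3f} * CR), R^2 = {r_squared:.3f}")
```

The fitted decay rate b then summarizes, for each additive, how quickly rutting resistance improves with added crumb rubber.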
Table 11 presents the BBR test results, revealing appealing observations. The addition of CR to the S2 asphalt binder did not affect the low PG of the asphalt binder, but when added to the S4 asphalt binder, it reduced the low PG by 6 °C. The S2CR binders did not show any improvement (increase) in the m-value. However, the S4CR binders showed enhancements in the m-value, resulting in an improved low PG grade. Conversely, the Z1.5CR binders had higher m-values, particularly with more CR content, leading to lower PG grades. The most significant enhancement was seen in the Z3CR binders, with the m-value improvement resulting in enhanced low PG grades of −28 and −34 at 15% and 20% CR contents, respectively.
At the 2.5% strain level, the increases in the number of cycles to fatigue failure for the S2CR binders were 5%, 10%, and 17% at CR contents of 10%, 15%, and 20%, respectively. The same percentages were 3%, 5%, and 7% for the S4CR binders. On the other hand, the Z1.5CR and Z3CR binders experienced increases of 4%, 20%, and 29% and 4%, 23%, and 23%, respectively. The Z1.5CR20 binder, followed by S2CR20, showed the highest fatigue life (Nf). This indicates an improved fatigue performance at a 20% CR content.
Previous studies have suggested the use of Sasobit to enhance the fatigue life of asphalt binders by reducing oxidation and volatilization, thereby mitigating the aging impact on the asphalt binder [22,30]. However, this study demonstrates a notable improvement in fatigue life by employing Zycotherm at 1.5% and incorporating 20% crumb rubber.
Limitations
It is important to note that the test results presented in this research work are limited to the mixing procedures, testing conditions, and type and concentration of the materials. Therefore, changing any of those inputs may affect the outputs.
Conclusions
This paper investigated the effect of adding two commercial warm mix additives, namely, Sasobit and Zycotherm, to a control asphalt binder. Crumb rubber was then introduced to the warm mix binders and its effect was studied.
Effect of Warm Mix Modifiers
Upon investigating the effects of Sasobit and Zycotherm modifications and analyzing the results of the warm mix asphalt (WMA) binders' tests, the following conclusions can be drawn:
• Sasobit reduced the penetration and increased the softening point significantly for S2 and S4. In contrast, Z1.5 and Z3 resulted in higher penetration values without causing a significant impact on the softening point.
• The Sasobit-modified asphalt binders showed a better resistance to deformation at high temperatures due to a high |G*| and low phase angle, while Zycotherm showed mixed effects.
• The addition of Sasobit decreased the low-temperature Performance Grade (PG) of the asphalt binders, reaching −16 and −10 for S2 and S4, respectively. Z1.5 exhibited similar low-temperature behavior to S2, whereas Z3 had comparable results to the control asphalt binder, with a PG of −22.
• At a 2.5% strain, S2 saw an 83% increase and S4 more than tripled its cycles to fatigue.
Effect of Crumb Rubber on Warm Mix Binders
Based on the analysis of the results regarding CR-modified warm mix asphalt (WMA) binders, the following conclusions can be inferred:
• The addition of CRM decreased the penetration and viscosity while increasing the softening point. Compared to hot mix CRMB, Sasobit and Zycotherm proved to be effective in reducing the mixing and compaction temperatures, with 3% Zycotherm reducing the mixing and compaction temperatures substantially. The penetration, softening point, and viscosity results align with the rheological findings. The increased physical stiffness increased the G*/sinδ.
• All CR-modified WMA binders experienced a decrease in the Jnr3.2 parameter, indicating an enhanced rutting resistance. These reductions ranged from 63% to 91% for the CR-modified Sasobit binders, and from 53% to 88% for the CR-modified Zycotherm binders, with the most substantial improvements observed in the S2CR20 and Z1.5CR20 asphalt binders. The reduction varied based on the CR content and the warm mix additive content.
• The S2CR10, S2CR15, and S2CR20 binders exhibited lower creep stiffnesses for low-temperature cracking, by 11%, 43%, and 43%, respectively, compared to the control binder (S2), while the S4CR15 and S4CR20 binders did not exhibit a notable decrease in their creep stiffness values compared to the control binder (S4).
• Asphalt binders with a 10% CR content showed a reduced creep stiffness and improved low-temperature performance. At 15% and 20% CR levels, the low-temperature PG values notably improved.
• Increasing the CR content in the modified WMA binders led to an enhanced fatigue life. Remarkably, among the Sasobit binders, S2CR20, and among the Zycotherm binders, Z1.5CR20, demonstrated the longest fatigue lives at both strain levels.
In summary, and based on this study's comprehensive findings and analysis, the crumb-rubber-modified Zycotherm WMA binder (Z1.5CR20) is suitable for colder climates. In contrast, the crumb-rubber-modified Sasobit WMA binder (S2CR20) is ideal for hot-climate applications. Future research should take the testing to the mixture level and investigate the effect of the crumb rubber WMA on the performance of asphalt mixtures, as well as the rheological correlation between thermal and fatigue cracking.
The RTFO-aged samples were placed inside the PAV. The PAV is an oven and pressure vessel combination that takes RTFO-aged samples and exposes them to a high air pressure (2070 kPa = 300 psi) and a high temperature (90 °C = 195 °F, 100 °C = 212 °F, or 110 °C = 230 °F), depending upon the expected climatic conditions, for 20 h. In this study, the temperature used was 110 °C due to the very hot climate in the study area. A flowchart summarizing the test procedures is shown in Figure 2.
Figure 2. Flowchart summarizing the testing methods in the study.
Figure 3. Penetration values for warm mix asphalt.
Figure 4. Softening point results for warm mix asphalt.
Figure 5. Rotational viscosity of the modified asphalt binders.
Table 2. General properties of the warm mix additives.
Table 3. Sieve analysis of crumb rubber.
Table 5. Mixing and compaction temperatures.
Table 6. MSCR test results for WMA binders.
Table 7. BBR test results for WMA binders.
Table 8. WMA asphalt binders performance grade.
Table 9. Mixing and compaction temperatures results.
Table 10. MSCR test results for CR-modified WMA binders at the maximum temperature achieved (high PG).
Table 11. BBR results for the modified binders.
13,463.8
2024-03-26T00:00:00.000
[ "Engineering", "Materials Science", "Environmental Science" ]
Prediction of Response to Cisplatin-Based Neoadjuvant Chemotherapy of Muscle-Invasive Bladder Cancer Patients by Molecular Subtyping including KRT and FGFR Target Gene Assessment
Patients with muscle-invasive urothelial carcinoma achieving pathological complete response (pCR) upon neoadjuvant chemotherapy (NAC) have improved prognosis. Molecular subtypes of bladder cancer differ markedly regarding sensitivity to cisplatin-based chemotherapy and harbor FGFR treatment targets to a varying extent. The objective of the present study was to evaluate whether preoperative assessment of molecular subtype as well as FGFR target gene expression is predictive for therapeutic outcome—rate of ypT0 status—to justify subsequent prospective validation within the “BladderBRIDGister”. Formalin-fixed paraffin-embedded (FFPE) tissue specimens from transurethral bladder tumor resections (TUR) prior to neoadjuvant chemotherapy and corresponding radical cystectomy samples after chemotherapy of 36 patients were retrospectively collected. RNA from FFPE tissues was extracted by commercial kits, and relative gene expression of subtyping markers (e.g., KRT5, KRT20) and target genes (FGFR1, FGFR3) was analyzed by standardized RT-qPCR systems (STRATIFYER Molecular Pathology GmbH, Cologne). Spearman correlation, Kruskal–Wallis, Mann–Whitney and sensitivity/specificity tests were performed by JMP 9.0.0 (SAS software). The neoadjuvant cohort consisted of 36 patients (median age: 69, male 83% vs. female 17%), with 92% of patients being node-negative during radical cystectomy after 1 to 4 cycles of NAC. When comparing pretreatment with post-treatment samples, the median expression of KRT20 dropped most significantly from DCT 37.38 to 30.65, which compares with a 128-fold decrease. The reduction in gene expression was modest for other luminal marker genes (GATA3 6.8-fold, ERBB2 6.3-fold). In contrast, FGFR1 mRNA expression increased from 33.28 to 35.88 (~6.8-fold increase). Spearman correlation revealed a positive association of pretreatment KRT20 mRNA levels with achieving pCR (r = 0.3072; p = 0.0684), whereas pretreatment FGFR1 mRNA was associated with resistance to chemotherapy (r = −0.6418; p < 0.0001). Hierarchical clustering identified luminal tumors of high KRT20 mRNA expression as being associated with a high pCR rate (10/16; 63%), while the double-negative subgroup with high FGFR1 expression did not respond with pCR (0/9; 0%). Molecular subtyping distinguishes patients with a high probability of response from tumors resistant to neoadjuvant chemotherapy. Targeting FGFR1 in less-differentiated bladder cancer subgroups may sensitize tumors for adopted treatments or subsequent chemotherapy.
Introduction
Bladder cancer is still a highly frequent cancer in Europe, with an incidence of nearly 200,000 cases and an annual mortality rate of 64,966 cases in 2018 [1]. Approximately 30% of these patients suffer from muscle-invasive bladder cancer (MIBC) at the time of initial diagnosis [2]. To date, radical cystectomy (RC) with lymph node dissection remains the recommended treatment in highest-risk non-muscle-invasive and muscle-invasive nonmetastatic bladder cancer, preceded by cisplatin-based neoadjuvant chemotherapy (NAC) in eligible patients [3]. In order to remedy this unsatisfactory situation, serious efforts have recently focused on new therapeutic strategies regarding the application of neoadjuvant and adjuvant chemotherapies [4].
A better risk assessment of patients has been recommended by developing novel predictive/prognostic models [5]. In clinical practice, the therapeutic management of these patients has so far been performed almost exclusively on the basis of clinical data and classical pathological TNM criteria, but with few reliable results [5]. Neoadjuvant treatment modalities are still not widely accepted due to their remaining inability to accurately select patients who will benefit vs. those who may potentially be harmed [6]. It is hoped that the identification of new molecular tissue biomarkers could help to stratify risk groups and determine patients who could have a benefit from adjuvant strategies after surgery [7]. The molecular subtyping of bladder cancer has been well accepted since its initial introduction in 2014 [8][9][10]. Therein, the quantitation of KRT5 and KRT20 on mRNA level and/or their recapitulation on protein level by IHC belong to common-sense hallmarks of molecular subtyping of basal and luminal tumors, respectively. In a previous work, we showed that KRT20 is strongly associated with adverse outcome for pT1 NMIBC as well as chemotherapy-naïve MIBC [11,12]. Importantly, molecular subtyping of MIBC may also play a role as a potential biomarker for neoadjuvant treatment response. When molecular classification is to be translated into clinical use, it is important to consider that the several classification methods emphasize slightly different aspects of tumor biology [13]. However, the predictive role of these markers in MIBC patients receiving neoadjuvant chemotherapy is still unknown. Early studies indicate a tremendous decrease in KRT20 mRNA levels when comparing matched TURB and cystectomy tissue samples before vs. after neoadjuvant chemotherapy [14]. The aim of the present study was to evaluate the predictive role of KRT20 in combination with potential, druggable resistance markers in the neoadjuvant situation and to prove their clinical usefulness.
Distribution of Assessed Protein and mRNA Markers across the Study Cohort
As depicted in the REMARK diagram (Figure 1), TURB biopsies from 36 patients could be analyzed, with both clinical data and matched tissues being available. After radical cystectomy, the ypT0 rate was 39%, four patients showed lymph node metastases (11%) and one patient had a positive margin (3%). The number of resected lymph nodes ranged from 8 to 30 (average 14). All investigated experimental markers could be determined by PCR of urinary bladder cancer TURB biopsies as well as cystectomy tissue. As depicted in Figure 2, the relative gene expression of multiple subtype-specific marker genes significantly differed between TUR biopsy and matched cystectomy specimens. Most prominently, the median mRNA expression of the luminal marker gene KRT20 decreased from 37.76 to 30.65 (138.1-fold), while the decrease was less prominent for other luminal marker genes, such as GATA3 (decrease from 38.81 to 36.20; 6.1-fold) and ERBB2 (decrease from 37.94 to 35.40; 5.8-fold).
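The quoted fold changes follow directly from the differences in the relative expression (DCT) values, since each PCR cycle corresponds to a factor of two; taking the KRT20 medians as a worked example:

$$\text{fold change} = 2^{\Delta \mathrm{DCT}} = 2^{\,37.76 - 30.65} = 2^{7.11} \approx 138.$$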
Interestingly, the median expression level of the basal marker gene KRT5 decreased less substantially after neoadjuvant chemotherapy (decrease from 36.36 to 35.27; 2.1-fold), while a marked change could be observed for the upper quartile of KRT5 mRNA expression (decrease from 40.59 to 36.41; 18.1-fold). In contrast, the median expression of FGFR1 mRNA was higher in cystectomy samples after neoadjuvant chemotherapy compared to matched TUR biopsy samples before therapy (increase from 33.29 to 35.89; 6.1-fold), while its receptor tyrosine kinase family member FGFR3 was significantly lower in cystectomy samples after chemotherapy (decrease from 37.89 to 34.98; 7.5-fold). When comparing the pre-therapy total gene expression data distribution for each marker with the gene expression data distribution of tumors achieving a pathological complete response, it became apparent that the responding tumors were disproportionally enriched in the high KRT20 expressors and low FGFR1 expressors.
Figure 2. The subtraction of the DCT from the total cycle number of the PCR reaction converts the numbers, so that higher numbers mean higher expression levels. Pretreatment mRNA expression in TUR biopsies is depicted in blue; post-treatment mRNA expression is depicted in orange. The upper panel depicts the subtyping markers, the lower panel the assessed target genes. Gene expression from tumors achieving pCR is displayed in darker coloring.
Correlation of mRNA Markers on Basis of Molecular Subtyping and Clinical Variables
As previously described and depicted in Figure 3, the mRNA expression of basal and luminal marker genes was negatively associated in TURB biopsy samples before chemotherapy. KRT5 was negatively associated with the luminal marker genes KRT20, GATA3 and ERBB2 (r = −0.6111, p < 0.0001; r = −0.4782, p = 0.0032 and r = −0.3611, p = 0.0305, respectively). However, FGFR3 mRNA expression was positively associated with the dominant luminal marker gene KRT20 (r = 0.3470, p = 0.0381), while virtually no association could be detected with the basal marker gene KRT5 (r = 0.0921, p = 0.5930). Importantly, Spearman correlation supported the previously mentioned inverse relation of KRT20 and FGFR1 mRNA expression with pCR status. While KRT20 mRNA tended to be positively associated with pCR status (r = 0.3072; p = 0.0684), the negative association of FGFR1 mRNA expression with pCR status was highly significant (r = −0.65418, p < 0.0001).
Hierarchical Clustering Defines Subgroup of Chemotherapy-Resistant Tumors
The relative mRNA expression of the candidate genes was used to perform two-way hierarchical clustering, and clinical outcome was superimposed to characterize the arising patient groups. As depicted in Figure 4, hierarchical clustering revealed two KRT5-positive basal clusters with a moderate pCR rate (3/11; 27%), one KRT20-positive luminal cluster with a high pCR rate (10/16; 63%) and one "double negative" subgroup with both keratins (KRT5 and KRT20) being expressed at very low levels but exhibiting high FGFR1 expression, which we therefore named stromal-rich tumors. The stromal-rich tumor subgroup had a low pCR rate (1/9; 12.5%), with the only exception in the stromal cluster having again high KRT20 and low FGFR1 expression, indicating the limitation of the cluster method. However, in summary, the luminal cluster exhibited a twofold higher pCR rate, while the stromal-rich tumors exhibited a threefold lower pCR rate compared to the overall pCR rate of 38%.
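A minimal sketch of this kind of two-way clustering is shown below, using seaborn's clustermap on an entirely synthetic expression matrix; the marker panel is taken from the text, but the numbers, the Ward linkage and the column z-scoring are assumptions for illustration, not the study's actual analysis settings.

```python
import numpy as np
import pandas as pd
import seaborn as sns

rng = np.random.default_rng(0)
markers = ["KRT5", "KRT20", "GATA3", "ERBB2", "FGFR1", "FGFR3"]

# Synthetic relative expression (DCT-like) values for 36 hypothetical tumors.
data = pd.DataFrame(rng.normal(loc=35, scale=3, size=(36, len(markers))),
                    columns=markers,
                    index=[f"patient_{i + 1}" for i in range(36)])

# Two-way hierarchical clustering (rows = patients, columns = markers) with
# column-wise z-scoring, mirroring the kind of heatmap shown in Figure 4.
grid = sns.clustermap(data, method="ward", metric="euclidean", z_score=1, cmap="vlag")
grid.savefig("clustermap_sketch.png")
```

In practice, the pCR status of each patient would then be annotated alongside the row dendrogram to read off response rates per cluster.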
Contingency Testing to Evaluate Predictive Value of Marker Genes
To overcome the limitations of the clustering method in predicting the outcome of individual patients, a partitioning method was used to define the optimal cut-off to predict pathological complete response by KRT20 and FGFR1 mRNA levels. When applying these cut-offs in contingency tests, both markers revealed themselves to be predictive for clinical outcome (Figure 5). Stratification based on KRT20 mRNA did separate tumors exhibiting high KRT20 mRNA expression and a high pCR rate (66.7%) from tumors with low KRT20 mRNA expression and a low pCR rate (25%). This separation was significant in a chi-squared test (Chi2 = 5.845, p = 0.0156). Stratification based on FGFR1 mRNA did separate tumors exhibiting high FGFR1 mRNA expression and a low pCR rate (0%) from tumors with low FGFR1 mRNA expression and a high pCR rate (66.7%). This separation was highly significant in chi-squared testing (Chi2 = 21.38, p < 0.0001).
Discussion
Since the discovery of luminal and basal subtypes in muscle-invasive bladder cancer in 2014, their impact on response to neoadjuvant chemotherapy has been discussed [9]. Interestingly, already in the first report, basal tumors characterized, i.a., by high KRT5 mRNA expression, as well as luminal tumors exhibiting, i.a., high KRT20 mRNA expression had intermediate to high pathological complete response rates ranging between 25 and 60%. In contrast, the so-called "p53-like" tumors, bearing no p53 mutation and exhibiting low KRT5 and KRT20 mRNA expression, did not or only marginally responded to upfront chemotherapy. Interestingly, when comparing gene expression levels in TUR biopsies before neoadjuvant chemotherapy with matched cystectomy tissue after treatment, the frequency of luminal tumors dropped, while basal tumors remained similar and "p53-like" tumors increased [9]. However, molecular subtyping evolved and became more complex.
Recently, a substratification of the original tripartite molecular subtypes has been published as the "consensus molecular classification of muscle-invasive bladder cancer" that distinguishes "Luminal papillary", "Luminal unstable", "Luminal unspecified", "Basal/Squamous", "Stroma-rich" and "Neuroendocrine-like" subtypes. However, while the diverse subtyping approaches were integrated by quantifying similarities of genome-wide RNAseq-based expression analysis using Cohen's kappa scores and constructing clustered networks, the clinical impact and prognostic value became less apparent, with the smallest subtype ("Neuroendocrine-like") having a markedly different, worse outcome [15]. Still, the molecular sub-subtyping provoked subtle differentiation of hypothesized best treatment options, with "Luminal-papillary" and "Luminal-infiltrated" tumors having a "low predicted likelihood of response" to neoadjuvant chemotherapy in one report [8], in contrast to the initial report [9]. Here, we have used RT-qPCR-based quantitation of predefined hallmark subtyping markers (KRT5/KRT20) [8], with proven prognostic impact in muscle-invasive and non-muscle-invasive bladder cancer [11,12], which have been shown to have some predictive value in a finding cohort [14]. Moreover, we have integrated FGFR1 and FGFR3 mRNA expression analysis into the predictive model to evaluate the impact of stromal interactions and therapeutic implications in view of pan-FGFR inhibitors entering the field. We could validate a dramatic decrease in luminal marker gene expression as exemplified by KRT20, which is congruent with the initial finding of Choi et al. from matched-pair analysis, in which luminal subtype signatures get lost after neoadjuvant chemotherapy. Moreover, we could recapitulate the finding of an independent previous cohort, where KRT20 also exhibited a dramatic decrease in overall expression [14]. Therefore, we conclude that luminal tumors, defined by high KRT20 mRNA expression, do have a high likelihood of responding to neoadjuvant chemotherapy (66.7% pCR rate in KRT20-high tumors vs. 25% pCR rate in KRT20-low tumors). This is in sharp contrast to previous hypothetical assumptions [8], but in line with the initial original work [9]. Moreover, by analyzing FGFR1 mRNA expression, a stromal-rich tumor subtype has been identified that lacks both KRT5 and KRT20 mRNA expression and is almost resistant to upfront chemotherapy (0% pCR rate in FGFR1-high tumors vs. 66.7% pCR rate in FGFR1-low tumors). This stromal-rich tumor subtype has similarities with the "p53-like" tumors not responding to upfront chemotherapy in the initial subtyping landmark paper [9], while the major impact of FGFR1 itself has not been reported in previous publications. Importantly, FGFR1 is a bona fide target of FGFR inhibitors, which had initially been introduced into the treatment of metastatic bladder cancer harboring FGFR3 mutations or fusions [16]. It is tempting to speculate whether blocking FGFR1 activity in stromal-rich subtypes can restore sensitivity to neoadjuvant chemotherapy in otherwise resistant, muscle-invasive bladder cancer, which warrants further clinical investigation. As discussed above, Choi et al. [9] identified a basal, a luminal and a so-called p53-like subtype. Approximately one-third of patients belonged to each subtype [9]. They initially reported that p53-like tumors were more resistant to NAC than luminal or basal tumors [9].
Subsequent publications focused on the survival benefit of basal tumors, which in the absence of NAC were associated with the worst prognosis but had the best prognosis after NAC [17]. Recently, Seiler et al. [18] developed a single-sample genomic subtyping classifier based on samples classified according to the molecular subtyping methods of the aforementioned projects. OS and pCR according to subtype (claudin-low, basal, luminal-infiltrated, and luminal) were retrospectively compared for 343 MIBC NAC and 476 MIBC non-NAC cases. Luminal tumors had the longest OS with and without NAC. Nevertheless, OS differed according to the response to NAC. Claudin-low tumors were associated with poor OS irrespective of treatment regimen. Basal tumors showed the highest improvement in OS with NAC compared with surgery alone [18]. Despite having higher case numbers, that analysis lacks the comparison of tumor tissue before and after chemotherapy needed to conclude on the responsive subtypes. In contrast, the comparison of subtyping markers in TURB versus matched cystectomy samples is in line with initial and recent publications demonstrating that the luminal subtype is most strongly affected by chemotherapy in independent cohorts of similar size, as this type of tumor cell is disappearing in the post-treatment samples [9,14]. Furthermore, our findings indicate that luminal tumors defined by high KRT20 mRNA expression do have worse outcomes in MIBC if not treated by (neo)adjuvant chemotherapy, while basal tumors defined by KRT5 mRNA overexpression have better survival irrespective of chemotherapeutic treatment [12]. Of note, in this series, the determination of basal/luminal tumors was performed with an identical molecular test system as has been used in this work. The technique used to perform molecular subtyping seems to be critical for prognostic interpretation. While the RNAseq-based subtyping approaches use correlative measures across different platforms against predefined, heterogeneous cohorts to vote for a most probable subtype, the RT-qPCR method uses highly sensitive and robust single-marker assessments, with reproducible cut-off values to differentiate between positive and negative marker status on a single-sample basis. The same method has been used for outcome prediction and subtyping in breast cancer by developing respective IVD assays [19][20][21][22]. The potential limitations of our study relate to its retrospective design and small cohort size. However, although the number of patients was limited, the study included consecutive bladder cancer patients who were homogeneously treated with the same neoadjuvant chemotherapy scheme before radical cystectomy at a single center. Because retrospective designs do not guarantee causality, further prospective studies and the use of independent series are warranted to prove the prognostic and predictive value of the analyzed marker combinations to robustly stratify the clinical outcome in real-world assessments, which is the aim of the prospective BladderBRIDGister that was initiated recently. Reflecting on the present study in light of already published data, there is reason for optimism that predictive biomarkers will soon be used in clinical practice to guide the use of NAC in patients with MIBC.
It seems that, similar to the situation in breast cancer, molecular subtyping of tumors as well as molecular target gene quantification could help to identify tailored treatments in the neoadjuvant setting to optimally address the individual tumor biology of advanced bladder cancer patients. Moreover, applying this approach may help to significantly accelerate the clinical development of new therapeutic options and their optimal sequence with the established chemotherapeutic backbone in defined subtypes of an advanced bladder cancer setting. It is well accepted that achieving a pathological complete response after NAC with consecutive RC is associated with improved overall survival [23]. Therefore, both the European and American guidelines recommend platinum-based neoadjuvant chemotherapy for patients with cT2-T4a cN0cM0 disease irrespective of molecular subtype [24]. However, it has most recently been shown that the efficacy of neoadjuvant chemotherapy is not only important for the immediate tumor regression contributing to improved survival of patients achieving a pathological complete response, but is also a prerequisite for the efficacy of, and survival benefit from, subsequent adjuvant checkpoint therapy [25]. In this prospective, randomized clinical trial, the forest plot analysis indicates that adjuvant monotherapy with the anti-PD-1 checkpoint inhibitor nivolumab was only superior to the placebo control arm when patients had received preceding platinum-based neoadjuvant chemotherapy [26]. This suggests that not only patients with a pathological complete response to NAC, but also patients with chemotherapy-sensitive tumors exhibiting minor responses benefit from neoadjuvant chemotherapy, as it forces the remaining tumor tissue to evade the chemotherapy-induced attack by the host's immune system by manipulating checkpoint control. Without preceding chemotherapy, the adjuvant immune therapy is ineffective, as the tumor is masked from the host immune system due to its tumor biology. Based on our findings, we have speculated that particularly luminal tumors with lower immune recognition and subsequently lower immune infiltration, which on the other hand are most sensitive to chemotherapy, would have the best survival after first-/second-line checkpoint therapy. Most recently, we could show that KRT20-positive tumors, as defined by RT-qPCR from TUR biopsies, do indeed have the best survival after second-line anti-PD-1 treatment in a retrospective real-world cohort (Wirtz et al., in preparation). That means that adjuvant immunotherapy is likely to have the greatest impact if its use is guided by predictive biomarkers, selecting the most appropriate neoadjuvant regimen. For tumors not responding to standard chemotherapy, as defined by overexpression of stromal signatures and FGFR1 target gene expression, the inhibition of FGFR activities by targeted approaches may be superior for predisposing muscle-invasive bladder cancer to subsequent immune-oncology treatment. In summary, molecular subtypes and precise target gene assessment on the basis of the underlying tumor biology, as exemplified in this study, seem to be promising to better select the appropriate therapy sequence of standard and upcoming targeted therapies. Patients Patient Population From June 2014 to March 2021, a total of 55 patients were included in the trial. After evaluating the necessary data set and FFPE tissue, in total, 36 cases could be included. 
Representative tissue from the primary tumor as well as from cystectomy tissue was mandatory. Together, 30 male patients and 6 female patients (average age 69 years, range 53-85 years) were included. Pathohistological T-category and grade for the primary tumors are as follows: The study included for the primary tumors pTaG2 (n = 2), pT1G2 (n = 3), pT1G3 (n = 5), pT2G2 (n = 3), and pT2G3 (n = 23) obtained by transurethral resection under institutional-review-board-approved protocols. Three patients showed carcinoma in situ (8%). All non-muscle invasive urothelial carcinomas included in the study progressed to muscle-invasive tumors under the follow-up. All patients were treated with radical surgery after receiving neoadjuvant chemotherapy. Patient characteristics including clinical lymph node status before chemotherapy as well as ECOG performance status at the point of starting chemotherapy are summarized in Table 1. Eligibility Eligible patients for this trial were required to have histologically confirmed MIBC transitional cell carcinoma in the bladder. Patients who had received a previous systemic chemotherapy regimen were excluded. Previous radiation therapy was also an exclusion criterion. Additional eligibility requirements included the following: an ECOG performance status of 0 to 2, a leukocyte count ≥3000/µL, a platelet count ≥100,000/µL, serum bilirubin <1.5 mg/dL, glomerular filtration rate >60 mL/min, and age >18 years. Patients with other active malignancies or any other serious or active medical conditions were excluded. Pregnant or lactating females were ineligible. All patients were required to provide written informed consent prior to the study enrolment. Pretreatment Evaluation Prior to enrolment in this trial, all patients were required to have a complete history, physical examination, complete blood counts, chemistry profile, and urine analysis. In addition, patients underwent computed tomography scans of the chest, abdomen, and pelvis with appropriate tumor measurements. Assessment of Treatment Efficacy All patients received treatment with the following regimen: gemcitabine at a dose of 1000 mg/m 2 as a 30 min intravenous infusion on days 1 and 8. On day 2, cisplatin at a dose of 70 mg/m 2 was administered as an intravenous infusion and hydration with 2000 mL NaCl 0.9%. The regimen was repeated every 21 days. Patients received standard premedication and antiemetic prophylaxis. Patients were evaluated for response to treatment after the completion of 2 courses (6 weeks). Reevaluation included a repeat of all previously abnormal radiologic studies with repeat of objective tumor measurement. Patients received 1 to 4 (median 2) cycles of NAC. Dose Modifications All patients received full doses of both agents on day 1 and 2 of the first course of treatment. Subsequent doses were based on hematologic and non-hematologic toxicity observed. Dose modifications for myelosuppression were determined by the blood counts measured on the day of scheduled treatment. Nadir blood counts were not used as a basis for dose reduction. On day 1 of each course, full doses of all drugs were administered if the leukocyte count was ≥3000/µL and the platelet count was >100,000/µL. If the leukocyte count was <3000/µL or the platelet count was <100,000/µL, patients received granulocyte colony stimulating factor. 
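The eligibility thresholds and the day-1 dosing rule above amount to a small decision table. The sketch below encodes them for illustration only; the Patient structure and all field names are assumptions introduced here and are not part of the study protocol.

```python
# Illustrative only: encodes the eligibility thresholds and the day-1 dose rule
# described above; the Patient record and its field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Patient:
    ecog: int                 # ECOG performance status
    leukocytes_per_ul: float  # leukocyte count (/µL)
    platelets_per_ul: float   # platelet count (/µL)
    bilirubin_mg_dl: float    # serum bilirubin (mg/dL)
    gfr_ml_min: float         # glomerular filtration rate (mL/min)
    age_years: int

def is_eligible(p: Patient) -> bool:
    """Check the laboratory/clinical inclusion criteria listed above."""
    return (p.ecog <= 2
            and p.leukocytes_per_ul >= 3000
            and p.platelets_per_ul >= 100_000
            and p.bilirubin_mg_dl < 1.5
            and p.gfr_ml_min > 60
            and p.age_years > 18)

def day1_dose_decision(leukocytes_per_ul: float, platelets_per_ul: float) -> str:
    """Day-1 rule of each course: full doses if counts are adequate, otherwise G-CSF support."""
    if leukocytes_per_ul >= 3000 and platelets_per_ul > 100_000:
        return "full dose"
    return "add G-CSF support"

print(is_eligible(Patient(1, 4200, 180_000, 0.8, 75, 67)))  # True
```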
Criteria for Follow-Up The follow-up consisted of clinical examination, ultrasound of abdomen and computed tomography scans of the chest, abdomen, and pelvis with appropriate tumor measurements every 6 months. Progression was defined as new metastatic disease or local progress during follow-up. Chemotherapy response was defined as absence of recurrence, progression, or death from the disease during follow-up. Responses were defined using the response evaluation criteria in solid tumors (RECIST). A complete response (CR) after neoadjuvant chemotherapy was defined as ypT0 in final histopathological report after cystectomy. Surgical Intervention All urinary diversions were performed as open surgeries by one surgeon who had more than 10 years operative experience in practice after fellowship. Men underwent removal of the prostate if present and women underwent hysterectomy and bilateral salpingo-oophorectomy if those organs were present. The extent of the pelvic lymph node dissection (PLND) was left to the discretion of the surgeon based on clinician preference and judgment (extent of disease, vascular disease). The extent of PLND was alterable intraoperatively based on clinical findings (vascular disease, fibrosis, adenopathy). After completion of radical cystectomy plus PLND, the open urinary diversion was performed based on preoperative and intraoperative assessments and previous patient discussion. Isolation of Tumor RNA After histopathological confirmation of >20% tumor content in TUR biopsy samples based on HE stain evaluation, one subsequent 10 µm slice was used for RNA extraction from FFPE tissue with a commercially available bead-based extraction method (XTRACT kit; STRATIFYER Molecular Pathology GmbH, Cologne, Germany). Similarly, for cystectomy, representative tumor blocks with sufficient tumor content were histopathologically selected for RNA extraction. In cases of pathological complete response, representative scar tissue indicative of former presence of tumor cells was selected as comparative control tissue. After binding and washing to magnetic beads, the RNA was eluted with 100 µL elution buffer and RNA eluates were then stored at −80 • C until use. Gene Expression by RT-qPCR The mRNA expression levels of KRT5, KRT20, ERBB2, GATA3, FGFR3 and FGFR1, as well as one reference gene (REF), namely CALM2, were determined by RT-qPCR, which involves reverse transcription of RNA and subsequent amplification of cDNA executed successively as a 1-step reaction using inventoried validated TaqMan Gene Expression Assays (MP002, MP015, MP452, MP689, MP599 and MP597, STRATIFYER Molecular Pathology GmbH, Köln, Germany). The robustness and usefulness of CALM2 as a housekeeping gene for diverse candidate genes as well as comparability to diverse IHC assessments such as CK20/KRT20, MKI67/Ki67 and PDL1, when used as single reference gene, has been demonstrated in multiple publications [11,26,27] and resulted in the introduction of CALM2 as the housekeeping gene in CE-certified IVD products such as Endopredict [19] and MammaTyper [28]. Each patient sample or control was analyzed with each assay mix in triplicate. The experiments were run on a Siemens Versant (Siemens, Erlangen, Germany) according to the following protocol: 5 min at 50 • C and 20 s at 95 • C, followed by 40 cycles of 15 s at 95 • C and 60 s at 60 • C. 
Forty amplification cycles were applied, and the cycle quantification threshold (Cq) values of the candidate markers and the reference gene for each sample were estimated as the median of the triplicate measurements. The final values were generated by subtracting the Cq value of the reference gene CALM2 from the Cq value of the candidate gene, resulting in Delta Cq values (DCT). The DCT was subsequently subtracted from the total number of cycles (40-DCT) to ensure that the normalized gene expression obtained by the test was proportional to the corresponding mRNA expression levels; higher 40-DCT values therefore mean higher mRNA expression levels. Statistical Analysis Medians and interquartile ranges (IQR) were determined for continuous variables, as well as frequencies and proportions for categorical variables, and candidate gene mRNA expression levels were plotted as data distributions with 40-DCT values on the y-axis. Correlation analyses were performed using Spearman rank correlations. Partition models were generated to create contingency tables with optimal cut-offs. Two-way hierarchical clustering using the continuous mRNA expression values of KRT5, KRT20, GATA3, ERBB2, FGFR3 and FGFR1 was performed using Ward's minimum variance method, wherein the distance between two clusters is the ANOVA sum of squares between the two clusters added up over all the variables. Ward's method joins clusters to maximize the likelihood at each level of the hierarchy under the assumptions of multivariate normal mixtures, spherical covariance matrices, and equal sampling probabilities. Finally, nonparametric testing and chi-squared tests were conducted to examine differences in continuous and categorical variables, as appropriate. All statistical tests were two-sided and p-values < 0.05 were considered statistically significant. All tests and calculations were performed using the software R, version 3.1.2 (R Development Core Team, 2014) or JMP 9.0.0 (SAS Institute Inc., Cary, NC, USA). Ethics The study was performed according to the Declaration of Helsinki. The study was approved by the local Institutional Review Board of the National Medical Association Brandenburg (No. AS S19(bB)/2020, dated 4 June 2020). Written informed consent was obtained from each participant. Conclusions Molecular subtyping distinguishes patients with a high probability of response to neoadjuvant chemotherapy from those whose tumors are likely to be resistant. Targeting FGFR1 in less-differentiated bladder cancer subgroups may sensitize tumors for adapted treatments or subsequent chemotherapy.
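To make the quantitation and read-out described above concrete, here is a minimal sketch of the 40-DCT normalization and of a KRT20-high versus KRT20-low pCR comparison. It is illustrative only: the numeric values and the function names are invented for this example and are not the study's data or code.

```python
# Minimal sketch (not the study's code): 40-DCT normalization from triplicate Cq
# values, plus a chi-squared test on a hypothetical KRT20-high vs. KRT20-low table.
import numpy as np
from scipy.stats import chi2_contingency

def forty_minus_dct(cq_target_triplicate, cq_calm2_triplicate, total_cycles=40):
    """DCT = median Cq(target) - median Cq(CALM2); report total_cycles - DCT,
    so that higher values correspond to higher mRNA expression."""
    dct = np.median(cq_target_triplicate) - np.median(cq_calm2_triplicate)
    return total_cycles - dct

print(forty_minus_dct([28.1, 28.3, 28.2], [24.9, 25.0, 25.1]))  # ~36.8

# Hypothetical 2x2 contingency table: rows = KRT20 high/low, columns = pCR yes/no.
table = np.array([[8, 4],    # KRT20 high
                  [3, 9]])   # KRT20 low
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")
```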
7,477.4
2022-07-01T00:00:00.000
[ "Medicine", "Biology" ]
The Design of CNN Architectures for Optimal Six Basic Emotion Classification Using Multiple Physiological Signals This study aimed to design an optimal emotion recognition method using multiple physiological signal parameters acquired by bio-signal sensors for improving the accuracy of classifying individual emotional responses. Multiple physiological signals such as respiration (RSP) and heart rate variability (HRV) were acquired in an experiment from 53 participants when six basic emotion states were induced. Two RSP parameters were acquired from a chest-band respiration sensor, and five HRV parameters were acquired from a finger-clip blood volume pulse (BVP) sensor. A newly designed deep-learning model based on a convolutional neural network (CNN) was adopted for detecting the identification accuracy of individual emotions. Additionally, the signal combination of the acquired parameters was proposed to obtain high classification accuracy. Furthermore, a dominant factor influencing the accuracy was found by comparing the relativeness of the parameters, providing a basis for supporting the results of emotion classification. The users of this proposed model will soon be able to improve the emotion recognition model further based on CNN using multimodal physiological signals and their sensors. Introduction The negative stimuli that people experience in their daily lives vary widely, and the accumulation of these negative stimuli causes stress and mental illnesses such as depression. However, owing to the lack of awareness of the dangers of mental illness and the negative social perception, treatment behaviors are often passive. Additionally, efforts to alleviate negative stimuli are ongoing, but it is difficult to grasp the degree of stress of an individual, and, in many cases, emotional differences make assessment difficult. Therefore, differences in personal emotions are very important in relieving stress and treating mental illnesses. Representative techniques for recognizing emotions include image-based recognition, voice-based recognition, and physiological-signal-based recognition [1]. All three techniques show good emotion-recognition performance. However, image-based recognition is influenced by the background or brightness of the image, and recognition is impossible with images taken at some viewing angles. Voice-based emotion recognition identifies instantaneous emotion with high accuracy. However, it has the disadvantage of recognizing emotions only over short time periods [2]. Emotion recognition based on physiological signals is less influenced by the external environment than image-and voice-based recognition. Further, it is less sensitive to sociocultural differences among users. In addition, because physiological signals are controlled by the autonomic nervous system, they cannot be intentionally manipulated. The use of physiological signals is advantageous in that these signals are acquired in a natural emotional state that is learned socially, rather than artificially [3,4]. In other words, emotion recognition using physiological signals is a very powerful method for grasping human emotional states. Moreover, numerous studies have shown that physiological signals and human emotional states are closely related [5][6][7][8]. Based on this reference, the study was conducted using heart rate variability and respiration signal, representative responses of the autonomic nervous system. 
Emotion classification based on defined emotions and measured signals is performed in various ways, ranging from traditional statistical methods and machine learning (linear regression, logistic regression, hidden Markov models, naïve Bayes classification, support vector machines, and decision trees) to the latest technique, deep learning. In particular, the support vector machine (SVM) is a widely used method for emotion classification [9][10][11]. Torres-Valencia et al. [12] classified two-dimensional emotions by using an HMM, and studies using the C4.5 decision tree [13], K-nearest neighbor (KNN) [14,15], and Linear Discriminant Analysis (LDA) [16] have been reported. There have also been studies using deep-learning methods, such as convolutional neural networks (CNNs) [17,18], Deep Belief Networks (DBN) [19], and Sparse AE [20], or models integrating machine learning and deep learning [21,22]. In addition to the simple categorization of emotions, studies that apply classification models to various fields have been actively conducted [23][24][25][26]. Even the conventional classifiers used in the past are not inferior in performance to state-of-the-art classifiers, and the results may vary depending on the classifier parameters and the characteristics of the input data. Although many classification models have been introduced, this study used a CNN model to classify emotions. A CNN is a model well suited to processing and recognizing images. Thus, we exploited the advantages of CNNs and applied this structure to classify physiological signals. A CNN is a deep-learning method that does not need separate feature extraction because it learns features automatically, and it is more scalable. In particular, a CNN is easy to integrate with other deep-learning models: by adapting the learning procedure, it can easily be combined into hybrid or ensemble models with other deep-learning models, and an existing CNN can be re-trained for new tasks. As mentioned above, previous studies classified emotions by acquiring physiological signals and then classifying these signals in various ways. In other words, the research focus was on the processing of data and the classification of signals using classification models. In contrast, the present study proposes an optimal emotion classification model beyond simple emotion classification. For this purpose, we compared the classification accuracy of single-signal and multi-signal approaches, experimented with various combinations of input signals to classify emotions in various stages, and verified the dominance of parameters using principal component analysis. Furthermore, the emotion recognition methods using various deep-learning techniques proposed in this study are expected to serve as a reference for classifying emotions efficiently. The overall flow of this study is shown in Figure 1. Definition of Emotion and Study Using Physiological Signal In many studies related to emotions, human emotions are divided into arousal and valence [27] by using a two-dimensional emotion classification model based on the circumplex model of affect [28], and dominance is applied in two- and three-dimensionally extended emotional models. This study, however, was based on Ekman's six basic emotions, which are defined based on facial expressions that are commonly found across various countries and cultures and are associated with happiness, fear, surprise, anger, sadness, and disgust [29]. Emotions defined in this way can be measured using physiological signals. 
Physiological signals can be easily measured using non-invasive sensors and wearable devices, and various signals such as electrocardiogram (ECG), electromyogram (EMG), electroencephalogram (EEG), galvanic skin response (GSR), skin temperature (SKT) and respiration (RSP) are closely related to human emotions [20]. Different studies used different numbers of signals; Garcia et al. [30] used seven physiological signals to classify emotions, while Zheng et al. [31] and Rami et al [32] classified emotions with EEG signals alone. Many studies have been conducted to classify emotions by combining various types of signals or by obtaining two or more signals, the multimodal signal. Basically, multimodal signals include the use of signal components with plural sensory modalities [33]. There are various ways to apply the multimodal signal. Emotion recognition and analysis were performed using facial expressions and EEG signals by Huang et al. [34] and using facial expressions and voices by Andrea et al. [35]. There is no relationship between the number of signal types and the classification rate, and even when two experiments are performed under the same conditions, the results may vary depending on the selection of features in the analysis. Emotion Analysis Method To analyze emotions through respiration, the subject's respiration should be measured in real time. There are many ways to measure respiration, including the direct measurement of air flow during inhalation and exhalation, the use of a photoplethysmogram (PPG) sensor, and a method of measuring changes in the width of the thoracic cavity by using a respiratory band. The direct measurement of air flow allows the precise measurement of breathing volume, but it involves specific testing equipment and requires subjects to wear inconvenient airflow transducers. As an alternative to this problem, PPG sensors, which use peripheral circulation changes through respiration, may be used [36,37]. However, since the PPG sensor was originally developed for measuring heartbeat, it is necessary to extract the respiratory signal from the PPG signal. However, data loss or distortion may occur in the signal-extraction process, which may be somewhat less accurate than direct measurement of respiration. Therefore, in this study, we decided to measure the size change of the thoracic cavity by utilizing a wearable breathing band to compensate for the limit of secondary data that can be extracted from a PPG sensor. The respiration band measures the change in a magnetic-field sensor built into the band, and it converts the change in size of the chest cavity for each breath into an electrical signal to measure respiration. Unlike the PPG sensor, the measurement of respiration using respiration bands does not require additional signal processing and is simpler than airflow measurement. Therefore, in this study, respiration signals were measured using a respiration band, the respiration rate (RSP rate) was measured from the respiration signal (RSP value), and the measured variables were used as RSP parameters. Emotions can also be recognized using heartbeat signals in addition to respiratory signals. Heartbeat signals can be measured using various methods. Of these, a method of acquiring an electrocardiogram (ECG) directly through an attached electrode and a measurement method of using a PPG/blood volume pulse (BVP) sensor on a finger or an earlobe are the most commonly used. 
The direct method of acquiring an ECG utilizes the principle that the skin senses electrical impulses that occur each time the heart beats. The measurement using a PPG/BVP sensor measures the blood flow and extracts the heartbeat signal from optical electronics by absorbing the infrared light of the cell and blood vessels. The method of measuring and applying bio-signals in this manner has the advantages of simplicity and ease of operation even with the newest smartphones [38]. The heart rate (HR) is the rate of heartbeats, and it is measured as the number of beats of the heart per minute. In other words, HR is the number of peak-to-peak periods in the ECG per minute. One cycle of an ECG consists of QRS waves. The highest peak in one cycle is referred to as the R peak, and the peak-to-peak time interval of each electrocardiogram is called the RR or NN interval or inter-beat interval [39]. A continuous change in the inter-beat interval or RR interval is called heart rate variability (HRV). Because HRV reflects the ability of the heart to adapt to the environment, the analysis of HRV is useful for assessing autonomic imbalances by quantifying the state of the autonomic nervous system, which regulates cardiac activity [40]. Therefore, the activity of the autonomic nervous system activity, that is, sympathetic and parasympathetic nerve activity, can be measured through HRV analysis. Heart Rate Variability (HRV) Analysis The analysis of HRV can be divided into analysis in the time domain and analysis in the frequency domain. The simplest method of analyzing HRV is time-domain analysis, in which the time interval of the QRS wave or the instantaneous heart rate at a specific time is analyzed. Analysis in the frequency domain is used to analyze the power distribution of the function to frequency and to quantify the balance of the autonomic nervous system [41,42]. The parameters obtained through the analysis in each domain are called HRV parameters. In this study, a finger-clip BVP sensor was used to obtain HRV parameters in the time domain and frequency domain. The different variables that can be obtained from the time-domain analysis are the mean RR interval (RRI), standard deviation of normal to normal interval (SDNN), root-mean-square of successive differences (RMSSD) and pNN50(proportion of NN50), but we used the most basic HR and HRV amplitudes. In addition, different variables such as very low frequency (VLF), low frequency (LF), high frequency (HF), VLF/HF ratio, and LF/HF ratio can be extracted through analysis in the frequency domain. The present study used LF, which indicates the activity of the sympathetic nervous system, and HF, which indicates the activity of the parasympathetic nervous system, as parameters, and they were obtained through analysis in the frequency domain. In addition, the LF/HF ratio [43], which can estimate the ratio of sympathetic and parasympathetic activity, was used as a parameter. The detailed meaning of these parameters is described in Table 1. Emotion Classification Using Machine Learning and Deep Learning There have been many studies using various parameters of bio signals using sensors. Fujiwara et al. [45] proposed drowsiness detection and validation with HRV analysis and EEG-based signals. Szypulska et al. [46], similar to Fujiwara et al. [45], used HRV analysis to predict fatigue and sleep onset. In addition, research has been proposed to reconstruction PPG signals into ECG signals using MLP. 
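As a concrete illustration of the time- and frequency-domain HRV parameters described above, the sketch below derives the mean heart rate and the LF, HF and LF/HF measures from a list of RR intervals. It is a minimal example under common band definitions (LF 0.04-0.15 Hz, HF 0.15-0.40 Hz) and a simple resampling choice; it is not the commercial software pipeline used in the study, and the function name and sampling rate are assumptions.

```python
# Minimal HRV sketch (not the study's processing pipeline): HR from RR intervals,
# and LF/HF band powers from a resampled RR tachogram via Welch's method.
import numpy as np
from scipy.signal import welch

def hrv_parameters(rr_ms, fs_resample=4.0):
    rr_ms = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr_ms) / 1000.0               # beat times in seconds
    hr = 60_000.0 / rr_ms.mean()                # mean heart rate (beats/min)

    # Evenly resample the RR series so a power spectrum can be estimated.
    t_even = np.arange(t[0], t[-1], 1.0 / fs_resample)
    rr_even = np.interp(t_even, t, rr_ms)
    rr_even -= rr_even.mean()

    f, pxx = welch(rr_even, fs=fs_resample, nperseg=min(256, len(rr_even)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(pxx[lf_band], f[lf_band])     # sympathetic-dominated band
    hf = np.trapz(pxx[hf_band], f[hf_band])     # parasympathetic-dominated band
    return {"HR": hr, "LF": lf, "HF": hf, "LF/HF": lf / hf if hf > 0 else np.nan}

# Example: ~60 s of synthetic RR intervals around 800 ms with mild variability.
rng = np.random.default_rng(0)
rr = 800 + 30 * np.sin(2 * np.pi * 0.25 * np.arange(75)) + rng.normal(0, 10, 75)
print(hrv_parameters(rr))
```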
In this study, however, we wanted to classify emotions using the acquired parameters. Research on classifying human emotions using physiological signals has been performed with various methods and systems. Emotion classification starts with the definition of the emotions to be recognized, the measurement of signals, and the selection of classifiers. In the definition of emotions, the emotions to recognize are selected based on a discrete emotion model or by using a two- or three-dimensional emotion model. Verma et al. [15] and Wen et al. [47] classified human emotions into 13 and 5 emotions, respectively. However, Liu et al. [48] classified emotions by using a two-dimensional emotion model. Additionally, Valenza et al. [49] used two-dimensional emotion models, but they defined emotions in five stages. Theekshana et al. [50] classified discrete emotions using an ensemble model. Thus, even when a study is related to emotion classification, the contents of the research can change according to the emotions that are defined. In the present study, we classified emotions using a CNN. Many other studies have classified emotions in other ways. As mentioned in the Introduction, methods of classifying emotions range from conventional statistical analysis techniques to deep learning. Many studies used both statistical-analysis and deep-learning methods, or combined them to increase the accuracy of the model. Yin et al. [51] classified emotions using multiple SAEs on the DEAP dataset. Their results were compared with conventional classifiers and deep learning, and the performance of the constructed deep-learning model was evaluated. Zheng et al. [19] studied the classification performance of KNN, SVM, and GELM after integrating HMM and DBN. Cho et al. [52] measured human stress with a CNN using a respiration variability spectrogram (RVS) obtained from a thermal imaging camera. In addition to the machine learning and deep learning mentioned above, many studies have been conducted to analyze emotions using fuzzy theory [25,53,54]. Aside from that, deep learning is used in various other fields [55]. However, unlike previous studies, the present study proposes emotion classification in various stages using a CNN model. In Section 3, the experiments and data pre-processing are introduced. In Section 4, single-signal and multi-signal classification are compared, and a parameter combination procedure is performed between signals. Emotion classification in various stages is presented in Section 5. Section 6 discusses the results. Section 7 concludes the paper. Experiment As mentioned in the Introduction, experiments were conducted to classify six emotions and compare the classifiers. It was an open-label, single-arm experiment performed from November to December 2017 at the Seoul Metropolitan Government-Seoul National University (SMG-SNU) Boramae Medical Center, Seoul, Korea. Ethical approval was provided by the Institutional Review Board (IRB) at the site, and the study adhered to the principles outlined in the Declaration of Helsinki and good clinical practice guidelines. The trial is registered with the Boramae IRB (the Ethics Committee of the SMG-SNU Boramae Medical Center). The experiment was conducted in a controlled environment in which a monitor capable of displaying video clips eliciting six distinct emotions and sensors capable of acquiring physiological signals were installed. 
To stimulate the participants, video clips expressing happiness, fear, surprise, anger, sadness, and disgust were played for one minute in that order. In order of emotion, the video clips used in the experiment were part of About Time (2013) Before the start of the experiment, participants were sufficiently briefed about the experiment and its side effects. When seated in front of the monitor, the participant put a BVP sensor on the finger and a respiration band on the abdomen. When the experiment began and it was determined that physiological stability was ensured, we measured the physiological signal in the neutral state for 1 min to be used as a different emotion for Control group. Then, participants relaxed for 2 min. After the measurement of the physiological signal of the neutral state and relax, the participant watched the happiness video for 1 min and maintained a relaxed state for 2 min thereafter. When the second relaxed state ended, the participant watched the fear video and the remaining videos in the order indicated in Figure 2. In addition, markers were displayed on the signal at the start of each video and at the start of the neutral state to facilitate data processing. The participants wore the BVP sensor as a finger clip. The BVP finger-clip sensor has built-in optical electronics that measures blood flow through the absorption of infrared light in tissue and blood vessels. Every heartbeat causes blood to flow into the arteries and veins. The waveform amplitude of the BVP signal is related to blood flow as well as to the vasodilation and vasoconstriction of blood vessels. The participants wore the respiration band on the chest or abdomen, and the magnetic-field sensor of the band measured the stretching of the band when the participant breathed. The measured band-expansion value is related to the respiration signal. Because this non-invasive and simple measurement method was used to extract physiological signals by using the above-mentioned equipment, the burden on the participants was minimized. In the actual experiment, the sensor mentioned above and shown in Figure 3d was used. In the BVP signal, various parameters can be obtained to confirm the response of the autonomic nervous system. In this experiment, HRV parameters such as heart rate, HRV amplitude, LF, HF, and LF/HF were extracted. From the RSP signal, the RSP value and RSP rate, which are the RSP parameters, can be extracted. The HRV and RSP parameters in the raw signal were automatically calculated, processed, and stored using Biotrace + software, which did not require manual intervention. Data Processing All signals obtained from the experiment can be recorded and confirmed through Biotrace+ software. Based on this software, we selected the data suitable for the study and split them based on the experiment time. From the 53 participants' signals, 49 signals (30 male and 19 female, 29 ± 10 years old) were used for the study. The remaining four signals were discarded due to signal instability or other reasons. The signal to be analyzed was selected from among the HRV parameters (heart rate, HRV amplitude, LF, HF, and LF/HF) and RSP parameters (RSP value and RSP rate), which are stored as time-series data. Subsequently, we split the data according to the markers, as recorded in the experiment, and extracted the data corresponding to the six emotions for 1 min. Data processing was completed by labeling each of the six extracted signals with their corresponding emotion. 
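The marker-based splitting and labeling just described can be sketched as follows. This is an illustrative reconstruction, not the Biotrace+ export format: the column names, the time indexing and the marker representation are assumptions introduced here.

```python
# Illustrative sketch of the splitting/labeling step (assumed data layout, not
# the actual Biotrace+ export): cut 1-minute windows at each condition marker.
import pandas as pd

CONDITIONS = ["neutral", "happiness", "fear", "surprise", "anger", "sadness", "disgust"]
PARAMS = ["HR", "HRV_amp", "LF", "HF", "LF_HF", "RSP_value", "RSP_rate"]

def split_by_markers(signals: pd.DataFrame, markers: dict, window_s: float = 60.0):
    """signals: DataFrame of the seven parameters indexed by time in seconds.
    markers: condition -> start time in seconds (as recorded during the experiment).
    Returns a long-format DataFrame with one labeled 1-minute segment per condition
    (neutral plus the six emotions)."""
    segments = []
    for condition in CONDITIONS:
        start = markers[condition]
        seg = signals.loc[start:start + window_s, PARAMS].copy()
        seg["label"] = condition
        segments.append(seg)
    return pd.concat(segments)

# Usage (with a hypothetical recording `df` indexed by time in seconds):
# labeled = split_by_markers(df, {"neutral": 60, "happiness": 240, "fear": 420, ...})
```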
The overall procedure of data processing is represented in Figure 4. The data obtained in this manner were then pre-processed for use in the classification model presented in Section 4. The number and type of data obtained through the data pre-processing procedure are summarized in Table 2. Although the total number of data points is specified here, the data used in the actual analysis may vary slightly depending on the individual characteristics of the experiment participants. The data used for analysis depend on the combination of features. The combinations of features can also be found in Section 4. In addition, information about the number of features can be found in Table 3. Method The pre-processed data obtained from the data-processing step were applied to the classification of emotions. Although emotions can be classified in various ways, in this study we compared the results of single-signal and multi-signal classification and classified emotions in various stages. For this study, we constructed a CNN model and applied the pre-processed data to it. The overall procedure for comparing single-signal and multi-signal classification, starting from data acquisition by each sensor, is shown in Figure 5. The obtained HRV- (HR, HRV amplitude, LF, HF, LF/HF ratio; HRV) and RSP-parameter (RSP value, RSP rate; RSP) data were applied to the CNN as parameters to classify the emotions. Each CNN model used emotion- (happiness, fear, surprise, anger, sadness, and disgust) and neutral-labeled data as input data to classify a given emotion. Parameter Combination A phased combination of the parameters was carried out to find the optimal classification model. The combinations of all physiological signal parameters obtained through the experiments are depicted in Figure 6. Combination steps (1) and (2) correspond to single-signal classification (RSP parameters or HRV parameters). Combination steps (3) and (4) correspond to multi-signal classification (combinations of the RSP parameters and each domain of the HRV parameters). Combination step (5) corresponds to multi-signal classification (RSP and HRV parameters). In particular, Combination step (3) was applied to the classification model through the fusion of the RSP parameters and the frequency-domain HRV parameters. Similarly, Combination step (4) was applied through the fusion of the RSP parameters and the time-domain HRV parameters. These step-by-step combinations of parameters were intended to identify the optimal emotion classification model. Convolution Neural Network Model A CNN is generally used for the classification and prediction of images. However, by modifying the structure of the CNN, we used a method of row-by-row reading of the input data to classify physiological signals. The CNN was implemented with the Python frameworks Keras and TensorFlow, and it used 70% of all data as a training set and the other 30% as a test set. Figure 6 shows how the CNN model reads the seven-parameter input data for emotion classification. Each row was used as input data, and each column was reduced as it passed through the convolutional layers. In particular, Figure 6 illustrates how the shape of the input data changes from (7, 1, 1) to (6, 1, 1). If the number of input parameters were changed to 2 or 5, the columns in Figure 7 would change accordingly. However, when Combination steps (1)-(5) were classified using the CNN, the input data varied according to the number of input parameters. That is, when classifying the RSP parameters (Combination step (1)), the model used the input data shape (2, 1, 1), while Combination step (2), with the HRV parameters, used (5, 1, 1). 
Thus, for RSP and HRV together (Combination step (5)), (7, 1, 1) was used. Overall, the models were constructed to have similar tendencies within the larger framework. The input data shapes are listed in Table 3, and the constructed CNN model is shown in Figure 8. The overall CNN models constructed in this study were designed by reducing the size of the shape by one at each step (n, n-1, n-2, n-3, etc.). Therefore, in Model 1, the input shape was (2, 1, 1) because two input parameters were applied, and the shape decreased as learning progressed. Models 2-4 were constructed to have the same tendency. Furthermore, in Models 2-4, we reduced the number of convolution filters from 50 to 25 and then increased it back to 50. In all models, batch normalization and activation functions were applied between two convolution layers. Finally, max pooling was applied. We also initialized the weights with the "He kernel initializer" function in all convolution layers. The He kernel initializer was likewise used for the dense layers, together with ReLU and Softmax activation functions, and the dropout rate was set to 0.5. The performance of a CNN model depends on the hyperparameters as well as on the shape of the constructed model. To classify the six emotions with the constructed models, different learning rates and batch sizes were applied for each model, using the hyperparameters listed in Table 4. The contents described in Sections 3 and 4 cover the procedures of acquiring raw data, pre-processing data, selecting parameters, combining signals, and building a classification model. This is the core of this study and the essential part of the process of finding a novel and optimized emotion classification model. Figure 9 shows the pipeline of the overall process. Results Before presenting the accuracy of the classification results, we describe the method used for calculating accuracy in this study. There are two major cases in classification using deep learning: condition positive and condition negative. In general, positive means identified and negative means rejected. Each condition, positive and negative, is divided into true and false: true means correct and false means incorrect. Therefore, the results of classification using deep learning fall into four types: true positive (TP), false positive (FP), true negative (TN), and false negative (FN). With these four cases, the formula for accuracy is Accuracy = (TP + TN) / (TP + TN + FP + FN), as shown in Equation (1). We classified the parameters into the six emotions based on the CNN models described in the preceding subsection and evaluated the single-signal and multi-signal classification performance. The classification results are summarized in Tables 5 and 6. The classification accuracy was 63.18% and 77.42% when emotions were classified using a single signal with CNN Models 1 and 2, respectively. This result suggests that single-signal classification achieves reasonable performance, with the HRV parameters giving better results than the RSP parameters alone. The classification results were thus identified, but they were insufficient to serve as a model for general emotion classification. Therefore, instead of classifying emotions using a single signal, we verified the classification results of emotions using multiple signals. Table 6 shows the results of classifying multi-signal combinations. 
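The paper does not list the exact layer stack, so the following Keras sketch is only one plausible reading of the description above: a (7, 1, 1) input, convolutions with 50, 25 and 50 filters whose (2, 1) kernels shrink the parameter axis by one per layer, batch normalization and ReLU between convolutions, max pooling, He initialization, dropout of 0.5 and a softmax output. The kernel sizes, the dense-layer width and the optimizer are assumptions, not values reported by the authors.

```python
# A sketch consistent with the description above, not the authors' exact model.
# Built for the seven-parameter input of Combination step (5); smaller inputs
# (e.g., two RSP parameters) would need fewer convolutions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(n_params: int = 7, n_classes: int = 6) -> tf.keras.Model:
    m = models.Sequential([
        layers.Input(shape=(n_params, 1, 1)),
        layers.Conv2D(50, kernel_size=(2, 1), kernel_initializer="he_normal"),
        layers.BatchNormalization(),
        layers.Activation("relu"),
        layers.Conv2D(25, kernel_size=(2, 1), kernel_initializer="he_normal"),
        layers.BatchNormalization(),
        layers.Activation("relu"),
        layers.Conv2D(50, kernel_size=(2, 1), kernel_initializer="he_normal"),
        layers.MaxPooling2D(pool_size=(2, 1)),
        layers.Flatten(),
        layers.Dense(32, activation="relu", kernel_initializer="he_normal"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax", kernel_initializer="he_normal"),
    ])
    m.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return m

# Usage: all seven parameters, six basic emotions; the reported "accuracy" metric
# corresponds to (TP + TN) / (TP + TN + FP + FN) from Equation (1).
model = build_cnn(n_params=7, n_classes=6)
model.summary()
```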
The classification accuracy was 78.31% and 76.02% when parameters which combined RSP and some HRV were classified using the multi-signal with CNN Models 2 and 3, respectively. This result suggests that both classifiers can classify emotions with proper performance when using the parameters which combine RSP and some HRV parameters. Unfortunately, while CNN Models 2 and 3 were used to classify emotions properly, similar to some of the results in Table 5, it was rather insufficient to actually classify general emotions. Finally, the accuracy was 94.02% when emotions were classified using all the multi-signal parameters with CNN Model 4. This result suggests that CNN Model 4 can classify emotions with superior performance when using the RSP and HRV parameters. These five cases reveal that multi-signal classification is better than single-signal classification. However, not all multi-signal models have outstanding results, and multi-signal consisting of only some domains of HRV did not yield high classification results. Therefore, this study confirms that the accuracy of multi-signal emotion classification is higher than that of single-signal emotion classification. When classifying emotions using multi-signal, all domains of the HRV parameter must be used. Discussion Various physiological signals can be detected in the human body, and each physiological signal can describe various states of the body. In addition, since physiological signals cannot be changed at will, it is a good indicator for analyzing a person's emotional state. Much research has been conducted on emotional analysis [9,20,[58][59][60], and this research was based on a two-dimensional emotion model or a few discrete emotion models. In addition, emotion classification was performed with respiration signals and ECG signals [30,31,61], and multi-signal classification using signals such as EEG, GSR, EMG, and SKT, which are less relevant to respiration, has been used [10,62]. However, the dimension emotion model actually has a difference in feeling between the six basic emotions in daily life. Therefore, in this study, we studied emotion classification based on the six emotions with a focus on optimal emotion classification model, unlike previous studies that classified emotions simply. The optimal emotion classification model can reduce unnecessary procedures in analysis and experimentation. Therefore, we compared the accuracy of multi-signal and single-signal classification, performed classification using combined physiological signals into various stage, and then judged the accuracy of emotion classification using some parts of cardiac and respiratory signals. A total of 53 subjects participated in the experiment. They watched video clips corresponding to the six basic emotions while wearing a finger-clip BVP sensor and chest-band respiration band. The BVP and RSP signals of the subjects were measured while they watched the video clips. Of the raw signals, 49 signals were used in the study, removing inappropriate data. The signal selecting and splitting proceeded according to the research purpose. Subsequently, labeling and data pre-processing were performed. The labeled data obtained in the pre-processing step were applied to the classifier. RSP (single signal) and HRV (single signal) were applied to CNN to identify single-signal classification performance. For this purpose, two CNN models (CNN Models 1 and 2) were created for each parameter. 
Through the results, when single-signal was used, it was confirmed that proper emotion classification performance can be obtained. However, it cannot be used for an independent emotion classification model because of the low performance. To solve this problem, we decided to classify emotions through the combination of different domains of HRV or all of HRV and RSP. Combining different types of signals can improve classification performance. The signals were combined in three ways: the combination between the RSP signal and frequency-domain HRV signal; the combination between the RSP signal and time-domain HRV signal; and the combination between the RSP signal and HRV signal. These were named Combination steps (3)-(5), respectively. The combined signal was classified with CNN in the same manner as in the previous cases. As a result of classification using Combination steps (3) and (4) signals with CNN, the accuracy was confirmed to be similar for the two classifiers. In addition, Combination step (5) results show an accuracy greater than 94%; thus, it was confirmed that Combination step (5) yielded higher performance with all classifiers. Multi-signal classification showed the better performance than single-signal classification. Therefore, it can be said that the use of many types of input signals can enhance the emotion classification performance. However, when using multi-signal, all domains of HRV must be applied as input data. However, in the combination of RSP and some of HRV signal, the results of emotion classification are similar. In other words, the accuracies of Combination steps (3) and (4) are similar. We identified the dominance of the entire HRV parameters to verify why both results are similar. Therefore, we performed principal component analysis (PCA) for all the HRV parameters. Table 7 shows the PCA results for all the HRV parameters. PC1 and PC2 can explain 65.3% of the total data. The cumulative proportions of the PCA are also listed in Table 7. Based on the results shown in Table 7, a correlation circle was drawn, as shown in Figure 10, to confirm the dominance of the entire parameters. The length of the arrow corresponding to each variable indicates the contribution of that variable and the relative magnitude of its variance. In other words, the contribution of each parameter to the PC is shown in Figure 10. As shown in Figure 10, the most significant influence on PC1 is the HRV frequency domain parameter; similarly, the HRV time domain parameter is the most significant influence on PC2. When looking at the length of the arrow, the length of the arrow between the two domains is not much different. Based on this, for each domain of the HRV parameter, it can be verified that there is no dominant domain on either side. Therefore, we can see why the classification results of Combination steps (3) and (4) are similar. Based on the above studies, we can determine that the optimal model for classifying emotions using multi-signal is CNN Model 4, and it is difficult to classify general emotions with single-signal or some combination of data. Therefore, when classifying emotions, both RSP and HRV parameters should be used. By modifying the structure of the CNN model, we developed a general signal classifier. The data were acquired through six emotion-based video viewing experiments and preprocessed. Four different CNN models were developed for preprocessed data. Based on this, the classification accuracy of singleand multi-signal was compared. 
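A minimal sketch of the PCA dominance check described above: standardize the five HRV parameters, fit a two-component PCA, and inspect the explained variance (as in Table 7) and the variable loadings that would be drawn in the correlation circle of Figure 10. The column names and the random data are placeholders, not the study's measurements.

```python
# Illustrative PCA of the five HRV parameters (placeholder data, not the study's):
# explained variance mirrors Table 7; loadings correspond to the correlation circle.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

hrv_cols = ["HR", "HRV_amp", "LF", "HF", "LF_HF"]
rng = np.random.default_rng(42)
df = pd.DataFrame(rng.normal(size=(300, len(hrv_cols))), columns=hrv_cols)

X = StandardScaler().fit_transform(df)
pca = PCA(n_components=2).fit(X)

print("explained variance ratio:", pca.explained_variance_ratio_)
# Loadings (variable coordinates on PC1/PC2), i.e. the arrows of the correlation circle.
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print(pd.DataFrame(loadings, index=hrv_cols, columns=["PC1", "PC2"]))
```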
In addition, emotions were classified at various stages by signal combinations, and the dominance of HRV parameters was verified to confirm that it is difficult to classify emotions with multi-signal combined with several domains. Finally, an optimal model for classifying emotions with high accuracy is presented. Conclusions The response of the human autonomic nervous system is a good indicator of emotions because it cannot be manipulated at will. In this study, we studied the optimal emotion classification models. Unlike previous studies using many physiological signals for emotion classification, the present study attempted to obtain an optimal emotion classification model. Respiration and cardiac signals corresponding to six basic emotions were extracted through experiments; the signals were combined variously to find the optimal classification model; and several CNN models were built to classify emotions. PCA was also used to determine the cause of similar emotional classification results. Although multi-signal emotion classification shows very good results, it takes a relatively long time because of the many parameters and data used, and the calculation requires high computing power. Therefore, further research should be conducted to improve the performance of emotion classification with fewer parameters and advanced CNN models. Finally, signals were measured for one minute in the experiment. If the constructed model were commercialized, this may be a rather long time. Since classification results may vary depending on the measurement time of the signal, further research will be required to shorten the time of signal extraction and improve the model accuracy. Although there are limitations in the present research, it is possible to develop a more efficient emotion estimation technology from the results of this study.
7,781.2
2020-02-01T00:00:00.000
[ "Computer Science" ]
Solving and classifying the solutions of the Yang-Baxter equation through a differential approach. Two-state systems The formal derivatives of the Yang-Baxter equation with respect to its spectral parameters, evaluated at some fixed point of these parameters, provide us with two systems of differential equations. The derivatives of the $R$ matrix elements, however, can be regarded as independent variables and eliminated from the systems, after which two systems of polynomial equations are obtained in their place. In general, these polynomial systems have a non-zero Hilbert dimension, which means that not all elements of the $R$ matrix can be fixed through them. Nonetheless, the remaining unknowns can be found by solving a small number of simple differential equations that arise as consistency conditions of the method. The branches of the solutions can also be easily analyzed by this method, which ensures the uniqueness and generality of the solutions. In this work we considered the Yang-Baxter equation for two-state systems, up to the eight-vertex model. This differential approach allowed us to solve the Yang-Baxter equation in a systematic way and also to completely classify its regular solutions. The Yang-Baxter equation The Yang-Baxter equation (ybe) is one of the most important equations of contemporary mathematical physics. It originally emerged in two different contexts of theoretical physics: in quantum field theory, the ybe appeared as a sufficient condition for the many-body scattering amplitudes to factorize into the product of pairwise scattering amplitudes [1,2,3]; in statistical mechanics it represented a sufficient condition for the transfer matrix of a given statistical model to commute for different values of the spectral parameters [4,5]. Since the pioneering works in quantum integrable systems -- see [6,7,8] for a historical background --, the ybe has become a cornerstone in several fields of physics and mathematics: it is best known for its fundamental role in the quantum inverse scattering method and in the algebraic Bethe Ansatz [9,10,11], although it also revealed itself to be important in the formulation of Hopf algebras and quantum groups [12,13,14,15,16], in knot theory [17], in quantum computation [18], in the AdS-CFT correspondence [19,20] and, more recently, in gauge theory [21,22,23]. The ybe can be seen as a matrix relation defined in End(V ⊗ V ⊗ V), where V is an n-dimensional complex vector space. In the most general case, it reads: $R_{12}(x, y)R_{13}(x, z)R_{23}(y, z) = R_{23}(y, z)R_{13}(x, z)R_{12}(x, y)$, (1) where the arguments x, y and z, called spectral parameters, take values in C. The solution of the ybe is an $R$ matrix defined in End(V ⊗ V). The indexed matrices $R_{ij}$ appearing in (1) are defined in End(V ⊗ V ⊗ V) through the formulas $R_{12} = R ⊗ I$, $R_{23} = I ⊗ R$, and $R_{13} = P_{23} R_{12} P_{23} = P_{12} R_{23} P_{12}$, where I ∈ End(V) is the identity matrix, P ∈ End(V ⊗ V) is the permutator matrix (defined by the relation P(A ⊗ B)P = B ⊗ A for all A, B ∈ End(V)) and $P_{12} = P ⊗ I$, $P_{23} = I ⊗ P$. For physical reasons, a particular form of the ybe is usually considered: this consists in assuming that the R matrix depends only on the differences of the spectral parameters x, y, and z. In this case, the ybe assumes the simpler form: $R_{12}(u)R_{13}(u + v)R_{23}(v) = R_{23}(v)R_{13}(u + v)R_{12}(u)$, (3) where the new spectral parameters are related to the older ones by u = x − y and v = y − z. In this work, we shall consider only the "additive" ybe (3). For each solution of the ybe, a given integrable system can be associated. 
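Since the additive form (3) is the equation solved throughout the paper, a quick numerical sanity check can be useful. The sketch below verifies (3) for the well-known rational (Yang) solution R(u) = uI + P on two-state spaces; it illustrates the structure of the equation itself, not the specific solutions derived later in the paper.

```python
# Numerical check of the additive YBE, R12(u) R13(u+v) R23(v) = R23(v) R13(u+v) R12(u),
# for the rational (Yang) solution R(u) = u*I + P on V ⊗ V with dim V = 2.
import numpy as np

I2 = np.eye(2)
# Permutator P on V ⊗ V: P(a ⊗ b) = b ⊗ a
P = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        P[2 * i + j, 2 * j + i] = 1.0

def R(u):
    # Rational solution: regular, since R(0) = P
    return u * np.eye(4) + P

def embed(M, which):
    # Embed a matrix acting on V ⊗ V into End(V ⊗ V ⊗ V), following R13 = P23 R12 P23
    if which == "12":
        return np.kron(M, I2)
    if which == "23":
        return np.kron(I2, M)
    P23 = np.kron(I2, P)
    return P23 @ np.kron(M, I2) @ P23  # "13"

u, v = 0.37, -1.42
lhs = embed(R(u), "12") @ embed(R(u + v), "13") @ embed(R(v), "23")
rhs = embed(R(v), "23") @ embed(R(u + v), "13") @ embed(R(u), "12")
print(np.allclose(lhs, rhs))  # True
```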
In fact, in statistical mechanics, the R matrix represents the Boltzmann weights of a given statistical model while, in quantum field theory, the R matrix is associated with factorizable scattering amplitudes between relativistic particles. From the ybe we can prove that systems described by an R matrix possess infinitely many conserved quantities in involution -the Hamiltonian being one of them -, the reason why they are called integrable [24]. We say that a given solution R(u) of the ybe (3) is regular if R(0) = P . Regular solutions of the ybe have several important properties. We list below some of them [8]: Two solutions that differ from each other only by these transformations are said to be equivalent. Regular solutions of the ybe can also have several properties or symmetries; the most common are the following [8]: -Unitarity (U) symmetry: R(u)P R(−u)P = ρ(u)I; -Permutation (P) symmetry: P R(u)P = R(u); -Transposition (T) symmetry: R(u) t = R(u); -Permutation-Transposition (PT) symmetry: P R(u)P = R(u) t ; -Crossing (C) symmetry: R(u) t1 R(−u − 2ζ) t1 = σ(u)I. Here, t denotes transposition in End(V ⊗ V ), t 1 and t 2 denote the partial transposition in the first and second vector spaces, respectively, ζ is called the crossing parameter of the R matrix, I is the identity matrix and, finally, ρ(u) and σ(u) are two complex functions specific to each model. We highlight that we shall not impose any of these symmetries in our search for solutions of the ybe (3). Nevertheless, we indicate in Appendix B which symmetries each solution presents. The differential Yang-Baxter equations The ybe corresponds to a system of non-linear functional equations. Several particular solutions of the ybe are known [6,7,8]. The first solutions were found by a direct inspection of the functional equations, which are in fact very simple because the R matrix is assumed to have many symmetries. Nevertheless, there are other more advanced methods for solving the ybe: we can cite, for instance, the Baxterization of braid relations [25], the use of Lie algebras and superalgebras [26,27,28], the construction of Hopf algebras and quantum groups [13,14,15], and also techniques relying on algebraic geometry [29]. The methods mentioned above usually require that the R matrix presents one or more symmetries from the very start. From a mathematical point of view, would be desirable to develop a method that requires in principle as few as possible symmetries and, at the same time, that is powerful enough in order to find and classify the solutions of the ybe. This paper is concerned with the development and extensive use of such a method, which is based on a differential approach. To be more precise, this method consists mainly of the following: if we take the formal derivatives of the ybe (3) with respect to the spectral parameters u and v and then evaluate the derivatives at some fixed point of those variables (say at zero), then we shall get two systems of ordinary non-linear differential equations for the elements of the R matrix. The derivatives of the R matrix elements, however, can be regarded as independent variables, so that, after they are eliminated, two systems of polynomial equations are obtained in place. Thus, these polynomial systems can be analyzed -for instance, through techniques of the computational algebraic geometry [30] and eventually completely solved. 
It happens, however, that these polynomial systems usually have a positive Hilbert dimension, which means that the systems are satisfied even when some of the R matrix elements are still arbitrary. The remaining unknowns, nonetheless, can be found by solving a small number of differential equations that arise from the expressions for the derivatives we had eliminated before. These auxiliary differential equations, therefore, can be thought of as consistency conditions of the method. For example, if we take the formal derivative of (3) with respect to v and then evaluate the result at the point v = 0, then we shall get the equation $E := R_{12}(u)D_{13}(u)P_{23} + R_{12}(u)R_{13}(u)H_{23} = H_{23}R_{13}(u)R_{12}(u) + P_{23}D_{13}(u)R_{12}(u)$, (4) while, on the other hand, if we take the derivative of (3) with respect to u and then evaluate the result at u = 0, we shall get (relabeling the remaining spectral parameter v as u), $F := H_{12}R_{13}(u)R_{23}(u) + P_{12}D_{13}(u)R_{23}(u) = R_{23}(u)D_{13}(u)P_{12} + R_{23}(u)R_{13}(u)H_{12}$. (5) In (4) and (5), we introduced the quantities $D(u) := dR(u)/du$ and $H := D(0)$, where P is the permutator matrix, so that R(0) = P and H = D(0). We highlight that $H = H_L P$, where $H_L$ is nothing but the local Hamiltonian associated with the model -- see, for instance, [6,8]. The idea of transforming a functional equation into a differential one goes back to the works of Niels Henrik Abel, who solved several functional equations in this way [31]. Abel's method presents many advantages when compared with other methods of solving functional equations. For instance, it is a general method that can be applied to a huge class of functional equations; it establishes the generality and uniqueness of the solutions (which would be difficult, if not impossible, to establish in other ways) by reducing the problem to the theory of differential equations, and so on -- see [32] for more. Notice moreover that although Abel's method requires the solutions to be differentiable (there can be non-differentiable solutions of some functional equations), this restriction is not a problem when dealing with the ybe, as its solutions are always assumed to be differentiable because of the connection between the R matrix and the corresponding local Hamiltonian. Concerning the theory of integrable systems, the differential method is perhaps best known in connection with the boundary ybe [33,34], which in its usual (Sklyanin) form reads $R_{12}(u-v)K_{1}(u)R_{21}(u+v)K_{2}(v) = K_{2}(v)R_{12}(u+v)K_{1}(u)R_{21}(u-v)$. (7) This equation -- which is also known as the reflection equation -- is a generalization of the ybe for non-periodic boundary conditions. The fundamental unknown of the reflection equation is the K matrix -- also known as the reflection matrix --, while the R matrix is assumed to be given. Notice that the K matrices figuring in (7) always depend only on a single variable, which is why the differential method transforms (7) into a system of linear algebraic equations instead of a non-linear differential system. This particularity makes the differential method as simple as it is powerful for studying the reflection equation (7). In fact, this method was extensively used by Lima-Santos and collaborators in a series of papers, where solutions of (7) associated with non-exceptional Lie algebras and superalgebras were found and classified [35,36,37,38,39,40,41,42]. Differential methods were also employed to solve the periodic ybe (3). In fact, the first solutions of the ybe were found precisely in this way [6]. 
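For completeness, the two differential systems quoted above follow from a one-line computation. The derivation below only uses the regularity condition R(0) = P and the definitions D = R', H = D(0); it is included as a reading aid and is not quoted from the original paper.

```latex
% Differentiating the additive YBE
%   R_{12}(u)\,R_{13}(u+v)\,R_{23}(v) = R_{23}(v)\,R_{13}(u+v)\,R_{12}(u)
% with respect to v and setting v = 0 (so that R_{23}(0) = P_{23}) gives
\[
R_{12}(u)D_{13}(u)P_{23} + R_{12}(u)R_{13}(u)H_{23}
  = H_{23}R_{13}(u)R_{12}(u) + P_{23}D_{13}(u)R_{12}(u),
\]
% which is Eq. (4). Differentiating instead with respect to u and setting u = 0
% (so that R_{12}(0) = P_{12}) gives, after relabeling v -> u,
\[
H_{12}R_{13}(u)R_{23}(u) + P_{12}D_{13}(u)R_{23}(u)
  = R_{23}(u)D_{13}(u)P_{12} + R_{23}(u)R_{13}(u)H_{12},
\]
% which is Eq. (5).
```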
What distinguishes the present approach from the previous ones is that here we regard the derivatives of the R matrix elements as independent variables, so that the system of equations (4) and (5) can be solved in an algebraic way; it is only at the end of the calculations that a small number of differential equations arise as consistency conditions. This approach allowed us to make a systematic study of the ybe; it provided a simpler and more comprehensive analysis of the possible branches of the solutions, which enabled us to make a complete classification of the regular R matrices for two-state quantum systems, up to the eight-vertex model (see Footnote 2). Our results agree with the early classifications proposed by Sogo & al. in [44] and by Khachatryan and Sedrakyan in [45]; in fact, many of the solutions derived here are equivalent to the ones presented in [44] and [45], although a few of them seem to be new, which is the case, for instance, of some solutions for six-vertex models with unusual shapes, among others. The eight-vertex model is the most general two-state vertex model satisfying the Z2-symmetry [43,46], which means that the non-null elements r^{j1,j2}_{i1,i2}(u) of the R matrix must satisfy the relation i1 − j1 + i2 − j2 ≡ 0 mod 2, where the indices i1, i2, j1 and j2 can assume only the values 0 or 1. Therefore, let us write the most general R matrix associated with the eight-vertex model as in (8), whose non-null elements are a1 and a2 (at the corners of the main diagonal), b1 and b2 (at the centre of the main diagonal), c1 and c2 (at the centre of the anti-diagonal) and d1 and d2 (at the corners of the anti-diagonal); see Footnote 3. Besides, let us denote the corresponding non-null elements of the matrices D(u) = R′(u) and H = D(0) respectively by a′_i(u), b′_i(u), c′_i(u), d′_i(u) and by α_i = a′_i(0), β_i = b′_i(0), γ_i = c′_i(0), δ_i = d′_i(0), with i = 1, 2. In this work, we shall look for regular solutions of the ybe (3), which means that R(0) = P, where P is the permutator matrix, explicitly given in this case by the 4 × 4 matrix with unit entries at the positions (1,1), (2,3), (3,2) and (4,4) and zeros elsewhere. In the next section we shall present a detailed analysis of the differential ybe's (4) and (5), from which we derive the possible solutions of the ybe (3) for two-state systems, up to the eight-vertex model. Other important properties of these solutions will be presented in the Appendixes. Solutions for the four-vertex model The simplest regular solution of the ybe occurs when the R matrix has the same shape as the permutator matrix, that is, when b1 = b2 = d1 = d2 = 0 in (8), and we are left with a four-vertex model. In this case the R matrix contains only the elements a1, a2, c1 and c2, and the differential ybe's imply that these elements are pure exponentials, a1 = e^{α1 u}, a2 = e^{α2 u}, c1 = e^{γ1 u} and c2 = e^{γ2 u}, so that we obtain the general R matrix (15) of the four-vertex model. The solution depends on four free-parameters (namely, α1, α2, γ1 and γ2). Notice, however, that one of these parameters can be removed due to the multiplicative property of the ybe, which means that the four-vertex R matrix above presents only three bare free-parameters (see Footnote 4). Footnote 2: A priori, the most general R matrix associated with a two-state system would be a four-by-four matrix with no zero elements; in this case, the ybe (3) would represent a set of 64 functional equations for 16 unknowns. The Hamiltonian of such a sixteen-vertex model would describe a completely anisotropic Heisenberg chain in the presence of external fields and with arbitrary ionized configurations [43]. It is not known, however, if the sixteen-vertex model is integrable: its R matrix, if it exists, would have no symmetry at all, not even the unitarity one. For this reason (and because the problem of solving the ybe for the sixteen-vertex model is insurmountable at present), we shall restrict ourselves to the eight-vertex model, whose Hamiltonian is related to a Heisenberg chain in the presence of external fields but with ionized configurations occurring only in pairs [43]. Footnote 3: For the sake of clarity, we shall often hide the dependence of the R matrix elements on the spectral parameter u.
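The Z2-symmetry condition quoted above can be enumerated directly; the following snippet lists the positions of the non-null entries of a 4 × 4 R matrix allowed by i1 − j1 + i2 − j2 ≡ 0 (mod 2), assuming the usual lexicographic ordering of the two-state basis, and recovers the eight-vertex shape.

```python
import itertools

surviving = []
for i1, i2, j1, j2 in itertools.product((0, 1), repeat=4):
    if (i1 - j1 + i2 - j2) % 2 == 0:
        # Assumption: lexicographic ordering of the basis, row = 2*i1 + i2, col = 2*j1 + j2.
        surviving.append((2 * i1 + i2, 2 * j1 + j2))

print(sorted(surviving))
# [(0, 0), (0, 3), (1, 1), (1, 2), (2, 1), (2, 2), (3, 0), (3, 3)]: eight non-null entries,
# i.e. the main diagonal plus the anti-diagonal, which is the eight-vertex shape.
```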
Solutions for the usual six-vertex model Let us consider now the usual six-vertex model. In this case we require just that d 1 = d 2 = 0 in (8) and, consequently, the most general six-vertex R matrix becomes: We can verify that in this case the systems E and F -respectively, the equations (4) and (5) -become different each from the other, so that we get two complementary systems at our disposal. In fact, several simple relations immediately follow from these equations: for instance, subtracting E 2,5 from F 2,5 we are led to the relation, 4 In general, one or more free-parameters can be removed from the solutions thanks to the equivalence properties of regular R matrices (see Section 1). For example, by multiplying the R matrix through a given regular function, redefining the spectral parameter, performing a similarity transformation or by grouping together some of the parameters and renaming them. The remaining parameters that cannot be removed anymore from the R matrix through these equivalence transformations will be called bare free-parameters of the solutions. On the other hand, from E 3,3 or F 3,3 it follows as well that, This means that we can write: Now we can solve the equations E 2,5 , E 4,7 , E 4,6 and E 3,5 for a ′ 1 , a ′ 2 , b ′ 1 and b ′ 2 , respectively, which provide us with, After these derivatives are eliminated, we can verify that the equations E 2,3 and E 6,7 become, respectively, On the other hand, the equation F 2,3 gives us a relation between a 2 and a 1 : Then, using (17) and (22), it follows from (21a) and (21b), assuming that b 1 = 0 and b 2 = 0, the following constraint: This means that the solutions of the ybe for the six-vertex model admit two main branches. 4.1 The first case a 2 = a 1 Let us first assume that α 2 = α 1 which, according to (22), implies a 2 = a 1 . Then we look for a solution with the following properties: Notice that, because b 2 is fixed through (17) and c 1 , c 2 are given by (19), it only remains to find a 1 and b 1 . From (20) it follows that, This system of linear differential equations can be easily solved with the initial conditions a 1 (0) = 1 and b 2 (0) = 0. The solution is: where, At this point, all equations of the systems E and F are satisfied. Moreover, we indeed have b ′ , so that the solution is consistent with the differential method. Therefore we get the following solution: which depends on five free-parameters, namely, α 1 , β 1 , β 2 , γ 1 and γ 2 . This solution can also be written in a more convenient form by introducing the parameter η defined by the relation so that we get 5 sinh (ωη) = ǫω/ √ β 1 β 2 , where ǫ = sign (2α 1 − γ 1 − γ 2 ). Thus, in terms of this new parameter, the solution becomes: In order to count the number of bare free-parameters of this solution we can proceed as follows: first, we can simplify the R matrix by dividing all its elements by e 1 2 (γ1+γ2)u (thanks to the multiplicative property of the ybe this gives another equivalent solution). After that, we may notice that only the ratio between β 1 and β 2 appears in the solution, so that we can set β 1 /β 2 → β. In the same fashion, η always appears multiplied by ω so that we can let ωη → λ. Finally, we can redefine the spectral parameter u through ωu → u, after which γ 1 and γ 2 will appear only in the combination (γ 1 − γ 2 ) /ω, which we may call γ. This means that the solution has actually only three bare free-parameters, namely, β, λ and γ. It follows therefore that the R matrix above is equivalent to the solution named 6V(I) by Sogo & al. 
in [44] and that one given by equation (2.15) in the work of Khachatryan & Sedrakyan in [45]. Now, let us suppose that Taking (22) into account, this means that we are looking for a solution with the properties: In this case, the derivatives (20) become, which can be easily solved as we impose the initial conditions a 1 (0) = a 2 (0) = 1 and b 1 (0) = b 2 (0) = 0. The solution is: where, here, 5 In the whole paper, we shall use the notation ǫ = ±1. We can verify that all the equations are satisfied when the constraint α 1 + α 2 = γ 1 + γ 2 is taken into account. Therefore, together with the expressions for c 1 = e γ1u and c 2 = e γ2u given by (19) we got a solution with five free-parameters. This solution can also be written in a simpler form after we make the transformation so that sinh (ωη) = ǫω/ √ β 1 β 2 , where ǫ = sign (α 1 − α 2 ). In fact, using (31), the solution above becomes, (37) After simplifying this solution through the equivalence transformations of the ybe (see Section 1), we can verify that it presents three bare free-parameters. This R matrix is therefore equivalent to the solution named 6V(II) in [44] and that given by equation (2.15) in [45]. Solutions for unusual six-vertex models In the previous section, the six-vertex R matrix (16) was obtained from the most general eight-vertex R matrix (8) by zeroing the elements d 1 and d 2 . This, however, is not the only possibility for constructing six-vertex R matrices that are compatible with the regularity condition. Indeed, we might have zeroed any two of the elements b 1 , b 2 , d 1 and d 2 instead. For instance, if we vanish the elements b 1 and b 2 then we would be led to a six-vertex model whose R matrix has the following unusual shape: In the other cases, we would get the following unusual six-vertex models: In this section we shall show that the ybe (3) admits solutions for such unusual six-vertex R matrices. Let us first consider the R matrix given by (38). Our starting point here is again the analysis of equations E 3,3 and F 3,3 . As in the previous cases, these equations fix the ratio between c 2 and c 1 through the simple relation Here, however, equations E 1,1 and F 1,1 are not null, and difference of them provide us with a nice relation between d 2 and d 1 : Returning to equations E 3,3 and F 3,3 , we realize that c 2 must equal c 1 . Thanks to the multiplicative property of the ybe (see Section 1), this means that we can write 6 : Thus, we can go on by eliminating the derivatives a ′ 1 , a ′ 2 , d ′ 1 and d ′ 2 through the equations E 2,5 , E 4,7 , E 1,4 and E 2,2 , respectively, from which we get the relations: Now, from E 1,7 , it follows that after which E 2,8 becomes a quadratic equation for a 2 1 : Assuming δ 1 positive, the only solution of this equation that satisfies the initial conditions a 1 (0) = a 2 (0) = 1 and d 1 (0) = 0 is The other solutions differ from the above one by negative signs in front of the square roots, but they do not satisfy the required initial conditions for δ 1 positive 7 . After a 1 is fixed by (46), we can verify that the remaining equations imply the constraint α 2 2 = α 2 1 , so that the solutions branches into two ways. Let us consider first the branch in which α 2 = α 1 . In this case, from (44) and (46) we get that Therefore, the solution is characterized by 6 Notice that this is the same as setting γ 2 = γ 1 = 0. If we wish, we can recover the parameter γ 1 by renormalyzing the solution (e.g., by multiplying the R matrix by e γ 1 u ). 
With that the ybe will still be satisfied, although differential ybe's will be satisfied only if we redefine the other parameters of the solution so that the renormalized R matrix satisfies the consistency condition R ′ (0) = H. The number of bare free-parameters of the solutions are not altered by this choice, of course. 7 In general, for some complexes values of δ 1 , the initial conditions can also be satisfied when the signs in front the square roots are negative. A detailed study of these cases shows, however, that this leads to the same solution as the one presented in the text. This can be explained from the fact that δ 1 is a free-parameter of the solution, so that the negative signs can be absorbed into its definition. For this reason, in similar situations, we shall assume that the free-parameters of the solutions are positive, although the corresponding solution may be valid as well for negative, or even complex, values of these parameters. As d 2 is already fixed by (41), it remains to find d 1 . This follows from the third equation in (43), which provides the following non-linear differential equation: This is a Riccati differential equation with constant coefficients (see [47]). To solve it, let us rewrite it in the form (we have written y in place of d 1 for convenience): where A = δ 2 , B = 2α 1 , C = δ 1 and ξ 1 , ξ 2 are the roots of the quadratic equation Ay 2 + By + C = 0. Let us first assume that ξ 2 = ξ 1 (the degenerated case ξ 2 = ξ 1 will be presented in Appendix A). In this case, the differential equation (50) can be reduced to an integral, which, by its turn, can be easily solved through partial fractions method: where c is the constant of integration. Inverting this relation, we get the desired general solution of (50): Thereby, the corresponding solution of (49) can be found by imposing the initial condition d 1 (0) = y(0) = 0, which implies the value c = log (ξ 2 /ξ 1 ) / [A(ξ 1 − ξ 2 )] for the constant of integration. Thus, after we replace back the values of A, ξ 1 and ξ 2 , we shall get that, from what follows the desired R matrix: which depends on three free-parameters (namely, α 1 , δ 1 and δ 2 ). Introducing the parameter η through the relation coth (ωη) = α 1 /ω so that sinh (ωη) = ω/ √ δ 1 δ 2 , the solution above can be rewritten as from which we see that the number of bare free-parameters is two (ω can be removed by redefining u and η). Therefore, it can be verified that this solution is equivalent to that given by equation (5.25) in [45]. The subcase a 2 = a 1 Now, let us consider the second possibility in which α 2 = −α 1 . From (44) and (46) it follows that we have, in this case, Therefore, this branch is characterized by As in the previous case, d 1 can be found through the differential equation provided by the third equation in (43). Here, however, we get the following non-linear differential equation 8 : where In the what follows, we shall show that the general solution of (58) is given in terms of Jacobian elliptic functions. To see why, notice first that (58) can be reduced to the following integral: where we have written y in place of d 1 for convenience and ξ 2 1 and ξ 2 2 are the two roots of the quadratic equation Ay 2 + By 2 + C = 0 with A, B and C given by (59). Now, assuming ξ 2 = ξ 1 (the degenerated case ξ 2 = ξ 1 will be presented in Appendix A) and making the change of variable y → ξ 1 sin φ, it follows that (60) can be rewritten as: where k = ξ 1 /ξ 2 . 
The integral above is the definition of the trigonometric elliptic integral of first kind of modulus k (see [47]), denoted here by F (φ|k), so that we get, where c is the constant of integration. Now, the inverse of F (φ|k) is the Jacobi amplitude function, φ(u) = am(u|k), from which we get the general solution for the function φ: 8 We highlight that the non-linear differential equation (dy/dx) 2 = Ay 4 +By 2 +C appears in several fields of mathematics and physics. In fact, in the last decade it becomes a cornerstone in some expansion methods applied to non-linear wave equations, providing in this way new elliptic solutions for several non-linear partial differential equations. For instance, this expansion method provided a countless number of solutions for the Klein-Gordon-Schrödinger [48], Kronig-Penny [49], Boussinesq [50], Korteweg-de-Vries [51], Burgers [52], Ostrovsky [53] equations and their generalizations -only to cite a few. It is interesting to notice that all the elliptic solutions of the ybe derived here follows from the solutions of this differential equation. The solution for d 1 follows from the relation d 1 = ξ 1 sin φ, after we use the identity sin(am(u, k)) = sn(u, k) and impose the correct initial conditions. In fact, the condition d 1 (0) = 0 implies c = 0, and from A = δ 2 2 , we get that 9 , where ǫ = cosign (ξ 2 δ 2 ) . Therefore, we have determined all elements of the R matrix. After using the following well-known identities for the elliptic functions (see [47]), to simplify the elements of the R matrix, we shall get the solution, where This solution can be simplified further by introducing a parameter η through the relation iǫ δ 2 /δ 1 sn (ωη|k) = 1/ξ 1 . Thus, we can verify from the relations (59) and the identities (65) that cn (ωη|k) dn (ωη|k) = 2α 1 ξ 1 /δ 1 . Then, from the addition formula of the elliptic sinus (see [47]), it follows that (66) can be rewritten as where Λ (u, η) = 1 − k 2 sn 2 (ωu|k) sn 2 (ωη|k). Alternatively, from the identity (see [47]), we can also rewrite (66) as follows 10 : This solution is characterized by two bare free-parameters (e.g., δ 1 /δ 2 and k, as ωη is a function of k). We did not find such R matrices in the literature; we mention, however, that an elliptic R matrix with this same shape was already discussed in [45], which corresponds to a specific limit of two asymmetric eight-vertex R matrices (which are equivalent to the solutions discussed in Section 6.2 of this paper), after several elliptic transformations are performed. We were not able to verify if the R matrix (70) can be mapped to that one reported in [45] by performing some combinations of elliptic transformations and identities (e.g., Jacobi's imaginary modulus transformation, the doubleargument identities or others, see [47]). Nonetheless, we did check that these solutions have essentially the same trigonometric limits -see Appendix A. The remaining cases For the remaining cases of the unusual six-vertex R matrices given by (39), the functional equations are quite simple, reason for which we shall report only the final results here. They are: and 6 Solutions for the eight-vertex model The most general R matrix considered in this work belongs to the eight-vertex model and it has the following shape: (73) 10 Notice that (65) and (67) imply the remarkable identity: . This provides another way of written the R matrix (66). 
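Two of the auxiliary differential equations met in this section lend themselves to independent verification. First, for a constant-coefficient Riccati equation of the type (50), y′ = A(y − ξ1)(y − ξ2) with y(0) = 0, the closed form below is our own reconstruction of the partial-fraction integration described in the text (the paper's displayed formula is not reproduced here); SymPy confirms that it satisfies the equation and the initial condition.

```python
import sympy as sp

u = sp.symbols('u')
A, xi1, xi2 = sp.symbols('A xi1 xi2', nonzero=True)   # xi1 != xi2: the non-degenerate case

s = A * (xi1 - xi2)
y = xi1 * xi2 * (1 - sp.exp(s * u)) / (xi2 - xi1 * sp.exp(s * u))   # reconstructed solution with y(0) = 0

residual = sp.diff(y, u) - A * (y - xi1) * (y - xi2)
print("Riccati equation satisfied:", sp.simplify(residual).equals(0))
print("initial condition y(0) = 0:", sp.simplify(y.subs(u, 0)) == 0)
```

Second, for a quartic equation of the type (58), (dy/du)² = A y⁴ + B y² + C, writing the right-hand side as A(y² − ξ1²)(y² − ξ2²), the candidate y(u) = ξ1 sn(ξ2√A u | k) with modulus k = ξ1/ξ2 can be checked numerically (the explicit frequency ξ2√A is our own normalization, reconstructed from the relations quoted in the text); note that scipy.special.ellipj takes the parameter m = k² rather than the modulus k.

```python
import numpy as np
from scipy.special import ellipj

A, xi1, xi2 = 0.8, 0.6, 1.3          # arbitrary test values with |k| = |xi1/xi2| < 1
k = xi1 / xi2
B = -A * (xi1**2 + xi2**2)           # so that A*y**4 + B*y**2 + C = A*(y**2 - xi1**2)*(y**2 - xi2**2)
C = A * xi1**2 * xi2**2
omega = xi2 * np.sqrt(A)             # frequency multiplying the argument of sn

u = np.linspace(-1.5, 1.5, 201)
sn, cn, dn, _ = ellipj(omega * u, k**2)   # scipy uses the parameter m = k**2

y = xi1 * sn
dy = xi1 * omega * cn * dn           # d/du sn(omega*u | k) = omega * cn * dn

print("max deviation:", np.max(np.abs(dy**2 - (A * y**4 + B * y**2 + C))))  # machine-precision level
```

The same quartic equation reappears in the eight-vertex analysis below (equation (85)).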
Here again, we can verify that the equations E 1,1 , F 1,1 , E 3,3 and F 3,3 imply the relations Therefore, without loss, we can assume henceforward that The difference of E 2,5 with F 2,5 also provides the relation: Now we can eliminate the derivatives a ′ 1 , a ′ 2 , b ′ 1 and b ′ 2 from the equations E 2,5 , E 4,7 , E 4,6 and E 3,5 , respectively, which become, after simplification, Besides, equation F 2,3 also provides, in the same fashion as in the usual six-vertex model. At this point, if we take the difference between E 1,4 and F 2,8 and assume that β 1 = 0, then we shall get the which evince how the solution branches. 6.1 The first case a 2 = a 1 The first branch occurs when α 2 = α 1 which, according to (78), implies as well that Now, multiplying E 2,3 by β 2 and E 6,7 by β 1 and taking the difference of them, we are led to the equation, for β 1 , δ 1 and δ 2 different from zero. It seems, therefore, that we have other three possible branches for the solutions. Let us consider first the possibility β 2 = β 1 , which implies b 2 = b 1 . Therefore, this solution is characterized by the ratios: Now, from equation E 1,4 we can eliminate d ′ 1 : Besides, from E 1,6 and E 1,7 we can fix a 1 and d 1 : where we assumed β 1 positive (see Footnote 7) and a 2 1 = b 2 1 , since otherwise the solution would not be regular. At this point, we can verify that all the equations of the systems E and F are satisfied. It remains, however, to find b 1 . This can be achieved from the third equation in (77), which provides the following non-linear differential equation: with The differential equation (85) has the same form as (58) has same form as that for d 1 . Therefore, the desired solution for b 1 is where ξ 2 1 and ξ 2 2 are the roots of the quadratic equation Ax 2 + Bx + C = 0, with A, B and C given by (86) and we assumed ξ 2 2 = ξ 2 1 (for the degenerated case ξ 2 2 = ξ 2 1 , see Appendix A). From the identities (65), we can simplify all the elements of the R matrix, from which we obtain the following solution: where ω = ξ 2 2 δ 1 δ 2 = β 2 1 /ξ 2 1 and k = ξ 1 /ξ 2 is the modulus of the elliptic functions. Notice that ω, k, ξ 1 and ξ 2 are functions of the parameters α 1 , β 1 , δ 1 and δ 2 . This solution (88) can be simplified further if we introduce a new parameter η through the relation sn (ωη|k) = 1/ξ 1 , so that cn (ωη|k) dn (ωη|k) = ǫα 1 /β 1 . Then, from the addition formula of the elliptic sinus (67) and using the relations (86), it follows that (88) can be rewritten as This solutions corresponds to a generalization of the eight-vertex R matrix found originally by Baxter in [4,5] and by Zamolodchikov in [54]. It contains three bare free-parameters (e.g., k, ωη and δ 1 /δ 2 ) and it is equivalent to the solution named 8V(I) in [44] and that given by equation (3.7) in [45]. Let us now to consider the possibility β 2 = −β 1 in (81), which implies b 2 = −b 1 . This means that we are looking for a solution with the following properties: As in the previous case, we can eliminate d ′ 1 from the equation E 1,4 : Then, multiplying E 2,8 by b 1 and E 3,8 by a 1 and taking the difference of them, we are led to the equation The only possibility for a 1 , b 1 and d 1 different from zero is α 1 = 0, which, of course, does not means that a 1 = 0. Now, multiplying E 1,6 by b 1 and F 1,7 by a 1 and taking again the difference, we get the equation Clearly a 2 if the solution is regular, hence, the second factor in the equation above should vanish. 
Solving (93) for d 1 we get, At this point, all equations of the systems E and F are satisfied. It remains however to find a 1 and b 1 . These remained unknowns can be found through the respective differential equations in (77), which become, Here we remark that a 2 1 −b 2 1 −1 = 0 implies d 1 = 0, which would lead us to a particular solution of the usual six-vertex model. Therefore, let us assume that a 2 1 − b 2 1 − 1 = 0. In this case, the system of differential equations above, with the initial conditions a 1 (0) = 1 and b 1 (0) = 0, has the solution: Now, taking the derivative of b 1 and b 2 at u = 0 we get that b ′ 1 (0) = ǫβ 1 and b ′ 2 (0) = ǫβ 2 . This means that we must set ǫ = 1 for consistency with the differential method 11 . Therefore, after simplify the previous expressions, we find the solution we were looking for: This R matrix depends on three free-parameters (β 1 , δ 1 and δ 2 ) but one of them can be removed by redefining u, so that the number of bare free-parameters is two (say, β 1 / √ δ 1 δ 2 and δ 1 /δ 2 ). It is equivalent to the solution named 8V(III) in [44] and that given by equation (3.11) in [45]. Let us back now to equation (79) and assume α 2 = α 1 . Thus, we are looking for a solution in which Before solving (79), let us eliminate d ′ 1 from E 1,4 : Then we can solve (79), say for a 1 , which provides Besides, after d ′ 1 is eliminated, we can verify that F 1,7 reduces to It seems therefore that we have two additional branches to consider, depending on whether β 2 = β 1 or β 2 = −β 1 . The second possibility, however, implies α 2 = α 1 , which lead us to the previous case discussed above (in fact, assuming β 2 = −β 1 we can verify that the difference between E 1,6 and E 8,3 gives the equation (α 1 − α 2 ) b 1 d 1 = 0). Therefore, let us consider the case β 2 = β 1 , that is, We continue in this way by solving E 1,6 for d 1 : where we assumed β 1 positive (see Footnote 7). Now, E 2,3 reduces to The first possibility α 2 = α 1 lead us again to the previous considered case in which a 2 = a 1 . Therefore, let us consider that Now all the functional equations are satisfied. It remains, however, to found b 1 . As in the previous cases, b 1 can be determined through the differential equation which is provided by the third equation in (77). In this case, however, Therefore, the desired solution of the differential equation above satisfying the initial value b 1 (0) = 0 is, where ξ 2 1 and ξ 2 2 are the roots of the quadratic equation Ax 2 + Bx + C = 0 with A, B and C given by (107) and we have assumed ξ 2 1 = ξ 2 2 (the case ξ 2 1 = ξ 2 2 will be discussed in Appendix A). Therefore, from the equations (98), (100) and (103), we can write down all the elements of the R matrix, which are: ) and we have used the identities (65) to simplify the square root in (103). This solution can still be simplified by introducing the parameter η through the relation sn(ωη|k) = 1/ξ 1 and noticing that, remarkably, the expressions for ξ 2 1 and ξ 2 2 are quite simple in this case. In fact, we can set either or, conversely, The first case implies the identity cn (ωη|k) = ǫα 1 /β 1 where ǫ = cosign (α 1 /β 1 ), which leads to the following R matrix: where ω = β 2 ). The second case implies dn (ωη|k) = ǫα 1 /β 1 where ǫ = cosign (α 1 /β 1 ), from which we get the solution where, now, ω = √ δ 1 δ 2 and k = (β 2 To count the number of bare free-parameters of the solutions (112) and (113), we should use the relations (107) and (110) or (111). 
In this way, we can verify that β 1 can be removed and the solution will depend only on δ 1 /δ 2 , k and ωη, so that these solutions are characterized by three bare free-parameters. The R matrices (112) and (113) are related to the solutions originally found by Felderhof in [55] and by Bazhanov & Stroganov in [56] and they are also equivalent to the solutions reported in [44] (solution named 8V(II) in Table I) and in [45] (eqs. (3.23) and (3.19), respectively). We remark that the R matrices (112) and (113) are related to each other by the inversion of the modulus k -see Footnote 9. Solutions for the five-vertex models In the previous sections, we obtained the general regular solutions of the ybe for the case where the R matrix had an even number of non-null elements. There exist, however, regular R matrices with an odd number of non-null elements, which correspond to the five and seven-vertex models. In this section we shall concern with the solutions for the five-vertex models. The solutions for the seven-vertex models will be treated in the next section. Five-vertex models can be obtained in four different ways. Two types of five-vertex models can be obtained by zeroing one of the elements b 1 or b 2 in the six-vertex R matrix (16): These five-vertex models are related to the so-called Totally Asymmetric Simple Exclusion Process (tasep) -see [57,58]. Other two types of five-vertex models can be obtained as well by zeroing the elements d 1 or d 2 in the unusual six-vertex R matrix (38): Because the main steps to solve the ybe for the five-vertex models are similar to the previous cases, in the what follows we shall only comment on the possible branches and present the final results. This is the case corresponding to the R matrices given in (114). The solutions branch into two classes regarding on whether α 2 = α 1 or γ 1 + γ 2 = α 1 + α 2 . In the first case where α 2 = α 1 , we obtain the solutions, If we set γ 2 = γ 1 in the solutions above and make the change of variable u → log (u) /(α 1 − γ 1 ) then we shall obtain the same R matrices presented by Motegi & Sakai in [58], which are related to the tasep models. For the second case where γ 2 = α 1 + α 2 − γ 1 , we get the solutions, These solutions present four free-parameters (α 1 , γ 1 , γ 2 and β 1 or β 1 ); however, one of them can be removed by renormalization and another one by redefining the spectral parameter. This means that the solutions above contain two bare free-parameters only. Finally, we remark that these R matrices can also be obtained from the six-vertex R matrices (30) and (37), respectively, by taking the limits β 1 → 0 or β 2 → 0 (it is necessary to eliminate η before taking the limit to avoid discontinuities). Solutions for the seven-vertex models Finally, let us consider the case in which the R matrix has seven non-null elements. In principle, a seven-vertex model can be obtained by zeroing one of the elements b 1 , b 2 , d 1 or d 2 in the eight-vertex R matrix (73). This would provide four possible initial shapes for the seven-vertex R matrices. It happens, however, that β 1 = 0 implies β 2 = 0 and vice-versa whenever d 1 and d 2 are both different from zero, which is a consequence of the constraint β 2 2 = β 2 1 remaining from the eight-vertex model case. 
Therefore, we must regard both b 1 and b 2 different from zero here, which means that there are only two possibilities for the initial shapes of the seven-vertex R matrices, namely, The ybe for these seven-vertex models can be solved in the same fashion as the usual six-vertex models. In fact, after the elements a 1 , a 2 , b 1 , b 2 , c 1 and c 2 are found by the same equations as before, other simple equations fix d 1 or d 2 and their derivatives, while the remaining equations provide some constraints and determine the branches of the solutions. Relation (23) still holds true here, which means that we have two main branches depending on whether we set α 2 = α 1 or α 1 + α 2 = γ 1 + γ 2 . In the first case, we get a solution in which a 2 = a 1 ; in the second case, we have a 2 = a 1 . We shall present these solutions in the what follows. The first case a 2 = a 1 In this case, we have α 2 = α 1 , which implies a 2 = a 1 . The solution can be found by following the same steps presented Section 4.1. Some other simple equations determine d 1 (or d 2 ). However, differently from what happens in the sixvertex case, some of the remaining equations implies the constraint β 2 2 = β 2 1 , which means that we have two subcases to work on. The subcase Now, let us consider the case α 2 = α 1 and β 2 = −β 1 . Here γ 1 and γ 2 remain arbitrary and we are led to the following solutions: and The R matrices (125) and (126) depend on four free-parameters, namely, β 1 , γ 1 , γ 2 and δ 1 or δ 2 . However, β 1 can be removed by redefining the spectral parameter and, if we divide everything by e γ1u , then we can set γ =(γ 2 − γ 1 ) /β 1 as a bare free-parameter, so that the solutions have actually only two bare free-parameters. These solutions are, therefore, equivalent to the seven-vertex R matrices presented as solution 7V(III) in [44] and that given by equation (5.36) in [45]. We remark as well that the limits β 1 → 0 or β 2 → 0 of (125) and (126) reproduces the five-vertex R matrices given in (118). The second case a 2 = a 1 Now, let us consider the case in which α 1 + α 2 = γ 1 + γ 2 . The solution can be found by following the same steps presented in Section 4.2, among with other simple equations that determine d 1 (or d 2 ). After all elements are eliminated from the systems of equations, we shall come across with following constraint: We have therefore three cases to consider. In the first case we get a solution in which b 2 = b 1 ; in the other two cases we get solutions in which b 2 = b 1 (we kindly thank the anonymous referees of this paper for drawing our attention for this possibility). Conclusions and generalizations In this work we developed a differential method for solving the ybe and to classify its solutions. This method allowed a systematic analysis of the functional equations arising from the ybe for two-state systems: it revealed in a simple way how the solutions branch and allowed a complete classification of their regular R matrices. In total we found thirty-one families of solutions that are associated with the four, five, six, seven and eight-vertex models -see Table 1. In the Appendices, we also report interesting reduced solutions, which are obtained from the general ones by fixing some of the free-parameters of the R matrices. In this way, trigonometric solutions are obtained from the elliptic R matrices as well as rational solutions are derived from the trigonometric ones. 
The symmetries of the solutions, their geometric invariants and manifolds, the corresponding Hamiltonians and the respective classical limits (when they exist) are also discussed in the Appendices. This work can be generalized in several ways. The most obvious generalization is the classification of the R matrices associated with three-state systems, which includes important models as the fifteen-vertex R matrix of Table 1 Classification of the solutions of the ybe for two-state systems according to the ratios of the R matrix elements. We also indicate whether or not the respective solution is of the free-fermion type (i.e., if the quantity Φ = a 1 a 2 + b 1 b 2 − c 1 c 2 − d 1 d 2 is zero or not), and we give as well the number of bare free-parameters of the solutions. In total we found thirty-one families of solutions. Cherednik and the nineteen-vertex R matrices of Zamolodchikov-Fateev and Izergin-Korepin. Such classification is already in preparation and it will be communicated in the future. Other generalizations may include the computation of the reflection matrices (solutions of the boundary ybe) associated with the R matrices presented here, as well as the study of the statistical mechanics of the respective integrable models. It would be interesting, for instance, to present the Bethe Ansatz of these integrable models. Finally, we believe that the differential method can also be useful in the study of the non-additive ybe, the tetrahedral equation and also to find and classify the solutions of the classical ybe without make reference to the quantum ybe -this analysis could provide classical r matrices that fall outside the Belavin-Drinfel'd classification given in [60]. Acknowledgements The author kindly thanks Professor A. Lima-Santos for his comments and also the referees for their valuable remarks and suggestions. This work was fomented by Coordination for the Improvement of Higher Level Personnel (CAPES). A Reduced Solutions The general solutions presented in this work contain several free-parameters. When giving particular values to these parameters, particular solutions are obtained. For instance, if we set ω = 1 and γ 2 = γ 1 = 0 in the six-vertex R matrices (30) and (37), respectively, then we shall obtain the well-known simplest trigonometric R matrices of the six-vertex model [46], namely, In a similar way, the well-known R matrices of the eight-vertex model, for instance Baxter's R matrix [4,5,46], can be obtained from (89) after we set ω = 1 and δ 1 = δ 2 , so that we get ξ 1 ξ 2 = β 1 /δ 1 with k = ξ 1 /ξ 2 . Another important example was suggested by one of the referees of this paper. It corresponds to a reduced solution obtained from the eight-vertex R matrices (112) and (113) by fixing the value of η according to the expression ωη = iF √ 1 − k 2 , where F (k) denotes the complete elliptic integral of first kind whose modulus is k. In this case, we get that sn (ωη|k) → ∞, while cn (ωη|k) /sn (ωη|k) → −i and dn (ωη|k) /sn (ωη|k) → −ik. Noticing further that in this case we get β 1 → 0 but β 1 sn (ωη|k) → (ǫ/k) √ δ 1 δ 2 for (112) and β 1 sn (ωη|k) → ǫ √ δ 1 δ 2 for (113), we can verify that this value for ωη leads, respectively, to the following unusual six-vertex R matrices: and This limit was already discussed in [45] and it is related to an unusual elliptic six-vertex R matrix found by the authors of that work. The R matrices above can be compared with the elliptic R matrix (70), which has the same shape as them (see Section 5). 
Other elliptic reduced solutions can be obtained by giving special values for the elliptic modulus k -or, which is the same, by considering solutions of the differential equation (dy/dx) 2 = Ay 4 + By 2 + C for particular values of the coefficients A, B and C. Regarding on these possibilities, we mention the work [53] in which the authors had presented a table with fifty-two particular solutions of this differential equation -each one of them would give place to a particular elliptic R matrix. The most interesting way of deriving reduced solutions from the general ones is, however, to take special limits for the elliptic modulus k, so that the elliptic functions degenerate into trigonometric ones. Indeed, trigonometric R matrices can be derived from the elliptic ones by taking one of the following well-known limits of the elliptic functions -see [47]: (care should be taken in evaluating these these limits, however, because the modulus k of the elliptic functions appearing in the R matrices generally depends on complicated expressions of the solutions parameters). In the same fashion, rational R matrices can be obtained from the trigonometric ones by taking some limits, usually letting ω → 0, which degenerate the trigonometric functions into rational ones. These degenerated R matrices will be reported in the what follows (many of these degenerated solutions were already reported in [45]). A.1 From elliptic R matrices to trigonometric ones Let us begin our analysis with the elliptic R matrix of the unusual six-vertex model given by (70). Here the limit k → 0 is achieved by setting either δ 1 = 0 or δ 2 = 0. In any case, we get that η → ∞ and ω → 2ǫiα 1 where ǫ = sign(α 1 ); whence we obtain the following reduced solutions: and The limit k → 1, on the other hand, can be achieved by making either α 1 = 0 or α 1 = ǫi √ δ 1 δ 2 . In the first case we get the R matrix, while, in the second case, we get, We remark that the limits of (134) and (135) for k → 0 (which requires δ 1 = 0 or δ 2 = 0 in the first case and α 1 = 0 in the second) correspond to the same R matrices as given by (137) and (138), respectively, while their limits for k → 1 (which in any case is achieved as iα 1 = √ δ 1 δ 2 ) are both the same and they are given by (139) with a 1 and a 2 interchanged (this swapping between a 1 and a 2 can be explained by the symmetry of the solutions (134) and (135) regarding the inversion of the elliptic modulus k -see also Footnote 9). We can conclude, therefore, that all the three elliptic unusual six-vertex R matrices (70), (134) and (135) have essentially the same trigonometric limits. Now, let us analyze the trigonometric reductions of the eight-vertex elliptic R matrices. For the R matrix (89) in which a 2 = a 1 , the limit k → 0 can be achieved by letting either δ 1 → 0 or δ 2 → 0. However, we can verify that these limits lead to the seven-vertex R matrices (123) and (124), respectively (with iω in place of ω), so that anything new is obtained here. The limit k → 1, on the other hand, can be achieved as α 1 → ǫ β 1 − √ δ 1 δ 2 and it provides the following trigonometric eight-vertex R matrix: where ω = β 1 (β 1 − ǫα 1 ) = β 1 √ δ 1 δ 2 and tanh (ωη) = ǫω/β 1 so that sech (ωη) = ǫα 1 /β 1 . 
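The degenerations of the Jacobi elliptic functions used throughout this appendix can also be checked numerically; the snippet below confirms the standard limits sn, cn, dn → sin, cos, 1 as k → 0 and sn, cn, dn → tanh, sech, sech as k → 1 (again, scipy's ellipj takes m = k²).

```python
import numpy as np
from scipy.special import ellipj

u = np.linspace(-2.0, 2.0, 101)

sn0, cn0, dn0, _ = ellipj(u, 0.0)    # k = 0
print("k -> 0:", np.allclose(sn0, np.sin(u)), np.allclose(cn0, np.cos(u)), np.allclose(dn0, 1.0))

sn1, cn1, dn1, _ = ellipj(u, 1.0)    # k = 1
sech = 1.0 / np.cosh(u)
print("k -> 1:", np.allclose(sn1, np.tanh(u)), np.allclose(cn1, sech), np.allclose(dn1, sech))
```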
Besides, for the R matrices given by (112) and (113), in which a 2 = a 1 , we have the following: for k → 0, the limit of (112) if found as we make either δ 1 = 0 or δ 2 = 0; however, we can verify that the two seven-vertex R matrices obtained in this way are equivalent to the R matrices (128) and (129), after we replace ω by iω and u by ǫu, so nothing new is obtained here again. The limit of (113) for k → 0, on the other hand, is achieved when β 1 = ǫα 1 and it leads to the following trigonometric eight-vertex R matrix: where ω = √ δ 1 δ 2 and sin (ωη) = ω/β 1 . Finally, it remains to consider the limit k → 1 of the R matrices (112) and (113). Here, in both cases the limit k → 1 is achieved by imposing the additional constraint δ 1 δ 2 = β 2 1 − α 2 1 and it leads to the same trigonometric R matrix, namely, where ǫ = sign(α 1 ), ω = √ δ 1 δ 2 = β 2 1 − α 2 1 and tanh (ωη) = ω/β 1 . A.2 From trigonometric R matrices to rational ones Rational R matrices can be derived from the trigonometric ones from special limits of its free-parameters, usually through the limit ω → 0. As we shall see, all trigonometric R matrices have a non-trivial rational limit with the only exception being the four-vertex R matrix (15), whose rational limit is R(u) = P . Let us begin our analysis with the usual six-vertex R matrices (30) and (37). Setting γ 2 = γ 1 = 0 to eliminate the exponentials and then taking limit ω → 0, which implies α 1 = √ β 1 β 2 , we shall get respectively the following rational R matrices: Now, let us take the unusual six-vertex R matrix (54). Here the limit ω → 0 is achieved by imposing the additional constraint α 1 = ǫ √ δ 1 δ 2 . This provides the following R rational matrix: This solution also corresponds to the case ξ 1 = ξ 2 in the differential equation (50). For the remaining unusual six-vertex R matrices given by (66), (71) and (72), we have the following rational limits: For the eight-vertex R matrices, we have the following: the rational limit of (89) is found as we take the limit ω → 0, which implies that β 1 = ǫα 1 and either δ 1 = 0 or δ 2 = 0. This gives us two rational R matrices with seven non-null entries: Besides, the rational limit of the R matrix (89) is achieved as β 1 → 0 and δ 1 → 0 or as β 1 → 0 and δ 1 → 0; these two cases lead respectively to the same R matrices given at (145). Finally, for the R matrices (112) and (113) the rational limit is obtained as ω → 0, which implies β 1 = ǫα 1 and, either, δ 1 = 0 or δ 2 = 0. This provides us with two other rational R matrices with seven non-null entries: For the five-vertex R matrices the rational limit is found by taking simultaneously the limits α 2 → α 1 → 0 and γ 2 → γ 1 → 0. In this way, the R matrices given by (116) and (117) reduce to the following rational R matrices: while the R matrices presented in (118), (119), (120) and in (121) reduce respectively to the same R matrices given by (145). Finally, it can be verified that the seven-vertex R matrices (123) and (124) reduce to same rational R matrices given at (146), while the seven-vertex R matrices (125) and (126) reduce to the same rational R matrix given at (145). The seven-vertex R matrices (128), (129), on the other hand, reduce respectively to the R matrices given at (147), while the R matrices (130) and (131) reduce to the following R matrices: B Symmetries and properties of the solutions Regular solutions of the ybe can satisfy several symmetries. 
These symmetries can be discovered either by an algebraic approach -analyzing, for instance, the shape of the R matrix and the ratios between its elements -or through a geometric analysisstudying, in this case, the properties of the manifolds associated with the solutions. The unitarity, permutation, transposition, permutation-transposition and crossing symmetries that we commented in Section 1 depend mainly on the ratios of the R matrix elements and they can be said to be of the algebraic type. The solutions we found in this work, however, generally do not enjoy the majority of these symmetries. This, of course, is due to the fact that we assumed a priori none of these symmetries, which led us to quite asymmetric R matrices. Nonetheless, the R matrices can in general satisfy a given required symmetry if we impose the necessary restrictions on the free-parameters of the solutions. In fact, from the definitions given to these symmetries in Section 1, we can verify that the permutation symmetry requires b 2 = b 1 and c 2 = c 1 , while the transposition symmetry requires c 2 = c 1 and d 2 = d 1 . Besides, to the solution satisfy the transpositionpermutation symmetry, it is necessary that b 2 = b 1 and d 2 = d 1 and, finally, other more complicated relations are necessary for the solution to have the unitary and crossing symmetries. Notwithstanding, we highlight that all the solutions we found in this work remarkably satisfy the unitarity condition regardless of any additional constraint (i.e., in their most general form). In Table 1 we have classified the solutions into thirty-one families, according to the different ratios of the R matrix elements. The algebraic symmetries satisfied by these R matrices (within the constraints necessary for the solutions to have a given symmetry, if necessary) are presented in Table 2. The geometric symmetries of the solutions, on the other hand, can be achieved through the analysis of their invariants, that is, the relations among the R matrix elements that are independent on the spectral parameters u and v. These invariants fix the manifolds associated with the solutions and they are a consequence of the ybe. Here again, the differential forms of the ybe given by equations (4) and (5) provide a more direct way of deriving these invariants. To see why, we can proceed as follows: first, we can eliminate all the derivatives from the systems E and F , after which they reduce to two systems of polynomial equations only. Then, methods of algebraic geometry and commutative algebra -e.g., Gröbner basis, Hilbert series, multivariate resultants, and so on -can be used to study the ideals generated by these polynomial equations [30]. In fact, we can show in this way that the Hilbert dimension corresponding to each affine variety is usually positive, which means that the system of equations are satisfied before all the unknowns are fixed. This explains why we need to solve one or more differential equations at the end of the calculations to find the remaining elements of the R matrix. As we shall see below, the following quantity plays a key role in the analysis of the eight-vertex model invariants and its descendants [61,62]: In fact, the eight-vertex models are usually classified into two main classes according to whether Φ is zero or not: the case Φ = 0 corresponds to the so-called free-fermion models (or Felderhof-type models, after [55]), while the case Φ = 0 corresponds to the non-free-fermion models (or Baxter-type models, after [46]). 
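The free-fermion criterion based on Φ can be illustrated on a simple test case. The snippet below evaluates Φ = a1a2 + b1b2 − c1c2 − d1d2 for the symmetric six-vertex weights a = sinh(u + η), b = sinh(u), c = sinh(η), d = 0 (an illustrative parametrization, not one of the R matrices classified here): for generic η the quantity is non-zero (Baxter type), while at the special value η = iπ/2 it vanishes identically (free-fermion type). Since Φ(0) = 0 for any regular solution, the evaluation is done away from u = 0.

```python
import numpy as np

def phi(u, eta):
    # Phi = a1*a2 + b1*b2 - c1*c2 - d1*d2 with a1 = a2 = sinh(u + eta), b1 = b2 = sinh(u),
    # c1 = c2 = sinh(eta) and d1 = d2 = 0 (symmetric six-vertex test weights).
    a, b, c = np.sinh(u + eta), np.sinh(u), np.sinh(eta)
    return a * a + b * b - c * c

u = np.linspace(0.2, 1.0, 5)                 # away from u = 0, where Phi vanishes for any regular solution
print("generic eta  :", np.round(phi(u, 0.7), 6))             # non-zero: non-free-fermion (Baxter) type
print("eta = i*pi/2 :", np.round(phi(u, 1j * np.pi / 2), 6))   # identically zero: free-fermion type
```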
If we take the derivative of (150) with respect to u and evaluate the result at u = 0 we shall get the quantity Thus, ϕ = 0 provides a necessary condition (often sufficient) for a model to be of the free-fermion type. Due to its importance, we indicate if a given vertex model is or not of the free-fermion type in Table 1. In the sequel, we shall analyze in more details the invariants and manifolds associated with the six and eight-vertex models. A similar analysis can be done for the four, five and seven-vertex models but it will be concealed. Table 2 The unitarity (U ), permutation (P ), transposition (T ), permutation-transposition (P T ) and crossing (C) symmetries of the solutions, among with the relation defining the crossing parameter (ζ), when available. The symbol ✓ indicates that the solution has the corresponding symmetry without any constraint. If some conditions are necessary for the solution to have a given symmetry (in such a way that the shape of the R matrix remains the same), we write them down. Finally, the symbol ✗ means that the solution cannot have the given symmetry at all. B.1 Invariants of the usual six-vertex model Let us begin with the usual six-vertex model, in which case we must take d 1 = d 2 = 0. After the derivatives are eliminated from the systems (4) and (5), only a few equations do not vanish. Some of them fix the ratios of the R matrix elements, providing the already derived relations (see Section 4): Among the remaining equations, E 2,3 and E 6,7 are of particular importance. After simplification, they become: From these two equations, the following invariants directly follow: from which we can deduce as well that The possible manifolds associated with the six-vertex models are found after we multiply E 2,3 by β 2 and E 6,7 by β 1 and take the sum or the difference of them. In fact, this provide us with the expressions: Therefore, we can see that there are two possibilities, namely, either ϕ = α 1 + α 2 − γ 1 − γ 2 = 0 or a 1 b 1 β 2 − a 2 b 2 β 1 = 0. The first case corresponds to a free-fermion solution which was named 6V2 in Table 1. In the second case, we get a manifold characterized by which actually means that a 2 = a 1 after we use (152). Assuming ϕ = 0 we get a manifold belonging to the non-free-fermion solution that corresponds to the R matrix named 6V1 in Table 1 (notice that the factor a 2 b 2 β 1 + a 1 b 1 β 2 cannot be zero because this would imply a 2 = −a 1 in view of (152), which incompatible with the regularity condition). B.2 Invariants of the unusual six-vertex model where b 1 = 0 and b 2 = 0 In the case of the elliptic unusual six-vertex model presented in Section 5, we have b 1 = b 2 = 0 but d 1 = 0 and d 2 = 0. After the derivatives are eliminated from the systems E and F , some equations can be used to fix the ratios between the R matrix elements, which are: In fact, we get from E 1,1 and F 1,1 for instance, the invariant, which actually means c 2 = c 1 and d 2 = (δ 2 /δ 1 ) d 2 , as stated above. Moreover, from E 1,4 , E 4,1 , E 5,8 and E 8,5 we get as well the invariants: from which we obtain the ratio between a 2 and a 1 given by the first equation in (158). The possible manifolds associated with this unusual six-vertex model can be found from the analysis of the equations E 2,8 and F 1,7 . 
In fact, taking the difference between E 2,8 and E 1,7 , we shall get, after we use (158), (164) and (165), the following relation: ( Therefore, we have two possibilities: if α 2 = α 1 we get the solution given by equation (54), which is a non-free-fermion solution (solution 6U1 in Table 1). In this case, the following invariants hold: The other possibility, on the other hand, means that Φ = 0, that is, it belongs to a free-fermion solution. This case corresponds to the solution 6U2 of Table 1. We shall not discuss the invariants of the unusual six-vertex R matrices given by (71) and (72) because the analysis is similar to the previous cases. B.3 Invariants of the eight-vertex model In the case of the eight-vertex model, several other equations survive after the derivatives are eliminated from the systems E and F . Some of them provide the ratios derived in Section 6, namely, As in the previous case, from E 1,1 and F 1,1 we get the invariant, and, from E 1,4 , E 4,1 , E 5,8 and E 8,5 we also get that, where we used the relation b 2 2 = b 2 1 , which always holds for regular solutions of the eight-vertex models. Here, the equations E 2,3 and E 6,7 are again of particular interest. In this case, they are: Taking the sum and the difference of them, we get, From this, we can analyze the possible manifolds associated with the eight-vertex model. Considering first the free-fermion case, where Φ = 0, we can see from (167) that there is only two possibilities here -in the same fashion as in the usual six-vertex model -, namely, we should have either ϕ = 0 or a 1 b 1 − a 2 b 2 = 0. For the first case, it is more convenient to consider the subcases β 2 = β 1 and β 2 = −β 1 separately. For β 2 = −β 1 we obtain the invariant, which means that a 2 = a 1 and b 2 = −b 1 ; therefore, this manifold corresponds to the solution named 8V2 in Table 1. For the other possibility, β 2 = β 1 , it follows that so that this manifold is associated with to the solutions 8V3 and 8V4 of Table 1. The last possibility a 1 b 1 − a 2 b 2 = 0 leads us to the invariant which actually means that b 2 = b 1 and a 2 = a 1 . In this case, however, other equations imply that ϕ = 0 as well, so that we lie into a particular case of the solutions above. Notice that for the free-fermion case, the following invariants always hold: Now, let us consider the non-free-fermion case in which Φ = 0. Here we should assume that ϕ = 0. Besides, the possibility β 2 = −β 1 leads to d 1 = 0 if the solution is regular, so that there is only one possibility here, namely, β 2 = β 1 . By its turn, this lead us to the invariant, which implies that a 2 = a 1 and b 2 = b 1 . Therefore, this manifold is associated with the solution 8V1 of Table 1. Finally, notice that the following invariants always hold for the non-free-fermion solutions: as we can see from the equations E 1,6 and E 1,7 , for instance, after we make use of the relations (163), (164) and (165). Several other invariants can be found for the eight-vertex model as well. We shall not go further on this subject, however, because these relations do not provide much more than we already have. Table 3 Hamiltonian coefficients for each vertex model and the corresponding spin chains. Notice that the free-fermion models are characterized by Jz = 0. or, conversely, In Table 3 we present the values of the J's coefficients for each vertex model listed in Table 1. 
We can see from the Table 3 that the four-vertex R matrix corresponds to the Ising spin chain, in which the interaction occurs only in the z direction. Besides, all the free-fermion solutions are associated with either the xx or xy Heisenberg spin chains. Finally, the non-free-fermion solutions are related to either the xxz or xyz Heisenberg spin chains (the xxx spin chain is related to some of the reduced solutions). D Classical limits We finish this paper by presenting the classical limits of the R matrices classified in Table 1. As commented in Section 1, the ybe plays a fundamental role in theory of quantum integrable systems. For classical integrable systems, an analogous role is played by the so-called classical Yang-Baxter equation: a first-order approximation of the (quantum) ybe (3) that reads: The solutions of the classical ybe are called classical r matrices. Given a solution R(u, h) of the (quantum) ybe (3) that also depends smoothly on a certain parameter h, besides the spectral parameter u, we say that this R matrix has a classical limit if the following condition holds: lim where I is the identity matrix and f (u) is any differentiable complex function. In this case, the classical r matrix is defined by the formula: and we can verify that it satisfies the classical ybe (180). In fact, (180) follows after we differentiate twice the quantum ybe (3) with respect to h, evaluate the result at h = 0 and use the property (181). The classical ybe was introduced by Sklyanin in [63] and Belavin & Drinfel'd in [60] have classified the non-degenerated solutions of the classical ybe for finite dimensional simple Lie algebras (a non degenerated solution of (180) is such that det r(u) = 0). After that, Jimbo [13] and Drinfel'd [14,15] independently introduced the concept of quantum groups as deformations of Lie algebras that allowed the construction of R matrices from the classical ones [26]. Since then, the classical ybe has appeared in connection with many important topics of theoretical physics and mathematics -see, for instance, [7] and references therein. In this section we shall present the classical limit of the R matrices associated with two-state systems. For the cases where this limit exists, we have identified the parameter h with η and normalized 12 the R matrices so that we have f (u) = 1. We remark, however, that the majority of the R matrices reported in this work does not have a classical limit as well. This is because the condition (181) is too restrictive 13 . In fact, any R matrix whose elements b 1 or b 2 is zero cannot have a classical limit because (181) can never be satisfied. Thus, the four-vertex R matrix, all the five-vertex R matrices and the unusual six-vertex R matrices cannot have such a classical limit. Besides, the condition (181) is satisfied only if a 2 (u, 0) = a 1 (u, 0), b 2 (u, 0) = b 1 (u, 0) etc., which implies that the asymmetric six-vertex R matrix (37), all the seven-vertex R matrices except (123) and (124), and, finally, the eight-vertex R matrices (97), (112) and (113) cannot have such a classical limit. In short, the only R matrices that admit a classical limit are the six-vertex R matrix (30), the seven-vertex R matrices (123) and (124) and the eight-vertex R matrix (89). The classical limits of these R matrices will be presented below. For the six-vertex R matrix (30), we have the following classical r matrix: where we have made β 2 = β 1 so that (181) is satisfied (see Footnote 13). 
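The classical-limit construction can also be illustrated numerically. In the sketch below, the symmetric six-vertex R matrix is normalized by sinh(u + h) so that R(u, 0) = I, with h playing the role of η; this normalization, and the convention [r12(u), r13(u+v)] + [r12(u), r23(v)] + [r13(u+v), r23(v)] = 0 for the classical ybe, are assumptions made for illustration, consistent with the quantum ybe convention used in the earlier checks. The matrix r(u) = ∂R/∂h at h = 0 is obtained by a central finite difference and the classical ybe residual is evaluated.

```python
import numpy as np

def R(u, h):
    # Symmetric six-vertex weights normalized by sinh(u + h), so that R(u, 0) is the identity.
    a, b, c = np.sinh(u + h), np.sinh(u), np.sinh(h)
    return np.array([[a, 0, 0, 0], [0, b, c, 0], [0, c, b, 0], [0, 0, 0, a]]) / a

def r(u, eps=1e-6):
    # Classical r matrix as the h-derivative of R(u, h) at h = 0 (central finite difference).
    return (R(u, eps) - R(u, -eps)) / (2 * eps)

I2 = np.eye(2)
P = np.array([[1., 0., 0., 0.], [0., 0., 1., 0.], [0., 1., 0., 0.], [0., 0., 0., 1.]])
P23 = np.kron(I2, P)

def on12(M): return np.kron(M, I2)
def on23(M): return np.kron(I2, M)
def on13(M): return P23 @ on12(M) @ P23

def comm(X, Y): return X @ Y - Y @ X

u, v = 0.37, 0.81
res = (comm(on12(r(u)), on13(r(u + v)))
       + comm(on12(r(u)), on23(r(v)))
       + comm(on13(r(u + v)), on23(r(v))))
print("classical ybe residual:", np.max(np.abs(res)))   # small, limited only by the finite-difference step
```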
For the seven-vertex R matrices (123) and (124), we have respectively the classical r matrices (184) and (185), and for the eight-vertex R matrix (89) we have the classical r matrix (186). Finally, we can compare our results with the Belavin-Drinfel'd classification [60]. For two-state systems, the r matrices correspond to the sl(2) Lie algebra, and for this case Belavin & Drinfel'd have shown that, up to certain equivalences, there are only two trigonometric solutions and only one elliptic solution, besides their reduced rational solutions. The two trigonometric solutions are associated, respectively, with the six and seven-vertex models, and the corresponding r matrices are indeed equivalent to the r matrices we found here, namely, the ones given by (183), (184) and (185), respectively. The elliptic solution, in its turn, is associated with Baxter's eight-vertex model and the corresponding classical r matrix is indeed equivalent to the r matrix (186). We conclude therefore that our classification completely agrees with the Belavin-Drinfel'd analysis. Footnote 12: Different normalizations lead to different r matrices that differ from each other only by the addition of a term proportional to the identity. These r matrices, however, can be regarded as equivalent thanks to the following additive property of the classical ybe: if r(u) is a solution of (180), then r(u) + f(u)I is also a solution of it. Footnote 13: Some solutions can have a classical limit if the condition (181) is weakened. For example, it can be verified that the classical limit of the asymmetric six-vertex R matrix (37) is given by the same r matrix (183), although the condition (181) is not satisfied in this case. Also, for the six-vertex R matrices (30) and (37), the classical ybe is still satisfied without the requirement β2 = β1. These cases, however, fall outside the Belavin-Drinfel'd classification, so we shall not analyze these possibilities further. E The differential Yang-Baxter equations We list below the non-null functional equations corresponding to the ybe (3) for the general eight-vertex model. The functional equations for the four, five, six and seven-vertex models are contained in this system and can be obtained by zeroing the corresponding elements. The two sets of differential equations derived from the ybe (3) are presented below. The E system corresponds to equation (4) while the F system corresponds to (5).
17,896.2
2017-12-05T00:00:00.000
[ "Mathematics" ]
Synthesis, Biological Evaluation, and Molecular Dynamics of Carbothioamides Derivatives as Carbonic Anhydrase II and 15-Lipoxygenase Inhibitors A series of hydrazine-1-carbothioamides derivatives (3a–3j) were synthesized and analyzed for inhibitory potential towards bovine carbonic anhydrase II (b-CA II) and 15-lipoxygenase (15-LOX). Interestingly, four derivatives, 3b, 3d, 3g, and 3j, were found to be selective inhibitors of CA II, while other derivatives exhibited CA II and 15-LOX inhibition. In silico studies of the most potent inhibitors of both b-CA II and 15-LOX were carried out to find the possible binding mode of compounds in their active site. Furthermore, MD simulation results confirmed that these ligands are stably bound to the two targets, while the binding energy further confirmed the inhibitory effects of the 3h compound. As these compounds may have a role in particular diseases, the reported compounds are of great relevance for future applications in the field of medicinal chemistry. Introduction Thiosemicarbazones are considered as an important class of molecules that have important chemical properties as well as multiple biological activities [1]. Over the last 50 years, thiosemicarbazones have been examined as antibacterial, antiviral, and anticancer agents, where pharmacological attributes are mainly due to its parent ketone or aldehyde moiety [2]. The synthesis of thiosemicarbazone compounds is considered economical because of their low cost. It has been found that the conjugated =N-HN-C=S tridentate donor system of thiosemicarbazone is responsible for its anticancer potential [3]. The coordination chemistry of thiosemicarbazone is of considerable interest as they form complexes with various metals. They have also gained much attention as a pharmaceutical agent because of their diverse application in pathophysiological state [4]. LOXs (linoleate-oxygenoxidoreductase, EC 1.13.11.12) are members of a wider family of fatty acid dioxygenases that do not contain heme iron and can be isolated from higher plants, animals, and fungi [5]. They are mainly involved in the stereo-and regio-specific dioxygenation of polyunsaturated fatty acid (PUFAs) having a (1Z,4Z)-pentadiene system [6], e.g., linoleic acid (LA), arachidonic acid, or linolenic acid (LeA) into hydroperoxy derivatives. Moreover, the catalytic products of 15-LOX, i.e., leukotrienes and lipoxins, act as a pro-inflammatory and anti-inflammatory mediator (signaling molecule) in the biosynthesis of various compounds that have pathophysiological implications such as psoriasis, bronchial asthma, arthritis, and carcinogenic processes, as well as immune response [7]. Mammalian LOXs have been characterized into three distinct groups, 5-LOX, 12-LOX, and 15-LOX, based upon the carbon atom of substrate that has been oxygenated (C5, C12, and C15, respectively) [8]. Although mammalian and plant LOXs displayed about 25% amino acid similarity, they showed similar overall structures, especially in the catalytic region [9]. Previous literature suggested the overexpression of 15-LOX in carcinogenesis, broncho-alveolar epithelial cells, monocytes, macrophages of asthmatic patients, and eosinophils [10,11]. 15-LOX has been involved in the modulation of inflammatory responses by regulating the expression of interleukin 12 (IL-12) in a stimuli-restricted manner, depending upon the cell-type1 [12]. 
Renal cancer cells (RCCs) exhibited overexpression of 15-LOX that resulted in the production of hydroperoxy products, i.e., cytokine interleukin-10 (IL-10) and pro-inflammatory chemokine CCL2. In tumor microenvironment, these mediators modulated the immune function of T-lymphocyte and macrophages, thus enhancing the tumor evasion and immunosuppression. Disruption of arachdonic acid metabolism and reduction in inflammatory mediators represents an appealing approach to control both immune suppression and inflammation in human cancers, including RCC [13]. Therefore, the development of potent and selective inhibitors of 15-LOX appears to be a logical target owing to their potential role in modulating the inflammatory response. The carbonic anhydrases (CAs, EC 4.2.1.1) are zinc-containing metalloenzymes, located in all organisms including prokaryotes, archaea, and eukaryotes [14,15]. In humans, 14 different isozymes are found that differ in their subcellular localization along with tissue distribution. It includes four membrane bound isozymes (CA IV, CA IX, CA XII, and CA XIV), four cytosolic forms (CA I-III, CAVII), one mitochondrial form (CA V), along with one secreted form (CA VI) [16]. These enzymes are involved in the catalysis of simple chemical reaction and the interconversion between the bicarbonate ion and carbon dioxide, and thus participate in important physiological processes related to respiration, secretion in a variety of tissues/organs, pH and CO 2 homeostasis, transport of CO 2 /bicarbonate between metabolizing tissues and lungs, bone resorption, some biosynthetic reactions (such as gluconeogenesis, lipogenesis, and ureagenesis), tumorigenicity, calcification, and many other physiologic or pathologic processes [17]. Two isozymes of carbonic anhydrase (CA IX and XII) are highly expressed in many tumors and may be functionally involved in oncogenesis. However, immunohistochemically studies have indicated that carbonic anhydrase II is found to be highly expressed in several tumors such as malignant brain tumors [18] and gastric and pancreatic carcinomas [19,20]. The relation of cancer with carbonic anhydrase has been recently established. It has been found that the sulfonamide class of compounds, i.e., acetazolamide, suppresses the in vitro invasion of renal cancer cells. Through Western blotting and immunocytochemical techniques, it was observed that these cell lines showed overexpression of CA II. The extracellular pH of cells is acidic and intracellular pH is more basic in solid tumours as compared with adjacent normal cells. The intracellular/extracellular pH gradient is regulated by ion transport proteins [21,22] and carbonic anhydrases [23,24] (Figure 1). Therefore, it is possible to develop an anti-cancer agent that would selectively and reversibly bind to b-CA II where their anti-tumor affect could be achieved to the site of action [25]. In order to find potential antitumor agents, we have synthesized a series of thiosemicarbazone analogues containing a Schiff base and their biological activity was evaluated by treating them with 15-lipoxygenase and carbonic anhydrase II isozymes as dual target inhibitors. Docking simulations were performed using the X-ray crystallographic structure of enzyme protein to explore the binding modes of these compounds at the active site. Chemistry The synthesis of target 2-(hetero/(aryl)methylene) hydrazine-1-carbothioamides (Schiff base derivatives of thiosemicarbazide) was performed as outlined in Figure 2. 
The target compounds (3a-j) were produced in good to high yields, as discussed in our previously published paper [32]. A singlet was observed in the 1 H NMR spectrum at δ 8.14 for the proton of azomethine (HC=N), and singlets at δ 11.2 and 7.55 for NH and NH 2 , respectively (see Supplementary Data). In 1 H NMR spectra, the characteristic azomethine protons were seen in the 7.6-8.2 ppm range. The peaks at δ 184.0 (C=S) and 158 (C=N) in the 13 C NMR spectra provided proof that thiosemicarbazones had been synthesized (see Supplementary Data). Carbonic Anhydrase and Lipoxygenase Activity and SAR The structure activity relationship of various hydrazine-1-carbothioamide derivatives (3a-3j) toward b-CA II and 15-LOX was studied and further evaluated for their potential role. The effect of compound 3a was found to be promising towards the inhibition of b-CA II and 15-LOX. This parent compound exhibited dual inhibition of both targeted enzymes. After that, the substitutional effect was also observed and it was found that all derivatives displayed non-selective, potent inhibitory activity for both b-CA II and 15-LOX, except four compounds (3b, 3d, 3g, 3j) that remained selective towards b-CA II. It was found from Table 1 that differently substituted hydrazine carbothioamide derivatives displayed potent activity at a lower concentration of 100 µM against both enzymes. These compounds displayed their potential activity in the range of 0.13 ± 0.01 to 10.23 ± 0.21 µM and 0.14 ± 0.01 to 1.34 ± 0.14 µM toward CA II and 15-LOX, respectively. When the structure activity relationship of derivatives 3c, 3g, and 3h was compared with that of 3a, it was found from their structures that the compound with the meta halogen substituted phenyl ring (3h) displayed more potent activity than those with substitution at the ortho position of the phenyl ring (3e, 3g). The activity of the compound is due to the presence of the more electronegative fluorine at the meta position, as exhibited by compound 3h. The compound 3h displayed ≈7-fold higher potential towards b-CA II as compared with its standard inhibitor acetazolamide with IC 50 ± SEM = 0.96 ± 0.18 µM. However, the effect of the disubstituted phenyl ring was also studied on the parent compound, i.e., hydrazinecarbothioamide, and it was found that it resulted in considerable loss of activity as compared with monosubstitution. The compounds (3d, 3e, 3g, and 3j) displayed lower activity as compared with the meta-monosubstituted phenyl ring (3h). The replacement of the phenyl ring substitution with the thiophene group resulted in a greater loss of activity, i.e., 3j with IC 50 ± SEM = 3.71 ± 0.25 µM, which is ≈29-fold less potent as compared with compound 3h towards CA II. Only six compounds displayed significant inhibitory potential for 15-LOX. The compounds 3h and 3c showed maximum potential for lipoxygenase with IC 50 ± SEM = 0.14 ± 0.01 and 0.16 ± 0.01 µM, compared with the standard inhibitor quercetin with IC 50 ± SEM = 15.8 ± 0.61 µM.
A detailed study of the structure showed that the substitution of the phenyl ring with a strongly electronegative group, i.e., fluoro at the ortho and meta position, resulted in greater inhibition potential. However, either replacement of the phenyl ring with the thiophene ring or disubstitution of the phenyl ring resulted in marked loss of potential for 15-LOX. The IC 50 is the concentration at which 50% of the enzyme activity is inhibited. CA II and 15-LOX activities were carried out at a final concentration of 100 µM. a : Values are the mean of three experiments, b : Compounds showing <50% inhibition. Binding Mode of 3h with b-CA II Molecular docking studies were undertaken to investigate the potential binding of 3h (most active against b-CA II) in the enzyme active site, as shown in Figure 3, which illustrates the interactions of the ligand with side chains of amino acids in the active site of b-CA II. Amino acid residues surrounding the ligand mainly consist of Trp4, Asn61, His63, Asn66, Gln91, His93, His118, Val120, Val141, Leu196, Thr197, Thr198, and Thr199. The estimated binding affinity of the inhibitor 3h was found to be in a micromolar region similar to the in vitro results. Analyzing the docked poses after molecular docking revealed that a potent inhibitor of b-CA II made hydrogen bonds with amino acid residues such as Asn61 and His63 in the active site of the target enzyme. These hydrogen bonds were observed with bond lengths of 2.3 and 2.8 angstroms. Moreover, it was observed that one water molecule acts as a bridge between the inhibitor 3h and the amino acid residues Asn66 and Gln91 and forms hydrogen bonds, as shown in Figure 2. Binding Mode of 3h with 15-LOX Molecular docking of the most potent compound, 3h, was also performed to predict the binding mode in the active site of the target enzyme, i.e., 15-LOX. Molecular docking showed the binding interactions of 3h in the active site of the target enzyme, as given in Figure 4. Amino acid residues in the active site of PDB ID: 1IK3 surrounding the ligand comprised Glu355, His378, Arg401, Leu406, His364, and Leu595. The inhibitor 3h formed a hydrogen bonding interaction with the residue His378. Hyde assessment of the inhibitor 3h inside the 15-LOX revealed the contribution of each atom of inhibitor 3h. The meta-substituted fluorine atom has the highest stability inside the 15-LOX active pocket with a binding free energy of −6.4 kJ/mol, as it possessed the electron-withdrawing effect, increased the reactivity at the phenyl ring, and was involved in important hydrophobic interactions. This was followed by the terminal amine group of the thiourea, which interacts with His378 with a binding free energy of about −2.7 kJ/mol. The Hyde energy of the carbon (C=N) contributed favorably in determining the binding affinity of the ligand with the targeted protein. The Hyde energy of this carbon was observed to be −1.5 kJ/mol. Details of the Hyde assessment are illustrated in Figure 3. Hyde assessment of the inhibitor 3h revealed the atom-wise energy contribution toward the total binding free energy estimation. It was observed that the terminal primary amine group of the thiourea of inhibitor 3h is energetically much favorable with a release of about −4 kJ/mol energy when bound to the b-CA II active site. Similarly, the other amino group of the thiourea conformation is also favorable with a release of about −2.5 kJ/mol. The meta-substituted fluorine also releases an amount of −2.4 kJ/mol upon binding to the b-CA II active site.
In addition, the carbon atom on the phenyl ring also contributed favorably with a release of −3.0 kJ/mol during molecular interactions with the targeted protein. Dynamics Stability and Flexibility Profiling of the Two Ligand Bound Complexes To understand the dynamic features of these two ligand bound systems, the root mean square deviation (RMSD) for each system was calculated after 50 ns. As given in Figure 5, the average RMSD for both systems remained 1.2 Å. It can be seen from the ligand bound systems that for 3h-15-LOX (A) the RMSD did not show any major convergence, except a little increment in the RMSD between 35 and 40 ns. At the start, when the system was not in the equilibrium state, i.e., between 3 and 6 ns, a usual increment was observed. In the case of the 3h-b-CA II complex, the RMSD initially remained uniform until 25 ns. However, an acceptable convergence between 25−30 and 36−38 ns was observed. Overall, these results show that the two ligand bound systems favour dynamic stability, and hence indicate that both systems are stable during the time of 50 ns. On the other hand, the residual flexibility was also calculated, which shows that the binding of 3h has produced its effect upon binding. In the case of 15-LOX (A), only the atoms 195-200, 310-320, and 580-600 showed relatively higher fluctuations than the other atoms, while the rest of the atoms showed lower fluctuation. In addition, the b-CA II (B) showed relatively higher fluctuation for most of the residues. The RMSF plots for both of the complexes, i.e., 3h-15-LOX and 3h-CA-II, are given in Figure 6. The average of all backbone residues of atoms was taken into account to obtain the RMSF data, which examined the local changes in protein flexibility for both complexes (Figure 6a,b). The aforementioned variations play a significant part in the flexibility of protein complexes, which in turn affects the activity and stability of protein-ligands. The largest level of fluctuation in the residue locations of 330 and 580 at 0.5 and 0.3 nm of the backbone structure is shown by the RMSF graph for the 3h-15-LOX complex in Figure 6A, whereas the minimum RMSF value reveals extremely restricted movements. Figure 6B displays the RMSF graph for the 3h-CA-II complex. The 3h-CA-II complex has attained the amino acid residues at 230, which also show a fluctuation at 0.4 nm in RMSF. The other amino acid residues between 170, 220, and 230 have shown medial deviation. Binding Free Energy Calculation MM-GBSA was used to determine the overall binding energy of 3h as well as additional energy terms like vdW and electrostatic energy to further confirm its activity against the two targets. 3h-15-LOX's total binding energy was determined to be −57.84 kcal/mol. The observed vdW and electrostatic energy, however, were −19.22 kcal/mol and −44.51 kcal/mol, respectively. The overall binding energy for the 3h-b-CA II complex was discovered to be −53.41 kcal/mol. The electrostatic energy was −18.64 kcal/mol and the measured vdW for the 3h-b-CA II complex was −46.38 kcal/mol. In light of these findings, it is clear that 3h has potent inhibitory effects on the identified targets. The binding free energy results for both complexes are given in Table 2. Characterization of Compounds A digital Gallenkamp (SANYO) apparatus was used to measure the melting points of the synthetic compounds. In order to determine the 1 H NMR and 13 C NMR spectra, a Bruker AM-300 spectrometer operating at 300 MHz was used. A Bio-Rad-Excalibur Series Mode FTS 3000 MX spectrophotometer was used to obtain FTIR spectra.
Elemental analyses were carried out utilizing an LECO-183 CHNS analyzer, and mass spectra (EI, 70 eV) were recorded on an Agilent Technologies 6890N GC-MS. On 0.25 mm silica gel plates (60 F254, Merck), thin layer chromatography was carried out and UV at 365 and 254 nm was used to visualize the chromatograms. Rf values were calculated using a mobile phase that included a 4:1 ratio of petroleum ether and ethyl acetate. The yields (%) were computed according to the 1.0 mmol of each precursor employed. Synthesis of 2-(hetero (aryl) methylene) hydrazine-1-carbothioamides (Schiff Bases) (3a-j) With constant stirring, a solution of compound 1 (0.138 g, 1.0 mmol) was poured into 25 mL of absolute ethanol. Substituted aldehydes or ketones (1.0 mmol) were then added, along with 2-3 drops of concentrated sulfuric acid. After 12 hours of refluxing, the mixture was brought to room temperature. Finally, the solid particles were filtered from the ethanol and recrystallized to give the compounds 3a-3j. The detailed characterization of compounds 3a-3j is given in the Supplementary File. Lipoxygenase Assay The lipoxygenase activity was determined with some modifications to the reported method [33]. The compounds were first screened at 0.1 mM. Assay buffer was made containing 100 mM KH 2 PO 4 and the pH was adjusted to 8.0. Initially, 145 µL of assay buffer was added in each well of a 96-well plate, followed by the addition of 10 µL of 15-LOX enzyme (42.5 units/well). Then, 20 µL of test compound was added in each well and incubated for 10 min at 25 °C. Absorbance was measured at 234 nm by a multimode micro-plate reader (FLUOstar Omega, Germany) as a pre-read value. The reaction was then carried out by adding 25 µL of the substrate linoleic acid and incubating for 10 min at 25 °C. After incubation, absorbance was measured again at 234 nm as the after-read value. Then, 10% DMSO and quercetin were taken as negative and positive controls, respectively. Percent inhibition and IC 50 values were determined for the compounds that exhibited a percentage of inhibition greater than 50%. The IC 50 values for all experiments were determined using a nonlinear curve fitting tool and all experiments were carried out in triplicate format in GraphPad PRISM 5.0 (GraphPad, San Diego, CA, USA). Carbonic Anhydrase Assay Carbonic anhydrase (b-CA II) inhibition activity was determined according to the previously reported protocol [34]. IC 50 values were calculated through GraphPad PRISM 5.0 (GraphPad Software Inc., San Diego, CA, USA). Molecular Docking Studies Molecular docking was performed using SeeSAR v12.1 for the potent inhibitor 3h against b-CA II and 15-lipoxygenase. Crystal structures for both enzymes were taken from the RCSB protein data bank (http://www.rscb.org), accessed on 20 September 2022. The PDB ID 1V9E was utilized for b-CA II docking [35], while the PDB ID 1IK3 was used for soybean lipoxygenase docking [36]. The binding sites were identified using the dimensions of the co-crystal ligand, which was further validated by the CASTp pocket identifier. The 3D structures of the ligand molecules were drawn using ACD/ChemSketch (Toronto, 2009) and 3D optimized. The "Prepare Receptor" module of SeeSAR v12.1 performed the protonation, charges, selection of the relevant flips, and tautomeric state of residues for the crystal structure. Prior to docking, the previously drawn and 3D optimized inhibitor structure was protonated at physiological pH. Based on the presence of co-crystallized inhibitor sites, the docking site was chosen.
The docking conformations were scored and ranked using a hybrid technique based on entropy and enthalpy. The contributions of each inhibitor atom were calculated using a Hyde assessment of the top-ranking conformations to estimate the binding affinities. The reliability and validation of the docking protocol were assessed by re-docking the co-crystal ligand into the active pocket of the selected proteins. The applied procedures and sampling algorithms were evaluated for their reproducibility in replicating the co-crystal ligand conformation and interaction with amino acid residues with a low RMSD of less than 2 angstroms. An RMSD value of less than 2 angstroms between the native pose and the re-docked conformation ensures the validity of the applied procedures. Molecular Dynamics Simulation The MD simulations of the two complexes were conducted using the Amber 18 package and the Amber 14ff force field. The systems were solvated with the TIP3P water model and neutralized by the addition of Na+ ions. The ligand parameters were generated using the GAFF2 ligand force field. Initially, the system was minimized in two steps. The initial minimization was performed for 10,000 steps, while the second one was performed for 6000 steps. Following minimization, both systems were heated. The production run temperature was maintained at 300 K and around 1 atm pressure. A production run of about 50 ns was performed at constant pressure, and a Langevin thermostat (1 atm, 300 K) was used to maintain the system temperature around 300 K [37]. Long-range interactions were computed using the particle mesh Ewald (PME) algorithm [38,39]. The cutoff distances were adjusted to 10 Å. The SHAKE algorithm was used to constrain bonds involving hydrogen atoms [40]. GPU-accelerated simulation using PMEMD.CUDA was used for the production run. CPPTRAJ and PYTRAJ [41] were used for post-simulation analyses such as stability (RMSD) and residues' flexibility analysis (RMSF). Binding Free Energy Calculations To estimate the real-time binding energy, MM-GBSA is among the most reliable and accurate approaches. This method has been widely utilized by different studies [42][43][44]. To calculate the binding free energy, 2500 frames from a 50 ns trajectory were considered. The total binding energy (∆Gbind) is obtained from the energies of the complex, the receptor, and the ligand as ∆Gbind = Gcomplex − (Greceptor + Gligand). Furthermore, each contributing energy term (vdW, electrostatic, polar, and non-polar solvation) enters the decomposition ∆Gbind = ∆EvdW + ∆Eele + ∆Gpolar + ∆Gnon-polar. Conclusions In the present work, we have synthesized and characterized the hydrazine-carbothioamide derivatives. They were further evaluated for their anticancer potential by treating them with carbonic anhydrase and lipoxygenase. The obtained results showed that some derivatives were potent inhibitors of both b-CA II and 15-LOX. Except for compounds 3b, 3d, 3g, and 3j, all other derivatives displayed non-selective and significant inhibitory activity towards b-CA-II and 15-LOX. The compound 3h was the most potent inhibitor of b-CA II and 15-LOX with IC 50 ± SEM of 0.13 ± 0.01 and 0.14 ± 0.01 µM, respectively, as compared with the standard inhibitors. To evaluate the binding interaction of these active compounds in the carbonic anhydrase and lipoxygenase enzymes' active sites, molecular docking experiments were also carried out. This study demonstrated a unique insight into the dual inhibitory properties of carbothioamide derivatives on CA and lipoxygenase.
Additional research may be conducted to determine their potency and selectivity as a potential new therapeutic target.
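The IC 50 values reported above were obtained by nonlinear curve fitting in GraphPad PRISM, as stated in the assay sections. For readers who prefer a scriptable route, the sketch below shows an equivalent four-parameter logistic fit with SciPy; the concentrations and percent-inhibition values are made-up placeholders, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dose-response data for one inhibitor (placeholders only):
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 100.0])          # µM
inhibition = np.array([5.0, 12.0, 30.0, 48.0, 65.0, 80.0, 91.0, 97.0])  # % inhibition

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic (Hill) dose-response model."""
    return bottom + (top - bottom) / (1.0 + (ic50 / x) ** hill)

# Initial guesses: 0-100 % span, IC50 near the mid-response concentration.
p0 = [0.0, 100.0, 0.3, 1.0]
params, cov = curve_fit(four_pl, conc, inhibition, p0=p0, maxfev=10000)
bottom, top, ic50, hill = params
perr = np.sqrt(np.diag(cov))

print(f"IC50 = {ic50:.3f} µM (SE {perr[2]:.3f}), Hill slope = {hill:.2f}")
```

The same routine can be looped over all compounds and run in triplicate, which is what the PRISM workflow described in the Lipoxygenase Assay section effectively does.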
6,104
2022-12-01T00:00:00.000
[ "Chemistry", "Biology" ]
Theta Function Solutions of the 3 + 1-Dimensional Jimbo-Miwa Equation Introduction Nonlinear phenomena arise in many physical problems in a variety of fields. Solutions of the governing nonlinear equations can shed light on these phenomena. There are various systematic methods for constructing solutions, for example, the nonlinearization method of Lax pairs [1-3], the extended F-expansion method [4-6], the homogeneous balance method [7-10], and the dressing method and generalized dressing method [11-16]. It is well known that the Hirota method with the aid of the Riemann theta function is a good method to obtain periodic and quasiperiodic solutions. Nakamura [17, 18] used this method to study some famous equations such as KdV, KP, Boussinesq, and Toda. By extending the approach adopted by Nakamura, Dai et al. obtained the graphic quasiperiodic solutions for the KP equation for the first time [19] and later they also gave the quasiperiodic solutions for the Toda lattice [20]. Recently, a lot of researchers have used this method to study various soliton equations [21-24]. In the present paper, we consider the 3 + 1-dimensional Jimbo-Miwa equation u_xxxy + 3u_xy u_x + 3u_y u_xx + 2u_yt - 3u_xz = 0, (1) which is the second member of a KP hierarchy [25]. It is physically important for describing 3 + 1-dimensional waves. In the last decade or so, many researchers have studied this equation. Multiple-soliton solutions of (1) and its extended version were given in Wazwaz [26]. Tang in [27] obtained its Pfaffian solution and extended Pfaffian solutions with the aid of the Hirota bilinear form. Su et al. [28] constructed its Wronskian and Grammian solutions. Multiple-front solutions for (1) were obtained by employing the Hirota bilinear method in [29]. Öziş and Aslan in [30] derived exact solutions of (1) via the Exp-function method. In [31], Ma and Lee have obtained rational solutions of (1), including travelling wave solutions, variable separated solutions, and polynomial solutions, by using a rational function transformation and a Bäcklund transformation. Li et al. have utilized the generalized three-wave method to derive explicit three-wave solutions, such as doubly periodic solitary wave solutions and breather-type two-solitary wave solutions, for (1) in [32]. Zhang et al. have obtained a generalized Wronskian solution in [33]. Dai et al.
obtained new periodic kink-wave and kinky periodic wave solutions for (1) in [34]. Ma in [35] has derived exact solutions by using Lie point symmetry groups of (1). Tang and Liang in [36] have obtained two types of variable separation solutions and abundant nonlinear coherent structures by using the multilinear variable separation approach. Ma et al. obtained new exact solutions for (1) by utilizing an improved mapping approach [37]. Liu and Jiang, by applying the extended homogeneous balance method, have obtained new solutions of (1) in [38]. The authors in [39] obtained special solutions by using the extended homogeneous balance method. Hu et al. [40] discussed 3-soliton and 4-soliton solutions with the aid of the bilinear form of (1). In [41], some complexiton-type solutions of (1) are presented by using two kinds of transcendental functions. The authors of [42] presented rational solutions of (1) with the aid of the generalized Riccati equation mapping method. By utilizing the two-soliton method and the bilinear method, cross kink-wave and periodic solitary solutions of (1) are given in [43]. With the help of the bilinear form, here we construct one-periodic and two-periodic solutions of equation (1) by utilizing the method of Dai et al. [19]. The paper is organized as follows. In Section 2, we obtain the formula of one-periodic wave solutions and discuss its asymptotic behavior. Further, in order to analyze the solution, some solution curves are plotted. In the third section, two-periodic wave solutions, their asymptotic behaviors and the solution curves are given. One-Periodic Wave Solution of the 3 + 1-Dimensional Jimbo-Miwa Equation and Its Asymptotic Form We consider the 3 + 1-dimensional Jimbo-Miwa equation. Substituting the transformation into (2) and integrating once again, we derive the following bilinear form: where is an integration constant. The Hirota bilinear differential operator is defined by [44]. The D-operator possesses the good property when acting on exponential functions: where = + + + + 0 , = 1, 2. More generally, we have (8). We see that the well-known soliton solution of the 3 + 1-dimensional Jimbo-Miwa equation can be obtained as a limit of the periodic solution (17). Proof. We write as with = + + + + 0 . If we shift an arbitrary phase constant slightly as 0 = ξ0 − (1/2) and take the small amplitude limit → 0, then we derive the proper limit. It is easy to obtain (18). In fact, we have By utilizing (16), it is easy to deduce that The one-periodic solution curves are presented in Figures 1 and 2 for 0 = 0, respectively, in two-dimensional and three-dimensional space. It is obvious from the solution graphs that the solution is periodic and cuspon. It is different from the Pfaffian solutions and extended Pfaffian solutions derived by Tang in [27]. The results are different from the one-soliton and two-soliton solutions presented by researchers in [28, 30-33]. There are some differences between the new types of exact periodic solitary wave and kinky periodic wave solutions in [34] by Dai et al. and the solutions in this paper. Using (1, 1) → 0, we obtain In order to show the solution character, we plot the solution curves of the real and imaginary parts. Figures 3 and 4 plot the real and imaginary parts, respectively, in three-dimensional space. From the solution graphs, we can see that the solutions are periodic and cuspon. The derived two-periodic solutions in this paper are different from the two-solitary solutions in [32] and the rational solutions presented by Ma and Lee in [31].
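The exponential identity of the Hirota D-operator that the bilinear construction relies on can be checked symbolically. The sketch below is a minimal illustration in one variable, assuming the usual definition D_x^m f·g = (d/dx − d/dx')^m f(x) g(x') evaluated at x' = x; the symbols k1 and k2 are generic wave numbers introduced only for this example, and hirota_D is an ad hoc helper name.

```python
import sympy as sp

x, k1, k2 = sp.symbols('x k1 k2')

def hirota_D(f, g, var, order):
    """Hirota bilinear derivative D_var^order f·g, expanded via the binomial theorem:
    (d/dvar - d/dvar')^order f(var) g(var') evaluated at var' = var."""
    var_p = sp.Dummy(var.name + "p")
    F = f * g.subs(var, var_p)
    D = sum(sp.binomial(order, j) * (-1) ** j
            * sp.diff(F, var, order - j, var_p, j)
            for j in range(order + 1))
    return sp.simplify(D.subs(var_p, var))

# Exponential identity: D_x^m exp(k1*x) · exp(k2*x) = (k1 - k2)^m exp((k1 + k2)*x).
m = 3
lhs = hirota_D(sp.exp(k1 * x), sp.exp(k2 * x), x, m)
rhs = (k1 - k2) ** m * sp.exp((k1 + k2) * x)
print(sp.simplify(lhs - rhs))   # prints 0, confirming the identity
```

The same property in several variables (x, y, z, t) is what allows the periodic and soliton ansätze built from exponentials of linear phases to be inserted directly into the bilinear form.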
1,499.4
2017-06-22T00:00:00.000
[ "Mathematics" ]
Effects of particle size on physicochemical and functional properties of superfine black kidney bean (Phaseolus vulgaris L.) powder Black kidney bean (Phaseolus vulgaris L.) powder (BKBP) with particle sizes of 250–180, 180–125, 125–75, 75–38, and <38 μm was prepared by using coarse and eccentric vibratory milling, respectively. Physicochemical properties, cholesterol adsorption, and antioxidant activities of powders were investigated. Size and scanning electron microscopy analyses showed that particle size of BKBP could be effectively decreased after the superfine grinding treatment, and the specific surface area was increased. Flow properties, hydration properties, thermal stability, and cholesterol adsorption efficiency significantly improved with the reducing of particle size. The superfine powder with sizes of 75–38 or <38 μm exhibited higher antioxidant activity via 2,2-diphenyl-1-picryhydrazyl, hydroxyl radical-scavenging, and ferrous ion-chelating assays. The results indicated that the BKBP with a size of <38 μm could serve as a better potential biological resource for food additives, and could be applied for the development of low-cholesterol products. INTRODUCTION As an essential crop, kidney beans (Phaseolus vulgaris L.) are particularly popular in Africa, Latin American, and Asia (Beninger & Hosfield, 2003), and consumed as a human food source throughout the world representing 50% of the grain legumes (Camara, Urrea & Schlegel, 2013). Potential benefits to human health have been explored during the kidney beans consumption, including lowering postprandial glucose and insulin responses, preventing obesity, reducing the risk of cardiovascular diseases and preventing phenomena indicated the physicochemical properties seemed to be unpredictable, and would be related to particle size reduction, various grinding treatments, and raw materials. Thus, the effects of superfine grinding treatment on the physicochemical and functional properties of BKBP should be explored due to the little information. The newly-designed eccentric vibratory mill is remarkable nowadays, exhibiting a decisive intensification of the impact force among the grinding rollers for improved effectivity (Gock & Kurrer, 1999). In addition, the power consumption of eccentric vibratory milling is significantly decreased (up to 50% compared to conventional vibratory tube mills), due to the decrease of the ratio between kerb mass and payload, as well as the rational bearing load (Beenken, Gock & Kurrer, 1996), then it is increasingly used for fine grinding and pulverization of raw materials on an industrial scale (Baláž & Dutková, 2009;Godočíková et al., 2006). Thus, the BKBP was developed via eccentric vibratory milling in this study, and the effects of particle size on physicochemical, microstructural, cholesterol adsorption, and antioxidant properties of the resulting powders were investigated. The results are favorable for the development of value-added products using the BKBP. Materials Black kidney beans (P. vulgaris L.) were obtained from a local supermarket in Hefei, Anhui Province, China, and with a species authentication by Heilongjiang Crops Variety Examination Committee (Heilongjiang Province, China). Ferrozine, 2,2-diphenyl-1-picryhydrazyl radical (DPPH), ferrous sulfate, salicylic acid, and cholesterol were purchased from Sinopharm Chemical Reagents Co. (Shanghai, China). All other used chemicals were of analytical grade. 
Micronization processing of black kidney bean The dried black kidney beans were milled coarsely by a domestic disc-mill (DS-T200A model, Shanghai Dingshuai Electric Co., Ltd., Shanghai, China) for a 3 min discontinuous grinding, and then screened through 250-180 μm sieves. The resulting coarse samples were re-milled through an eccentric vibratory mill (XDW-6J model; Jinan Micro Machinery Co., Ltd., Shandong, China) for 10 min, and superfine powders with the particle sizes of 180-125, 125-75, 75-38 and <38 μm were then obtained via sieving. The eccentric vibratory mill consisted of cylindrical-like elastically suspended grinding pipes, and the frequency of the unbalanced drive was kept constant at 1,000 rpm during grinding. Circulating cold water was applied to maintain a low temperature. Particle size distribution and specific surface area analysis The particle size distribution of BKBP was analyzed via a laser diffraction particle size analyzer (Mastersizer 2000; Malvern Instruments Ltd., Worcestershire, UK). The samples were dispersed in ethanol before being measured, and the volume weighted mean diameter D [4,3], as well as the selected percentile points D 10, D 50, and D 90, which represent the diameters below which 10%, 50%, and 90% of the particle volume lies, respectively, were used to characterize the particle size distribution of the superfine powder. The specific surface area (m²/g) was also calculated based on the volume distribution by the particle size analyzer. Scanning electron microscopy analysis Morphological characterization of the BKBP particles was performed using a scanning electron microscope (SEM) (JSM-6490LV; JEOL Ltd., Tokyo, Japan) at an operating voltage of 20 kV with a working distance of 11 mm. Color analysis The color of the sample was detected via an automatic color difference meter (WB2000-IXA; Shanghai Exact Science Instrument Ltd., Shanghai, China) using the Hunter scale of L*, a*, and b* values as indicators. Water holding capacity and water retention capacity analyses The water holding capacity (WHC) was determined using the method of Zhao et al. (2010). The weights of cleaned centrifuge tubes (M 0) and dry BKBP samples (M 1) were measured, and the samples were then dispersed in water at a ratio of 0.05:1 (w/w) and incubated at 60 °C for 10, 20, 30, 40, 50, and 60 min, respectively. After centrifugation for 20 min at 5,000 rpm, the supernatant was removed, and the centrifuge tubes with the powder (M 3) were weighed. The WHC of BKBP was calculated as follows: Water retention capacity (WRC) was defined as the quantity of water that remains bound to the hydrated fiber following application of an external force. The samples (M 3) were dried at 105 °C for 2 h, and then weighed (M 4) again to calculate the WRC as follows: Thermal property analysis The thermal property was analyzed via the differential scanning calorimetry (DSC) method using a TA ultrasensitive differential scanning microcalorimeter (Model TA Q200; TA Instruments Co., New Castle, DE, USA). Eight milligrams of each sample were put into a hermetic aluminum pan and heated from 20 to 220 °C at a rate of 10 °C/min in a 50 mL/min nitrogen flow, using an empty aluminum pan as reference. Each curve obtained by the instrument was further analyzed via Universal Analysis 2000 software (TA Instruments Co., New Castle, DE, USA).
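The percentile descriptors reported by the laser diffraction analyzer (D 10, D 50, D 90 and the volume-weighted mean D [4,3]) can be reproduced from any volume-frequency distribution. The short sketch below shows the generic calculation with NumPy; the bin sizes and volume fractions are invented placeholder numbers, not measurements from this study.

```python
import numpy as np

# Placeholder particle-size distribution: bin centres (µm) and volume fractions.
size = np.array([5, 10, 20, 38, 75, 125, 180, 250], dtype=float)      # µm
vol_frac = np.array([0.02, 0.05, 0.10, 0.18, 0.25, 0.20, 0.12, 0.08])
vol_frac = vol_frac / vol_frac.sum()

# Cumulative undersize curve and the percentile diameters D10, D50, D90.
cum = np.cumsum(vol_frac)
d10, d50, d90 = (np.interp(p, cum, size) for p in (0.10, 0.50, 0.90))

# Volume-weighted mean diameter: D[4,3] = sum(volume fraction * diameter).
d43 = np.sum(vol_frac * size)

print(f"D10 = {d10:.1f} µm, D50 = {d50:.1f} µm, D90 = {d90:.1f} µm, D[4,3] = {d43:.1f} µm")
```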
Cholesterol adsorption capacity analysis The cholesterol adsorption capacity was expressed as the quantity of adsorbed cholesterol per gram of BKBP, which was estimated by the method of Chen et al. (2015b). Cholesterol solutions with different concentrations were prepared in glacial acetic acid. The BKBP was added to the cholesterol solution at a selected mass ratio, and then placed in a shaking water bath at 37 °C for 90 min at 90 rpm. At the end of adsorption, two mL of the supernatant were used for cholesterol estimation. The cholesterol adsorption capacity was calculated using the following formula: adsorption capacity (mg/g) = (ρ 0 − ρ) × V/m, where V represents the volume of the cholesterol solution, ρ 0 and ρ represent the concentrations of the cholesterol solution before and after adsorption, respectively, and m represents the weight of BKBP. Effects of the particle size, powder dosage, initial concentration of cholesterol, absorption time, and absorption temperature on the cholesterol adsorption capacity were evaluated. Antioxidant activity analysis The antioxidant activity was determined via the scavenging activities of DPPH and hydroxyl free radicals. Two milliliters of BKBP solution (five mg/mL) were mixed with 2.5 mL DPPH-ethanol solution (100 mM) for a 30 min reaction at 37 °C. Then, the mixture was centrifuged at 10,000 rpm for 10 min, and the absorbance of the supernatant (Abs sample) was recorded at 517 nm. Blank absorbance (Abs blank) was measured using methanol to replace the sample. Vitamin C (V C, five mg/mL) was used as positive control. The DPPH radical scavenging activity (%) was calculated using the equation [(Abs sample − Abs blank)/Abs blank] × 100% (Andrade et al., 2017). The hydroxyl radical scavenging activity (%) was estimated following a previously reported method (Zhao et al., 2015). Two milliliters of BKBP solution (five mg/mL) were used for testing. The reaction mixture solution was centrifuged at 10,000 rpm for 10 min to determine the absorbance of the supernatant at 510 nm. Methanol was applied to determine the blank absorbance (Abs blank), and the hydroxyl radical scavenging activity (%) of BKBP was calculated by [(Abs sample − Abs blank)/Abs blank] × 100%. In addition, the Fe 2+ chelating capacity was also measured. One milliliter of BKBP solution (five mg/mL) was mixed with 2 M FeCl 2 solution (0.1 mL) under addition of 0.2 mL of five mM ferrozine and was left standing for 10 min. The supernatant after centrifugation was recorded at 517 nm and the reaction mixture without sample was used as a blank (Abs blank); then, the Fe 2+ chelating activity (%) was obtained via the equation [(Abs sample − Abs blank)/Abs blank] × 100% (He et al., 2018). Statistical analysis All experiments were repeated and analyzed at least in triplicate. Results were expressed as the mean ± SD, and one-way analysis of variance was employed to determine the significant differences between the means at P < 0.05 using SPSS version 13.0 (SPSS Inc., Chicago, IL, USA). Particle properties and microphotographs of BKBP The particle size distribution and specific surface area of BKBP are presented in Table 1. With particle size decreasing, all cumulative undersize centiles (D 10, D 50, and D 90) of BKBP significantly (P < 0.05) decreased. D [4,3] values of the powder decreased from 226.658 to 24.835 μm for a particle size ranging from 250-180 to <38 μm (Table 1).
Furthermore, the specific surface area increased with the decrease of particle size, and the BKBP with the smallest particle size (<38 μm) showed the highest specific surface area of 0.520 m²/g, suggesting that the surface parameter of BKBP was negatively related to the projected size of the corresponding particle. The shape and surface morphology of BKBP were observed using SEM (Fig. 1). As the particle size decreased, it was possible to see the transition of the typical blocky shape (coarse powder) into short ones (Fig. 1C, 125-75 μm), until very small parts and fragments were achieved (Figs. 1D and 1E, <38 μm). From Figs. 1B-1E, with the improvement of mechanical force, the transformation of BKBP from an ordered structure to a disordered structure was clearly presented via the breakage of intermolecular bonds as well as the reduction of particle size. It was notable that Figs. 1D and 1E exhibited an increased aggregation of BKBP, due to the various shapes of black kidney bean particles resulting from the extensive milling combination of flattening, aggregation and fracture. Under the higher magnification (Figs. 1F-1J), it could be clearly seen that the particle surface tends to be flat and smooth with the size decreasing. Color As listed in Table 2, L* increased slightly (P < 0.05) when the BKBP size decreased from 250-180 to 75-38 μm, and no significant difference (P > 0.05) in lightness was found between the sample sizes of 75-38 μm and <38 μm. Furthermore, an increase in the a* value could be observed, but it was difficult to perceive visually due to the smaller variance. The b* value decreased from 30.654 to 15.805 with the BKBP size decreasing from 250-180 to 125-75 μm, while increasing to 35.653 at the particle size <38 μm. Note: The results were expressed as mean ± standard deviation. Data in the same column with different letters were significantly different (P < 0.05). Flow property To evaluate the flowability of BKBP, the integrative characteristics of the powder were analyzed. As the particle size decreased from 250-180 to <38 μm, the bulk density decreased from 0.439 to 0.364 g/mL, and the largest bulk density (0.439 g/mL) was found for the particle size of 250-180 μm (Table 2). In contrast, the tapped density of BKBP increased from 1.435 to 2.645 g/mL with the BKBP size decreasing from 250-180 to <38 μm. The values of tapped density were significantly higher than the bulk density. Moreover, the angle values of repose and slide decreased significantly (P < 0.05) with the reduction of particle size. The BKBP with a particle size of <38 μm had the lowest angles of repose (43.282°) and slide (33.259°). Hydration property The hydration property of BKBP was determined by WHC and WRC assays. With the reduction of particle sizes from 250-180 to <38 μm, the WHC values of BKBP increased, ranging from 5.98 to 6.26 g/g, 6.03 to 6.87 g/g, 6.18 to 7.41 g/g, 6.28 to 7.86 g/g, 6.31 to 7.81 g/g, and 6.44 to 8.03 g/g for 10, 20, 30, 40, 50, and 60 min of soaking (Fig. 2A), respectively. A similar tendency was also found in the WRC assay for the BKBP under the same soaking conditions (Fig. 2B). Thus, the hydration property of the BKBP with a particle size of <38 μm was higher. It was also worth mentioning that the WHC values of different sized BKBP increased slowly during the initial 10 min of soaking. Thermal property The thermal property of BKBP with different sizes was further analyzed via DSC curves (Fig. 3).
Compared to the endothermic peaks (T m) observed in the curve of the coarse powder, the peak around 97.07 °C disappeared in the analyses of superfine powder with sizes of 180-125, 125-75, 75-38, and <38 μm. Notably, an intense endothermic peak was found from 128.56 to 178.10 °C in all curves, and the peak temperatures exhibited a significant increasing tendency with the decreasing of particle size. Cholesterol adsorption of BKBP As shown in Fig. 4A, the adsorption capacity for cholesterol significantly increased with the reduction of particle size. The BKBP with a size of <38 μm showed the strongest adsorption capacity (27.27 mg/g); thus, it was chosen for further evaluation. The cholesterol adsorption capacity decreased dramatically from 26.95 to 15.51 mg/g with adsorbent dosage increasing (Fig. 4B). With the increase in initial concentration of cholesterol, the adsorption capacity increased (Fig. 4C). Furthermore, the cholesterol adsorption capacity for different adsorption times (min) and temperatures (°C) is shown in Figs. 4D and 4E, respectively. The adsorption increased quickly with increasing time from 10 to 60 min, reaching a plateau in the following 60-150 min (Fig. 4D); nevertheless, the cholesterol adsorption capacity decreased when temperature increased (Fig. 4E). Adsorption isotherms analysis The relationship between the adsorption capacity (q e) and the concentration of cholesterol at equilibrium (C e) was further analyzed via fitting to the Langmuir and Freundlich isotherm models, respectively (Ngah & Hanafiah, 2008). The Langmuir model describes a monolayer adsorption process, which assumes monolayer adsorption onto an adsorbent surface. The linear equation is given by 1/q e = [1/(K L × q max)] (1/C e) + 1/q max, where q max represents the maximum adsorption capacity (mg/g), C e represents the concentration of adsorbate (mg/mL) at equilibrium, and K L represents a constant related to the energy of adsorption, which quantitatively reflects the affinity between adsorbent and adsorbate. The maximum adsorption capacity of cholesterol adsorption was calculated as 53.476 mg/g for BKBP (Table 3). Moreover, the essential feature of the Langmuir model was expressed with a dimensionless constant separation factor (R L), which was calculated using the equation R L = 1/(1 + K L × C 0), where C 0 represents the initial cholesterol concentration (mg/mL). Therefore, the R L was 0.370-0.804 for the initial cholesterol concentration ranging from 0.25 to 1.50 mg/mL, indicating a favorable adsorption of cholesterol using the BKBP (Fig. 4F). The Freundlich isotherm model was considered to describe multilayer adsorption and could be suitable for highly heterogeneous surfaces; it can be expressed with the linear equation lg q e = (lg C e)/n + lg K F, where K F and n represent the Freundlich constants indicative of the adsorption capacity and intensity, respectively. The value of 1/n determined via the Freundlich isotherm was 0.697 (1/n < 1) (Table 3), confirming the high adsorption efficiency of BKBP. Antioxidant activity analysis The antioxidant activity of the BKBP with different particle sizes was evaluated by different in vitro assays (Fig. 5).
Regarding radical scavenging activity using the DPPH assay, the finer powders with particle sizes of 75-38 and <38 μm exhibited higher DPPH scavenging activities of 87.30% ± 1.77% and 89.40% ± 0.81%, respectively (Fig. 5A). As shown in Fig. 5B, the powder with particle size of 75-38 μm exhibited the strongest hydroxyl radical scavenging activity (88.92% ± 1.38%) among all tested samples, while the BKBP with (Fig. 5B). Furthermore, an increase in ferrous ion-chelating effects was observed when the particle size decreased from 250-180 to 75-38 μm, and the BKBP with a size of 75-38 μm exhibited the strongest chelating activity of 81.16% ± 1.72% (Fig. 5C). DISCUSSION Taking into account the nutritional and economic aspects of black kidney beans, fortifying varied bean powders appears to be promising for the production of health food (Lee, Hung & Chou, 2008). Conventional milling methods have generally been used in the pulverization and research of kidney bean (Anton, Fulcher & Arntfield, 2009; Malav et al., 2016). However, until now, systematic studies on superfine kidney bean powder are still limited. In the present study, BKBP with sizes between 180 and <38 μm was prepared via the eccentric vibratory mill. Elliptical, circular and linear vibrations could be generated via the eccentric vibratory mill instead of homogeneous circular vibrations, which would increase the amplitude of the individual grinding media and increase the rotational speed of the grinding media filling (Gock & Kurrer, 1999). Consequently, as shown in the particle size and SEM analyses (Table 1; Fig. 1), BKBPs were efficiently broken into smaller fractions, and the shape and original structure of particles were changed to be smoother by the inhomogeneous impact force. Therefore, the physical-chemical properties of BKBP would be altered with the sieving of special size parameters (250-180, 180-125, 125-75, 75-38, and <38 μm), confirming the importance of micronization equipment on the fluidity, dissolution, and surface activity of powders (Muttakin, Kim & Lee, 2015). The color parameters of BKBP (Table 2) depicted their relations to particle size and morphology. The increase of L* values was as expected with the reduction of particle size, due to the increase in surface area, which would allow more reflection of light. Meanwhile, the loss of pigment and the exposure of internal materials during superfine grinding also could contribute to the improvement of brightness. Thus, the BKBP with a size of <38 μm was brighter, which might be favorable for applications as food ingredients. The decrease of bulk density and increase of tapped density of fine powders (Table 2) exhibited the enhancement of inter-particulate interactions, which indicated the improvement of flowability of the BKBP. Moreover, according to Table 2, the decreasing angle of repose and slide of superfine BKBP with smaller size also might indicate the increase of flowability (Zhao et al., 2010). But the result was not in agreement with the investigation of Lee & Yoon (2015), who found that the soybean powders with the smallest particle size (250-150 μm) showed poor flowability because of cohesion. However, Fu et al. (2012) reported that powder shape significantly affected the flow characteristics of the powder, and stated that more circular and smooth shaped particles had the higher flowability, which was consistent with our results for the morphology analysis (Fig. 1), and confirmed the efficiency of the eccentric vibratory mill.
Thus, the BKBP with a size of <38 μm had a larger number of particles per unit weight and achieved the higher flowability, which would be beneficial for filling tablets or capsule products to achieve a homogeneous state when mixed with other additives. Furthermore, the decrease of particle size has a substantial effect on the hydration properties of BKBP. Particles with a size of <38 μm exhibited the highest hydratability during soaking (Fig. 2), which was higher than previous data on soybean flours (4.1 g/g) (Heywood et al., 2002) and superfine wheat (Triticum aestivum L.) bran powder (7.0 g/g) (He et al., 2018). Superfine grinding treatment might result in surface property changes of the BKBP, such as the increase of surface energy, greater surface area, and the exposure of hydrophilic groups, which led to an easy integration with water (Zhao et al., 2009a). Additionally, the high content of protein (20-30%) within the BKBP could also hold water through weak forces, such as hydrogen bonds (Shi et al., 2016). Similar results were also confirmed by Chen et al. (2015a) and Zhao et al. (2009a, 2010). In contrast, Raghavendra et al. (2006) found that the hydration properties of coconut dietary fiber were decreased when its particle size was decreased from 550 to 390 μm. It has been reported that grinding the dry fibrous material to fine powder adversely affected its WHC and swelling capacity, presumably attributed to the collapse of the fiber matrix by milling (Kethireddipalli et al., 2002). Hence, various physicochemical characteristics would be discovered because of the diversity of materials and grinding treatments. Eccentric vibratory milling treatment might result in damage to the BKBP structure, and the particle size would be too small to compensate for differences in the hydration properties. High hydration capacities of BKBP would increase the affinity between the powder and water, and might keep more water in the inner part (He et al., 2018), which would lead to the enhancement of evaporation energy, and exhibited an improved thermal stability (Fig. 3). Therefore, BKBP with the smaller size (such as <38 μm) could be more suitable for water retention, and might thus be more potentially applied as a functional ingredient to prevent syneresis and improve textural properties, as well as be utilized in higher-temperature processing, such as baking or steaming. It is well known that surplus cholesterol in the human body forms an initial pathogenic factor of arteriosclerosis, resulting in apoplectic stroke, angina pectoris, and cardiosclerosis (Soh, Kim & Lee, 2003). Food material as a biosorbent for cholesterol reducers/extractors is of growing interest, due to many advantages, such as natural origin, wide availability, healthiness, and nontoxicity. Good cholesterol binding capacities have been found using the cereal brans, such as rice bran, oat bran, wheat bran, and corn bran (Kahlon & Chow, 2000). Adsorption properties of four legume seeds (green lentil, white small bean, yellow pea, and yellow soybean) have been evaluated by Górecka, Korczak & Flaczyk (2003), and grinding degree was found to significantly influence the adsorption properties. In the present study, superfine BKBP was found to have a high cholesterol adsorption capacity by in vitro assays, which was probably correlated with their high contents of dietary fiber, especially hemicelluloses and lignin (Górecka, Korczak & Flaczyk, 2003); thus it could be recommended in lipid disorder prophylaxis.
Particle size, powder dosage, initial concentration of cholesterol, adsorption time, and adsorption temperature were all found to significantly affect the cholesterol adsorption of BKBP (Fig. 4). Decreasing particle size could effectively improve the cholesterol adsorption capacity of BKBP due to the increase of specific surface area, which leads to a larger contact area with cholesterol and a shorter adsorption path (Chen et al., 2015b). Interestingly, a relatively low temperature was favorable for cholesterol adsorption; thus, the cholesterol adsorption process using BKBP should be controlled below 18 °C for a 60 min reaction. Furthermore, the maximal adsorption capacity (53.476 mg/g) of BKBP was successfully predicted by Langmuir adsorption isotherm analysis (Table 3); this value was higher than that of okra superfine powder (around 18.75 mg/g; Chen et al., 2015b) and carrot pomace insoluble dietary fiber (around 30 mg/g; Yu et al., 2018), but lower than that of thyme (Thymus vulgaris L.) powder (84.74 mg/g; Salehi et al., 2018). Besides, the value of 1/n (0.697) obtained from the Freundlich model was less than unity, indicating the favorability of the adsorption. The two fitting models suggested that BKBP would be effective as a potential adsorbent (an illustrative fitting sketch for these two models is given below). Therefore, the BKBP could be applied in functional food manufacturing, such as biscuits and other health products, to reduce calories and cholesterol without loss of physical and structural properties (Prokopov, 2014). It has been reported that plant seed powders might have a hypolipidemic effect in diabetic patients (Kassaian et al., 2009). Thus, superfine BKBP might act as a novel nutraceutical additive/excipient in tablets, such as simvastatin, to provide synergistic effects for lowering serum cholesterol levels (Swami et al., 2010). It would also be interesting to employ the BKBP as a potential biosorption material in the development of low-cholesterol milk or milk beverages (Oliveira et al., 2015), and even in extracorporeal perfusion to rapidly reduce the lipid content of the blood (Salehi et al., 2018). In addition, multiple antioxidant assays, including DPPH and hydroxyl radical scavenging activities as well as ferrous ion-chelating effects, were carried out in the experiments, and particle size showed significant effects on these activities (Fig. 5). The capability of the stable free radical 2,2-diphenyl-1-picrylhydrazyl (DPPH) to react with H-donors, including phenolics in natural materials, can be evaluated by the DPPH test in the visible region after a fixed incubation time (Roginsky & Lissi, 2005; López-Alarcón & Denicola, 2013). In this study, higher DPPH scavenging activity was obtained with the decrease of BKBP particle size (Fig. 5A). Meanwhile, the powder with a particle size of 75-38 μm exhibited the strongest hydroxyl radical scavenging activity (Fig. 5B). Hydroxyl radicals (·OH) are the most commonly formed reactive oxygen species and have been linked to many clinical disorders, such as brain ischemia, cardiovascular disease, and carcinogenesis (Hu, Chen & Ni, 2012). Several reports have also indicated that ·OH scavenging effects could be related to hypoglycemic activity (Chen et al., 2009; Chen, Zhang & Xie, 2005). As shown in Fig. 5C, the ferrous ion (Fe2+)-chelating activity of the BKBP was favorably affected by the reduction of particle size, which would help prevent the generation of free radicals and oxyradicals as well as lipid peroxidation (Singh & Rajini, 2004).
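The maximal adsorption capacity and the 1/n value quoted above come from fitting the Langmuir and Freundlich isotherm models to measured equilibrium data. As a minimal illustration of how such a fit might be carried out (this is not the authors' procedure), the sketch below fits both models to placeholder data; the equilibrium concentrations, adsorbed amounts, and function names are assumptions introduced here for illustration only.

```python
# Illustrative (hypothetical) fit of Langmuir and Freundlich isotherms to
# equilibrium adsorption data; Ce = equilibrium cholesterol concentration,
# qe = amount adsorbed per gram of powder. Values below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([0.1, 0.2, 0.4, 0.8, 1.2, 1.6])        # mg/mL (placeholder data)
qe = np.array([12.0, 20.5, 30.8, 41.2, 46.0, 48.5])  # mg/g  (placeholder data)

def langmuir(Ce, q_max, K_L):
    # q_e = q_max * K_L * C_e / (1 + K_L * C_e)
    return q_max * K_L * Ce / (1.0 + K_L * Ce)

def freundlich(Ce, K_F, n_inv):
    # q_e = K_F * C_e**(1/n); n_inv stands for 1/n
    return K_F * Ce ** n_inv

(q_max, K_L), _ = curve_fit(langmuir, Ce, qe, p0=[50.0, 1.0])
(K_F, n_inv), _ = curve_fit(freundlich, Ce, qe, p0=[30.0, 0.5])

print(f"Langmuir:   q_max = {q_max:.2f} mg/g, K_L = {K_L:.3f}")
print(f"Freundlich: K_F = {K_F:.2f}, 1/n = {n_inv:.3f}  (favorable if 1/n < 1)")
```

With a real data set, a fitted 1/n below unity would be read as favorable adsorption, consistent with the interpretation given above.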
According to previous studies, polyphenols and flavonoids are the main antioxidant compounds present in kidney bean (P. vulgaris L.), occurring in both free and bound forms (Cardador-Martínez, Loarca-Piña & Oomah, 2002; Malav et al., 2016). Meanwhile, as a black-coated bean, a high accumulation of anthocyanins related to antioxidant activity is found in the epidermal palisade layer, reaching up to 13,955 mg CGE/kg (Žilić et al., 2013). Therefore, the increase of antioxidant availability in the BKBP with the smaller particle sizes (such as 75-38 and <38 μm) might be attributed to the fact that finer particles would be beneficial for the dissolution of free-form antioxidant compounds. In addition, superfine grinding broke the structure of the protein and fiber matrix (as shown in the SEM images), and thus increased the availability of bound-form antioxidant compounds linked to or embedded in the matrix. However, as shown in Figs. 5B and 5C, compared with the 75-38 μm sample, the antioxidant activities of BKBP with a size of <38 μm showed a slight decrease (P > 0.05), which might be attributed to the inevitable mechanical impact and heating effect during superfine grinding, which can alter or disrupt antioxidant compounds within the BKBP. Therefore, controlling the degree of grinding is important, as it influences the functional properties of the powders, and superfine BKBP with a size of 75-38 μm exhibited potential for application in antioxidative products. CONCLUSIONS Fine BKBP with a smooth surface was obtained using eccentric vibratory milling, and the application potential of BKBP improved with the decrease of particle size. The BKBP with a particle size of <38 μm exhibited good flowability, hydration properties, and thermal stability. Adsorption isotherm analysis highlighted the promising potential of the superfine BKBP with a particle size of <38 μm as a cholesterol sorbent or an alternative source to adsorb harmful lipids. Moreover, compared with the other particle sizes, the superfine BKBP with sizes of <75 μm showed improved antioxidant activities in the free radical scavenging assays. Overall, the BKBP prepared by the eccentric vibratory mill with a particle size of <38 μm showed great potential in the food industry and pharmaceutical field for health product development. In the future, in vivo evaluations of the BKBP should be carried out, and the BKBP produced using the eccentric vibratory mill should be further evaluated under various processing conditions to better understand the attributes of the grinding technology.
6,637.4
2019-02-04T00:00:00.000
[ "Agricultural and Food Sciences", "Materials Science" ]
Friends in high places: Interspecific grooming between chimpanzees and primate prey species in Budongo Forest While cases of interspecies grooming have been reported in primates, no comprehensive cross-site review has been published about this behavior in great apes. Only a few recorded observations of interspecies grooming events between chimpanzees and other primate species have been reported in the wild, all of which have thus far been in Uganda. Here, we review all interspecies grooming events recorded for the Sonso community chimpanzees in Budongo Forest Reserve, Uganda, adding five new observations to the single, previously reported event from this community. A new case of interspecies play involving three juvenile male chimpanzees and a red-tailed monkey is also detailed. All events took place between 1993 and 2021. In all of the six interspecific grooming events from Budongo, the ‘groomer’ was a female chimpanzee between the ages of 4–6 years, and the ‘recipient’ was a member of the genus Cercopithecus. In five of these events, chimpanzee groomers played with the tail of their interspecific grooming partners, and except for one case, initiated the interaction. In three cases, chimpanzee groomers smelled their fingers after touching distinct parts of the receiver’s body. While a single function of chimpanzee interspecies grooming remains difficult to determine from these results, our review outlines and assesses some hypotheses for the general function of this behavior, as well as some of the costs and benefits for both the chimpanzee groomers and their sympatric interspecific receivers. As allogrooming is a universal behavior in chimpanzees, investigating the ultimate and proximate drivers of chimpanzee interspecies grooming may reveal further functions of allogrooming in our closest living relatives, and help us to better understand how chimpanzees distinguish between affiliative and agonistic species and contexts. Supplementary Information The online version contains supplementary material available at 10.1007/s10329-023-01053-0. Introduction Anecdotal observations suggest that many chimpanzee communities engage with sympatric primate species through a variety of interactions ranging from agonistic to affiliative-with community-specific and individual variation in behavioral responses (Teleki 1973;Hobaiter et al. 2017). Aggressive agonistic interspecific interactions appear to be the best documented, including interactions driven by competition (Morris and Goodall 1977;Matsumoto-Oda 2000), predation through hunting (Nishida et al. 1979;Boesch and Boesch 1989;Stanford et al. 1994;Uehara 1997;Mitani and Watts 2001;Teelen 2008;Newton-Fisher et al. 2002;Hobaiter et al. 2017), or possibly a combination thereof (e.g., during territorial boundary patrols; Southern et al. 2021). Both types of interactions can include chasing, physical contact (Brown and Crofoot 2013), or lethal aggression (Southern et al. 2021). Less active forms of aggressive agonism are also common, including facial expressions, threatening vocalizations, or displays (Brown and Crofoot 2013). Potentially neutral interactions, including co-feeding, have also been reported (Hosaka and Ihobe 2015), in which chimpanzees were observed ignoring prey species in 1 3 feeding contexts, despite the prey's proximity and capturability. Affiliative interspecific interactions in chimpanzees have also been observed in the wild including play (Goodall 1986;Teleki 1973), and grooming (see Tsutaya et al. 
2018;Bakuneeta 1996;John and Reynolds 1997). Chimpanzees share their home ranges with multiple fauna, and interactions between chimpanzees and sympatric species have been widely reported across African field sites (e.g., Hosaka and Ihobe 2015;Hockings et al. 2012). Across many sites "play bouts" have been reported between chimpanzees and sympatric species. However, in most cases, these interspecific play bouts appear non-mutual, and often involve the chimpanzee 'player' using interspecific 'playmates' as objects. Many of these reported cases result in the death of the 'playmate'. In the wild, young chimpanzees in Taï Forest, Ivory Coast, have been observed engaging in non-mutual object play with both duikers (Cephalophus sp.; Boesch and Boesch 1989) and flying squirrels, (Anomalurus derbianus; Boesch and Boesch-Achermann 2000). Duikers are also occasionally preyed upon by these apes although squirrels appear to be neglected by Taï chimpanzees (Boesch and Boesch 1989). In at least one of these cases, the nonmutual play bout was reported to have ended with the death of the recipient (Boesch and Boesch 1989). Chimpanzees in Bossou, Guinea, have also been observed catching and playing with western tree hyraxes (Dendrohyrax dorsalis; Hirata et al. 2001) and African wood-owls (Ciccaba woodfordi; Carvalho et al. 2010), with no attempt at ingestion. In one of the cases, Hirata et al. (2001) observed an adolescent female chimpanzee carrying a dead hyrax that was killed by other members of the group, for 15 h, sleeping with it and grooming the corpse. Of the two chimpanzee-hyrax interactions (Hirata et al. 2001), one hyrax survived. Neither owl used for play survived. Of the other great apes, this kind of interspecies non-mutual play has also been reported in bonobos (Pan paniscus). A recently published anecdote reported a non-lethal interaction between a bonobo and a duiker at Wamba (Yokoyama 2021). In this event, an adult female bonobo was seen carrying a living duiker around for 30 min without injuring it. The authors describe the behavior of the bonobo toward the duiker as characteristic of play but note that elsewhere duikers are a bonobo prey species. Of the reported affiliative interspecific interactions between primates, affiliative interactions, particularly cross-species grooming events, remain relatively rare. The drivers and functions of this behavior are undetermined. Interspecies grooming events have been observed between a wide number of primate species in captivity and in the wild (summarized in Table 1). In the wild, outside of great apes, interspecies grooming events involving at least one Vasava et al. (2013) primate participant have been observed between several species and can include non-primate recipients. Amongst wild great apes, interspecific grooming events with other primate species have only been reported in chimpanzees (Tsutaya et al. 2018;Bakuneeta 1996;John and Reynolds 1997) and bonobos (Sabater Pi et al. 1993;Ihobe 1990). A few cases of affiliative interspecific interactions involving chimpanzees and non-primate species have been described, both in the wild (Hockings et al. 2012) and in captivity (Ross et al. 2009), however, a cross-site compilation of reported interspecific grooming events in chimpanzees has not yet been published. Anecdotal reports of interspecies grooming between great apes and other primate species are notably underrepresented in the primatological literature (though see Tsutaya et al. 
2018;Bakuneeta 1996;John and Reynolds 1997), however, there are likely many other observed cases of interspecies grooming including primate species that have remained unpublished, leading to an underreporting of this behavior. To date, published anecdotes of chimpanzee interspecies grooming events are restricted to three reports: from Kalinzu Forest Reserve, Western Uganda (Tsutaya et al. 2018), Kaniyo-Pabidi community (Bakuneeta 1996) in Budongo Forest, Uganda and from the Sonso community (John and Reynolds 1997) also in Budongo Forest, Uganda. These events are summarized in Table 2. At Kalinzu, four cases of interspecies grooming have been reported, two of which involved female chimpanzees grooming adult, male, blue monkeys, and the other two involved female chimpanzees grooming adult, male, red-tailed monkeys (Tsutaya et al. 2018). In all cases at this site, an adult, male, monkey recipient approached and solicited grooming from a female chimpanzee groomer. In no case did the monkey recipient reciprocate. In three cases, monkey recipients solicited grooming from mother-infant chimpanzee pairs, and in the fourth case, the monkey approached a juvenile female who had been traveling with a nulliparous adult female. At Kaniyo-Pabidi, Bakuneeta (1996) observed an unidentified monkey following a group of chimpanzees. The monkey was observed feeding with and grooming the chimpanzees in this group. The monkey was also groomed by members of the group. The case recorded from the Sonso community in Budongo (John and Reynolds 1997) discussed later as observation 1 involves an adult, monkey recipient and a juvenile female chimpanzee groomer. Amongst the other great apes, only bonobos (Pan paniscus) have been observed engaging in affiliative relationships with other primate species in the wild. In Wamba, DRC, Ihobe (1997) reported that guenons, including red-tailed monkeys (C. ascanius) and Wolf's mona monkey (C. wolfi), were seen approaching bonobos without initiating direct contact, traveling, feeding, and resting together. In one case, also reported from the site, a young colobus monkey (Colobus badius) followed a group of bonobos for 18 consecutive days (Ihobe 1997). Bonobos are also the only other great apes who have been observed interspecies grooming with sympatric primates (i.e., Ihobe 1990Ihobe , 1997Sabater Pi et al. 1993;Yokoyama 2021). Bonobos from the Lilungu region of the Democratic Republic of Congo (DRC) engaged in affiliative and social activity with captured young colobus monkeys (Colobus angolensis) and red-tailed monkeys (Cercopithecus ascanius;Sabater Pi et al. 1993). In both reported cases involving interspecific interactions between bonobos and red-tailed monkeys (Sabater Pi et al. 1993), the bonobos groomed the red-tailed monkeys before subsequently killing them. In Wamba, DRC, in at least two cases, adult male colobus monkeys (Colobus badius) were also observed grooming bonobos (Ihobe 1990). Like chimpanzees, bonobos also hunt mammal species for meat, including sympatric primates, although hunting of other primates is relatively rare (but see Surbeck and Hohmann 2008). As far as the authors know, there are no published cases of interspecific grooming between wild great ape species (but see Sanz et al. 2022 for recent evidence of chimpanzee-gorilla play interactions). 
In primates, allogrooming is defined as "caregiving through physical contact, typically where one animal uses its hands, mouth, or other part of its body to touch another animal" and usually occurs between members of the same species (Russell 2018: pp. 1). Allogrooming involves a minimum of two members of the same species (a groomer and a recipient) (Lee et al. 2021) and can be unidirectional and/or mutual. In chimpanzees, polyadic grooming is also common, occurring among triads or larger chains (Goodall 1986; Nakamura 2000; Girard-Buttoz et al. 2020). Allogrooming in primates has been shown to be multifunctional (Spruijt et al. 1992), allowing for the establishment and maintenance of social bonds (Lehmann et al. 2007) between kin (Schino and Aureli 2010) and nonkin conspecifics (Dunbar 1991; Goosen 1981; Crockford et al. 2013). Allogrooming also appears to improve hygiene by reducing external parasite loads in recipients (Akinyi et al. 2013; Mooring et al. 2004; Zamma 2002; Schino et al. 1988; Keverne et al. 1989; Tanaka and Takefushi 1993; Aureli et al. 1999; Radford 2012). Recipients of grooming can also benefit from stress reduction (Boccia et al. 1989; Shutt et al. 2007; Maestripieri et al. 1992; Schino et al. 1996) and thermoregulation (McFarland et al. 2016). However, allogrooming also has costs, including depletion of energetic budgets and opportunity costs (Dunbar 1992), potential proximity to aggressive conspecifics (Schino and Alessandrini 2015), and exposure to ectoparasites and infective stage endoparasites (Hernandez and Sukhdeo 1995; Veà et al. 1999; MacIntosh et al. 2012; Russell and Phelps 2013; Lee et al. 2021). While it is likely that interspecific grooming bouts also have benefits and costs, there are also likely fewer mutualistic advantages. However, site-specific anecdotes suggest possible explanations for this unusual behavior. While interspecies grooming could be a form of interspecies play for the chimpanzee groomers, for clarity, this paper will draw a distinction between interspecific 'grooming events', in which grooming appears to be the primary goal of the affiliative interaction, and interspecific 'play events', which include varied non-aggressive behaviors such as chasing, slapping, and non-predatory physical contact. We discuss seven events, six observations of interspecies social grooming, and one of interspecies play in the Budongo Forest recorded between 1996 and 2021. These occurred between East African chimpanzees (Pan troglodytes schweinfurthii) from the Sonso community and two species of the genus Cercopithecus. Five out of six of the reported interspecies grooming events and the interspecies play event occurred between chimpanzees and red-tailed monkeys (C. ascanius), while one interspecies grooming event occurred between a chimpanzee and a blue monkey (C. mitis). Both species of Cercopithecus are also known prey species for this chimpanzee community (Hobaiter et al. 2017; Newton-Fisher et al. 2002). Study site and subjects The Budongo Forest Reserve is a semi-deciduous tropical rain forest consisting of 793 km² of protected forest and grassland, located along the western Rift Valley in Uganda. The Budongo Forest is a medium-altitude rainforest (~ 1100 m) with high annual rainfall (~ 1500 mm per year). A dry season occurs between December-March, followed by another, even drier season during June-August (Newton-Fisher 1999). The forest contains a population of approximately 600 East African chimpanzees.
There are two habituated chimpanzee communities: the Sonso community (since 1990) and the Waibira community (since 2011). In addition to chimpanzees, four other species of primate are regularly observed within the Sonso and Waibira home ranges, including Olive Baboons (Papio anubis), Blue Monkeys (Cercopithecus mitis), Red-tailed monkeys (Cercopithecus ascanius), and Black and White Colobus monkeys (Colobus guereza). The six observations recorded in this study took place in the Sonso community. At the end of the observation period in 2021, the community was considered a typical size (~ 69 individuals; Wilson et al. 2014) and had a typical female-biased sex ratio among mature individuals (M:F; 1:1.7). Ethical note All data collection in this study were observational and adhered to the International Primatological Society's Code of Best Practice for Field Primatology (Riley et al. 2014). Researchers adhered to all applicable international, national, and institutional guidelines for the care of animals. Research was approved by the Uganda Wildlife Authority (UWA) and the Uganda National Council for Science and Technology (UNCST). All work met the ethical standards of the Budongo Conservation Field Station where the observations were made. The authors declare that they have no conflicts of interest. Data availability Video of one of these events (observation 6: 9/2021) is available in the supplementary materials. Data collection Researchers and field assistants (hereby referred to as field colleagues) follow chimpanzees in Sonso daily from 07:00 to 16:30. Long-term data collection, recorded by field colleagues, includes focal individual activity and party composition taken on a 15-min scan basis. In addition, when unusual events occur, event details are recorded into the station logbook. Types of events added to these books include (but are not limited to) births, deaths, intercommunity killings, respiratory disease outbreaks, unusual feeding behaviors, and hunts. As far as the authors know, there were no major gaps in data collection. However, it is possible that not all interspecies grooming events were recorded, as this behavior may not have always been considered a behavior of interest by past researchers or field colleagues. Results Most of the observations analyzed below come from in the Sonso logbook, which contains events dating back to 1993. Observation 1 was previously published by John and Reynolds (1997). Observation 5 was not written down due to data collection disruption during the COVID-19 pandemic. This event was later transcribed post hoc from GM's field notebook. Table 3 summarizes all observations of interspecific grooming and interspecific play events recorded thus far amongst members of the Sonso community in Budongo Forest. Unfortunately, neither interspecific interaction nor opportunities for interspecific interaction are systematically recorded in the longterm data, making it impossible to calculate what proportion of interspecies group events result in affiliative (or agonistic) interspecies interactions. Observation 1: September 1996, Gonza grooms a red-tailed monkey On September 4, 1996, a group of chimpanzees, Musa (adult male), Kewaya (adult female), Zimba (adult female), and her offspring Gonza (sub-adult female) were observed together, feeding in different trees approximately 7-15 m apart. At 08:42, Gonza was seen alone in the southwest of a Khaya anotheca (KA) tree watching an adult red-tailed monkey who was also resting in the same tree. 
Gonza approached the monkey until she was ~ 3 m away, and then shook a branch in the monkey's direction. The monkey, however, remained resting and did not move or appear agitated by this display. Gonza repeated the branch shaking behavior three times and then moved closer to the monkey, who was facing away from her. Gonza grabbed the monkey's tail and started to shake it, in a manner that appeared to be playful, folding the tail around her neck and then shaking it again. This lasted for ~ 2 min. At 08:47, Gonza attempted to groom the monkey below the anus, which the monkey seemed to welcome, positioning its legs to give Gonza access. She groomed the monkey under the abdomen, chest, and back, interspersing grooming with play-like behaviors, including hitting the monkey's sides, and pulling the legs. At one point, Gonza Observation reported by Geresomu Muhumuza Observation 3: January 2006, Karo grooms a red-tailed monkey On the morning of January 3, 2006, Karo, a juvenile female, was observed grooming a red-tailed monkey. A group of at least 14 chimpanzees were feeding on the fruits of Ficus sur, including seven adult males, six adult females, and one sub-adult female. A red-tailed monkey joined them. The chimpanzees were high up above the monkey who began feeding below them. At 08:07, Karo approached the redtailed monkey who laid down and presented his face to Karo, who then groomed him. At 09:09, Kalema, (adult female) approached Karo and the monkey. In response, the monkey moved approximately 6 m away, where he stopped and continued to feed. At 09:10, Karo once again approached the monkey and resumed grooming him. At 09:12, Musa (adult male) approached the monkey, and the monkey jumped out of the Ficus sur tree and into another tree nearby. Observation 4: December 2007, Karo grooms a red-tailed monkey On December 2, 2007, at 09:32, Karo approached a lone red-tailed monkey who was resting in a Broussonetia papyrifera tree. This occurred directly following a disturbance caused by a crown eagle flying overhead, which had occurred at 09:13. Nearby colobus monkeys, blue monkeys, and red-tailed monkeys, upon seeing the eagle, had all begun vocalizing and dispersed in different directions. The lone red-tailed monkey, however, had remained in the tree. Karo approached the monkey and started playing with his tail. She then groomed him. Karo also inspected his backside and testes with her finger, smelled it, and then then wiped the finger on a branch. When Kalema (adult female) who was nearby started to leave, Karo ended the grooming bout and followed her. Kalema and Karo moved southwest to join the rest of the group ~ 200 m away. The red-tailed monkey moved off alone. There were no other red-tailed monkeys around (see Figs. 1 and 2). Observation reported by Catherine Hobaiter and Amati Stephen Observation 5: April 2021, Ishe grooms a red-tailed monkey In late April (exact date and time unknown) Ishe, an infant female chimpanzee, was observed grooming an adult, male red-tailed monkey. Ishe and the monkey were both sitting in a mango tree by the abandoned schoolhouse. The monkey began moving closer to Ishe, presenting his head. When she began grooming him, he turned to the side, and she groomed him there as well. Then he turned and presented his back. Ishe appeared interested in his tail and rolled it around her own neck. Irene (adult female), Ishe's mother, moved closer to the pair, and the monkey ran away. The grooming bout's duration was not recorded. 
Observation reported by Geresomu Muhumuza Observation 6: September 2021, Ishe grooms a red-tailed monkey and Dembe plays with his tail On September 3, 2021, Ishe groomed a red-tailed monkey (See Supplementary Materials). At 10:05, Ishe (infant female) and Dembe (infant female) were observed in a Croton sylvaticus tree (CSY), eating fruits, while their mothers (Irene and Deli) remained in the nearby Ficus variifolia (FVR). At 10:18, an adult male red-tailed monkey crossed into the CSY. Ishe approached him cautiously and then turned to present her back to him. She then turned and appeared to groom the monkey. The monkey sat upright and then turned his back to her. She groomed the back of his hind legs. While Ishe was doing this, Dembe approached but stayed behind Ishe, and then retreated. The monkey moved higher in the tree and lay down. Dembe approached again and touched the monkey's neck, then smelled her hand. This happened twice. Dembe retreated and the monkey stood quadrupedally, presenting his backside to Ishe. Dembe began to groom Ishe. The monkey turned to face Ishe again and Ishe put her hand out, moving her fingers in a beckoning motion to the monkey. Ishe turned her back to the monkey and he crossed over to her but did not groom her. He then moved off but remained close by. The red-tailed monkey did not appear scared of Ishe or Dembe. Dembe seemed hesitant about approaching the monkey but appeared to gain confidence after watching Ishe. After the red-tailed monkey moved, the two infants continued feeding. While they fed, Ishe shook a branch at the monkey a couple more times. A few minutes later, Ishe approached him again, shaking a branch in his direction. The monkey continued feeding and moved a few meters below. Dembe came to join Ishe. Dembe moved closer to the monkey, whose tail was extended upward toward her. Dembe extended a hand and grabbed the monkey's tail, slightly swinging the tail and pulling it for around 10 s, while the monkey continued feeding. At 10:32, the monkey moved away and Dembe went to join Ishe. At 10:35, Dembe moved out of the tree, and Ishe remained with the monkey. She stomped on the branch she was sitting on at 10:35, and the monkey did not react. Ishe then crossed and connected back into the FVR. No other redtailed monkeys were seen or heard during the observation (see Figs. 3 and 4). Observation 7: October 2017, Kefa, Muhumuza, and Kaija engage in interspecific play with a red-tailed monkey At 09:36, in blocks 2-1 and 2-0, while watching chimps feeding on flowers of Broussonetia papyrifera (BPY), an adult male red-tailed monkey approached three infant males (Kefa, Muhumuza, and Kaija) as they were playing in a BPY tree. The monkey presented its back first to Kefa who instead of grooming, slapped the monkey, and then Muhumuza grabbed the monkey's tail, and Kaija reached his hands to the mouth of the monkey. At one point, they started chasing each other through the canopy and the monkey followed them. The play bout lasted for about 20 min. Discussion In total, six cases of interspecies grooming, and a single case of affiliative interspecies play (involving no unidirectional or mutual grooming) with another primate species have been recorded in the Sonso chimpanzee community. These six cases add to the growing record of chimpanzeesympatric primate interspecies grooming events reported at wild chimpanzee field sites. As far as the authors know, there have been no published cases reported outside of Uganda. 
Of the six interspecies grooming events from Budongo, five involved red-tailed monkeys and one involved a blue monkey. In at least five of the grooming events, playing with the monkey's tail was recorded. Examination of monkey tails appears to be relatively common amongst chimpanzees, especially amongst younger individuals, and infants have been observed playing with the tails of prey after a hunt (A. Mielke, personal communication). In observations 1-4 and 6, the chimpanzee groomer appeared to initiate the interaction event by approaching the monkey, while in observation 5 the red-tailed monkey initially approached the chimpanzee. In the previously reported interspecific grooming events from Kalinzu, like in observation 5, the monkey recipient is reported to have approached the chimpanzee groomer. In all cases, it remains difficult to confirm which individual initiated the grooming bout itself. In all six observations of interspecies grooming the chimpanzee was a female between the ages of 4-6 years old, while the single case of play (with no grooming) included three infant males. This apparent sex bias towards female interspecific groomers is consistent with the four reported cases from Kalinzu (in which all chimpanzee groomers were female). As the reported cases from Kaniyo-Pabidi do not specify the sex of the groomer, this cannot be assessed. While two of the groomers in Kalinzu were adults, both had young offspring present, and the other two cases involved juvenile groomers. While the sample size limits interpretations, this fits with the tendency described in Gombe for immature female chimpanzees to groom conspecifics more frequently, while immature males tend to play with conspecifics more than females (Lonsdorf et al. 2014;Meredith 2013;Lonsdorf 2017). Why the monkeys approached females rather than male juveniles to solicit grooming in the above cases remains unexplained. In two of the interspecific grooming cases (observations 2 and 4), the female chimpanzee appeared to touch the testes of the receiving male monkey, and then smelled her fingers. In a third case (observation 6) the female chimpanzee groomer touched the neck of the receiver and then smelled her fingers. This suggests that there could be an additional olfactory or hormonal cue that the chimpanzee groomer is interested in or sensitive to. If the guenons were the initiators of these events, their potential preference for juvenile grooming partners may be explained by chimpanzee hunting patterns. In Budongo, both red-tailed monkeys and blue monkeys are hunted by Sonso chimpanzees, although blue monkeys appear to be the more popular prey target. Between 1999, Hobaiter et al. (2017 reported 23 hunting attempts on blue monkeys, and only seven attempts on red-tailed monkeys. As most hunts of guenon species are carried out by adult chimpanzees (Ross et al. 2009), adult red-tailed monkeys may feel unthreatened approaching or being approached by smaller-bodied juveniles. Even though young chimpanzees do not hunt monkeys, they may still be strong enough to injure or even kill them through rough play. However, the monkey recipients in the cases described appeared to act as if there was little risk of a fatal attack or dangerous play behaviors. Similarly, the chimpanzee groomers must have had some level of understanding to adapt their grooming and play style to the physical strength of the recipient species. 
As guenons have sharp canines, the monkey recipients could pose a threat to infant and juvenile chimpanzees, despite their smaller size. If, in the above cases, the juvenile chimpanzees were the primary initiators of interspecific grooming events, this would be consistent with findings that in both wild and captive chimpanzees, non-fatal and non-consumptive interspecific interactions are mostly carried out by juveniles or early adolescents (Hockings et al. 2012;Teleki 1973;Goodall 1986;Boesch and Boesch-Achermann 2000;Ross et al. 2009). Immature chimpanzees in Bossou were significantly more likely than adults to engage in play with other species, and adults never engaged in playful interactions with other species (Hockings et al. 2012). Interspecific play and grooming by juveniles, therefore, could occur as practice for conspecific grooming and exploration-using animals they see frequently, and with which they share some similar biological characteristics, to hone their skills during this critical learning period. However, a possibly more parsimonious explanation is that at this age, chimpanzee juveniles do not discriminate other species into prey and playmate categories. Their playful nature may allow them to engage in affiliative interactions with other nearby individuals, regardless of species. Across chimpanzee field sites, red-tailed monkeys appear to be the most common receivers of affiliative interspecific events, although this apparent species bias may be an artefact if red-tailed monkeys have become more habituated to human researchers at these field sites than other primate species, and are thus more easily detected in these interactions. A detection bias may also be due to the density of red-tailed moneys at chimpanzee field sites and their potential overlap with chimpanzees regarding feeding ecology or active hours. In the cases reported from Kalinzu, redtailed monkeys seem to be repeated receivers of interspecific grooming from chimpanzees. In Gombe, chimpanzees were also reported to have played with a red-tailed monkey, and in Mahale the chimpanzees of the M-Group showed tolerance toward red-tailed monkeys as they co-fed. Blue monkeys were also recipients of chimpanzee grooming at two sites at least (two cases from Kalinzu and one case from Budongo [observation 2]). While we do not have data on its frequency, observations of co-feeding events, in which groups of chimpanzees peacefully co-feed with either red-tailed or blue monkey individuals in the same tree are not uncommon, occurring perhaps several times a month depending on the food species available. While these anecdotal observations cannot be used to calculate a proportion of how many opportunities for interspecific interactions result in interspecies grooming, regular neutral interspecific interactions, such as co-feeding between chimpanzees and both Cercopithecus species, appear to be present in Budongo. Interestingly, two juvenile females (Karo and Ishe) were both observed engaging in interspecific grooming at least twice each, suggesting that these individuals may have had a preference or proclivity for interspecies grooming behaviors or, if the events were initiated by the monkeys, that they were targeted as favorable grooming partners. While this could reflect individual preferences or personality traits in the chimpanzee groomers or the monkey receivers, it could also be a result of socially learned or socially facilitated interspecies affiliative partner selection. 
If both chimpanzee subjects were exposed to interspecies grooming at a young age (through observations of experienced individuals engaging in this behavior), they may be more likely to seek out opportunities to groom other species themselves, which might account for the appearance of an individual-level preference for the behavior (Hockings et al. 2012). While this may explain the repeated observations of certain individuals engaging in interspecies grooming, this hypothesis cannot be tested without a larger sample size. Of these interspecific grooming bouts, cases were largely observed during the wetter seasons, with only one case occurring at the very beginning of a dry season (observation 4). These results suggest that fruit scarcity, and thus competition, is likely not a driver of forced interspecific interactions; as was the case with orangutan and red leaf monkey polyspecific associations (Hanya and Bernard 2021), as fruits are widely available during Budongo's wet season, and there are plenty of trees simultaneously bearing fruit. However, more limited competition could also facilitate interspecific grooming events, as the subsequent reduction in stress due to abundant fruit may eliminate the need for chimpanzees to act agonistically toward other potential competitors. Seasonal variation in hunting frequency has also been suggested amongst Budongo chimpanzees (Hobaiter et al. 2017); however, not enough information is available to determine whether periods with low hunting rates may correspond to periods of increased interspecies grooming. Furthermore, the diversity of tree species in which these cases were observed (Ficus sur, Croton sylvaticus, Broussonetia papyrifera (2), Magnifera sp., Khaya anotheca) suggests that a specific ecological context is also not necessarily a driver of interspecies grooming. However, as the diets of guenon species are still understudied in Budongo, dietary overlap cannot be ruled out as a factor affecting chimpanzee-guenon interaction rates or competition (but see Wrangham et al. 1998 for comparative study on primate diets in Kibale National Park). Do chimpanzees benefit from unidirectional interspecific grooming bouts at Budongo or is it a form of object play? And why do solitary guenon males appear to spontaneously approach isolated mother-offspring pairs or solitary infant/ juvenile female chimpanzees? One-way interspecific grooming by chimpanzees likely has multiple costs. For one, there are energetic costs to grooming itself and grooming slows down feeding efficiency (Russell and Phelps 2013). Being in close contact with another species may also increase chimpanzees' exposure to parasites or zoonotic illness, including novel pathogens, which may pose hygienic threats, and other harmful microbiota (Moeller et al. 2013). There is also likely a social opportunity cost to interspecies grooming, as the groomers' time could otherwise be spent grooming conspecifics and strengthening affiliation with members of their own group. Instead, the chimpanzee groomers "spend" that social investment on a species that does not directly appear to return the favor. However, there could possibly be long-term, indirect advantages, such as possibly benefiting from the arboreal monkey's vigilance to avoid risk more effectively (i.e., from snakes, hunters, and other anthropogenic disturbances). 
In at least two of the six observations of interspecies grooming, multiple juveniles were present when the interspecific grooming event took place, and in all cases the juvenile's mother was nearby. This is also true of the interspecific play bout (observation 7). For each of these cases, there may have therefore been some social opportunity cost to the interspecies grooming bout. However, if in these cases the chimpanzee groomers regarded the interspecies receivers as merely play objects, it could be that play with a monkey is better than no play at all or offers an alternative 'novel' source of engagement. Furthermore, the interspecies grooming bout involving two infants and a red-tailed monkey (observation 6) could also promote conspecific social bonding between the chimpanzee infants. Past papers on polyspecific associations in primates have suggested that interspecies grooming bouts could promote coalition or alliance across species, suggesting that interspecific group merging may increase group size and deter predation (de Carvalho Oliveira et al. 2017; Hanya and Bernard 2021). There would be an incentive for red-tailed or blue monkeys to stay near chimpanzees in feeding trees if other predators such as eagles were nearby and posed a greater threat than the chimpanzees. However, this hypothesis seems an unlikely explanation for interspecies grooming on the side of the Sonso chimpanzees, who would likely not immediately benefit from predator deterrence. Interspecies grooming events also appear to occur too infrequently to be a long-lasting coalitionary behavior. It is easier to identify possible benefits for the red-tailed monkeys, so they may simply be tolerated by adults and pose a source of amusement for the young chimpanzees. One possible benefit to the monkeys is hygiene. Blue monkeys and red-tailed monkeys are both highly susceptible to ticks and other ectoparasites (Freeland 1981). Unlike chimpanzees who spend much of their day allogrooming, blue monkey males migrate from their natal groups at puberty and outside the breeding season, and there is usually only one resident male per group (Cords 2000). Similarly, adult, male, red-tailed monkeys are intolerant of each other and do not form "bachelor groups" (Struhsaker 1980; Butynski 1982). Tsutaya et al. (2018) proposed that solitary male, red-tailed and blue monkeys may approach mother-offspring chimpanzee pairs to receive grooming necessary to maintain their hygiene. If true, interspecies grooming could potentially be viewed as a form of currently undescribed interspecies health maintenance behavior (sic. Huffman 1997). Struhsaker (1981) reported that lone, male, red-tailed monkeys have been observed traveling with groups of red colobus (Colobus badius) in the Kibale Forest, Uganda, and have been recipients of interspecific grooming. Detwiler (2002) also reported that in Gombe National Park, blue monkey and red-tailed monkeys hybridized and formed mixed groups, traveling, mating, and grooming with one another. Struhsaker (1981) also reported an observation of a solitary male red-tailed monkey traveling with and being groomed by Abyssinian black-and-white colobus in the Kalinzu Forest Reserve. However, this hypothesis is complicated by the fact that in five of the six cases reported here, the chimpanzee groomers appeared to have been the initiators of the grooming bouts, approaching the tolerant guenons, and that interspecies grooming seems too infrequent to make a substantial impact on guenon health status.
To test this hypothesis, further research should be done on the seasonality of ectoparasite loads in non-human primates (i.e., Klein et al. 2018) and grooming patterns amongst peripheral, male guenons to see whether there are periods more likely than others when such interactions could be more beneficial. The small number of reported interspecies grooming events at Budongo, as well as the dearth of reported cases in the primatological literature, suggests that interspecies grooming is likely a rare behavior amongst wild chimpanzees. However, it is likely that reporting bias could contribute to this underrepresentation. Many observations of interspecies grooming or play events are not recorded or filmed due to lack of targeted research on these behaviors. The six cases of chimpanzee interspecies grooming reported here may be only a few of many cases that have occurred at Budongo and across other chimpanzee field sites. It is essential that anecdotal evidence from primate field sites be shared not only to encourage cross-site comparisons, but also to avoid losing valuable information about the behaviors of our closest primate cousins. Reporting these affiliative interactions between primate species can also reveal which species depend on each other in any given habitat and help prevent or predict ripple effects of extinction or endangerment. To better understand how rare this behavior is across chimpanzee field sites, future studies could survey site directors to determine whether attention is paid to interspecific affiliative interactions, and if so, how these events are recorded. Collecting quantitative data on affiliative interactions will also be crucial to further understanding cross-species relationships between sympatric primates. Future study into interspecies affiliative interactions may also contribute useful context to the field of paleoanthropology, adding new ways of interpreting species proximity in the fossil record. Chimpanzee field sites should consider codifying interspecies interactions into their long-term data collection methods to begin gathering data which will allow researchers to quantify these behaviors more accurately. Many questions remain unanswered about interspecies grooming. Why are some interspecies interactions affiliative, while other interactions with the same species are neutral or agonistic (predator-prey relationship)? What is the adaptive function for the 'groomer' in interspecies one-directional grooming events? Are interspecies grooming behaviors and preferences for this behavior socially learned? The importance of collecting and publishing anecdotes remains paramount, as does communication between field sites and field researchers. to the BCFS management, and the other researchers working at the site for their support and to Vernon Reynolds who founded the field station. Thanks to Daniel Sempebwa for helping to facilitate remote communication between the authors. We would also like to thank the Royal Zoological Society of Scotland, which provides core funding that keeps the station operational. We would also like to thank the Uganda Wildlife Authority and the Uganda National Council for Science and Technology for granting permission to work in Uganda. EF would also like to thank all members of the Primate Models for Behavioural Evolution Lab at the University of Oxford for their valuable comments and revisions. 
Funding EF's fieldwork was supported by The Clarendon Fund at the University of Oxford, The British Institute of Eastern Africa (BIEA) and Keble College at the University of Oxford. Data availability All data are available upon reasonable request. Conflict of interest The authors declare that they have no conflicts of interest. Ethics approval All data collection reported in this paper was observational and adhered to the International Primatological Society's Code of Best Practice for Field Primatology. All international, national, and institutional guidelines for the care of animals which applied, were followed. Research was conducted with the written approval by the Uganda Wildlife Authority and the Uganda National Council for Science and Technology and adhered to all specified protocols mandated by these permits. All work met the ethical standards of the Budongo Conservation Field Station where research was carried out. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
9,139.2
2023-02-15T00:00:00.000
[ "Environmental Science", "Biology" ]
MNLS simulations of surface wave groups with directional spreading in deep and finite depth waters We simulate focusing surface gravity wave groups with directional spreading using the modified nonlinear Schrödinger (MNLS) equation and compare the results with a fully-nonlinear potential flow code, OceanWave3D. We alter the direction and characteristic wavenumber of the MNLS carrier wave, to assess the impact on the simulation results. Both a truncated (fifth-order) and an exact version of the linear dispersion operator are used for the MNLS equation. The wave groups are based on the theory of quasi-determinism and a narrow-banded Gaussian spectrum. We find that the truncated and exact dispersion operators both perform well if: (1) the direction of the carrier wave aligns with the direction of wave group propagation; (2) the characteristic wavenumber of the carrier wave coincides with the initial spectral peak. However, the MNLS simulations based on the exact linear dispersion operator perform significantly better if the direction of the carrier wave does not align with the wave group direction or if the characteristic wavenumber does not coincide with the initial spectral peak. We also perform finite-depth simulations with the MNLS equation for dimensionless depths (k_p d) between 1.36 and 5.59, incorporating depth into the boundary conditions as well as the dispersion operator, and compare the results with those of the fully-nonlinear potential flow code to assess the finite-depth limitations of the MNLS. Introduction The modified nonlinear Schrödinger (MNLS) equation is frequently used in studies of "rogue" or "freak" ocean waves due to the low computational expense and high fidelity of the simulations. A comprehensive overview of rogue wave studies can be found in Kharif and Pelinovsky (2003), Kharif et al. (2008), Dysthe et al. (2008) and Adcock and Taylor (2014). Schrödinger equations are also frequently used to investigate optical rogue waves (Akhmediev et al. 2013; Onorato et al. 2013; Dudley et al. 2014), including the dynamics of optical solitons (see, e.g., Pinar et al. 2020). In this study, we use the MNLS equation for the first harmonic of the surface elevation, hereafter referred to as equation (1), as presented in Trulsen et al. (2000) and based on the work of Trulsen and Dysthe (1996), to simulate directionally-spread, steep groups of ocean waves formed by dispersive focusing. Here, B represents the complex wave envelope based on the first harmonic of the surface elevation, and the characteristic wavenumber and angular frequency of the carrier wave are denoted by k_0 and ω_0. Note that the velocity potential expression corresponding to (1) reverses the sign of the B²(∂B*/∂x) term to be negative, as listed in Trulsen and Dysthe (1996). The dispersion operator L in (1) may be based upon a truncated version of the linear dispersion relationship (see Trulsen and Dysthe (1996)) or an alternative pseudo-differential operator that preserves the exact linear dispersion relationship (see Trulsen et al. (2000)). We contrast the performance of the two dispersion operators and compare the results with a fully-nonlinear potential flow solver. We also consider the effect of selecting different characteristic wavenumbers and directions for the MNLS carrier wave. (A brief pseudo-spectral sketch of how the exact linear dispersion operator can be applied is given below.)
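To make the distinction between the truncated and exact dispersion operators concrete, the sketch below shows one common way an exact linear dispersion step can be applied to the envelope pseudo-spectrally: each Fourier mode of B is advanced using the full deep-water dispersion relation evaluated at the shifted wavenumber, minus the carrier frequency and group-velocity advection, whereas a truncated operator would replace this exponent with a finite Taylor expansion in the modulation wavenumber. This is an illustrative sketch under a deep-water assumption, not the authors' code; the carrier wavenumber, domain size, and variable names are ours.

```python
# Illustrative pseudo-spectral linear-dispersion step for a wave envelope B(x, y),
# deep water, carrier wavenumber k0 aligned with the x-axis. Not the authors' code.
import numpy as np

g = 9.81
k0 = 0.028                    # carrier wavenumber [1/m] (example value)
w0 = np.sqrt(g * k0)          # carrier angular frequency (deep water)
cg = 0.5 * w0 / k0            # deep-water group velocity of the carrier

Nx, Ny, Lx, Ly = 256, 128, 4000.0, 2000.0
kx = 2 * np.pi * np.fft.fftfreq(Nx, d=Lx / Nx)   # modulation wavenumbers (x)
ky = 2 * np.pi * np.fft.fftfreq(Ny, d=Ly / Ny)   # modulation wavenumbers (y)
KX, KY = np.meshgrid(kx, ky, indexing="ij")

# Exact linear dispersion of each modulation mode, in a frame moving with cg
k_abs = np.sqrt((k0 + KX) ** 2 + KY ** 2)
omega_exact = np.sqrt(g * k_abs)
phase_rate = omega_exact - w0 - cg * KX          # exact operator acting on each mode

def linear_step(B, dt):
    """Advance the envelope B by dt under the exact linear dispersion operator."""
    Bhat = np.fft.fft2(B)
    Bhat *= np.exp(-1j * phase_rate * dt)
    return np.fft.ifft2(Bhat)

# Example: let a Gaussian envelope disperse for one carrier period
x = np.linspace(0, Lx, Nx, endpoint=False) - Lx / 2
y = np.linspace(0, Ly, Ny, endpoint=False) - Ly / 2
X, Y = np.meshgrid(x, y, indexing="ij")
B = np.exp(-(X ** 2 + Y ** 2) / (2 * 300.0 ** 2)).astype(complex)
B_new = linear_step(B, dt=2 * np.pi / w0)
```

A finite-depth variant would simply replace the deep-water relation with ω = √(gk tanh(kd)) in the same expression, which is essentially how depth enters the exact operator discussed next.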
Finally, we perform finite-depth simulations with (1) using the arbitrary-depth linear dispersion relationship, ω = √(gk tanh(kd)), to evaluate the exact version of LB, with finite depth also incorporated into the boundary conditions, in order to assess the finite-depth limitations of (1). We focus on the impact of four-wave, or "quartet", interactions on the shape and spectral evolution of the focusing wave groups and assess the fidelity of the various MNLS formulations. The MNLS formulation is based upon a carrier wave that requires a characteristic wavenumber and direction. In random seas, the selection of the characteristic wavenumber and direction is typically based on the background sea state. Examples of MNLS simulations of random seas include Dysthe et al. (2003), Socquet-Juglard et al. (2005), Gramstad and Trulsen (2007), Xiao et al. (2013) and Adcock et al. (2015). An appropriate value for the characteristic wavenumber may be clear if the wave spectrum is symmetric about a spectral peak. Similarly, a sea state featuring a concentration of wave components aligned to a particular direction provides a clear choice for the carrier wave direction. However, sea states without a clear spectral peak or dominant wave direction provide less clarity for the carrier wave parameters. For long simulations there may also be a change in the spectral peak due to non-linear physics and, if activated, energy input or damping. Furthermore, the characteristics of an individual steep wave event in a random sea may not be consistent with the background sea state. Individual wave events may form at an angle to the dominant wave direction, and local spectral distortions may also arise in the vicinity of a steep wave event due to nonlinear wave-wave interactions. Thus, the selection of the characteristic direction and wavenumber for the MNLS carrier wave can present obstacles. We investigate the sensitivity of our results to the selection of the characteristic wavenumber and direction of the MNLS carrier wave. We deliberately test the MNLS equations beyond the parameter range expected in practice, to ascertain the limits of the various formulations. We simulate isolated wave groups formed by dispersive focusing rather than random seas. An isolated wave group, based on a coherent phase distribution, features the same nonlinear wave-wave interactions observed in random seas. However, the effect of the nonlinear interactions can be more easily identified and the computational expense is lower. Figure 1 shows focused wave groups simulated with a potential flow code. Identical initial conditions were used for the wave groups shown in Fig. 1a, b. However, Fig. 1a shows the focused wave event with linear free-surface boundary conditions and Fig. 1b shows the focused wave event with the fully-nonlinear free-surface boundary conditions. As can be seen, the shape of the focused wave event in Fig. 1b differs from Fig. 1a. The largest crest sits in the center of the wave group in Fig. 1a but has moved to the front of the wave group in Fig. 1b. Energy transfer to oblique components also results in the formation of "wing waves" in Fig. 1b, denoted with W.
[Fig. 1 caption: Surface elevation of wave groups at focus with a steepness (Ak_p) of 0.3, simulated with identical initial conditions using the fully-nonlinear potential flow solver OceanWave3D: (a) time marched with linear free surface boundary conditions; (b) time marched with nonlinear free surface boundary conditions. The formation of wing waves is indicated with W.]
Thus, nonlinear wave-wave interactions can significantly influence the formation of a steep wave event, and we investigate the fidelity of various MNLS formulations in resolving the nonlinear interactions. The wing waves are themselves propagating at approximately 12° to the mean direction and are thus an example of the phenomena which may be poorly captured by an MNLS-type model. The MNLS equation accounts for linear dispersion of the wave components as well as nonlinear wave-wave interactions. The MNLS equation of Trulsen and Dysthe (1996) performs an expansion on the linear dispersion operator with truncation at the fifth order. In contrast, the MNLS equation of Trulsen et al. (2000) retains the exact linear dispersion relation by numerical evaluation of the linear dispersion operator. Use of the exact linear dispersion relation by Trulsen et al. (2000) is expected to increase the bandwidth limits of the MNLS equation while improving the resolution of four-wave "quartet" interactions and eliminating energy leakage. In this study, we contrast the results of the two dispersion operators. We note that the MNLS equations of Trulsen and Dysthe (1996) and Trulsen et al. (2000) are both fourth-order in steepness, shown by Stiassnie (1984) to only resolve quartet interactions. Thus, all the spectral changes observed in this study are attributed to quartet interactions. The impact of finite depth on quartet interactions has been previously investigated. Benney and Roskes (1969) and McLean (1982) showed that the dominant directions of energy transfer for a degenerate quartet depend upon the dimensionless water depth. A variety of MNLS formulations have been proposed to account for the impact of depth on quartet interactions. Third-order "cubic" nonlinear Schrödinger (NLS) equations for three-dimensional waves at finite depths have been presented by Benney and Roskes (1969) and Davey and Stewartson (1974). A fourth-order equation for water waves at finite depths has been presented by Brinch-Nielsen and Jonsson (1986). The classic MNLS equation of Trulsen and Dysthe (1996) does, however, incorporate finite depth into the boundary conditions, and the exact dispersion operator of Trulsen et al. (2000) allows depth effects to be included in the dispersion relationship with ease. Trulsen and Dysthe (1996) show that the classic MNLS formulation captures the bifurcation of the most unstable perturbation for a Stokes wave at finite depths, suggesting that the classic MNLS equation may be appropriate for some finite-depth simulations. Depth-sensitive coefficients for the nonlinear terms of the MNLS equation have also been proposed by Sedletsky (2003), and we use these coefficients together with the exact linear dispersion operator of Trulsen et al. (2000) as a potential finite-depth MNLS model. To assess the performance of our finite-depth MNLS simulations, we compare the results with those of a fully-nonlinear potential flow code. Numerical methodology We perform simulations with the MNLS equation as well as the fully-nonlinear potential flow code OceanWave3D based on wave groups formed by dispersive focusing.
Our simulations follow a three-step process: (1) We use the theory of quasi-determinism to determine the shape of the wave group at focus; (2) Using the linear dispersion relation, we propagate the wave components for 15 characteristic wave periods backwards in time to calculate the initial conditions at t/T 0 = −15; (3) We initialise the simulations at t/T 0 = −15 and run the simulation forwards in time for 30 characteristic wave periods until t/T 0 = +15. Steps 1 and 2 are identical for the OceanWave3D and MNLS simulations, i.e., we use identical initial conditions for the two types of simulations. Only step 3 differs, in terms of which code is used to do the forward propagation of the wave components in time. Initial conditions Implementation of the theory of quasi-determinism requires the underlying wave spectrum of the sea state. We define the variance density spectrum F(k, θ) as the product of a wavenumber magnitude spectrum S(k) and a spreading function D(θ ). We use a Gaussian function as the wavenumber magnitude spectrum: where k is the wavenumber, k p is the wavenumber corresponding to the initial spectral peak, and k w is the spectral width. A Gaussian distribution has also been used for the spreading function: based on the initial spreading parameter (ς 0 ) and the direction of the wave component (θ ). Here, χ represents the dominant angle of propagation for the wave components. The variance density spectrum F(k, θ) is thus defined as the product of two Gaussian functions, and Table 1 lists the values used in this study. All the simulations considered in this investigation are based upon a fixed steepness (Ak p ) with fixed spectral parameters (k p , k w , ς 0 ). Only the parameters of the MNLS carrier wave and the dimensionless depth (k pd ) of the domain are varied. Barratt et al. (2021) showed that the absence of the spectral tail can result in augmented wave-wave interactions. Thus, the spectra used in this study represent a conservative test of the MNLS equation, since more realistic spectra are likely to result in weaker wave-wave interactions than those observed in this investigation. The initial spectrum used is reasonably narrow-banded and thus one would expect the MNLS model to perform well. However, the rapid weakly nonlinear wave-wave interactions will cause a broadening of the bandwidth in the mean wave direction (Gibbs and Taylor (2005); Adcock et al. (2012)) which potentially invalidates the narrow-banded assumption. Quasi-determinism (QD) theory, based on Boccotti (2000) and Lindgren (1970), indicates that the average shape of an extreme event in a random, linear Gaussian field is the scaled auto-correlation function. The linear surface elevation of the wave group is, thus, given by: Here, k i is the magnitude of the wavenumber component, θ j is the propagation direction and A L is the linear amplitude of the wave group at focus. A phase offset ϕ 0 is included in (6) to implement the "phase separation" method of removing bound harmonics (see Fitzgerald et al. (2014)). For the MNLS simulations, removal of the bound harmonics is not necessary and ϕ 0 is set to zero, causing linear focus to occur at (x = 0, y = 0, t/T 0 = 0). For the OceanWave3D simulations, each simulation is repeated with ϕ 0 values of 0 • , 90 • , 180 • and 270 • to remove the bound harmonics with four-phase separation, as used by Barratt et al. (2021). 
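The initial-condition construction just described can be sketched as follows. The Gaussian forms of the wavenumber spectrum and spreading function correspond to (2) and (3), and the cosine sum to (6), but the normalisations, the discretisation of the spectrum and all parameter values below are assumptions made for illustration rather than the exact expressions of the paper.

```python
import numpy as np

# Assumed Gaussian forms for the wavenumber spectrum S(k) and spreading
# function D(theta), reconstructed only up to an unspecified normalisation.
def S(k, k_p, k_w):
    return np.exp(-0.5 * ((k - k_p) / k_w) ** 2)

def D(theta, chi, sigma0):
    return np.exp(-0.5 * ((theta - chi) / sigma0) ** 2)

def eta_linear(x, y, t, A_L, k_p, k_w, chi, sigma0, d=np.inf, phi0=0.0,
               n_k=64, n_theta=32):
    """Quasi-deterministic linear surface elevation (scaled autocorrelation).

    Discretised version of (6): a sum of cosines weighted by the variance
    density F(k, theta) = S(k) * D(theta), normalised so that the crest
    amplitude at linear focus (x = y = 0, t = 0, phi0 = 0) equals A_L.
    """
    k = np.linspace(max(k_p - 4 * k_w, 1e-6), k_p + 4 * k_w, n_k)
    theta = np.linspace(chi - 4 * sigma0, chi + 4 * sigma0, n_theta)
    K, TH = np.meshgrid(k, theta, indexing="ij")
    F = S(K, k_p, k_w) * D(TH, chi, sigma0)
    W = np.sqrt(9.81 * K * (1.0 if np.isinf(d) else np.tanh(K * d)))
    phase = K * (np.cos(TH) * x + np.sin(TH) * y) - W * t + phi0
    return A_L * np.sum(F * np.cos(phase)) / np.sum(F)

# Example: back-propagated surface at one point at t/T0 = -15
# (all parameter values are placeholders, not those of Table 1).
k_p, k_w, sigma0 = 0.028, 0.2 * 0.028, np.deg2rad(15.0)
T0 = 2 * np.pi / np.sqrt(9.81 * k_p)
print(eta_linear(0.0, 0.0, -15 * T0, A_L=0.3 / k_p, k_p=k_p, k_w=k_w,
                 chi=0.0, sigma0=sigma0))
```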
The angular frequency of each component (ω i ) is calculated from the arbitrary-depth linear dispersion relationship, ω i = √ gk i tanh (k i d), allowing the initial conditions to be calculated at t/T 0 = −15. We calculate the corresponding velocity potential and apply exact second-order correction of the initial conditions using Dalzell (1999). The surface elevation and velocity potential are prescribed as initial conditions for the potential flow simulations. For the MNLS simulations, we calculate the initial complex envelope B(x, t) using the linear surface elevation η L and the Hilbert transform of the linear surface elevation η H L following Osborne (2010): Here, k 0 and ω 0 are the properties of the carrier wave. Potential flow simulations OceanWave3D numerically solves the governing equations of potential flow for surface gravity waves (Currie 1993, pp. 201-204), including the fully-nonlinear free surface boundary conditions. Described in detail by Engsig-Karup et al. , OceanWave3D is based on an Eulerian frame of reference and the three-dimensional spatial domain is discretized with Cartesian coordinates (x, y, z). Table 2 lists the horizontal grid resolution ( x, y), which is uniform throughout the domain, together with the number of grid points (N x , N y ) in the x and y-directions. We utilise a symmetry plane along the centreline of the wave group (y = 0) for the potential flow simulations. Thus, the domain width based on Table 2 only represents half the effective domain width for the potential flow simulations. The vertical distribution of grid points follows the symmetric half of a Chebyshev-Gauss-Lobatto (CGL) distribution with the vertical number of grid points (N z ) listed in Table 2. We use eighth-order finite differencing of the spatial derivatives throughout the domain combined with fourthorder Runge-Kutta time marching and a Courant-Friedrichs-Lewy (CFL) condition of 0.5, based on the phase speed (c 0 ) associated with the wavenumber of the spectral peak (k p ) and the horizontal grid resolution in the x direction ( x). Our selection of the simulation parameters is informed by the numerical error assessment of Barratt et al. (2020) based on similar simulations. The simulations of Barratt et al. (2020) were found to agree well with other potential flow codes with total energy conservation within 0.04% over 40 wave periods. For comparison against the MNLS simulations, we use the four-phase separation technique (Fitzgerald et al. (2014)), to remove the bound harmonics from the potential flow simulations and approximate the linear surface elevation (η L ). Using the Hilbert transform of the linear surface elevation (η H L ), we calculate the absolute value of the complex wave envelope, |B|: to compare the envelope steepness between the potential flow and MNLS simulations. Note that the ability of the MNLS formulation to model bound harmonics has been considered by Adcock and Taylor (2016). MNLS simulations We perform simulations with the MNLS equations of Trulsen and Dysthe (1996) as well as Trulsen et al. (2000). We repeat the MNLS simulations based upon different characteristic wavenumbers for the carrier wave. We also alter the direction of the carrier wave relative to the direction of wave group propagation using the parameter χ as included in (3). The carrier wave is always aligned with the x-axis, corresponding to θ = 0 • , while the direction of wave group propagation is determined by χ . 
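A minimal sketch of the envelope construction described above: the analytic signal η_L + i η_L^H is formed with a Hilbert transform taken along the carrier (x) direction and is either demodulated by the carrier to obtain the complex envelope B, as in (7), or used directly to obtain |B| for the potential-flow comparison, as in (8). The demodulation convention and the synthetic example are assumptions, not a verbatim reproduction of the paper's expressions.

```python
import numpy as np
from scipy.signal import hilbert

def complex_envelope(eta_L, x, k0, omega0, t=0.0, axis=0):
    """Complex envelope B such that eta_L ~ Re{B * exp(i*(k0*x - omega0*t))}.

    eta_L : array of linear surface elevation sampled along x (carrier direction)
    The analytic signal eta_L + i*eta_L^H is formed with a Hilbert transform
    along `axis` and then demodulated by the carrier.
    """
    analytic = hilbert(eta_L, axis=axis)              # eta_L + i * Hilbert(eta_L)
    carrier = np.exp(1j * (k0 * x - omega0 * t))
    shape = [1] * eta_L.ndim
    shape[axis] = x.size
    return analytic * np.conj(carrier).reshape(shape)

def envelope_magnitude(eta_L, axis=0):
    """|B| from the linear surface elevation alone, as used for the
    potential-flow / MNLS comparison: |B| = sqrt(eta_L^2 + (eta_L^H)^2)."""
    return np.abs(hilbert(eta_L, axis=axis))

# Example with a synthetic modulated carrier (placeholder parameters)
k0, omega0 = 0.028, np.sqrt(9.81 * 0.028)
x = np.linspace(-2000.0, 2000.0, 1024)
eta = np.exp(-(x / 400.0) ** 2) * np.cos(k0 * x)
B = complex_envelope(eta[:, None], x, k0, omega0)
print(np.abs(B).max())   # ~1, the envelope peak of the synthetic group
```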
The MNLS formulation assumes that the surface elevation η(x, t) may be represented by modulation of a carrier wave with the characteristic wave vector k 0 = (k 0 , 0) based upon a characteristic wavenumber magnitude k 0 . In this study, we normalise k 0 by the wavenumber of the initial spectral peak k p to define the ratio: We perform simulations with β values between 0.7 and 1.3 in increments of 0.1. We define the direction of the carrier wave relative to the direction of wave group propagation, following the coordinate system shown in Fig. 2. Our coordinate system aligns the carrier wave with the x-axis and the direction of wave group propagation aligns with the x * -axis. The angle between the axes is denoted χ and we perform simulations with χ values between 0 • and 30 • in intervals of 5 • . In the spectral domain, wavenumbers k x and k y correspond to the x and y-axes respectively while wavenumbers k * x and k * y correspond to the x * and y * -axes, respectively. All spectral evolution plots are shown in terms of k * x and k * y based on the coordinate system of the wave group. The nonlinear evolution of the complex envelope is simulated with the MNLS equation, see (1), subject to the free surface and bottom boundary conditions, as well as continuity for the mean flow potential φ: In (1), L represents a dispersion operator, acting upon the complex envelope B, which can be expressed as: Here, u = (λ, μ) is the modulation wavenumber and ω 0 is the frequency corresponding to the characteristic wavenumber k 0 based on the linear dispersion relation. Expansion of ω(k 0 + µ) in (11) utilising the linear dispersion relation, followed by truncation at the fifth order, yields the linear part of the Trulsen and Dysthe (1996) equation. Direct numerical evaluation of (11) avoids truncation, retaining the exact linear dispersion relation, as shown by Trulsen et al. (2000). Retention of exact linear dispersion in (11) increases the bandwidth limits of the MNLS equation and improves the resolution of four-wave interactions while eliminating energy leakage (see Martin and Yuen (1980) and Yuen and Lake (1980)), with almost no additional computational cost. We use both the truncated and exact versions of (11) in our simulations and compare the results to assess the impact of the truncated/exact linear dispersion operators. To perform our MNLS simulations, we incorporate the boundary conditions, (8) and (9), directly into the MNLS equation, (1), using the continuity condition for the mean flow, (10), as done with the fourth-order envelope equation of Janssen (1983). A single governing equation is, thus, obtained: where F denotes a 2D Fourier transform in x and y and F −1 denotes the inverse operation. Note that (12) includes a depth-dependent return current term which results from the incorporation of finite depth into the bottom boundary condition in (9). For our simulations based on (12), we use the arbitrary-depth linear dispersion relation, ω = √ gk tanh (kd), to evaluate the dispersion operator L B with the exact version of (11). We also perform MNLS simulations based on infinite depth (|k|d goes to ∞) with the corresponding expression: For our simulations based on (13), we use the deep-water linear dispersion relation, ω = √ gk, to evaluate the dispersion operator L B. Our comparison of the truncated and exact versions of (11) is based upon (13). We directly discretize and numerically solve (12) and (13) using a split-step algorithm. 
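A minimal sketch of the split-step idea just mentioned, assuming the linear step is applied as a multiplier in Fourier space with the exact operator symbol ω(|k0 e_x + u|) − ω0 described around (11). The nonlinear step shown here is only the leading cubic (NLS-type) term, used as a stand-in for the full MNLS nonlinearity, and the grid handling is deliberately simplified; none of the names or values are taken from the actual solver.

```python
import numpy as np

G = 9.81

def exact_dispersion_symbol(k0, lam, mu, d=np.inf):
    """omega(|k0*e_x + u|) - omega(k0) for modulation wavenumbers u = (lam, mu).

    This is the exact (untruncated) linear operator; a Taylor expansion of the
    same quantity truncated at fifth order would recover the Trulsen and
    Dysthe (1996) linear terms.
    """
    kmag = np.sqrt((k0 + lam) ** 2 + mu ** 2)
    tanh_k = 1.0 if np.isinf(d) else np.tanh(kmag * d)
    tanh_0 = 1.0 if np.isinf(d) else np.tanh(k0 * d)
    return np.sqrt(G * kmag * tanh_k) - np.sqrt(G * k0 * tanh_0)

def split_step(B, dt, k0, Lx, Ly, d=np.inf, nonlinear=True):
    """One split-step update of the complex envelope B (2D array, shape (ny, nx)).

    Linear half-steps are exact multipliers in Fourier space; the nonlinear
    step here is only the cubic deep-water NLS term, a placeholder for the
    full MNLS right-hand side.
    """
    ny, nx = B.shape
    lam = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
    mu = 2 * np.pi * np.fft.fftfreq(ny, d=Ly / ny)
    LAM, MU = np.meshgrid(lam, mu)
    phase = np.exp(-1j * exact_dispersion_symbol(k0, LAM, MU, d) * 0.5 * dt)

    B = np.fft.ifft2(phase * np.fft.fft2(B))          # linear half-step
    if nonlinear:
        omega0 = np.sqrt(G * k0 * (1.0 if np.isinf(d) else np.tanh(k0 * d)))
        B = B * np.exp(-1j * 0.5 * omega0 * k0 ** 2 * np.abs(B) ** 2 * dt)
    return np.fft.ifft2(phase * np.fft.fft2(B))       # linear half-step
```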
We use spectral methods to evaluate both the exact and truncated versions of (11), using the Fourier transform to treat the spatial derivatives as multiplier operators for the linear dispersion terms presented in Trulsen and Dysthe (1996). We use fourth-order finite differencing with symmetric stencils for the spatial derivatives in the nonlinear terms. Time marching is performed with the classic fourth-order Runge-Kutta scheme. We perform simulations with Courant-Friedrichs-Lewy (CFL) conditions of 0.5 and 1.0 to assess the effect, based on the group speed (c g) of the wave group at the spectral peak (k p) and the horizontal grid resolution in the x-direction (Δx). Note that the definition of the CFL differs between the potential flow and MNLS simulations, since the group speed (c g) is the characteristic velocity of the wave envelope in the MNLS simulations while the phase speed (c 0) is the characteristic velocity of the free surface in the potential flow simulations. We also perform finite-depth MNLS simulations by combining the exact version of (11), based on the arbitrary-depth linear dispersion relation, with the depth-sensitive coefficients for the nonlinear terms proposed by Sedletsky (2003), denoted as q 3, Q 41 and Q 42 in (14) and plotted against dimensionless depth (k 0 d) in Fig. 3. We note that the coefficients were first derived by Sedletsky (2003) and later confirmed by Slunyaev (2005). However, Slunyaev (2005) includes one additional term in the expansion of the mean flow, and we use the versions of q 3, Q 41 and Q 42 listed in Gandzha and Sedletsky (2017), consistent with the results of Slunyaev (2005). An expansion of the induced mean flow allows the effect of the return current term to be encompassed within the coefficients q 3, Q 41 and Q 42. Thus, (14) does not contain a return current term. Finite depth is known to suppress quartet interactions, and Fig. 3 shows that the values of q 3, Q 41 and Q 42 decline as the dimensionless depth (k 0 d) is reduced from 5.592 to 1.363; the dotted lines in Fig. 3 represent the asymptotic infinite-depth limits used in (12) and (13).
Grid resolution and CFL
We have analysed the discretization error for the MNLS simulations. The OceanWave3D simulations in this study are based on the same parameters as Barratt et al. (2020) and a detailed assessment of the discretization errors can be found therein. The MNLS simulations have been performed with two grid levels, termed "intermediate" and "fine", with the parameters listed in Table 2. Note that a symmetry plane has not been used for the MNLS simulations. We have assessed the effect of the grid resolution and CFL with the results listed in Table 3. We consider the maximum steepness of the wave group (Ak * p), observed at any time in the simulation, as well as the corresponding time at which the maximum is reached (t * /T 0). The Nonlinear Schrödinger (NLS) equation has an infinite number of conserved quantities (Zakharov and Shabat 1972) and we consider the conserved quantity I 2, typically associated with energy conservation. We calculate I 2 at each time step, and the maximum discrepancy in I 2 relative to the initial value (denoted as I * 2) has been recorded and listed in Table 3 for the different grid resolutions and CFL conditions. Our assessment of grid resolution is based on dimensionless length scales for the wave envelope in the x and y directions, defined in terms of the spectral bandwidth (k w), the initial peak wavenumber (k p) and the initial spreading parameter (ς 0) in radians.
Dimensionless metrics for grid resolution in the x and y-directions, n x and n y, can thus be defined; they approximately represent the number of grid points spanning the wave envelope in the x and y-directions. Table 3 lists the values of n x and n y for the different MNLS grid resolutions. Table 3 indicates that the maximum steepness of the wave group does not differ significantly between the different grid resolutions and CFL conditions for the MNLS simulations. An Ak * p value of 0.304-0.305 occurs in all the cases. However, the time at which the maximum steepness occurs does show a dependency on the CFL condition. A combination of the intermediate grid resolution with a CFL value of 1.0 results in premature focusing of the wave group. Thus, we use the intermediate grid resolution with a CFL value of 0.5, which shows close agreement in focal time with the fine grid cases. We note that the I * 2 value of 0.0106% indicates negligible changes to the conserved quantity I 2, associated with energy conservation. The maximum steepness Ak * p and focal time t * /T 0 of the MNLS simulations do differ from the potential flow results. We attribute the differences to an overestimation of nonlinear interactions by the MNLS equation, as discussed in the results section.
Results and discussion
We investigate focusing wave groups, in deep and finite depths, and compare the results of MNLS simulations, based on exact and truncated versions of the dispersion operator, with fully-nonlinear potential flow simulations performed with OceanWave3D. The impact of the carrier wavenumber for the MNLS simulations is assessed, as well as the impact of the carrier wave direction and of finite depth.
Comparison of OceanWave3D and MNLS results
We compare the simulation results from OceanWave3D with infinite-depth MNLS simulations based on (13), both in terms of envelope steepness and spectral evolution. Figure 4 depicts envelope steepness A(t)k p over time for the OceanWave3D and MNLS simulations. The envelope amplitude A(t) is the maximum elevation of the envelope at time t. The general agreement between the potential flow and MNLS results is good, although the MNLS simulations tend to overpredict the steepness of the wave group at focus. The construction of the wave group implies that the steepness curve shown in Fig. 4 should be symmetric about the time of focus (t = 0) if the evolution were linear. Thus, asymmetry in the steepness curve and a delay in focus beyond t = 0 are the result of nonlinear wave-wave interactions. The competing effects of dispersion and nonlinear wave-wave interactions are captured by the Benjamin-Feir index, as discussed by Janssen (2003), impacting the lifespan of focused wave events. The potential flow results in Fig. 4 show evidence of suppressed dispersion causing the wave group to focus after t = 0 and remain steep after focus, extending the lifespan of the focused wave event. Suppressed dispersion is also apparent for the MNLS simulations, but the effect is less noticeable than observed in the potential flow simulations, with only a small degree of asymmetry apparent for the steepness curve shown in Fig. 4. The MNLS results based on the exact and truncated dispersion operators also agree closely. The MNLS results based on the different dispersion operators are almost identical in the early stages of wave focusing but discrepancies arise during and after the nonlinear focused event. The MNLS equation is limited in terms of bandwidth and steepness.
Thus, the discrepancies may be caused by the increasing steepness of the wave group approaching focus and the oblique energy transfers which increase the spectral bandwidth. The exact dispersion operator is expected to have broader bandwidth limits and improved resolution of four-wave interactions. Thus, discrepancies between the exact and truncated dispersion operator may be expected for wave groups which are particularly steep or broadbanded, accounting for the difference observed during and after focus. Agreement between the MNLS simulations improves towards the end of the simulation, once the post focus wave group has dispersed and the steepness of the group is again reduced. Both of the MNLS results appear to overestimate nonlinearity, resulting in wave groups which are steeper at focus than the OceanWave3D result. However, the discrepancy in envelope amplitude at focus is less than 4%, indicating good agreement between the fullynonlinear potential flow simulations and the approximate MNLS results. We also compare the spectral evolution of the Ocean-Wave3D and MNLS simulations to assess the resolution of four-wave interactions. Figure 5 shows the OceanWave3D result, depicting the amplitude spectrum of surface elevation for the initial condition (t/T 0 = −15) in Fig. 5a and near the time of focus (t/T 0 = 0) in Fig. 5b. Post focus results are depicted at t/T 0 = 7.5 in Fig. 5c and t/T 0 = 15 in Fig. 5d. The initial condition shows a concentration of (a) wave components around the spectral peak consistent with the wave spectrum defined by (2) and (3). Approaching nonlinear focus, the wave spectrum exhibits energy transfers to higher wavenumbers and oblique wave components, as can be seen in Fig. 5b. Post focus, the energy transfers to oblique components intensifies, as can be seen in Fig. 5c, d. Figures 6 and 7 show the corresponding results from the MNLS simulations. Figure 6 is based on the MNLS equation of Trulsen et al. (2000), using the exact version of (11). Figure 7 is based on the MNLS equation of Trulsen and Dysthe (1996), using the fifth order truncated version of (11). As can be seen, both versions of the MNLS equation capture the spectral evolution of the wave group. The energy transfers to oblique and high-wavenumber components are captured in the MNLS wave spectra, near the time of focus and post focus, in Figs. 6 and 7. Wavenumber of MNLS carrier wave The effect of the carrier wavenumber on the evolution of the wave envelope is shown in Fig. 8 for infinite-depth MNLS simulations based on (13). The results based on the exact dispersion operator are shown in Fig. 8a and the results based on the truncated dispersion operator are shown in 8b. Both figures depict the increasing envelope steepness during focusing followed by a post-focus decline in steepness as the wave group disperses. Figure 8a demonstrates that β values less than unity do not significantly alter the evolution of the wave envelope if the exact dispersion operator is used. However, β values greater than unity do alter the evolution of the envelope for the exact dispersion operator; a β of 1.3 reduces the amplitude of the focused event by 4.0%. If the truncated dispersion operator is used, Fig. 8b reveals that β values both great and less than unity can impact the evolution of the wave group, indicating a 9.6% reduction in amplitude at focus for a β of 0.7. Thus, the truncated operator appears to be more sensitive than the exact operator to the selection of the β value. 
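The envelope-steepness curves of Figs. 4, 8 and 11 can be extracted from stored envelope snapshots along the following lines; the array layout, the function names and the synthetic example are assumptions for illustration only, not the post-processing actually used.

```python
import numpy as np

def steepness_series(B_snapshots, k_p):
    """A(t)*k_p for a sequence of envelope snapshots.

    B_snapshots : complex array of shape (n_times, ny, nx)
    Returns the spatial maximum of |B| at each stored time, scaled by k_p.
    """
    return k_p * np.abs(B_snapshots).reshape(len(B_snapshots), -1).max(axis=1)

def focus_amplitude_change(series_ref, series_test):
    """Relative change (%) in maximum envelope steepness between two runs,
    e.g. a beta = 1.0 reference case and a beta = 0.7 test case."""
    a_ref, a_test = series_ref.max(), series_test.max()
    return 100.0 * (a_test - a_ref) / a_ref

# Example with synthetic data (placeholder shapes and values)
rng = np.random.default_rng(0)
snaps = rng.standard_normal((5, 64, 64)) + 1j * rng.standard_normal((5, 64, 64))
print(steepness_series(snaps, k_p=0.028))
```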
Selecting a carrier wavenumber above/below the spectral peak effectively tests the bandwidth limits of the equation since the largest amplitude wave components exist around the spectral peak. Moving the carrier wavenumber away from the spectral peak, thus, shifts some large amplitude components away from the characteristic wavenumber. The superior performance of the exact dispersion operator, thus, demonstrates the improved bandwidth limits of the MNLS equation with exact dispersion, as indicated by Trulsen et al. (2000). The impact of the β value on spectral evolution is demonstrated by Figs. 9 and 10 with the results for the exact dispersion operator shown in Fig. 9 and the results for the truncated dispersion operator shown in Fig. 10. The amplitude spectrum of surface elevation is shown at the end of the simulation for various β values, including: β = 0.7 in Figs. 9a and 10a; β = 0.8 in Figs. 9b and 10b; β = 1.0 in Figs. 9c and 10c; β = 1.2 in Figs. 9d and 10d. Notably, both the exact and truncated dispersion operators appear to under predict oblique energy transfers for β values less than unity and both appear to over predict oblique energy transfers for β values greater than unity. As can be seen Figs. 9 and 10, the exact and truncated dispersion operators exhibit similar spectra for β values of 1.0 and 1.2. However, for β values less than unity, differences in the spectra arise between the exact and truncated dispersion operators. The truncated dispersion operator exhibits energy leakage to wavenumbers above the spectral peak forming a local peak at k * x /k p = 1.67 for β = 0.7. However, no such energy leakage is apparent with the exact dispersion operator for β values of 0.7 and 0.8. Energy leakage in the MNLS equation, reported by Martin and Yuen (1980), is a source of inaccuracy which can contaminate the solution, particularly in the context of long-term random sea simulation in which the effect accumulates over time. Trulsen et al. (2000) indicate that the exact dispersion operator eliminates energy leakage, and no significant evidence of energy leakage can be seen in Fig. 9 for all β values. Direction of MNLS carrier wave The impact of the carrier wave direction on the evolution of the wave envelope is shown in Fig. 11 for infinite-depth MNLS simulations based on (13). The results for the exact dispersion operator are shown in Fig. 11a and the results for the truncated dispersion operator are shown in Fig. 11b. The steepness of the wave envelope over time is shown for relative angles (χ ) between 0 • and 30 • in intervals of 5 • . The exact dispersion operator performs well even for a large relative angle between the carrier wave and direction of wave group propagation; an angle of 30 • results in a 3.1% reduction in amplitude at focus. The truncated dispersion operator performs well for small relative angles, an angle of 10 • results in a 4.5% reduction in amplitude at focus. However, the truncated dispersion operator performs less well for large relative angles; an angle of 30 • results in a 18.5% reduction in amplitude at focus. A relative angle between the carrier wave and the direction of propagation for the wave group is another means of testing the bandwidth limits of the MNLS equations. Introducing an angle between the carrier wave and the components which comprise the wave group effectively shifts the characteristic wavenumber in the azimuthal direction away from the spectral peak. 
Thus, the lower sensitivity of the exact dispersion operator to the relative angle demonstrates the superior bandwidth limits of the MNLS equation based on exact dispersion. The spectral evolution of the wave group for various angles of the carrier wave (χ) is shown in Figs. 12 and 13 for the exact dispersion operator and the truncated dispersion operator, respectively. The amplitude spectrum at the end of the simulation is shown for various angles of the carrier wave, including: χ = 0° in Figs. 12a and 13a; χ = 10° in Figs. 12b and 13b; χ = 20° in Figs. 12c and 13c; as well as χ = 30° in Figs. 12d and 13d. The wave group is spatially symmetric about the y*-axis, corresponding to spectral symmetry about the k * x axis. Fig. 12 shows a low sensitivity to the angle of the carrier wave; an angle of χ = 10° introduces minor asymmetries into the spectral evolution but the result is highly consistent with the χ = 0° result. An angle of χ = 30° intensifies the asymmetries but the exact dispersion operator still facilitates the resolution of oblique energy transfers to positive and negative k y components. The truncated dispersion operator shows a higher sensitivity to the angle of the carrier wave, as depicted in Fig. 13. Oblique energy transfers to positive and negative k y components continue to be resolved for a relative angle of χ = 10°. However, signs of energy leakage to oblique high-wavenumber components are apparent, resulting in significant asymmetry for χ = 20° and χ = 30°. Thus, oblique energy transfers to positive k y components are not resolved for large angles of χ with the truncated dispersion operator. Overall, the results shown in Figs. 12 and 13 demonstrate the superior bandwidth limits of the exact dispersion operator as well as the improved resolution of four-wave interactions. The resonance conditions of Phillips (1960) are based upon the linear dispersion relation. Thus, accurate resolution of four-wave interactions is expected to depend on the dispersion operator. Trulsen et al. (2000) indicate that improved resolution of four-wave interactions is expected for the exact dispersion operator, and Fig. 12 demonstrates that high-angle oblique energy transfers continue to be resolved if the MNLS equation is combined with an exact linear dispersion operator.
Finite-depth MNLS simulations
We compare finite-depth MNLS simulations, based on (12) and (14), with the results of OceanWave3D for various dimensionless depths. Our comparison is based upon amplitude spectra of surface elevation at the end of simulation (t/T 0 = 15) for dimensionless depths (k p d) of 5.59, 3.14, 2.60, 2.00, 1.60 and 1.36. The OceanWave3D results are shown in Fig. 14, demonstrating a weakening of wave-wave interactions with a reduction in depth, as observed in previous studies. The spectra corresponding to k p d = 5.59 show energy transfers along the k x -axis and towards oblique high-wavenumber components, as expected in deep water; McLean (1982) showed that deep-water waves of finite amplitude feature unidirectional as well as oblique instabilities. The unidirectional energy transfers are expected to be suppressed by depth due to an interplay between the modulation instability and the return current, as found by Benjamin (1967) and Whitham (1974) and discussed by Janssen and Onorato (2007). Furthermore, Benney and Roskes (1969) and McLean (1982) showed that the dominant/fastest component growth rates become oblique in waters of intermediate depth.
In Fig. 14, the spectra all exhibit a significant reduction in energy transfers along the k x -axis as depth is reduced, while the oblique energy transfers show less sensitivity to depth. Thus, the potential flow simulations exhibit an increasing dominance of oblique over unidirectional energy transfers as the water depth is reduced. The corresponding MNLS results are shown in Figs. 15 and 16. Finite-depth MNLS simulations based on (12), the classic MNLS equation with arbitrary-depth linear dispersion and a depth-sensitive return current term, are shown in Fig. 15. Figure 16 shows the results based on (14), a combination of the exact version of (11), with arbitrary-depth linear dispersion, and the depth-dependent coefficients for the nonlinear terms proposed by Sedletsky (2003). Both finite-depth MNLS formulations exhibit weakening collinear energy transfers along the k x -axis as depth is reduced from k p d = 5.59 to k p d = 1.36, accompanied by oblique energy transfers. Thus, both MNLS codes capture the shift of the dominant component growth rates away from the k x -axis to oblique components as the dimensionless depth is reduced. However, the results based on the classic MNLS formulation (12) show excellent agreement with OceanWave3D for the full range of depths, 1.36 ≤ k p d ≤ 5.59. In contrast, the results based on (14) agree well with OceanWave3D for depths in the range 2.00 ≤ k p d ≤ 5.59 but the agreement deteriorates for k p d < 2.00. Figure 16e, f depict a clear suppression of the oblique energy transfers for depths of k p d = 1.60 and k p d = 1.36, with negligible changes to the spectrum, throughout the entirety of the simulation, for the case of k p d = 1.36. Thus, the dimensionless depth k p d of 1.36 appears to be completely stable for the simulations based on (14), presumably since q 3 goes to zero and the two remaining nonlinear coefficients respectively diminish in amplitude or turn negative. In contrast, the potential flow simulations continue to feature oblique energy transfers for k p d = 1.60 and k p d = 1.36, and the case of k p d = 1.36 still results in oblique energy transfers and significant changes to the spectrum during the focused wave event. Thus, the formulation presented in (14) appears to underestimate the extent of the oblique energy transfers for k p d ≤ 2.00. The good agreement between Fig. 15 and Fig. 14 suggests that the coefficients of the nonlinear terms do not require modification for the range of finite depths considered in this study. The scale of the return current beneath the wave group implies that the return current is especially sensitive to depth, and the depth-sensitive return current term in (12) accounts for this effect. Combined with the arbitrary-depth linear dispersion relation and the exact dispersion operator of Trulsen et al. (2000), the evidence suggests that (12) provides an excellent finite-depth MNLS model for narrow-banded wave groups with dimensionless depths between 5.59 and 1.36.
Conclusion
We have simulated directionally-spread surface gravity wave groups, formed by dispersive focusing, in deep and finite depths using the MNLS equation. We have compared the results with a fully nonlinear potential flow code and find that the fifth-order truncated dispersion operator of Trulsen and Dysthe (1996) and the exact linear dispersion operator of Trulsen et al. (2000) both perform well if the wavenumber of the carrier wave coincides with the spectral peak and the carrier wave direction is aligned with the direction of wave group propagation.
The truncated dispersion operator shows marginally higher levels of diffusivity, impacting the steepness of the wave group after focus, but the spectral evolution agrees well between the truncated and exact dispersion operators. Selecting carrier wavenumbers above/below the spectral peak significantly impacts the results obtained with the truncated dispersion operator, reducing the steepness of the wave group at focus. Selecting a carrier wavenumber below the spectral peak can also aggravate energy leakage if the truncated dispersion operator is used. In contrast, the exact dispersion operator exhibits less sensitivity to the selection of the carrier wavenumber. Selecting a carrier wavenumber below the spectral peak does not significantly influence the steepness of the wave group at focus, if the exact dispersion operator is used, but carrier wavenumbers above the spectral peak can marginally reduce the steepness of the wave group at focus. Similarly, the truncated operator is more sensitive than the exact dispersion operator to misalignment between the carrier wave and the direction of wave group propagation. The steepness of the wave group at focus is significantly impacted by misalignment of 10 • or more, if the truncated operator is used. The exact dispersion operator demonstrates low sensitivity to misalignment, providing similar wave group steepnesses even for angles as large as 30 • . The spectral evolution results show that misalignment aggravates energy leakage, if the truncated operator is used, reducing the resolution of oblique energy transfers. In contrast, the exact dispersion operator continues to resolve oblique energy transfers, even for the largest angles of misalignment, with no evidence of significant energy leakage. Thus, this study provides evidence that the exact dispersion operator of Trulsen et al. (2000) does extend the bandwidth limits of the MNLS equation for steep wave groups with directional spreading, while improving the resolution of fourwave interactions and suppressing energy leakage. We find that the MNLS equation of Trulsen et al. (2000) also works well at finite depths if the arbitrary depth linear dispersion relation is used to evaluate the dispersion operator and the bottom-boundary condition is imposed at finite depth. We observe good agreement with our fully-nonlinear potential flow code for narrow-banded wave groups with dimensionless depths between 5.59 and 1.36.
10,050.6
2021-06-10T00:00:00.000
[ "Environmental Science", "Physics" ]
Texture, mineralogy and geochemistry of Teri sediments from the Kuthiraimozhi deposit, Southern Tamilnadu, India: implications on provenance, weathering and palaeoclimate
The study examines the red sand dune deposits locally designated as teri deposits, an omnipresent geomorphologic feature in the coastal region of Thoothukudi and Ramanathapuram districts of Tamil Nadu, India. One of the inland teri sand dune outcrops is located around the Kuthiraimozhi village of Thoothukudi district in Tamil Nadu, India. Textural, mineralogical and geochemical studies were carried out on the teri sediments and their compact sandstone outcrops. The sediments are moderately sorted to well sorted and fine skewed, which indicates a fluvio-marine depositional environment. Geochemical analysis results of major, trace and rare earth elements for the teri deposits help to predict the provenance, weathering status, depositional environment and climate. The geochemical study reveals that the sediments were derived from marine and non-marine sources. Teri sediments are geochemically classified as lithic arenite or wacke. Petrography and X-ray diffraction analysis reveal the predominance of quartz and feldspars along with accessory minerals like ilmenite, rutile, garnet, magnetite, hematite, zircon, diopside, hypersthene and biotite. Mineralogical observation illustrates that the teri sediments have originated from the weathering of felsic and mafic source rocks. The Chemical Index of Alteration (CIA) values of the sediments represent moderate to high weathering conditions in the source area. The depositional environment indicates that the sediments are fine-grained with a high maturity index. Although the sediments were formed from fluvio-marine sources, the reddening character of the teri deposits is due to oxidation and leaching of iron-bearing minerals by percolating surface water from high rainfall and by groundwater fluctuation of the aquifer under arid and semi-arid climatic conditions.
Introduction
The recent and sub-recent red sediments, locally called 'teri' sediments, are ubiquitous linear geomorphologic features in the coastal region of Thoothukudi and Ramanathapuram districts of Tamil Nadu, India. These features occur as detached patches amidst black soil and loamy soil. The thickness of these red deposits generally decreases from northeast to southwest. Based on the occurrences, teri deposits are categorized as coastal teri, midland teri and inland teri deposits. The inland teri deposit is located as a surficial outcrop at the contact zone of metamorphic and sedimentary rocks. The top surface of the teri deposit is generally loose and unconsolidated, whereas the bottom bed is compact, consolidated and forms a red sandstone bed. Due to intense evaporation, the groundwater forms calcrete deposits as veining and replacement within the teri sediments. The origin of the teri sediments has been under geological debate. Numerous researchers, not only in India but also in various parts of the world, have concentrated their research on sediment texture, mineralogy and geochemistry (Cox et al. 1995; McLennan 1993; Angusamy and Rajamanickam 2000; Behera 2003; Angusamy et al. 2004; Poppe and Eliason 2008; Dayal and Moorthy 2006; Ramaswamy and Rao 2006; Islam and Rahman 2009; Udayanapillai et al. 2015, 2016; Ashraf and Hoque 2015; Zaid 2015; Hendrik et al. 2015; Miao et al.
2016;Mir and Mir 2019; Armstrong-Altrin 2020; Perumal and Udayanapillai 2020; Ramos-Vazquez and Armstrong-Altrin 2020; Armstrong-Altrin et al. 2020;Madhavaraju et al. 2020). Thiruvikramji et al. (2008) stated that red coloration of Sathankulam teri deposits is caused by Holocene climate change. Udayanapillai and Ganeshamoorthy (2013) found that teri deposit of Surangudi area in Thoothukudi district of Tamil Nadu was derived from black soil. Red coloration of the deposit may be due to leaching action and oxidation of iron-bearing heavy minerals by rainfall and groundwater. Since teri deposit is highly enriched with heavy mineral concentration, many private sectors occupy and establish their mining sites for sand mining; an attempt has been made to infer the texture, mineralogy and geochemistry of the Kuthiraimozhi teri sediments. Study area The location map of the study area is shown in Fig. 1. The proposed research area bounds between latitudes from 8°15′ 0″ N to 9°25′ 0″ N and longitudes from 77°05′ 0″ E to 78°35′ 0″ E. The study area is well connected by national and state highway road network from Tuticorin, Tirunelveli and Tiruchendur. The physiography is almost flat with little elevation ranges from 5 to 62 m above MSL. Drainage density is low, and its pattern is sub-dendritic nature. Southern branch of Tamirabarani and Karamanayar rivers flows in the north and south directions respectively to the study area. The area shows strong tectonic evidence of both progradation and upliftment. The elevated teri sand dune feature of the Kuthiraimozhi region acts as a good groundwater potential zone in the coastal plain. The average rainfall of the study area is approximately 700 mm, influenced by southwest and northeast monsoons. Geology Proterozoic rocks such as quartzite, crystalline limestone, hornblende-biotite gneiss, charnockite, granitic gneiss, granite and pegmatite occur as basement rocks. The basement rocks are overlain by Tertiary and Quaternary shell limestone and arenaceous limestone outcrops, respectively. Tertiary and Quaternary beds are followed by Holocene to Pleistocene laterite and calcrete. Recent and sub-recent black soil, red (teri) soil, river alluvium and beach sand rest on sub-surface Quaternary and Tertiary outcrops. Material and methods A total of ten representative spatial teri sand samples, including sub-surface compact sandstone, were collected from the dune outcrop of Kuthiraimozhi region. Then, 200-g sediment samples were subjected to sieve analysis for 15 min by using the Ro-Tap sieve shaker machine using a stack of quarter phi intervals, such as 10, 18, 35, 60, 120, 230 and 400 ASTM sieves. Each fraction was weighed. Graphical measures, such as histogram and cumulative frequency against weight percentage of data, were drawn. Basic geostatistical parameters, such as mean, median, standard deviation, skewness and kurtosis, were calculated by the formulae of Folk and ward (1957). The selected samples of fine fraction retained from 120 to 230 mesh materials were taken for heavy mineral analysis. The heavy minerals were separated from light fraction by using heavy liquids bromoform (Milner 1962). The separated heavy minerals were identified through a camera attached polarised binocular microscope. The prepared thin section of the selected compact red sandstone was also photographed for petrographic analysis. 
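The graphic grain-size measures of Folk and Ward (1957) mentioned above can be computed from phi percentiles read off the cumulative frequency curve; a minimal sketch follows, in which the interpolation of percentiles from the sieve fractions and the example data are assumed implementation details, not the actual Table 1 data.

```python
import numpy as np

def phi_percentiles(phi_midpoints, weight_pct, probs):
    """Interpolate phi values at given cumulative percentages (e.g. 5, 16, 25,
    50, 75, 84, 95) from sieve-fraction midpoints and weight percentages."""
    cum = np.cumsum(weight_pct) / np.sum(weight_pct) * 100.0
    return np.interp(probs, cum, phi_midpoints)

def folk_ward(phi_midpoints, weight_pct):
    """Graphic mean, sorting (sigma_I), skewness (Sk_I) and kurtosis (K_G)
    after Folk and Ward (1957)."""
    p5, p16, p25, p50, p75, p84, p95 = phi_percentiles(
        phi_midpoints, weight_pct, [5, 16, 25, 50, 75, 84, 95])
    mean = (p16 + p50 + p84) / 3.0
    sorting = (p84 - p16) / 4.0 + (p95 - p5) / 6.6
    skewness = ((p16 + p84 - 2 * p50) / (2 * (p84 - p16))
                + (p5 + p95 - 2 * p50) / (2 * (p95 - p5)))
    kurtosis = (p95 - p5) / (2.44 * (p75 - p25))
    return mean, sorting, skewness, kurtosis

# Example with a hypothetical sieve stack (phi midpoints and weight %)
phi = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
wt = np.array([2.0, 8.0, 25.0, 35.0, 20.0, 8.0, 2.0])
print(folk_ward(phi, wt))
```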
Two selected representative samples of powdered teri sediments were subjected to X-ray diffraction (XRD) analysis using the 'Xpertpro' instrument at the laboratory of the Department of Physics, M.S. University, Tirunelveli. The mineralogy of the samples was identified through the respective d-spacing values and their intensities (Sachinathmitra 1989) and also from other published literature. Major, trace and rare earth elements were analysed by X-ray fluorescence (XRF) and inductively coupled plasma mass spectrometry (ICPMS), respectively, at the National Geophysical Research Institute, Hyderabad.
Results and discussion
The results and discussion of the grain size analysis, mineralogy and major, trace and rare earth element geochemistry of the teri sediments of the Kuthiraimozhi region are given as follows.
Grain size analysis
The results of the grain size analysis of the teri sediments are given in Table 1. The grain size data of the teri sediments are obtained from the graphical measures of the histogram and cumulative frequency curves. The histograms show that most of the sediments of the area have a unimodal distribution (Fig. 2a-j). The unimodal distribution of grain size indicates the uniform mixing of grains (Verma and Prasad 1974). The cumulative frequency curve depicts the relationship between sediment transport dynamics and grain size distribution. The cumulative frequency curves of the teri sediments of the area are given in Fig. 3a-j (after Visher 1969). The graphical measurements give value ranges for the mean (0.62 to 2.03), median (0.6 to 1.8), standard deviation (0.04 to 0.96), skewness (0.34 to 1.78) and kurtosis (0.61 to 0.78), respectively. The sediments range from well sorted to moderately sorted. The skewness results show a positively skewed nature, whereas the kurtosis class is platykurtic. A moderately sorted to well-sorted nature indicates a fluvio-marine depositional environment (Friedman 1961; Folk and Ward 1957). A similar condition existed in the study area. Negatively and positively skewed sediments indicate erosion and deposition, respectively (Cronan 1972). The positive skewness, platykurtic nature and fine-grained characteristics of the sediments indicate that the sediments were brought from both fluvial and marine environments. The environmental classification based on standard deviation and graphic mean illustrates the depositional environments of the sediments; the area has a river and coastal dune environment (Friedman 1961; Moiola and Weiser 1968).
Mineralogical observation
The teri sediments consist of ninety percent light minerals and ten percent heavy minerals. The heavy minerals are controlled by the lithology of the source rock, the differential stability of the minerals, their durability under long abrasion, hydrodynamic factors, shape and specific gravity, and post-depositional survival. Numerous researchers have worked on heavy mineral studies of coastal sediments (Angusamy and Rajamanickam 2000; Behera 2003; Angusamy et al. 2004; Ergin et al. 2007; Verma and Prasad 1974; Perumal and Udayanapillai; Miao et al. 2016; Ramasamy and Rao 2006). The petrographical study and XRD analysis reveal the mineralogical content, such as ilmenite, magnetite, rutile, garnet, zircon, diopside, tourmaline, hematite, goethite and kyanite, along with light minerals such as quartz, feldspar and biotite (Fig. 4a, b, X-ray diffraction patterns of the teri sediments; Table 2).
Mineralogy reveals that the teri sediment originated early as fluvial black soil regolith deposits derived from source rocks of quartzite, calc-granulite, khondalite, charnockite, granite, etc. The sediments may have been transported by the fluvial action of the Tamirabarani and Karamaniyar rivers. The iron-rich heavy minerals in the soil, such as ilmenite, magnetite, garnet, hypersthene and rutile, first undergo leaching by meteoric or surface water and by groundwater fluctuation of the aquifer, and then undergo in situ oxidation under favourable arid and semi-arid climatic conditions. These two actions cause the reddening colour of the teri soil. The presence of spherical and sub-spherical quartz indicates that the sediments have a long transport history due to fluvial action. The presence of angular and sub-angular grains may be due to a shorter transport history under the adjacent coastal wind action from the Gulf of Mannar region (Fig. 5a-d). These observations support a fluvio-marine origin for the sediments. The veining character of calcrete in the teri sand may be due to the evapotranspiration of alkaline-rich groundwater originating from the calcareous sandstone aquifer of the Tertiary and Quaternary sedimentary basement rocks below the red teri soil.
Geochemistry
The results of major, trace and rare earth elements of the teri sediments are shown in Tables 3, 4, and 5.
Major elements
The average distribution trend of major elements of the area is given in Fig. 6a and Table 3 (Nagarajan et al. 2007). SiO 2 vs Al 2 O 3 shows a high degree of positive correlation (r = 0.84), suggesting that they could have been derived from the same sources. The concentration of SiO 2 is more than Al 2 O 3 due to the supply of unaltered quartz and feldspar from the sources by the fluvial action of the Thamiraparani River and by coastal wind. SiO 2 and Al 2 O 3 are generally obtained from the weathering products of various silicates. Orthoclase (KAlSi 3 O 8) is formed by isomorphous substitution of Si 4+ with Al 3+ because of their similar ionic radii (0.50 Å and 0.55 Å, respectively). Alkali metals are introduced to compensate for the changes, as in the formation of K feldspar in which 25% of the Si 4+ has been replaced by Al 3+. The clay minerals are hydrous aluminium silicates, and a variety of heavy minerals are formed by partial isomorphic substitution of Si 4+ with a variety of ions, viz. Be 2+, Al 3+, Fe 2+, Mg 2+, Ca 2+, etc. Hence, in sediments, the percentage of alumina is not comparable with that of silica (Vetha Roy and Chandrasekar 2007). In contrast, the negative correlation of Al 2 O 3 vs SiO 2 in the teri sand of the Surangudi region in Thoothukudi district indicates that it may be derived from different sources (Udayanapillai and Ganesamoorthy 2013). The K 2 O/Al 2 O 3 ratio reveals the original composition of the sediments. The K 2 O/Al 2 O 3 ratios for clay minerals and feldspars differ (0 to 0.2 and 0.3 to 0.9, respectively) in sediments (Cox et al. 1995). The average K 2 O/Al 2 O 3 ratio of the area is 0.10, which is closer to the clay mineral range. It suggests that kaolinite or illite may be dominant clay minerals in the sediments. The SiO 2 /Al 2 O 3 and SiO 2 /MgO ratios of the Proterozoic Pakhal Shale are 3.35 and 31.3, respectively, which suggest a fairly good amount of felsic sources in the provenance (Dayal and Moorthy 2006).
In contrast, the SiO 2 /Al 2 O 3 and SiO 2 /MgO ratios in the area show different values of 5.54 and 698.20, respectively, which suggest a mixing of felsic and mafic rock sources (Ramos-Vázquez and Armstrong-Altrin 2019). Fe 2 O 3 shows positive correlations with MnO, CaO, Na 2 O, K 2 O, TiO 2 and P 2 O 5, which indicates derivation from lithogenic sources. The negative correlation between Fe 2 O 3 and MgO indicates different sources. MgO may be derived from the calcrete sources of groundwater origin, whereas Fe 2 O 3 may be derived from lithogenic sources, from the iron-bearing minerals of igneous and metamorphic rocks. TiO 2 is mainly obtained from phyllosilicates (Condie et al. 1992; Nagarajan et al. 2007). It reflects the source rock, due to its immobile character (McLennan 1993; Nagarajan et al. 2007). The average value of TiO 2 is 1.10%. TiO 2 vs Al 2 O 3 shows a very low degree of negative correlation (r = −0.19), so they are independent variables derived from different sources (Fig. 4a, b: X-ray diffraction patterns of the teri sediments). Most of the samples have a low P 2 O 5 content, which may reflect a lesser amount of accessory minerals such as apatite and monazite. Ca and Mg are widely distributed in the earth's crust. CaO is of greater importance in the biosphere, so it is also classified as a biophile element. CaO shows positive correlations with Na 2 O, K 2 O, TiO 2 and P 2 O 5, with coefficient values of r = 0.59, 0.97, 0.86 and 0.70, respectively. These positive correlations represent the same lithogenic association of mineral sources. Positive correlations of Na 2 O vs K 2 O, Na 2 O vs TiO 2, K 2 O vs TiO 2 and K 2 O vs P 2 O 5 suggest that they are derived from the same lithogenic sources. The teri sand occurs as an in situ isolated deposit amidst black soil and loamy soil. Red coloration may be due to leaching and oxidation of iron-bearing minerals under favourable climatic conditions (Udayanapillai and Ganesamoorthy 2013). A similar action may have caused the reddening coloration of the teri sediments.
Trace elements
The average distribution of trace elements of the teri sediments is presented in Fig. 6b and Table 4. V, Cr, Ni, Cu, Zn, Rb, Sr, Zr, Nb and Ba are concentrated above 10 ppm, whereas Sc, Co, Ga, Y, Cs, Hf, Ta, Pb, Th and U occur at concentrations below 10 ppm. Based on geochemical affinity, the trace elements can be classified into groups. The trace elements with very high concentrations in the sediment show an ascending trend of Zr < Rb < Ba < V < Zn < Cr < Sr, with values of 28.00, 40.00, 41.10, 57.00, 57.30, 73.60 and 82.8 ppm, respectively. The zirconium in the red sediments may be derived from the heavy mineral zircon, sourced from granite and charnockite rocks of the adjacent terrain. The occurrence of zircon crystals in granite and charnockite has already been reported from the adjacent Sathankulam region of Tamil Nadu (Perumal 2017). Rb is a strongly lithophile element and an alkali metal, which is also available from the potash feldspars, viz. orthoclase and microcline. These minerals may be obtained from granite and pegmatite source rocks. The high Ba content in sediments is normally derived from barite crystals of syn-sedimentary origin (Armstrong-Altrin et al. 2019; Nagarajan et al. 2007; Udayanapillai et al. 2020).
So, syngenetic deposition of barium may account for the Ba sources in the red teri sediments. Vanadium is a lithophile element and is abundant in association with apatite, biotite, muscovite, magnetite, pyroxene and amphibole. The presence of V in soil depends on the parent rock (Shaheen et al. 2019). The weathering of these minerals, available in the granite, hornblende-biotite gneiss and charnockite of the study area, may contribute V to the red teri sediments. The vanadium content in a sedimentary rock reflects the primary abundance of detrital ferric oxide, clay minerals, and hydroxides of Fe and Mn (Meunier 1994). So, iron-bearing minerals may also contribute V to the teri sediments. The geogenic Zn sources may be derived from the plagioclase and microcline of granitic rocks. Chromium is a lithophile element. It is associated with iron-bearing silicate minerals, such as diopside, hypersthene, biotite, hornblende and magnetite, and occurs as inclusions in quartz. It undergoes isomorphous substitution in aluminium and ferric spinels, given the similar ionic radius of Cr (0.64 Å) to those of Al (0.57 Å) and Fe (0.67 Å) (Dhir et al. 2004). A high chromium concentration has been observed in the beach sands of the central coast of Ghana (Mahamuda and Emmanuel 2020). Bijilal and Senthil Nathan (2016) established that Cr is more available in the lateritic clay profile in Kerala, possibly related to the reduction of chromium and iron by kaolin.
Rare earth elements
The rare earth element concentrations of the analysed teri sediment samples are given in Table 5. The chondrite-normalised plot (Taylor and McLennan 1985) for the teri sediments is given in Fig. 6c. Rare earth elements generally reside in the fine fraction, such as silt and clay. The clay minerals enriched with alumina and ferric ions may readily accommodate the trivalent rare earth elements (Cullers et al. 1988).
Classifications
Classification of sediments is established by the log(SiO 2 /Al 2 O 3) vs log(Na 2 O/K 2 O) ratios of the respective sediments (Pettijohn 1975; Armstrong-Altrin et al. 2004; Islam and Rahman 2009; Madhavaraju and Lee 2010). On this basis, the composition of the teri sediments in the study area falls in the field of lithic arenite, which consists of quartz, feldspar and rock fragments with a fine clay matrix, in which the grain size is from 0.06 to 2 mm. Wacke is also a sandstone, composed of sand-sized grains of 0.063 to 2 mm with a fine-grained clay matrix.
Provenance
The provenance characteristics of any sediment are found by using the geochemical signature of clastic sediments (Taylor and McLennan 1985; Condie et al. 1992; Cullers 1995; Madhavaraju and Ramasamy 2002; Armstrong-Altrin et al. 2004; Perumal and Udayanapillai 2019). The Al 2 O 3 /TiO 2 ratio of clastic rocks is generally used to infer the source rock composition. Hayashi et al. (1997) found that the Al 2 O 3 /TiO 2 ratio of sediment ranges from 3 to 8 for sediments derived from mafic igneous rock sources, from 8 to 21 for intermediate igneous rock sources, and from 21 to 70 for felsic igneous rocks. The Al 2 O 3 /TiO 2 ratios of the area studied, together with the bivariate plot of TiO 2 vs Ni (Floyd et al. 1989; Anaya Gregorio et al. 2018), further indicate that the teri sediments were mainly derived from felsic or acidic granitoid basement rocks (Fig. 8). Cr and Ni are abundant in a silicic clastic provenance (Nagarajan et al. 2007).
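A minimal sketch of the sediment-classification and provenance ratios discussed in this section is given below (the Pettijohn log-ratio axes and the Al2O3/TiO2 provenance bands of Hayashi et al. 1997). The example values are hypothetical, and the band boundaries are only the approximate ranges quoted in the text, not the published field boundaries of the classification diagram.

```python
import numpy as np

def pettijohn_classification(sio2, al2o3, na2o, k2o):
    """log(SiO2/Al2O3) and log(Na2O/K2O) in wt%, the axes of the
    Pettijohn (1975) sandstone classification diagram; the lithic arenite /
    wacke field boundaries must be read off the published diagram."""
    return np.log10(sio2 / al2o3), np.log10(na2o / k2o)

def al2o3_tio2_provenance(al2o3, tio2):
    """Al2O3/TiO2 ratio with the approximate provenance bands of
    Hayashi et al. (1997): ~3-8 mafic, ~8-21 intermediate, ~21-70 felsic."""
    r = al2o3 / tio2
    if r < 8:
        band = "mafic"
    elif r <= 21:
        band = "intermediate"
    else:
        band = "felsic"
    return r, band

# Example with hypothetical wt% values (not the Table 3 data)
print(pettijohn_classification(sio2=78.0, al2o3=14.1, na2o=1.2, k2o=1.4))
print(al2o3_tio2_provenance(al2o3=14.1, tio2=1.1))
```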
The Cr and Ni contents in the teri sediments of the area are somewhat higher (Table 5) than those of other minor/trace elements. However, the Cr/Ni ratio of the teri sediment is low (3.44) compared with the PAAS value (42.00). Although the sediments are derived from the granitoid basement, the in situ leaching and oxidation of iron-bearing heavy minerals and silicate minerals in the teri sand dunes of the area under a favourable climate cause the reddening character and also account for the Ni and Cr abundance in the sediments. The Th/Co and La/Sc ratios of the teri sediments of the study area fall in the silicic rock composition field of the plot in Fig. 7b (Cullers 2002). These results indicate that the teri sediments are derived from the source rocks of granite, charnockite, hornblende-biotite gneiss and quartzite. The elemental ratios of Th/Sc, Th/Co, Th/Cr, Cr/Th and La/Sc of the results are 0.79, 1.03, 0.09, 11.34 and 1.8, respectively. They are compared with the data of the Garudamangalam sandstone (Babu 2017), the upper continental crust (UCC; Taylor and McLennan 1985), the Post-Archean Australian Shale (PAAS; Taylor and McLennan 1985) and the ranges of sediment values for felsic and mafic rocks (Table 6). This comparison also shows that the teri sediment values fall within the range of felsic rocks. La/Th and Th/Sc are quite constant in sedimentary rocks, at about 2.4 and 0.9, respectively (Taylor and McLennan 1985; Anaya Gregorio et al. 2018; Perumal and Udayanapillai 2019). The La/Th and Th/Sc ratios of the teri sediments are 2.31 and 1.03, respectively. These ratios coincide with the values of sources of silicic or felsic rock composition. The REE pattern and europium anomaly provide an important clue about the source rock character (Armstrong-Altrin 2020). Higher LREE/HREE ratios and negative Eu anomalies are generally observed in felsic rocks, whereas lower LREE/HREE ratios and no or small Eu anomalies are observed in mafic rocks (Cullers 1995). The average LREE/HREE ratio of the teri sediments of the study area is 13.35, with a negative Eu anomaly of 2.01. These results are higher than the above ratios of mafic rocks and indicate that the teri sediments are derived from felsic source rocks, such as granite and charnockite. The Eu/Eu* vs (Gd/Yb)n plot (Fig. 7c; McLennan and Taylor 1991) indicates that the teri sediments of the area fall in the Archean field. So, they originated from the Archean granite, granitic gneiss and charnockite rocks of the study area. (In Fig. 7, panel a is a TiO 2 vs Ni bivariate provenance plot for the teri sediments after Floyd et al. (1989) and Nagarajan et al. (2007), panel b is a Th/Co vs La/Sc provenance plot after Cullers (2002), later followed by Nagarajan et al. (2007), and panel c is the (Gd/Yb)n vs Eu/Eu* provenance plot after Taylor and McLennan (1985).)
Chemical Index of Alteration and weathering
Nesbitt and Young (1982) established the Chemical Index of Alteration (CIA) using the molecular composition of bulk sediments, given by the formula CIA = [Al 2 O 3 / (Al 2 O 3 + CaO* + Na 2 O + K 2 O)] × 100, in which CaO* represents the CaO in the silicate phase. A CIA value of 100 represents kaolinite and chlorite, and 70-75 represents average shale. High values (76-100) represent intense chemical weathering in the source area, while low values (50 or less) indicate low weathering of the source area. The average CIA value of the teri sediments is 84, which indicates intense chemical weathering; the low Na 2 O also reflects strong weathering of the source rock, as shown by the A-CN-K triangular plot (Fig. 9).
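The weathering and REE indices used above can be computed as follows. The CIA formula follows Nesbitt and Young (1982) in molecular proportions, while the chondrite values, the CaO* correction (assumed here to be supplied already corrected to silicate-bound CaO) and the LREE/HREE grouping are assumptions to be replaced by the values actually used in the study.

```python
import numpy as np

MOLAR_MASS = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def cia(al2o3, cao_star, na2o, k2o):
    """Chemical Index of Alteration (Nesbitt and Young 1982), in molecular
    proportions: CIA = 100 * Al2O3 / (Al2O3 + CaO* + Na2O + K2O).
    Inputs are wt%; cao_star must already be the silicate-bound CaO."""
    m = lambda wt, ox: wt / MOLAR_MASS[ox]
    a = m(al2o3, "Al2O3")
    return 100.0 * a / (a + m(cao_star, "CaO") + m(na2o, "Na2O") + m(k2o, "K2O"))

def eu_anomaly(sm, eu, gd, chondrite):
    """Eu/Eu* = Eu_N / sqrt(Sm_N * Gd_N) from chondrite-normalised values."""
    sm_n, eu_n, gd_n = sm / chondrite["Sm"], eu / chondrite["Eu"], gd / chondrite["Gd"]
    return eu_n / np.sqrt(sm_n * gd_n)

def lree_hree_ratio(ree_ppm):
    """Sum(La..Eu) / Sum(Gd..Lu) in ppm; the La-Eu vs Gd-Lu split is an
    assumed grouping, not stated explicitly in the text."""
    lree = ["La", "Ce", "Pr", "Nd", "Sm", "Eu"]
    hree = ["Gd", "Tb", "Dy", "Ho", "Er", "Tm", "Yb", "Lu"]
    return sum(ree_ppm[e] for e in lree) / sum(ree_ppm[e] for e in hree)

# Example with hypothetical values (not the Table 3/5 data); the chondrite
# values are approximate placeholders for the Taylor and McLennan (1985) set.
chond = {"Sm": 0.231, "Eu": 0.087, "Gd": 0.306}
print(cia(al2o3=14.1, cao_star=0.8, na2o=0.5, k2o=1.4))
print(eu_anomaly(sm=4.2, eu=0.8, gd=3.6, chondrite=chond))
```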
Depositional environment and climate

Sediments with higher Al2O3/Na2O and K2O/Na2O ratios and low CaO represent fine-grained clastic sediments that are chemically mature. Dayal and Moorthy (2006) and Suttner and Dutta (1986) proposed a climate boundary classification based on the total geochemical value of Al2O3 + K2O + Na2O vs SiO2. The results for the teri sediments indicate a semi-arid to arid climate with a high maturity index; a similar result was obtained for the Surangudi teri sediments. The classification based on log(MgO/Al2O3) vs log(K2O/Al2O3) (Roaldset 1978; Armstrong-Altrin et al. 2004; Shehata and Abdou 2008) depicts whether the depositional environment of the sediments is of marine or non-marine origin. This interpretation coincides with the petrography results.

Conclusion

The textural characteristics of the sediments reveal that their moderately sorted to well-sorted, fine-skewed and platykurtic nature represents a mixture of river and coastal dune environments. The thin-section study and XRD analysis reveal minor mineralogical contents of ilmenite, rutile, garnet, magnetite, hematite, zircon, diopside, hypersthene, biotite and hornblende along with an abundance of quartz and feldspar. The sediments are derived from the felsic and mafic source rocks of the adjacent study area and from coastal wind sources of beach sand of the Gulf of Mannar. The geochemistry of the sediments reveals an abundance of SiO2, Fe2O3, MgO, CaO, TiO2, V, Cr, Ni, Cu, Zn, Rb, Sr, Zr, Nb and Ba, along with light rare earth elements showing negative europium and positive cerium anomalies. Although the teri sediments are derived from fluvio-marine sources, the in situ leaching and oxidation of iron-bearing heavy minerals and silicate minerals cause the reddening of the soil under favourable arid and semi-arid climatic conditions. The geochemical data reveal that the sediments were derived from felsic and mafic source rocks of granite, charnockite, hornblende-biotite gneiss, khondalite and calc-granulite, and from aeolian sources of beach sand. The CIA results indicate moderate to high weathering in the source area. The depositional geochemical analysis reveals fine-grained clastic sediments that are chemically mature, with a high maturity index, formed under arid and semi-arid climatic conditions.

[Table/figure notes: Cullers (1995, 2002); Cullers and Podkovyrov (2000); Cullers et al. (1988); Taylor and McLennan (1985). Fig. 9 A-CN-K triangular plot after Nesbitt and Young (1982) for the sediments of the area (UCC; Taylor and McLennan 1985)]

Declarations

Conflict of interest The authors declare that they have no competing interests.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
6,224.4
2021-02-23T00:00:00.000
[ "Geology", "Environmental Science", "Geography" ]
Inferring nonequilibrium thermodynamics from tilted equilibrium using information-geometric Legendre transform Nonstationary thermodynamic quantities depend on the full details of nonstationary probability distributions, making them difficult to measure directly in experiments and numerics. We propose a method to infer thermodynamic quantities in relaxation processes by measuring only a few observables, using additional information obtained from measurements in tilted equilibrium, i.e., equilibrium with external fields applied. Our method is applicable to arbitrary classical stochastic systems, possibly underdamped, that relax to equilibrium. The method allows us to compute the exact value of the minimum entropy production (EP) compatible with the nonstationary observations, giving a tight lower bound on the true EP. Under a certain additional condition, it also allows the inference of the EP rate, thermodynamic forces, and a constraint on relaxation paths. Our method uses a Legendre transform of EP at the level of probability distributions, which we develop based on a similar Legendre transform in information geometry. I. Introduction Relaxation processes are ubiquitous in nature, and they undergo various nonstationary probability distributions before relaxing to the final stationary distribution.For example, biological systems such as biochemical signaling pathways [1,2] and neurons [3,4] respond to external signals in a transient manner to convey information.Relaxation processes also include various nontrivial physical phenomena, such as nonmonotonic relaxations [5][6][7], slow glassy relaxations [8], and sudden cutoff relaxations [9,10]. These processes are out of equilibrium and thus have inevitable thermodynamic costs.According to stochastic thermodynamics [11][12][13], thermodynamic costs such as entropy production (EP) and thermodynamic forces depend on the full details of nonstationary probability distributions.Therefore, to obtain thermodynamic quantities directly in experiments and numerical simulations, we need to measure all the details of probability distributions by accumulating many realizations of the same relaxation process and computing the histogram at each time point.This nonstationary measurement is practically impossible for systems with more than a few states. In this paper, we propose a method of thermodynamic inference for relaxation processes that uses measurements in tilted equilibrium, i.e., the equilibrium under the application of external fields to the system.Our approach combines the nonstationary measurement of a few observables with the tilted equilibrium measurement of the same set of observables.From these data, our method allows us to compute the exact value of the minimum EP compatible with the nonstationary data, which constitutes a tight lower bound on the true EP over the relaxation from any intermediate distribution to the final equilibrium.Moreover, if the system satisfies a condition called realizability condition, which says that the nonstationary distribution is exactly realized as a tilted equilibrium, our method provides us with additional information about the process: the exact value of the true EP, the instantaneous EP rate, the nonstationary thermodynamic forces, and a constraint on relaxation trajectories.Our method applies to arbitrary classical stochastic systems relaxing to equilibrium, including overdamped and underdamped systems, that may have continuous or discrete state spaces. 
This paper is organized as follows.In Sec.II, we state the setup and define the problem.Section III is the main section of this paper, where we introduce the tilted equilibrium measurements and establish the inference method of the minimum EP.In Sec.IV, we numerically demonstrate our proposed method with a one-dimensional Brownian particle.Section V develops additional inference methods under the realizability condition.In Sec.VI, we sketch the derivation of the results.In the derivation, we develop a Legendre transform at the level of probability distributions from a similar Legendre transform in information geometry [47,48].Section VII concludes the paper. A. Setup We consider a stochastic system in contact with a single heat bath at a constant temperature T .The system stochastically moves around a continuous state space X ⊂ R d or a discrete state space X = {1, . . ., N}.Let p(x) denote a probability density function for a continuous system and a probability mass function for a discrete case, which satisfies X dx p(x) = 1.Here and hereafter, X dx should always be replaced by x∈X for discrete cases.The state x has energy ϵ(x), which is assumed to be time independent.The state energy determines the equilibrium distribution p eq ≡ {p eq (x)} x∈X with p eq (x) ∝ exp[−ϵ(x)/k b T ], where k b is the Boltzmann constant. The system exhibits a relaxation process p(t) ≡ { p(x; t)} x∈X for t ≥ 0, where t is the time variable.For notational convenience, we use tilde ( ˜) for quantities associated with the relaxation process of interest.We assume that the process p(t) converges to the equilibrium distribution p(t) → p eq as t → ∞.We do not assume any specific time evolution law unless otherwise noted. The fundamental assumption of this paper is that the details of p(x; t) are not measurable, but we can measure the expectation values of a few observables B 1 , . . ., B K .The number K ≥ 1 is arbitrary, and it can be a small number such as 2 or 3.An observable B α ≡ {B α (x)} x∈X is any real-valued function over X, and its expectation value over a probability distribution p is ⟨B α ⟩ p X dx B α (x)p(x).We use the notation B ≡ (B 1 , . . ., B K ) T and ⟨B⟩ p ≡ (⟨B 1 ⟩ p , • • • , ⟨B K ⟩ p ) T .Without loss of generality, we assume that the K observables are linearly independent of each other.We also assume that the K observables are linearly independent with a constant observable (a trivial observable whose value does not depend on x) because such an observable gives no information about the process.We write the set of expectation values measured at time t as which is the only nonstationary data needed for our proposed method [Fig.1(a)]. B. EP and the minimum EP The fundamental quantity characterizing the thermodynamic cost of a relaxation process is EP [12].For systems FIG. 1. Schematics of the proposed inference method.(a) A system undergoing a relaxation process with a time-independent energy ϵ(x) and a nonstationary distribution p(t).Our method requires only the data of the expectation values of a few observables, η(t) = ⟨B⟩ p(t) , along the process.(b) To systematically collect data from tilted equilibrium measurements, we apply the external field v(x; θ) and measure the expectation values of the observables, η(θ) = ⟨B⟩ pte(θ) , for many θ's.We also compute the tilted equilibrium free energy ∆F (θ) from the measured values of η(θ). in contact with a single heat bath, the EP of the relaxation process from p(t) to p eq is given by T −1 ∆H[ p(t)], where [12,49].The first term in Eq. 
( 2) accounts for the change in the Shannon entropy of the system, while the second term gives the change in the bath entropy due to heat flux.Since T is set constant throughout the paper, we abuse terminology and call ∆H[ p(t)] the EP.The EP depends on the details of the probability distribution, and we need p(x; t) for all x and all t if we want to compute ∆H[ p(t)] directly from Eq. ( 2). Since the measured data η(t) are not sufficient to determine a single value of ∆H[ p(t)], we follow the general concept in Ref. [23] to focus on the range of EP compatible with the data η(t).There is no upper limit of the compatible range because hidden (unobserved) degrees of freedom can cause an arbitrarily large EP [21].On the other hand, there is a lower bound, which we write as for any set of expectation values η.Here, the minimum is taken over all probability distributions q that satisfy ⟨B⟩ q = η, i.e., that are compatible with the set of expectation values η.Since ∆H[q] is the dissipation of the relaxation process from q to p eq , any relaxation process that exhibits the set of expectation values η at some point in time must dissipate at least ∆H m (η) before relaxing to the final equilibrium.For this reason, ∆H m (η) is interpreted as the fundamental cost associated with the set of expectation values η. A. Tilted equilibrium Our main result is a method for calculating the minimum EP ∆H m (η) from tilted equilibrium measurements.The method uses a family of external fields parameterized by θ = (θ 1 , . . ., θ K ) T : The external field v(x; θ) is the superposition of the fields proportional to the observable B α with an intensity θ α .The external field v(x; θ) incurs the tilted equilibrium p te (θ) ≡ {p te (x; θ)} x∈X with Here, we use the convention for the sign of the external field such that it modifies the state energy from ϵ(x) to ϵ(x) − v(x; θ).We propose the following procedure for collecting tilted equilibrium data to perform the thermodynamic inference systematically [Fig.1(b)]: 1. Realize the tilted equilibrium distributions for many sets of parameter values θ and measure the expectation values of the observables B, for each θ. Interpolate the measured expectation values to infer the functional dependence η(θ) over a range of θ. 3. Calculate the tilted equilibrium free energy ∆F (θ) from the data η(θ) as described below. The tilted equilibrium free energy is defined as and ∆F (θ) F (θ) − F (0).The difference ∆F (θ) is computed from the data η(θ) as (see Sec. VI A for derivation) where the line integral is over any curve connecting θ ′ = 0 and θ ′ = θ, and the resulting value does not depend on the intermediate path. As we show in Sec.VI A, the correspondence from θ to η, denoted by η(θ) in Eq. ( 6), is invertible.We write the inverse function as θ(η), which solves for each η.In other words, θ(η) is the unique set of parameter values that incurs the set of expectation values η. Using the interpolated data of η(θ), one can find θ(η) for a given η if the data range covers the given η.For simplicity, we assume that the data range is large enough to cover any η that appears below.In summary, we have the data of η(θ) and ∆F (θ) from the tilted equilibrium measurements, and when a set of expectation values η is given, we can find θ(η) from the tilted equilibrium data. B. Inference of minimum EP In terms of these tilted equilibrium data, the minimum EP in Eq. ( 3) is expressed as which is the central relation of our inference method.We can calculate the right-hand side of Eq. 
( 10) for a given η from the tilted equilibrium data.To do so, we first find θ(η), i.e., the set of parameter values corresponding to the given η, from the interpolated data η(θ).Then we look up the value of the tilted equilibrium free energy ∆F (θ(η)).Equation ( 10) is derived in Sec.VI A and Appendix B. From Eq. ( 1) and the definition of ∆H m (η) in Eq. ( 3), the true EP of the process for each t is lower bounded as Equation (11b) can be calculated by combining the nonstationary data and the tilted equilibrium data.To do so, we first use the tilted equilibrium data to find θ( η(t)), i.e., the set of parameter values that incurs the same set of expectation values as the nonstationary data η(t).Then we find the value of the tilted equilibrium free energy ∆F (θ( η(t))). We emphasize that the calculated value of ∆H m ( η(t)) is not merely a lower bound on the true EP but a meaningful thermodynamic cost.Indeed, it is the fundamental cost for producing the observed set of expectation values η(t) in relaxation processes, as discussed below Eq. (3). C. Equality condition and tightness If the inequality in Eq. (11a) holds with equality, we can use Eq.(11b) to calculate the exact value of the true EP from the nonstationary and tilted equilibrium data.As shown in Sec.VI B, this happens if and only if namely, p(t) is exactly realizable as a tilted equilibrium.We call the condition in Eq. ( 12) the realizability condition.We discuss when the realizability condition holds in Sec.V A. Apart from the equality condition, we make two remarks about the tightness of the inequality in Eq. (11a).First, the inequality becomes tighter as we increase K by adding more observables to B. This is because increasing K has the effect of narrowing the domain of minimization in Eq. ( 3) and raising the minimum EP ∆H m (η). Second, we can sometimes get a tighter bound by considering the whole process at once, assuming that the time evolution is Markovian: The first equality is the second law of thermodynamics for the true EP, −d t ∆H[ p(t)] ≥ 0, where d t ≡ d/dt is the time derivative, which holds under the Markovian assumption [12,50].The next inequality follows from the inequality in Eq. (11a).The last side of Eq. ( 13) can be computed from the considered data using Eq.(11b), and it gives a tighter bound than Eq.(11a) if ∆H m ( η(t)) is not monotonically decreasing in time. A. Model and required measurements We demonstrate our results with a one-dimensional Brownian particle in a harmonic trapping potential plus a ragged potential, where a > 0 determines the width of the harmonic potential, and ϵ r (x) is an arbitrary ragged potential.It is a model of an optically trapped particle linked to a biomolecule [12,51], for which ϵ r (x) denotes the energy of the biomolecule pulled to a displacement x.The time evolution is governed by the Fokker-Planck equation with a uniform mobility µ, where ∂ t ≡ ∂/∂t and ∂ x ≡ ∂/∂x denote the time and spatial derivatives. 
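Before specializing to this Brownian-particle model, the following sketch illustrates the generic workflow of Sec. III on a small discrete toy system: building tilted equilibria, obtaining ∆F(θ) from the line integral of −η, inverting η → θ, and evaluating the minimum EP through the Legendre relation. All state energies, observable values and the nonstationary distribution in the sketch are assumptions made for illustration; they are not taken from the paper's numerics.

```python
# Minimal sketch (assumptions: a toy 5-state system with made-up energies and
# observables, not the paper's Brownian example). It walks through Sec. III:
# build tilted equilibria, get DeltaF(theta) from the line integral of -eta,
# invert eta -> theta, and evaluate the minimum EP via the Legendre relation.
import numpy as np
from scipy.optimize import fsolve

kT = 1.0
eps = np.array([0.0, 0.4, 1.1, 1.7, 2.5])          # state energies (assumed)
B = np.array([[0.0, 0.0], [1.0, 0.2], [1.5, 1.0],  # two observables B1, B2
              [0.5, 2.0], [2.0, 1.5]])             # (assumed values)

def p_te(theta):
    """Tilted equilibrium, p_te(x;theta) ~ exp{-[eps(x) - theta.B(x)]/kT}."""
    w = np.exp(-(eps - B @ theta) / kT)
    return w / w.sum()

def eta(theta):
    return B.T @ p_te(theta)                        # <B>_{p_te(theta)}

def delta_F(theta, n=4000):
    """DeltaF(theta) = -integral_0^theta eta(theta').dtheta' (straight path)."""
    s = (np.arange(n) + 0.5) / n                    # midpoint rule on [0, 1]
    return -sum(eta(si * theta) @ theta for si in s) / n

def theta_of_eta(eta_target):
    """Solve <B>_{p_te(theta)} = eta_target for theta."""
    return fsolve(lambda th: eta(th) - eta_target, np.zeros(2))

# A nonstationary distribution q and its observed expectation values
q = np.array([0.4, 0.3, 0.1, 0.1, 0.1])
eta_obs = B.T @ q
th = theta_of_eta(eta_obs)

# Minimum EP from the central relation, DeltaH_m = DeltaF(theta) + theta.eta
dH_min = delta_F(th) + th @ eta_obs
# Sanity check: it equals kT * D_KL[p_te(theta(eta)) || p_eq]
p_eq = p_te(np.zeros(2))
dH_check = kT * np.sum(p_te(th) * np.log(p_te(th) / p_eq))
# True EP of relaxing q -> p_eq is kT * D_KL[q || p_eq]; it is >= dH_min
dH_true = kT * np.sum(q * np.log(q / p_eq))
print(dH_min, dH_check, dH_true)
```

In the sketch, the two computed values of the minimum EP agree up to quadrature error and lower-bound the true EP of the assumed nonstationary distribution, as required by the inequality discussed above.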
As an example, we consider the case where we can keep track of the mean and variance of the position x of the particle over relaxation processes, but we cannot reliably get the higher moments due to the limited number of relaxation trajectories obtained from experiments. This assumption corresponds to the choice of two observables, B1(x) = x and B2(x) = x^2, in our framework. To perform the tilted equilibrium measurements, we need to modify the energy to ϵ(x) − θ1 x − θ2 x^2, which is still a harmonic potential plus the ragged potential. Therefore, we can realize the tilted equilibrium by modulating the center and the width of the harmonic trapping potential. We need to measure the mean and variance of the position x at many tilted equilibrium distributions, interpolate the data to find η(θ), and compute ∆F(θ) as described in Sec. III A. If ϵr(x) = 0 and the initial distribution is Gaussian, the realizability condition [Eq. (12)] holds. To see this, we first note that p(t) is Gaussian for all t ≥ 0 for a Gaussian initial distribution [52]. Since we can make any harmonic potential by changing θ1 and θ2 in Eq. (15) for ϵr(x) = 0, we can realize any Gaussian distribution as a tilted equilibrium. Therefore, the realizability condition is satisfied, and the equality holds in Eq. (11a), allowing us to extract the exact value of ∆H[p(t)] from the tilted equilibrium data. In the case of a nonzero ragged potential, the realizability condition no longer holds. Nevertheless, if ϵr(x) is small enough, we expect that p(t) is approximately realizable as a tilted equilibrium distribution, and therefore Eq. (11b) gives a fairly accurate estimate of ∆H[p(t)].

B. Numerical results

We numerically demonstrate our inference scheme for ϵr(x) = b cos cx. In all calculations, we scale the energy by kbT, the length by l0 ≡ √(kbT/a), and the time by t0 ≡ 1/(aµ). We set b = 0 (harmonic; the upper panels of Fig. 2) or b = 1 kbT (ragged; the lower panels of Fig. 2) and c = 6 l0^-1. The initial distribution is a Gaussian with mean −3 l0 and variance 0.15 l0^2 for all calculations. No other parameters are free to choose after scaling the quantities. See also Appendix E for more details on the numerics. We plot the potential ϵ(x) and the time evolution p(t), both assumed to be unmeasurable, in Fig. 2(a). The ragged system has potential barriers that separate the state space into metastable wells. Figure 2(b) shows the tilted equilibrium data η(θ). In the ragged system, η(θ) has a steplike feature, reflecting the multiwell structure of the potential. We sampled η(θ) from sufficiently dense data points over the θ space, leaving the consideration of sparse data points to future work. Figure 2(c) shows the minimum EP ∆Hm(η(t)) compatible with the observed mean and variance, which is calculated using the tilted equilibrium data via Eq. (11b). The true EP ∆H[p(t)] is also plotted, which is not measurable. The figure shows that the calculated value ∆Hm(η(t)) is equal to or less than the true EP ∆H[p(t)], thus confirming the inequality in Eq. (11a). Moreover, these two quantities agree exactly for b = 0, and they are in fairly good agreement for b = 1 kbT [Fig. 2(d)]. The former is expected from the realizability condition, as discussed in Sec. IV A.
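For this example, the tilted equilibrium "measurement" step of Fig. 1(b) can be mimicked by numerical quadrature, as in the following sketch. It uses the scaled units of Sec. IV B and treats the parameter values as placeholders; with b = 0 the moments reduce to the Gaussian expressions of Appendix E.

```python
# Minimal sketch (parameter values assumed, in the paper's scaled units
# k_B*T = a = mu = 1): the tilted equilibrium of the harmonic-plus-ragged
# potential for B1 = x, B2 = x^2, i.e. energy (a - theta2)*x^2 - theta1*x + b*cos(c*x),
# and its first two moments, computed by numerical quadrature as a stand-in
# for the "measurement" step of Fig. 1(b).
import numpy as np

a, b, c, kT = 1.0, 1.0, 6.0, 1.0          # assumed; b = 0 gives the harmonic case
x = np.linspace(-8, 8, 4001)

def tilted_moments(theta1, theta2):
    energy = (a - theta2) * x**2 - theta1 * x + b * np.cos(c * x)
    w = np.exp(-energy / kT)
    p = w / np.trapz(w, x)                # normalized tilted equilibrium density
    eta1 = np.trapz(x * p, x)             # <x>
    eta2 = np.trapz(x**2 * p, x)          # <x^2>
    return eta1, eta2

for theta in [(0.0, 0.0), (1.0, 0.0), (1.0, 0.5), (-2.0, -1.0)]:
    print(theta, tilted_moments(*theta))
```

Scanning a dense grid of (θ1, θ2) values in this way produces the interpolated data η(θ) described in Sec. III A; only θ2 < a keeps the tilted distribution normalizable.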
The latter, in contrast, is rather surprising because the height of the ragged potential is 2b = 2k b T , which is not very small compared with k b T .This example shows that, even if the realizability condition is violated, the minimum EP can be a good approximation of the true EP.We also present an extensive result over a wide range of b. V. Realizability condition and additional inference methods As discussed in Sec.III C, the realizability condition [Eq.(12)] ensures that the inequality in Eq. (11a) holds with equality, allowing the inference of the exact value of EP.We discuss three situations where the realizability condition is satisfied in Sec.V A. Moreover, assuming the realizability condition, we can extract additional information about the relaxation process from the tilted equilibrium measurements, including the EP rate, the thermodynamic force, and a constraint on relaxation paths.We present these additional inference methods in Secs.V B-V D. A. Sufficient conditions for the realizability condition There are several physically natural situations in which we can ensure the realizability condition (see Appendix A for details).The first situation is the existence of a timescale separation: We can ensure the realizability condition except for the initial fast relaxation if the states are lumped into groups, the Markovian transition rates within a group are sufficiently larger than the rates between differ-ent groups, and we can track the total probability of each group as the observables.The second situation is when the system admits a symmetry: The realizability condition holds if the system energy and the time evolution law are symmetric under some permutations of the discrete states, the initial condition has the same symmetry, and we can track all observables obeying the symmetry.Another situation is when the process is Gaussian: We can ensure realizability condition if the state energy is a (possibly multivariate) harmonic potential, the time evolution is governed by the Fokker-Planck equation with a uniform mobility, the initial distribution is Gaussian, and we can track the mean and covariance matrix as the observables.This third situation has been demonstrated in the example (Sec.IV).Note that each of these conditions jointly concerns the system energy, the time evolution law, the initial condition, and the choice of observables. If we have enough information about the system to ensure that the system satisfies one of these sufficient conditions for the realizability condition, we can obtain the exact value of the true EP from Eq. (11b), and we can obtain additional information by the methods described below in Secs.V B-V D. Alternatively, if we can expect the system to approximately satisfy one of these sufficient conditions, we expect p(t) to be approximately realized as a tilted equilibrium distribution.In this case, ∆H m ( η(t)) will be a good approximation of ∆H[ p(t)], and the additional inference methods below will provide reasonable estimates of the additional quantities. B. EP rate The first quantity obtained under the realizability condition is the EP rate, σ(t) Under the realizability condition, we have the equality ∆H[ p(t)] = ∆H m ( η(t)), and thus, we can obtain the EP rate by simply differentiating ∆H m ( η(t)), which is calculated from Eq. (11b).Alternatively, we have the following equality (see Sec. 
VI A for derivation): The right-hand side of this equation is calculated by differentiating the nonstationary data η(t) and finding θ( η(t)) from the tilted equilibrium data η(θ).Equation ( 16) also provides a decomposition of the EP rate into the dissipation due to the change in the expectation value of each observable.From this equation, we can regard −T −1 θ α ( η(t)) as the thermodynamic force conjugate to the probability flux incurring the change in the expectation values d t ηα (t). In Fig. 4(a), we demonstrate the inference of the EP rate from Eq. (16) for the same example systems as in Sec.IV.Without a ragged potential (the upper panel), the realizability condition is exactly satisfied, and therefore the right-hand side of Eq. ( 16) gives the exact value of σ(t).With a nonzero ragged potential (the lower panel), the realizability condition is approximately satisfied, and indeed the right-hand side of Eq. ( 16) gives a good estimate of the EP rate. C. Thermodynamic force The second quantity obtained under the realizability condition is the thermodynamic force (affinity) over the state space.For discrete-state systems, the thermodynamic force from state y to x at time t is given by [49] The thermodynamic force is related to the EP rate by σ(t) = 1 2 x,y ȷ(x, y; t) f (x, y; t), where ȷ(x, y; t) is the net probability flux from state y to x at time t [49].Thus, the thermodynamic force quantifies the EP rate due to the probability flux from y to x.Under the realizability condition, we can calculate the thermodynamic force f (x, y; t) from the considered data as (see Appendix B 4 for derivation) The right-hand side can be calculated from the nonstationary data η(t) and the tilted equilibrium data η(θ), assuming that we know the values of the observables B(x) in each state. For continuous-state systems, the thermodynamic force at x ∈ R d is defined as a continuous version of Eq. ( 17) [53], f (x; t) −T −1 ∇ϵ(x) − k b ∇ ln p(x; t), which is a d-dimensional vector.It is related to the EP rate by σ(t) = X dx ȷ(x; t) • f (x; t), where ȷ(x; t) is the d-dimensional probability current at time t.We can similarly calculate the thermodynamic force from the data as f (x; t) = −T −1 K α=1 θ α ( η(t))∇B α (x) under the realizability condition. D. Constraint on the time evolution The final piece of information obtained from tilted equilibrium measurements is a constraint on the time evolution.Let us introduce a function of the external field parameters θ: Then the time evolution of the observables η(t) must satisfy assuming a Markovian time evolution in addition to the realizability condition (see Appendix B 5 for derivation).This restriction on the possible time trajectories is different from and complementary to the second law of thermodynamics, −d t ∆H[ p(t)] ≥ 0. Note that ⟨B α ⟩ p eq is independent of θ, and the second term of L(θ) is linear in θ.If we shift the definition of each observable by a constant so that ⟨B α ⟩ p eq = 0, L(θ) coincides with ∆F (θ). In Fig. 4(b), we demonstrate the monotonicity in Eq. (20) with the same example system as above.For the harmonic system (the upper panel), the realizability condition is satisfied, and the function L(θ) is indeed monotonically increasing.For the ragged system (the lower panel), the realizability condition holds only approximately, but L(θ) is still monotonically increasing. A. Derivation of the central relation [Eq. (10)] The derivation of Eq. ( 10) involves two nontrivial tasks: expressing the minimum EP in Eq. 
(3) in a closed form and relating the minimum EP to the tilted equilibrium quantities.We sketch how these two tasks are accomplished, leaving the detailed calculation to Appendix B. To express the minimum EP in a closed form, we first find the following relation for any set of expectation values η and any distribution q that satisfies ⟨B⟩ q = η: (21) where D kl [p 1 ∥p 2 ] = X dx p 1 (x) ln[p 1 (x)/p 2 (x)] is the Kullback-Leibler (KL) divergence [54].Taking the minimum of both sides of Eq. ( 21) with respect to q that satisfies ⟨B⟩ q = η for a fixed η, the left-hand side reduces to the definition of ∆H m (η) in Eq. ( 3).The minimum of the right-hand side is achieved at q = p te (θ(η)) since the KL divergence D[p 1 ∥p 2 ] is nonnegative, and it is zero if and only if p 1 = p 2 .Thus we have This successfully expresses the minimum EP in a concrete form. The other element of the proof is a Legendre duality over probability distributions.Using the expression of ∆H m (η) in Eq. ( 22), we can prove that the two functions −∆F (θ) and ∆H m (η) are strictly convex and connected by a Legendre transform: where the correspondences θ(η) and η(θ) are the same as those already defined in Sec.III A. Equation (23b) is identical to our central relation in Eq. (10).Equation (23a) ensures that the correspondence between θ and η is oneto-one, and thus the solution to Eq. ( 9) is unique.Equation (23a) also proves the expression of ∆F (θ) as a line integral in Eq. ( 8), as well as the expression of the EP rate in Eq. ( 16). B. Equality condition and scaling of the error We prove that the realizability condition [Eq.(12)] is the equality condition of the inequality in Eq. (11a).Inserting q = p(t) and η = η(t) into Eqs.( 21) and (22), we find that the difference between the two sides of the inequality in Eq. (11a) is given by Since the KL divergence is zero if and only if the two arguments are equal, this difference vanishes if and only if p(t) = p te (θ( η(t))).If this happens, then obviously the realizability condition is satisfied.Conversely, if the realizability condition holds, then the set of parameter values θ in Eq. ( 12) must satisfy ⟨B⟩ p te (θ) = ⟨B⟩ p(t) = η(t), and therefore, it is given by θ = θ( η(t)).Thus, we have p(t) = p te (θ( η(t))), and Eq. ( 24) vanishes. We can also show that the difference in Eq. ( 24) scales quadratically with the magnitude of the violation of the realizability condition, which explains the observed behavior in Fig. 3(b).Consider a reference system that satisfies the realizability condition and another perturbed system with a slightly different energy, time evolution equation, initial condition, or set of observables.We use λ for the magnitude of any of these perturbations.In Appendix C, we show that the perturbed system generically obeys the scaling p(t) − p te (θ( η(t))) = O(λ), assuming a Markovian time evolution and some mild conditions.Combined with the expansion of the KL divergence between two close distributions, D kl [p 1 ∥p 2 ] ≃ 1 2 X dx [p 1 (x) − p 2 (x)] 2 /p 1 (x) in the leading order of p 1 − p 2 [48], we can see that the error term scales as D kl [ p(t)∥p te (θ( η(t)))] = O(λ 2 ).Combined with Eq. ( 24), we conclude Therefore, our method gives the true EP up to the first order in the magnitude of the violation of the realizability condition. VII. 
Discussion In this paper, we have developed a method of thermodynamic inference that uses tilted equilibrium measurements.The method enables us to obtain the exact value of the minimum EP ∆H m (η) compatible with the observed set of expectation values η.This method applies to any classical stochastic system that relaxes to equilibrium with any choice of observables.Furthermore, if we have enough information about the system to ensure that the realizability condition holds, or at least that the realizability condition is approximately satisfied, we can extract the true EP, the EP rate with its decomposition, the thermodynamic force, and a constraint on relaxation paths. Compared with existing methods of thermodynamic inference for nonstationary processes [24,[39][40][41], our approach significantly reduces the demand for nonstationary measurements.Our method requires only the expectation values of a few arbitrary observables, which is insufficient for any of these existing methods.This reduced demand is achieved at the expense of tilted equilibrium measurements.Therefore, our method will be useful when one cannot practically collect sufficiently many trajectories of a relaxation process to infer EP only from nonstationary data using previously proposed methods, but one can freely apply a few types of external fields to the system.This would include both experiments and numerical simulations. Our method generally provides only the lower bound of EP, but it gives the optimal lower bound in the sense that there exists a distribution p(t) that saturates the inequality in Eq. (11a) for any η(t) since ∆H m (η) is defined by a minimization in Eq. (3).In other words, given only η(t) for each t separately as nonstationary data, our lower bound is the best possible.Moreover, our lower bound ∆H m (η) is a meaningful thermodynamic cost since it is interpreted as the minimum cost required to realize the set of expectation values η, as discussed below Eq. ( 3).This is in contrast with other lower bounds of EP that involve only a few observable values, such as from thermodynamic uncertainty relations for nonstationary processes [55][56][57].These lower bounds are meaningful as statistical quantities, such as the precision of a current observable, but they do not admit interpretations as thermodynamic costs in general. Our results open an avenue of thermodynamic inference: inference for nonstationary processes based on static measurements.We leave several directions open for future work.First, our method assumes that the set of measurable observables and the set of available external fields are both given by B. 
However, these two sets are often different in natural situations.Considering this difference is important to make our method more useful.Second, our method is exact in the sense that it provides the exact value of the minimum EP ∆H m (η).Approximating the minimum EP with a smaller amount of tilted equilibrium data is an interesting direction.Finally, it is essential to extend the applicability of our approach.As discussed in Appendix D 1, our results can be easily extended to systems with internal entropy of states [21] and exchange of particles with a single particle reservoir.On the other hand, extending our results to systems with multiple baths is nontrivial.This extension is possible on a formal (mathematical) level by considering tilted nonequilibrium stationary distributions (see Appendix D 2), but making it practical is left to future work.Extending our approach to driven systems is also a nontrivial and important direction. From a mathematical point of view, the Legendre transform we have introduced in Eq. ( 23) is at the level of probability distributions, and it does not rely on asymptotics.Such a microscopic Legendre transform has previously appeared in other fields such as information geometry [47,48] and the foundations of statistical mechanics [58], and we have formulated the Legendre transform in Eq. ( 23) by borrowing ideas from information geometry and replacing information theoretic quantities with thermodynamic ones, such as the Shannon entropy with the thermodynamic EP.This formulation extends the existing connections between information geometry and thermodynamics [59][60][61][62].Detailed explorations of this geometric picture of stochastic thermodynamics is left for future work.JPMJPR18M2, JST ERATO Grant No. JPMJER2302, and UTEC-UTokyo FSI Research Grant Program. Appendix A: Sufficient conditions for the realizability condition We discuss three situations in which we can ensure the realizability condition in Eq. (12).A similar discussion based on information geometry that relates time evolutions of stochastic systems to a constrained set of probability distributions is found in Ref. [63]. Time-scale separation The first situation is the existence of a time-scale separation.Consider that the states are grouped (coarse-grained) into K +1 disjoint sets of states G 0 , G 1 , . . ., G K .We assume the time-scale separation, i.e., that the Markovian transitions between two states in one group are much faster than the transitions between two states in different groups [20][21][22].We further assume that we can keep track of the expectation values of the observables χ 0 , χ 1 , . . ., χ K , where χ α (x) is defined as χ α (x) = 1 for x ∈ G α and χ α (x) = 0 for x G α .The expectation value ⟨χ α ⟩ p(t) gives the total probability of the αth group at time t.The observables satisfy K α=0 χ α (x) = 1 for all x.To connect this setup to our formulation, we choose B α = χ α for α = 1, . . ., K as the observables.Here, we exclude α = 0 so that the set of observables B is linearly independent with the constant observable. 
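The following sketch illustrates numerically why the lumped distribution that arises under time-scale separation is a tilted equilibrium for the group-indicator observables: for a distribution of the form Q(α) p_eq(x)/Q_eq(α), the quantity kbT ln p(x) + ϵ(x) is constant within each group, so it can be written as a linear combination of the indicators. The six-state system, its energies and the coarse-grained probabilities are assumptions made for illustration.

```python
# Minimal sketch (toy 6-state system, assumed energies and grouping): under
# time-scale separation, the distribution p(x) = Q(alpha) p_eq(x) / Q_eq(alpha)
# is a tilted equilibrium for the group-indicator observables,
# i.e. kT*ln p(x) + eps(x) is constant within each group.
import numpy as np

kT = 1.0
eps = np.array([0.0, 0.3, 1.0, 1.2, 2.0, 2.1])   # assumed state energies
groups = np.array([0, 0, 1, 1, 2, 2])            # membership in G_0, G_1, G_2

p_eq = np.exp(-eps / kT); p_eq /= p_eq.sum()
Q_eq = np.array([p_eq[groups == g].sum() for g in range(3)])

Q = np.array([0.6, 0.3, 0.1])                    # assumed coarse-grained probabilities
p = Q[groups] * p_eq / Q_eq[groups]              # conditional equilibrium within groups

# kT ln p + eps depends on x only through its group label:
print(np.round(kT * np.log(p) + eps, 6))
# The group-wise offsets (relative to group 0) play the role of theta_alpha:
offsets = kT * np.log(Q / Q_eq)
print(offsets - offsets[0])
```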
Under these assumptions, we prove the realizability condition except for the initial fast relaxation.For this purpose, we introduce the coarse-grained probability Q(α; t) ⟨χ α ⟩ p(t) and its equilibrium value Q eq (α) ⟨χ α ⟩ p eq .The time-scale separation implies that the conditional probability within each group, p(x; t)/ Q(α; t) for x ∈ G α , rapidly relaxes to the conditional canonical distribution p eq (x)/Q eq (α) [20][21][22].Thus, the nonstationary probability distribution has the form of except for the initial fast relaxation.Using Eq. (A1) and p eq (x) = exp{−[ϵ(x) − F eq ]/k b T }, where F eq is the equilibrium free energy, we have . (A2) By inserting χ α (x) = B α (x) for α ≥ 1 and χ 0 (x) = 1 − K α=1 B α (x) into Eq.(A2), we can rearrange Eq. (A2) as with a set of real numbers (θ α ) K α=1 , where the constant term is independent of x.Rearranging gives Thus, p(x; t) is realized as a tilted equilibrium with an external field of the form of Eq. ( 4), which proves the realizability condition.Similarly, we can expect the realizability condition to hold approximately if the transitions within a group are faster than the transitions between different groups, but their time scales are not sufficiently separated. Symmetry Another situation is when the system and the initial distribution obey a symmetry [63].Focusing on a discrete space X = {1, . . ., N}, we consider a symmetry expressed by a permutation group G on X, whose element g ∈ G is a bijection from X to itself.We assume that the state energy is invariant under the permutations, ϵ(x) = ϵ(g(x)) for all g ∈ G and all x ∈ X.We also assume that the time evolution law obeys the symmetry.For example, if the time evolution is Markovian and given by ∂ t p(x; t) = y [W(x, y) p(y; t) − W(y, x) p(x; t)], where W(x, y) is the transition rate from y to x, then we assume W(x, y) = W(g(x), g(y)) for all g ∈ G and all x, y ∈ X.We also impose the symmetry on the initial distribution p(x; 0) = p(g(x); 0).We assume that we can keep track of the expectation values of all observables that are invariant under the permutations.In other words, the observables B 1 (x), . . ., B K (x) and the constant observable form a basis of the linear subspace {w ∈ R N | w(x) = w(g(x)), ∀g ∈ G, ∀x ∈ X} of R N . Under these conditions, we prove the realizability condition in Eq. ( 12) for all t ≥ 0. First, by symmetry considerations, the nonstationary distribution obeys the symmetry for all t > 0: p(x; t) = p(g(x); t). (A5) Therefore, the vector {k b T ln p(x; t) + ϵ(x)} x∈X has the same symmetry, and thus, it can be expanded in terms of B 1 , . . ., B K and the constant observable as with a set of real numbers (θ α ) K α=0 .Rearranging Eq. (A6) gives This is the form of the equilibrium distribution under the external field α θ α B α (x).Therefore, p(t) is realizable as a tilted equilibrium, and the realizability condition holds. Similarly, the realizability condition is expected to hold approximately if the symmetry does not hold perfectly but holds approximately. Harmonic potential The third situation is when the system has a (possibly multivariate) harmonic potential.More precisely, focusing on a continuous state space x ≡ (x 1 , . . 
., x d ) T ∈ X = R d , we assume that the potential is harmonic, ϵ(x) = x T ax + b T x, where a is a d × d positive-definite symmetric matrix, and b is a d-dimensional vector.We also assume that the time evolution is given by the Fokker-Planck equation, where µ is a d × d positive-definite symmetric mobility tensor, with a Gaussian initial distribution.As for the observables, we assume that we can keep track of the mean and the covariance matrix.This amounts to choosing x 1 , . . ., x d , (x 1 ) 2 , . . . Under this setup, we prove the realizability condition in Eq. ( 12) for all t ≥ 0. First, the solution of Eq. (A8) is Gaussian for all t ≥ 0 [Sec.VIII 6,52], and therefore, it can be written in a form of using a d × d symmetric matrix ã(t) and a d-dimensional vector b(t).We can rewrite Eq. (A9) as as a linear combination of the observables.This shows that p(x; t) is realized as a tilted equilibrium distribution with an external field of the form α θ α B α (x), and thus, the realizability condition is satisfied. Even if the initial distribution is not Gaussian, the realizability condition holds asymptotically after the initial fast transition.Restricting to a one-dimensional case for simplicity, in which a, b, and µ are scalar, we write the solution of the Fokker-Planck equation in terms of the changes in the cumulants.The deviation of the nth cumulant (n = 1, 2, . . . ) from its equilibrium value decays exponentially in time as exp(−2nµat) [52], and the equilibrium values are zero for n ≥ 3. Thus, only the first two cumulants remain significant after the initial fast transition, which means that the distribution is close to Gaussian, and the realizability condition holds asymptotically. We can expect that the realizability condition holds approximately when the potential is close to but not exactly Gaussian.We explore this situation in the example in Sec.IV. Appendix B: Detailed derivation of the results In Appendixes B 1-B 3, we follow the steps outlined in Sec.VI A to derive the inference of the minimum EP [Eq.(10)].In Appendixes B 4 and B 5, we derive the additional inference schemes described in Sec.V. KL divergence We start the derivation by relating the thermodynamic quantities to the KL divergence.For any probability distribution p, where F eq ≡ F (0) is the equilibrium free energy at p eq , and we have used p eq (x) = exp{−[ϵ(x) − F eq ]/k b T }.For p = p eq , Eq. (B1) gives H[p eq ] = F eq .Thus, we obtain which is a well-known expression of the EP [64][65][66]. Next, we rewrite the tilted equilibrium free energy in terms of the KL divergence.For two arbitrary sets of external field parameters, θ and θ ′ , we obtain Substitutions θ → 0 and θ ′ → θ give an expression of ∆F (θ), where we used p te (0) = p eq .Note that a similar expression has been found in Ref. [67]. Explicit expression of the minimum EP We prove Eq. ( 21), which is one of the two elements of the proof of Eq. 
(10).Let η be any set of expectation values and q be any distribution that satisfies ⟨B⟩ q = η.The given set of expectation values η determines the set of parameter values θ(η) and the tilted equilibrium p te (θ(η)), which we write as θ and p te for conciseness.Equation ( 9) reads ⟨B⟩ p te = η in this shorthand notation.Then the three distributions, q, p te , and p eq , satisfy the generalized Pythagorean theorem [48]: To prove this relation, we calculate the difference as follows: The third equality follows from X dx p te (x) = X dx q(x) = 1, and the last equality is due to the relation of expectation values ⟨B⟩ p te = η = ⟨B⟩ q .Using Eq. (B2), we can rewrite Eq. (B5) as Recalling the abbreviation p te ≡ p te (θ(η)), Eq. (B7) is identical to the desired relation in Eq. ( 21).We note that inserting Eqs. ( 22) and (23b) into Eq.(B7) gives an expression like the Donsker-Varadahn representation [68], but the use of coordinates θ and the restriction of the set of observables are unique to information geometry. Legendre duality We prove the Legendre transform between ∆H m (η) and ∆F (θ) in Eq. (23), which is the second element of the proof of Eq. (10). First, we prove the Legendre transform from ∆F (θ) to ∆H m (η).We fix an arbitrary set of parameters θ and use η ≡ η(θ) = ⟨B⟩ p te (θ) for the corresponding set of expectation values.The derivative relation ∂∆F /∂θ α = −η α is proved as This relation states that the derivative of the equilibrium free energy by an external field parameter gives the corresponding expectation value, which is a classical result in statistical mechanics (e.g., Ref. [69]).To prove the relation between the two functions, ∆H m (η) = ∆F (θ) + α θ α η α , we use the expression of ∆H m (η) in Eq. ( 22) and find where we have used Eqs.(B2) and (B3).Next, we prove the inverse transformation from ∆H m (η) to ∆F (θ).For this purpose, it suffices to show that ∆F (θ) is a strictly concave function.Then the general theory of Legendre duality for strictly convex or strictly concave functions (e.g., Ref. [70]) leads to the inverse transform ∂∆H m /∂η α = θ α and ∆F (θ) = ∆H m (η) − α θ α η α .The general theory also ensures that ∆H m is strictly convex. To prove the strict concavity of ∆F , we consider two sets of parameter values θ, θ ′ .We consider the graph of the function ∆F ( • ) and its tangent plane at θ.The tangent plane, which we write as T ( • ; θ), is given by This tangent plane is always above the graph of F ( • ): where we have used the derivative relation in Eq. (B8) in the first equality, and the second equality follows from Eq. (B3).The last inequality holds with equality if and only if p te (θ ′ ) = p te (θ).This happens if and only if θ ′ = θ since we have assumed that the observables B and the constant observable are linearly independent.Thus, the function ∆F is strictly concave. Thermodynamic force We derive the relation between the thermodynamic force and the external fields in Eq. ( 18) under the realizability condition.Since the realizability condition implies p(t) = p te (θ( η(t))) (Sec.VI B), we have ln p(x; t) p(y; t) = ln p te (x; θ( η(t))) p te (y; θ( η(t))) Comparing this relation with the definition of the thermodynamic force in Eq. ( 17), it is easy to prove Eq. ( 18).The continuous-state version is proved similarly by replacing the differences in quantities between two states by their spatial derivatives, e.g., ϵ(x) − ϵ(y) by ∇ϵ(x). Monotonicity We prove the monotonicity of the function L(θ) in Eq. 
( 20), assuming the realizability condition and a Markovian time evolution.Comparing Eqs.(B4) and (19), we obtain L(θ) = −k b T D kl [p eq ∥p te (θ)] for any θ.Combined with the realizability condition, we get (B13) For Markovian time evolutions, the KL divergence has the contraction property [52], for two arbitrary trajectories, p(t) and q(t), obeying the same Markovian time evolution.By substituting p eq for q(t) and noting that p eq is the fixed point of the time evolution, we obtain d t D kl [p eq ∥ p(t)] ≤ 0. Combining this equation with Eq. (B13) proves the desired monotonicity in Eq. (20).Note that substituting p eq for p(t) in Eq. (B14) and using the expression in Eq. (B2) gives the monotonicity d t ∆H[ p(t)] ≤ 0, which is the second law of thermodynamics [49].Therefore, the monotonicities of L and ∆H have similar mathematical origins. Appendix C: Perturbative calculation of the error We compare a reference system that satisfies the realizability condition with a slightly different (perturbed) system that does not fulfill the realizability condition.Our goal is to show that the discrepancy between p(t) and p te (θ( η(t))) scales linearly with the magnitude of the perturbation. We specify the reference system by the Markovian time evolution generator L , the state energy ϵ ≡ {ϵ(x)} x∈X , the initial distribution p(0) ≡ { p(x; 0)} x∈X , and the set of observables B ≡ {B(x)} x∈X .The Markovian time evolution generator, used only in this appendix, generates the time evolution as d t p(t) = L p(t).We do not assume any particular form of it, only imposing the relaxation to the equilibrium p eq (x) ∝ e −ϵ(x) .Here and until the end of this appendix, we set k b T = 1 for simplicity. At a fixed time t, the reference system has the distribution p(t), the set of expectation values η(t) = ⟨B⟩ p(t) , the corresponding set of external field parameters θ( η(t)), and the tilted equilibrium p te (θ( η(t))).For conciseness, we use the symbols p, η, θ, and p te to denote these quantities for the reference system at the fixed time t.We assume that the reference system satisfies the realizability condition.As discussed in Sec.VI B, the realizability condition is equivalent to p(t) = p te (θ( η(t))), which reads p = p te in the shorthand notation adopted here. First, the perturbed probability distribution at time t is where ≃ denotes equality to the leading order in λ.Since p = e L t p(0) for the reference system, the perturbation term has the exponent β = 1.Note that we cannot exclude the possibility that the O(λ) term in Eq. (C1) vanishes identically, but in that case, we can still set β = 1 and write Eq. (C1) as p + λ p′ + o(λ) with p′ = 0.This caveat also applies to λ γ η′ , λ µ θ ′ , and λ ν p ′ te below.The perturbed expectation values of the observables are The reference system has the set of expectation values η = X dx p(x)B(x), and thus, the exponent of the perturbation term is γ = 1. Next, we evaluate the perturbations to the set of parameter values θ.It is determined by solving with respect to θ + λ µ θ ′ , where θ + λ µ θ ′ is related to p te + λ ν p ′ te as where we have defined the covariance ⟨⟨X, Y⟩⟩ ⟨XY⟩ − ⟨X⟩⟨Y⟩.Inserting this into Eq.(C3), the resulting zerothorder equation, ⟨B⟩ = η, is the equation for determining θ of the reference system, and therefore, it is already satisfied.The remaining terms give the equation to determine λ µ θ ′ : (C7) This gives the exponent µ = 1. 
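A quick numerical check of the ingredient used in this error estimate, namely that the KL divergence between a distribution and a small perturbation of it scales quadratically with the perturbation strength, is sketched below with made-up numbers.

```python
# Minimal sketch (toy distribution, assumed): the quadratic scaling of the
# KL divergence used in the error estimate, D_KL[p || p + lam*d] = O(lam^2),
# with leading coefficient (1/2) * sum(d^2 / p).
import numpy as np

p = np.array([0.4, 0.3, 0.2, 0.1])
d = np.array([0.05, -0.02, -0.04, 0.01])   # a zero-sum perturbation direction

def kl(p1, p2):
    return np.sum(p1 * np.log(p1 / p2))

for lam in [0.5, 0.25, 0.125, 0.0625]:
    print(lam, kl(p, p + lam * d), 0.5 * lam**2 * np.sum(d**2 / p))
```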
Finally, we calculate the perturbation to p_te. Substituting µ = 1 back into Eq. (C4), we find that the exponent of the perturbation to p_te is ν = 1. Combining the above results, we can evaluate the discrepancy between the nonstationary distribution and the tilted equilibrium distribution for the perturbed system, where we have used the realizability condition for the reference system. From Eq. (C8), it is fair to say that the discrepancy generically scales linearly in λ, which is the fact we used in Sec. VI B to evaluate the scaling of the error in the inference of the true EP. The exception is when p′ − p′_te happens to vanish, in which case the scaling of the discrepancy is of higher order, and the error is even smaller.

respectively, to eliminate T from the theory because the temperature T is not defined when the system is in contact with multiple heat baths. With these replacements, we can formally redefine the EP and the tilted equilibrium free energy. The difference ∆Ĥ[p] = Ĥ[p] − Ĥ[p_st] is known as the Hatano-Sasa excess EP [71] and the nonadiabatic EP [72] over the process from p to p_st. We can formally reproduce all our results with these replacements. If we can physically implement an external field that modifies the stationary distribution p_st to a tilted stationary distribution proportional to exp{−[φ(x) − v(x; θ)]}, we can use the procedure in the main text to obtain the minimum of ∆Ĥ[p] compatible with the observed expectation values, and we can also calculate other thermodynamic quantities under the realizability condition. However, this generalization remains formal because we cannot physically realize the tilted stationary distribution generically.

Appendix E: Supplemental information on the example

1. Details of the numerical calculations

In the numerical calculations (Figs. 2-4), we numerically solved the Fokker-Planck equation by discretizing the space and solving the resulting discrete-space continuous-time Markov jump system. We calculated the expectation values of the observables, the EP, and the tilted equilibrium free energy by replacing the integrals with sums over the discretized space. We confirmed that the results do not depend on the spatial mesh size. We also checked that the results are consistent with the analytical calculation when ϵr(x) = 0, which is detailed in the next section.

Analytical calculation for the harmonic system

We analytically calculate the relevant quantities for the example system in the main text (Sec. IV) with ϵ(x) = ax^2, i.e., in the absence of the ragged potential. Assuming a Gaussian initial distribution, the system satisfies the realizability condition, as discussed in the main text. In the tilted equilibrium measurements, we apply the external fields in Eq. (4) and measure the expectation values of the observables B1 and B2. The modified state energy due to the external fields is ϵ(x) − θ1 x − θ2 x^2 = (a − θ2) x^2 − θ1 x. The modified state energy is harmonic, and therefore the tilted equilibrium is a Gaussian distribution with mean θ1/[2(a − θ2)] and variance kbT/[2(a − θ2)]. The set of expectation values at the tilted equilibrium is therefore η1(θ) = θ1/[2(a − θ2)] and η2(θ) = kbT/[2(a − θ2)] + {θ1/[2(a − θ2)]}^2. The tilted equilibrium free energy ∆F(θ) follows from the corresponding Gaussian partition function, and the inverse correspondence from η to θ [Eq. (E5)] is obtained by solving these relations, in which the term η2 − (η1)^2 is the variance of the tilted equilibrium distribution.
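The harmonic-case expressions above can be turned into a fully analytical inference pipeline, sketched below in the scaled units kbT = a = µ = 1. The closed forms for ∆F(θ) and θ(η) used in the sketch are derived here from the Gaussian tilted equilibrium stated in the text; they are meant as a consistency illustration rather than a transcription of Eqs. (E2)-(E5).

```python
# Minimal sketch (harmonic case only, scaled units k_B*T = a = mu = 1; the
# closed-form DeltaF and theta(eta) below are derived from the Gaussian tilted
# equilibrium stated in the text, not quoted from the paper's equations):
# analytical inference of the minimum EP and the EP rate along an
# Ornstein-Uhlenbeck relaxation.
import numpy as np

v_eq = 0.5                                   # equilibrium variance kT/(2a)

def eta_of_t(t, m0=-3.0, v0=0.15):
    """Mean and <x^2> during relaxation: the n-th cumulant decays as exp(-2n t)."""
    m = m0 * np.exp(-2.0 * t)
    v = v_eq + (v0 - v_eq) * np.exp(-4.0 * t)
    return np.array([m, v + m**2])

def theta_of_eta(eta):
    """Invert eta -> theta for the harmonic tilted equilibrium."""
    m, v = eta[0], eta[1] - eta[0]**2
    return np.array([m / v, 1.0 - 1.0 / (2.0 * v)])

def delta_F(theta):
    """Tilted free energy of the harmonic system (Gaussian integral)."""
    return 0.5 * np.log(1.0 - theta[1]) - theta[0]**2 / (4.0 * (1.0 - theta[1]))

def dH_min(t):
    eta = eta_of_t(t); th = theta_of_eta(eta)
    return delta_F(th) + th @ eta            # Legendre relation for the minimum EP

def dH_true(t):
    """True EP: KL divergence between two Gaussians (realizability holds here)."""
    eta = eta_of_t(t); m, v = eta[0], eta[1] - eta[0]**2
    return v + m**2 - 0.5 - 0.5 * np.log(2.0 * v)

# EP rate estimate: T*sigma(t) = -theta(eta(t)) . d_t eta(t)
t, dt = 0.3, 1e-5
deta = (eta_of_t(t + dt) - eta_of_t(t - dt)) / (2 * dt)
print(dH_min(t), dH_true(t))                               # should agree
print(-theta_of_eta(eta_of_t(t)) @ deta,
      -(dH_min(t + dt) - dH_min(t - dt)) / (2 * dt))       # should agree
```

Under the realizability condition the two printed pairs agree: the Legendre value of the minimum EP matches the Gaussian KL divergence, and the EP-rate estimate built from θ(η(t)) and the time derivative of η(t) matches the numerical derivative of the minimum EP.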
FIG. 2. Demonstration of our inference method with a one-dimensional Brownian particle with ϵ(x) = ax^2 + b cos cx. The upper panels are with b = 0 (harmonic), and the lower panels are with b = kbT (ragged). We scale the energy by kbT, the length by l0 ≡ √(kbT/a), and the time by t0 ≡ 1/(aµ), where µ is the mobility. See the main text for parameter values. (a) The potential energy ϵ(x) (gray) and the probability distributions over a relaxation process (blue; vertically shifted). These quantities are assumed to be unmeasurable. (b) The tilted equilibrium measurement gives the expectation values of the observables (η1(θ), η2(θ)) while varying the external field parameters (θ1, θ2). (c) Inference of the minimum entropy production (EP). The minimum EP ∆Hm(η(t)) is obtained from the measurements (red), while the true EP ∆H[p(t)] is not measurable (blue). (d) The difference between the true EP and the obtained minimum EP, ∆H[p(t)] − ∆Hm(η(t)).

FIG. 3. Relative error {∆H[p(t)] − ∆Hm(η(t))}/∆H[p(t)] between the true entropy production (EP) and the calculated minimum EP. (a) Relative error over time, plotted for several values of the height b of the ragged potential. (b) Relative error over b, plotted for four time points. Each symbol represents a time point indicated in (a). The relative error grows quadratically in b (dashed line).

Figure 3(a) shows the relative error, {∆H[p(t)] − ∆Hm(η(t))}/∆H[p(t)], for various values of b. The relative error is zero for b = 0 due to the realizability condition, and it increases with b. It does not significantly depend on t. As shown in Fig. 3(b), the relative error for a fixed t scales quadratically in b. Therefore, the calculated value ∆Hm(η(t)) equals the true EP ∆H[p(t)] up to the first order in b. In fact, this quadratic dependence is a generic property, as discussed in Sec. VI B.

FIG. 4. Additional inference under the realizability condition, demonstrated with the same systems as in Fig. 2. The harmonic system (upper panels) satisfies the realizability condition, while the ragged system (lower panels) satisfies it only approximately. (a) Inference of the entropy production (EP) rate. The true EP rate T σ(t) = −d_t ∆H[p(t)] (blue) and the obtained estimate −θ(η(t)) · d_t η(t) (red) are in exact agreement under the realizability condition. We also plot the decomposed values −θα(η(t)) d_t ηα(t) (green). (b) Constraint on Markovian time evolution. The function L(θ(η(t))) increases monotonically with time under the realizability condition.
12,527.2
2021-12-21T00:00:00.000
[ "Physics" ]
An Insight into the Cellulolytic Potential of Three Strains of Bacillus Spp. Isolated from Benthic Soil of Aquaculture Farms in East Kolkata Wetlands, India

1 Department of Zoology, Vidyasagar College, Block CL, Plot 3-8 & 44-50, Sector II, Salt Lake, Kolkata, West Bengal 700 091, India. 2 Department of Life Sciences, Presidency University, 86/1, College Street, Kolkata, West Bengal 700073, India. 3 UGC - HRDC, Jadavpur University, Block LB, Plot No 8, Sector III, Salt Lake City, Kolkata, West Bengal 700098, India. 4 University of Burdwan, Rajbati, Burdwan, West Bengal 713104, India.

The process of photosynthesis converts light energy into chemical energy and thus results in the production of plant biomass, with cellulose as a major component 1. As one of the most abundant materials on Earth, cellulose plays a key role in the material flow of the planet. Cellulose is one of the major renewable resources and is abundantly available in agricultural and municipal wastes; however, the most widely practiced disposal method for this waste is burning, which not only eliminates the possibility of recycling but also pollutes the environment 1,2. Hence, research has been directed towards the study of the natural process of cellulose decomposition. The biogeochemical cycle of carbon is completed by the degradation of cellulose by microorganisms of the soil and the guts of animals. Cellulase, an inducible enzyme, is synthesized and secreted extracellularly by a diversity of microorganisms, including bacteria and fungi, during their growth on cellulosic materials 3. Cellulose is decomposed by microorganisms with a multi-enzyme system that converts it into its monomeric form, glucose. The enzyme system comprises endo-β-1,4-glucanase, which randomly cleaves β-1,4 linkages and extracts elementary fibrils from cellulose crystals, followed by exo-β-1,4-glucanase, which cleaves the non-reducing ends of the cellulose fibrils and converts them into cellobiose or cellodextrins, which are finally hydrolysed to glucose by β-glucosidases 4. In the present study, the focus is drawn towards the prokaryotic genera only. There are numerous reports of cellulose-degrading bacteria from soil sources such as manure compost 5, tea garden soils 6, paddy field soil 7, etc. These studies report several genera of bacteria, such as Bacillus, Acinetobacter, Clostridium, Paenibacillus, Trichonympha, Actinomycetes, etc. 7,8, with various degradation abilities and enzymatic activities. Cellulose-degrading bacteria have also been isolated from animal guts, such as those of insect caterpillars and snails 1, termites 9 and grass carp 10. This phenomenon of cellulose decomposition is important not only ecologically but also economically from the human perspective. Deeper insight into the cellulolytic microcosm can provide plausible remedies for the management of pollution caused by municipal solid wastes 11. On the other hand, the depletion of non-renewable sources of energy is a matter of utmost concern to humanity. To solve this problem, plant biomass is the only foreseeable source with a sustainable future. Cellulosic materials are enticing in this context because of their low cost and plentiful supply. Apart from these, cellulolytic microorganisms are also convenient in industrial applications where cellulose is used as a raw material.
The most important of the industrial application may be the prospective use of lignocellulosic biofuels, where cellulosic materials would be used as a raw material for the production of sugars that can be converted into ethanol and other liquid fuels. The whole process would incorporate the use of microorganisms especially the prokaryotes to facilitate this conversion 12 . Other notable industrial application of the cellulose degrading bacteria are found in the composting, paper industry, breweries and food industries 13 However, data of the prokaryotes with cellulolytic potential from the benthic soil of aquatic ecosystems are scanty. The only related study has been done on mangrove soils of Mahanadi river delta, India which reports genera of bacteria like Micrococcus spp., Bacillus spp., Pseudomonas spp., Xanthomonas spp. and Brucella spp. as potential cellulolytic strains 14 . The study of the prokaryotic microcosm of the benthic soil of aquatic ecosystems is of utmost importance as they play a wide array of roles in the maintenance of the ecosystem. Recent advancements in the development of the aquaculture systems promote the idea of microbial-based culture systems over traditional culture methods. The Microbial-based culture system involves the study of the benthic soil which can lead to a better understanding of the roles of the members of the prokaryotic community in the microcosm of the aquaculture system 15 . Banking on this idea, this study investigates the benthic soil of two aquaculture farms in East Kolkata Wetlands in search of prokaryotic strains with cellulolytic potentials which can be used in the designing of a sustainable aquaculture system. (Data in this paper are from first author's thesis to be submitted shortly for the partial fulfillment of the requirements for the Degree of Doctor of Philosophy in the Department of Zoology, The University of Burdwan, West Bengal, India.) Culture Media All reagents used in the study were of analytical grades procured from HIMEDIA (India) and Sigma Aldrich Chemical (USA). Minimal salt media (MSM) was used for culture of bacterial strains as a source of essential inorganic nutrients. Regular Culture and subculture were done with nutrient agar medium having Peptone 5 g, NaCl 5 g, Beef extract 1.5 g, Yeast extract 1.5 g, Agar 2%, pH 7.4±0.2 (per litre) and Luria agar having Casein enzymic hydrolysate 10g, Yeast extract 5g, Sodium chloride 5g, Agar 2%, pH 7.0±0.2 (Per litre) used after autoclaving (120°C temperature and 20 psi pressure, 15 minutes). Collection Soil (~5g) was collected from two Aquaculture farms in East Kolkata Wetlands (22.5528° N, 88.4501° E). Soil samples were collected from the top layer of the benthic soil under 60 cm of water . Soils were aseptically transferred to sterile containers. The containers were labelled and taken to laboratory for further processing. Isolation Bacterial isolation was done following the protocol of Gupta et al., 2012, 1 with minor modifications. 2g of soil was inoculated in 50 ml Minimal salt media (MSM) with 2 % w/v Carboxymethylcellulose (CMC) as the sole carbon source. The culture was maintained for 7 days at 37°C with constant shaking at 120 rpm. The culture media was then subjected to serial dilution up to 10 6 times and was spread on Congo Red agar media KH 2 PO 4 0.5 g, MgSO 4 0.25 g, CMC 20 g, agar 15 g, Congo-Red 0.2 g, and gelatin 2 g; distilled water 1 L and at pH 6.8-7.2. 
This media is a fast process to test the cellulolytic activity via discoloration of Congo red around the colonies as halos. After incubation for 12 hours at 37°C, single colonies of the isolates were streaked in Congo red agar media. The colony size and diameter of the halo are noted. The Hydrolysing Capacity (HC) of the isolates were calculated using the ratio of the diameter of the halo and diameter of the colony. This method was used to determine the best isolates with considerable cellulolytic potential and were selected for further studies. In this study three strains outperformed other isolates and were selected for further enzymatic analyses. The three selected isolates were cultured in an enzyme production media composed of MSM supplemented with sterilised Filter paper (@ 0.05g / 20 ml, cut into 1cm 2 pieces) for Exo-b -1,4 glucanase assay and the same media containing CMC (0.5 g / 20 ml), at pH 6.8-7.2 for Endob -1,4 glucanase assay, and the cultures were incubated for 96 hours at 37°C in a shaker incubator at 120 rpm. After the incubation period, the cultures were subjected to centrifugation at 8000 rpm for 10 mins at 4°C. The supernatant was collected, filter sterilised and was stored at 4°C for future enzymatic assays. The pellet of the media supplemented with Filter paper was collected to analyze the degradation percentage by Gravimetric Analysis 1 . Enzyme Assay The end product of the cellulose digestion is reducing sugar 4 , thus, total cellulase activity was estimated by measuring the amount of reducing sugar formed from CMC and filter paper. The enzyme activity was determined according to the methods recommended by the International Union of Pure and Applied Chemistry (IUPAC) Commission on biotechnology following the protocol given by Rajoka and Malik, 1994 16 with minor modifications. The Exo-b -1,4 glucanase activity of the bacterial isolates were determined by incubating 0.5ml of the sterilised supernatant collected as mentioned earlier, with 1ml of 0.1M citrate buffer pH-4.8 and sterilised Whatman no1 filter paper (50mg) in the form of 1cm 2 pieces for 1hour at 50°C. The reaction was stopped by adding 3 ml of the 3,5-dinitrosalicylic acid (DNS) reagent to the reaction mixture. DNS added reaction mixture was boiled for 15 minutes and 40% sodium potassium tartrate was added to it. After cooling to room temperature, absorbance was measured at 550 nm (Systronics R2202 Double Beam Spectrophotometer) against blank with same constituents as above except the supernatant which was replaced with sterile triple distilled water of same volume. Endo-b-1,4 glucanase activity was assayed by measuring the amount of reducing sugar from Carboxymethylcellulose and was determined by a similar process as that of Exo-b-1,4 glucanase, except the absorbance, was measured at 540 nm against blank 16 . The OD value thus obtained was then put into the equation as given below to obtain the activity of the enzyme in IU ml -1 min -1 . Enzyme Activity (IU ml -1 min -1 ) = Absorbance x Standard factor A standard estimation was done using known concentration gradient of glucose (0.1 -1.0 mg ml -1 , 10 samples at 0.1 mg ml -1 interval) and following the same principle as stated above. The OD value thus obtained was used to calculate the standard factor. The same procedure was followed once for Endo-b -1,4 glucanase at 540nm and for Exo-b-1,4 glucanase at 550 nm. 
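A minimal sketch of the activity calculation described above: the standard factor is obtained from a glucose standard curve (known concentrations versus their measured absorbances; the formula is stated in the following paragraph) and the enzyme activity is the product of the sample absorbance and that factor. All function names and the numeric values are illustrative assumptions, not data from the paper.

```python
import numpy as np

def standard_factor(glucose_mg_per_ml, absorbance):
    """Mean ratio of glucose standard concentration (mg/ml) to its absorbance."""
    conc = np.asarray(glucose_mg_per_ml, dtype=float)
    od = np.asarray(absorbance, dtype=float)
    return float(np.mean(conc / od))

def enzyme_activity(sample_od, factor):
    """Enzyme activity (IU ml-1 min-1) = Absorbance x Standard factor."""
    return sample_od * factor

# Hypothetical glucose standards (0.1-1.0 mg/ml) and fabricated ODs, for illustration only.
standards = [round(0.1 * i, 1) for i in range(1, 11)]
ods = [0.06, 0.11, 0.17, 0.22, 0.28, 0.33, 0.39, 0.44, 0.50, 0.55]
f_540 = standard_factor(standards, ods)
print(enzyme_activity(0.21, f_540))   # activity for a sample with OD 0.21 at 540 nm
```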
The standard factor was calculated according to the following formula and, finally, the mean value of the standard factor was taken for enzyme activity estimation. Standard factor = Concentration of glucose standard (mg ml-1) / Absorbance (at 540 or 550 nm).

pH and Temperature stability: Cell-free supernatants containing the crude enzymes were tested for their optimal pH and thermostability. The same process of enzymatic analysis was carried out while varying the pH of the enzyme production media and the temperature of the enzyme prior to incubation. The pH was tested within a range of 4-10 (7 samples at an interval of 1 pH unit) and the temperature was tested from 20°C to 70°C (6 samples at a 10°C interval).

Biochemical Characterisation: Biochemical characterisation of the bacterial isolates was done with an array of tests including MR, VP, motility, Gram's staining, citrate, amylase, oxidase, catalase, etc., to characterize the strains according to Bergey's Manual of Systematic Bacteriology 17.

Identification and Phylogenetic analyses: The sequences obtained were analyzed using BLASTn against the NCBI database (http://blast.ncbi.nlm.nih.gov/Blast.cgi). The results were screened and the ten nearest neighbors were chosen based on the highest Max Score and Total Score. The aligned sequence files were downloaded, and further alignment and phylogenetic analyses were done in MEGA7 18. The alignment of the sequences with their nearest neighbors was done using CLUSTALW. To eliminate the possibility of incorporating potential chimeras, the sequences were checked using the online web tool DECIPHER 19. The phylogenetic tree was constructed based on the evolutionary distances, calculated as the number of base substitutions per site for all three strains, by the Neighbor-Joining method 20 using the Maximum Composite Likelihood model 21 (a minimal code sketch of this step, using Biopython in place of MEGA7, is given after the results below). The difference in composition bias was considered in evolutionary comparisons during the construction of the phylogenetic tree 22.

Statistical analysis: All experiments were performed at least in triplicate and values are expressed as mean values ± standard deviation (S.D.). Statistical analyses were done and means were compared by Student's t-test using SPSS 17.0 (SPSS Inc., Chicago, USA). Differences between means at the P ≤ 0.05 level were considered significant.

Characterisation of Cellulose degrading Strains: The three isolates were named strains SWO4, SWO13a, and SWA6a. Results of the morphological characterization of these strains, such as colony shape, cellular morphology and Gram character, together with detailed biochemical characteristics, are provided in Table 1. The results showed that SWO4 and SWO13a are Gram-negative, non-motile strains, whereas SWA6a is a Gram-positive, motile strain. All three strains were capable of producing enzymes such as gelatinase, amylase, urease, citratase, catalase and oxidase in addition to cellulase. The strains were capable of fermenting sugars with varying potential.

Molecular Characterisation: BLASTn analysis results are depicted in Table 2. Isolates SWO13a and SWA6a were identified as Bacillus cereus with 97% and 99% homology, respectively. Isolate SWO4 was Bacillus flexus with 98% sequence homology. The phylogenetic tree constructed is depicted in Figure 3. The analysis of the tree shows that the isolates are in separate lineages and are different from one another, with SWO4 in a separate lineage altogether.

Figure 3. Evolutionary relationships of the three strains. The GenBank accession numbers of the strains used in this analysis are given in parentheses. The evolutionary history was inferred using the Neighbor-Joining method. The optimal tree with a sum of branch lengths of 0.10455038 is shown. The tree is drawn to scale, with branch lengths in the same units as the evolutionary distances used to infer the phylogenetic tree. The evolutionary distances were computed using the Maximum Composite Likelihood method and are in units of the number of base substitutions per site. Differences in composition bias among sequences were considered in the evolutionary comparisons. The analysis involved 30 nucleotide sequences. All positions with less than 95% site coverage were eliminated; that is, fewer than 5% alignment gaps, missing data and ambiguous bases were allowed at any position. There were a total of 829 positions in the final dataset. Evolutionary analyses were conducted in MEGA7.

Screening test results: The screening test performed on Congo red agar media gave a first comparative account of the cellulolytic potential of the three strains and is given in Table 3. The highest HC value was found for isolate SWA6a and the lowest for SWO4. However, it is worth mentioning that these three strains outperformed the other isolates subjected to the same test and were hence selected for the enzymatic assays.

Enzyme Assay: The enzyme assay confirms the result obtained in the screening test and is given in Table 4. Isolate SWA6a showed the best enzymatic activity for both Exo-β-1,4-glucanase and Endo-β-1,4-glucanase, with activities of 0.08214 ± 0.00412 and 0.11263 ± 0.00478 IU ml-1 min-1, respectively. The overall enzyme activity ranged from 0.06112 to 0.08214 IU ml-1 min-1 for Exo-β-1,4-glucanase and from 0.10018 to 0.11263 IU ml-1 min-1 for Endo-β-1,4-glucanase. Gravimetric analysis revealed that isolates SWO4, SWO13a and SWA6a were able to degrade 55.1%, 61.9% and 63.4%, respectively, of raw cellulose supplied in the form of filter paper, over a span of 96 hours under the given conditions. Results of the pH and temperature stability estimation of both enzymes of the three isolates are depicted graphically in Figures 1 and 2, respectively. Both analyses showed a common pattern for both enzymes. For pH, the enzymes obtained from the strains showed the highest activity within a range of 6-8, with activity diminishing at either extremity. For temperature, the highest activity was recorded within 30-40°C, with the same trend of diminishing activity as the temperature was increased or decreased. Isolate SWA6a not only showed the highest activity under optimum pH and temperature conditions, but was also more temperature- and pH-tolerant than the other two isolates.
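As a complement to the phylogenetic workflow above (CLUSTALW alignment of the isolates with their nearest neighbours, Neighbor-Joining tree in MEGA7), the sketch below shows the Neighbor-Joining step programmatically. It uses Biopython rather than MEGA7, a simple identity distance model (Biopython does not provide the Maximum Composite Likelihood model), and a hypothetical alignment file name, so it is an illustration of the approach, not the authors' exact analysis.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical FASTA alignment of the three isolates plus their ten nearest neighbours.
alignment = AlignIO.read("isolates_and_neighbours_aligned.fasta", "fasta")

# Pairwise distances under a simple identity model.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)

# Neighbor-Joining tree from the distance matrix.
constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)

Phylo.draw_ascii(nj_tree)  # quick text rendering of the tree
```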
Another study demonstrated the cellulolytic potential of Bacillus licheniformis isolated from Indian Hot springs to be 0.542 IU ml -1 min -1 for Exo -b,1-4glucanase and 0.120 IU ml -1 min -1 for Endo -b,1-4glucanase 24 .These two results correspond to the data obtained in the study. On the other hand, present findings do not correspond to the observations of Behera et al. who described a very high cellulolytic potential (2.471 to 98.253 IU ml -1 min -1 for Endo -b,1-4glucanase) produced by several genera of bacteria like Micrococcus spp., Bacillus spp., Pseudomonas spp., Xanthomonas spp. and Brucella spp. isolated from the benthic soil of Mahanadi river 14 . The results from various experimental protocols explored revealed that all the three isolates were capable of utilizing cellulose as a sole carbon source and their efficacy in the breakdown of cellulose is also mention worthy. Similar results were also obtained in the estimation of cellulolytic potential by the earlier workers 1,24 . The data seems to be the first report of prokaryotes with cellulolytic potential from the benthic soil of Aquaculture farms of East Kolkata Wetlands. It can be said that these bacteria are efficient in the decomposition of cellulosic materials in the aquatic ecosystem and hence maintains the ecological balance of the aquaculture ponds by maintaining a proper biogeochemical cycle of carbon providing an ecologically healthy environment for fish culture. This study also illuminates the prospective use of these strains in the cellulose dependent industry and that these strains have the potential efficacy. They can also be used in Solid Waste Management via composting method 11 . From the perspective of aquaculture, identification of such strains with cellulolytic potential paves the way for future implementation for microbial -based culture methods. This strains can also be bioaugmented in the natural aquaculture ponds with high level of plant debris and will ultimately lead to the sustenance of the pond.
4,282.6
2018-09-30T00:00:00.000
[ "Biology" ]
EXPLORING SEMANTIC RELATIONSHIPS FOR HIERARCHICAL LAND USE CLASSIFICATION BASED ON CONVOLUTIONAL NEURAL NETWORKS Land use (LU) is an important information source commonly stored in geospatial databases. Most current work on automatic LU classification for updating topographic databases considers only one category level (e.g. residential or agricultural) consisting of a small number of classes. However, LU databases frequently contain very detailed information, using a hierarchical object catalogue where the number of categories differs depending on the hierarchy level. This paper presents a method for the classification of LU on the basis of aerial images that differentiates a fine-grained class structure, exploiting the hierarchical relationship between categories at different levels of the class catalogue. Starting from a convolutional neural network (CNN) for classifying the categories of all levels, we propose a strategy to simultaneously learn the semantic dependencies between different category levels explicitly. The input to the CNN consists of aerial images and derived data as well as land cover information derived from semantic segmentation. Its output is the class scores at three different semantic levels, based on which predictions that are consistent with the class hierarchy are made. We evaluate our method using two test sites and show how the classification accuracy depends on the semantic category level. While at the coarsest level, an overall accuracy in the order of 90% can be achieved, at the finest level, this accuracy is reduced to around 65%. Our experiments also show which classes are particularly hard to differentiate. INTRODUCTION Land use (LU) describes the socio-economic function of a piece of land. This information is usually collected in geospatial databases, often acquired and maintained by national mapping agencies. The objects stored in these databases are typically represented by polygons with categories indicating the object's LU. To keep such databases up-to-date, the content can be compared with new remote sensing data. If the new data contradict the database content for a specific object, the object class label in the database needs to be updated. To automate this process, a class label related to its LU has to be determined from the remote sensing data for every object in the database. Typically, this is achieved in a procedure consisting of two steps: first, the imagery is used to predict the land cover for each pixel; the land cover results and the images are combined in a second classification process to determine the LU for every database object (Gerke et al., 2008;Helmholtz et al., 2012). In this context, supervised classification methods are frequently applied, most recently based on Convolutional Neural Networks (CNN) (Zhang et al., 2018;Yang et al., 2019), which have been shown to outperform other classifiers such as Conditional Random Fields (CRF) (Albert et al., 2017). One problem of existing methods for LU classification is that they only differentiate a small number of classes, while the object catalogues of LU databases may be much more detailed. For instance, in the LU layer of the German cadastre, about 190 categories are differentiated (AdV, 2008). Clearly, this catalogue contains object types that cannot be expected to be differentiated from remote sensing data, but of course, the usefulness of an automatic approach grows with an increasing number of class labels. 
It is an important fact that many topographic databases contain LU information at different semantic levels of abstraction. At the coarsest level, only a few broad classes such as settlement, traffic or vegetation are differentiated. At the finer levels, these classes are hierarchically refined, and the full number of different categories is only differentiated at the finest level of the class structure. Fig. 1 shows two examples of database objects with corresponding imagery and the annotations from the first three levels of the object catalogue in (AdV, 2008).

Figure 1: Two database objects with images (rescaled) and categories in three semantic layers. L: semantic layer, starting from the coarsest (I) to the finest (III).

Albert et al. (2016) investigated the maximum level of semantic resolution that their CRF-based LU classification could achieve. They divided the land use categories into two levels, both corresponding to mixtures of the three coarsest semantic levels according to (AdV, 2008). Starting from a classification of the coarse level, they refine one coarse category after the other: in a greedy iterative procedure one category is split into the maximum set of sub-categories and then sub-categories are merged if the results indicate they cannot be separated. As a result, Albert et al. (2016) obtain a class structure consisting of a mixture of 10 categories from different semantic levels of the object catalogue, and conclude that this is the largest set of classes that can be separated using their approach. In this paper we take a different direction. We propose to predict the LU categories of multiple semantic levels simultaneously using a CNN-based approach. In this context, we exploit the intrinsic relations between the categories at different layers, which leads to hierarchical LU classification. In our method, the hierarchical relations are explicitly integrated into the CNN for training and inference. To achieve our goals, we expand the existing two-step procedure of (Yang et al., 2019) to this hierarchical setting, adapting, for our purposes, a method proposed by Hu et al. (2016) for learning structured inference neural networks on natural images by modelling label relationships. The input consists of high-resolution aerial imagery, a land cover layer obtained by semantic classification and derived data such as a Digital Surface Model (DSM) and a Digital Terrain Model (DTM). The scientific contribution of this paper can be summarized as follows:
• We expand a CNN-based method for the classification of LU to predict LU categories at multiple semantic levels simultaneously, sharing the feature extraction part of the network and adding independent classification heads; this corresponds to a multi-task learning approach, e.g. (Leiva-Murillo et al., 2013). Furthermore, inspired by (Hu et al., 2016), we propose to improve this multi-task method by additional connections between the semantic layers so that the new method incorporates the semantic relations between the different hierarchical levels.
• Based on the multi-task learning network, we propose two additional network variants to guarantee hierarchically consistent predictions. One variant starts from the predictions of the coarsest level and adapts the predictions in the finer levels to be consistent, and the other one works in the opposite direction. For training the two variants, two novel objective functions are proposed.
 We conduct an extensive set of experiments to compare these network variants, to highlight the benefits of considering the relations between the different semantic levels and to investigate the limits of the proposed approaches in differentiating finer class structures. In section 2, we give a review of related work. Our approach for hierarchical land use classification is presented in section 3. Section 4 describes the experimental evaluation of our approach. Conclusions and an outlook are given in section 5. RELATED WORK We start this review with an overview of LU classification techniques before discussing hierarchical classification methods. As pointed out earlier, methods for LU classification usually apply a two-step procedure: first, the land cover is determined based on the given image data, and then the land cover together with image and derived data (e.g. a DSM) serves as input for LU classification. Traditionally, hand-crafted features are derived from input data. These features may quantify the spatial configuration of the land cover elements within a land use object, describing the size and shape of the land cover segments (Hermosilla et al., 2012). Other features are based on the frequency of local spatial arrangements of land cover elements within a land use object (Novack and Stilla, 2015), applying the adjacency-event matrix (Barnsley & Barr, 1996;Walde et al., 2014). Supervised classifiers applied in this context include Support Vector Machines (Montanges et al., 2015) and Random Forests (Albert et al., 2017), the latter also embedded in contextual models like Conditional Random Fields (CRF). Since the success of AlexNet (Krizhevsky et al., 2012), CNN, replacing hand-crafted features by a representation learned from training data, have been shown to outperform other classifiers. They have also been adopted in remote sensing (Zhu et al., 2017). In this context, a big challenge for applying CNN for the prediction of class labels for LU polygons is the large variation of polygon shapes and sizes. To the best of our knowledge the first work classifying LU objects from a geospatial database by CNN is (Yang et al., 2018). The authors decompose large polygons into multiple patches that can be classified by a CNN. However, they extract the employed image and land cover data inside the polygon and set the areas outside to 0, which leads to a loss of context information. Yang et al. (2019) extend this approach by constructing a representation of a polygon by a binary mask while using image data for the entire window to be classified. In this paper, we adapt their basic framework, but extend the LU classification by considering class labels at different semantic levels. Zhang et al., (2018) proposed a method to classify urban land use objects by applying two CNNs. They perform image segmentation and then use the segmentation results to obtain polygons based on which the inputs for the two CNNs are generated. However, they focus only on urban scenes, without any consideration on rural areas. Zhang et al., (2019) propose a joint deep learning framework for land cover and land use classification where they use multi-layer perceptions for land cover classification and a CNN for land use classification based on Zhang et al. (2018). They differentiate a set of about 10 LU classes in their experiments without further investigations concerning the semantic resolution that can be achieved. Albert et al. 
(2016) propose a method based on CRF to investigate the maximum level of semantic resolution that can be achieved, applying the greedy refinement strategy outlined earlier, but their goal is to define a suitable class structure rather than using the hierarchical structure of the object catalogue in a systematic way. Considering multiple semantic levels of categories can result in the prediction of multiple labels per object, which can pose a problem. This issue is tackled in (Hua et al., 2019). The authors propose a method for multi-label classification of aerial images by applying a CNN with LSTM (Long Short Term Memory) cells. The goal is to predict a set of labels for one input image, describing each object type that appears in that image. No semantic relations between the labels are modelled explicitly. Therefore, the method cannot be directly transferred to our problem. Different semantic levels of categories can also be dealt with as different categories, and the intrinsic relation of the different levels could be tackled by multitask learning approaches, e.g. (Leiva-Murillo et al., 2013), though this seems not to have been done yet. In computer vision, many approaches dedicated to the classification of images with semantic relations between categories exist. Deng et al. (2014) propose the first CNN-based work for classification with semantic relations between different class labels. They define a HEX (Hierarchy and Exclusion) graph to model different types of semantic relations: two labels may have a hierarchical relation; they may be exclusive or overlapping. The CNN only has one output layer for all classes, but the HEX graph is considered in both, training and inference to achieve a consistent classification result, e.g. to ensure that an image cannot be classified as showing a cat and a specific dog breed at the same time. However, this results in a very complex training and inference procedure. Guo et al. (2018) propose a CNN-RNN (recurrent neural network) strategy to address hierarchical classification. A CNN acts as a feature extractor and is trained to predict class labels at the coarse semantic level. Then, the CNN features and the output of the coarse level are fed into a RNN structure which is used to propagate the information from the coarse level to finer labels. Nonetheless, information is only predicted from the coarse level to the finer labels. Hu et al. (2016) propose a network based on a CNN for hierarchical classification in three levels, using a bidirectional message passing mechanism from the class scores of the coarse category to the class scores of the fine category and vice versa. Thus, the class scores of each level are enhanced considering information from other levels of the hierarchy. However, the message passing is done only between neighbouring levels. Though embedded in a completely different context, the method proposed in this paper is inspired by Hu et al. (2016). However, we argue that for a specific category level, all its ancestor levels and descendant levels are helpful for its identification. Thus, we adapt the message passing, so that the class scores of one level receive messages from all ancestor levels and all descendant levels. More importantly, we can guarantee consistency of the predictions with the class hierarchy. 
CNN-BASED HIERARCHICAL CLASSIFICATION The first input required for our method consists of a LU database in which objects are represented by polygons with LU categories at multiple semantic levels according to a hierarchical object catalogue. Furthermore, a multispectral aerial image (R, G, B, NIR), a normalised DSM (nDSM, i.e., the difference between a DSM and a DTM) and pixel-wise class scores for land cover from a previous classification step are required. In order to produce the latter, we use the CNN-based method of Yang et al. (2019), which delivers a vector of class scores for every pixel of the input image (one entry per land cover class). The input polygon is used to generate a binary object mask aligned with the image grid. The goal of the proposed method is to predict one class label per semantic level for each LU object, extending our previous work (Yang et al., 2019). While these labels are known for some of the polygons, which can be used for training the CNN, they are to be determined for the rest. In CNN-based LU classification, the large variation of polygons in terms of their geometrical extent is a challenge (see examples for a very large road and a small residential object in Fig. 1), because a CNN requires a fixed input size for the image to be classified (256 x 256 pixels in our case). The way in which the image patches of that size are prepared is described in section 3.1. Section 3.2 outlines the basic CNN structure, introducing a multitask learning scenario for LU classification at different semantic levels, while Section 3.3 describes several network variants that hierarchically interact in training and classification. Patch preparation The basic approach to prepare the input data is to extract a window of 256 x 256 pixels centred at the centre of gravity of the object from all data (image and DSM, binary object mask, land cover scores) and present it to the CNN. This is unproblematic if the polygon size corresponds well to the window size at the ground sampling distance (GSD); otherwise the window is either dominated by information outside the object (for very small objects) or the object does not fit into the window. The method we adopt to cope with the latter problem is tiling: we split the window enclosing the object into tiles (patches) of the desired size and classify all patches having a meaningful overlap with the object independently. Afterwards, the results for the individual input patches are combined (cf. section 3.3). Baseline CNN architecture The basic network architecture we use for LU classification is based on the LuNet architecture (Yang et al., 2019). LuNet consists of a series of convolutional and pooling layers before being split into two branches. The first branch consists of a set of convolutional and pooling layers while the second branch (ROI location layer) extracts a region of interest from the feature map, rescales it and applies convolutions and pooling to that rescaled feature map. Before the classification layer, the feature vectors of the two branches are concatenated; for more details, we refer the reader to (Yang et al., 2019). We keep the entire architecture except for the single classification layer, which is replaced by B classification layers (one layer per semantic category level). The resulting structure is shown in Fig. 2 for B = 3 levels. 
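To make the multi-task head structure concrete, the sketch below wires B = 3 independent softmax classification heads onto a shared feature vector. The backbone here is a generic convolutional stand-in, not the two-branch LuNet feature extractor; the input channel count, layer sizes, class counts and the use of cross-entropy instead of the extended focal loss are illustrative assumptions.

```python
import tensorflow as tf

NUM_CLASSES = {"level_I": 4, "level_II": 14, "level_III": 28}   # illustrative class counts

# Assumed stacking of image (R,G,B,NIR), nDSM, object mask and land cover scores as channels.
inputs = tf.keras.Input(shape=(256, 256, 7))
x = tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same")(inputs)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(128, 3, activation="relu", padding="same")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
shared = tf.keras.layers.Dense(512, activation="relu")(x)        # shared 512-d feature vector

# One softmax classification head per semantic level (multi-task learning).
outputs = {name: tf.keras.layers.Dense(n, activation="softmax", name=name)(shared)
           for name, n in NUM_CLASSES.items()}

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
              loss={name: "categorical_crossentropy" for name in NUM_CLASSES})
```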
This structure corresponds to a variant of multi-task classification (Leiva-Murillo et al., 2013): the predictions of the labels at different semantic levels are considered to be different classification tasks; the prediction itself is independent, but based on a shared (512 dimensional) feature vector extracted from the input data. The parameters of all components of the network are determined simultaneously. Thus, the CNN learns to produce a representation that is meaningful for all tasks. Integration of the semantic dependencies: Given the object catalogue, the relationships between semantic levels are known. To add this prior knowledge to the network, we propose to expand the network structure so as to consider the semantic dependencies. Starting from Fig. 2, we identify each category level by a roman numeral from the coarsest level I and increasing the number as the semantic resolution is increased. For each semantic level l, the classification head consists of one fully connected (FC) layer that delivers a vector of un-normalized class scores = ( 1 , … , ) , where = { 1 , … , } is a set of LU classes at category level l and is the class score of an image X for class . Based on the un-normalized class scores , the expansion of the network structure is shown in Fig. 3. There are two additional layers per semantic layer, each with a specific structure of connections to the previous layer: First, information is passed on from coarser levels to finer levels; after that, information is passed back from finer levels to coarser levels. The expanded network is referred to as LuNet-MT (MT for multi-task) in the remainder. In the first of the two additional layers, we produce a set of intermediate class scores at each level l, where the class score at each level except the first (coarsest) one receives input from the same or from all coarser levels in the previous layer of the network. For the coarsest level (l = 1), the scores from the previous layer are copied, i.e. = . Otherwise, is computed according to: where () is the ReLU activation function and as well as , , , are the parameters of that layer that are to be learned in training along with the other parameters of the network. Here, the superscripts pos and neg specify positive and negative semantic relationships. If a category is divided into multiple sub-categories at a finer level, these sub-categories are positively related to it; a category is negatively related to subcategories at a finer level if they are not derived from it. In , , , , only the parameters with the specific relationships are learned and the others are set to 0. In the second additional layer, we produce the final unnormalized class scores at each level l. Here, the class score at each level except the last (finest) one receives input from the same or from all finer levels in the previous layer. For the finest level (l = B), the scores from the previous layer are copied, i.e. = . Otherwise, is computed according to: where and , , , are the parameters of that layer and () is the ReLU function. The superscripts pos and neg have the same meaning as in eq. 1. Finally, the un-normalized class scores are passed through a softmax layer to obtain probabilistic scores, i.e., for each layer, is used as the argument of the softmax function. 
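The two equations referenced in the paragraph above (eqs. 1 and 2) lost their symbols during text extraction. Based only on the surviving verbal description (ReLU activation, separate weight matrices for positive and negative semantic relations, messages first from all coarser levels and then from all finer levels) and on the cited Hu et al. (2016) scheme, one plausible reading is sketched below; this is a reconstruction under assumptions and should be checked against the original paper.

```latex
% Assumed form of the message-passing layers (a reconstruction, not the verbatim equations).
% y^l: un-normalized scores of level l; phi: ReLU; W, V restricted to positively (pos) or
% negatively (neg) related class pairs, with all other entries fixed to zero.
\begin{aligned}
  \tilde{y}^{1} &= y^{1}, &
  \tilde{y}^{l} &= \phi\!\Big(y^{l} + \sum_{m<l}\big(W^{l,m}_{\mathrm{pos}} + W^{l,m}_{\mathrm{neg}}\big)\,\tilde{y}^{m}\Big), \quad l = 2,\dots,B, \\
  \hat{y}^{B} &= \tilde{y}^{B}, &
  \hat{y}^{l} &= \phi\!\Big(\tilde{y}^{l} + \sum_{m>l}\big(V^{l,m}_{\mathrm{pos}} + V^{l,m}_{\mathrm{neg}}\big)\,\hat{y}^{m}\Big), \quad l = B-1,\dots,1, \\
  P(c^{l}_{k} \mid X) &= \operatorname{softmax}_{k}\big(\hat{y}^{l}\big). &&
\end{aligned}
```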
Training is based on stochastic mini-batch gradient descent (SGD) with weight decay and step learning policy; the objective function is the extended focal loss (Yang et al., 2019): where is the k th image in a mini-batch, N is the number of images in a mini-batch, and , is 1 if the training label of is in level l and 0 otherwise. More details about training are given in Section 4.1. Fig. 2. Please note that ReLU activation is not shown here. Network variants and implementation LuNet-MT obtains predictions of multiple semantic levels simultaneously while exploring the semantic dependencies explicitly. However, the predictions are not guaranteed to be consistent with the object catalogue hierarchy. For instance, one object predicted as settlement at the coarse level could be predicted as road traffic at the fine level. Obviously, these two predictions are not hierarchically related. To obtain predictions that are consistent with the class hierarchy, two strategies for hierarchical training and inference are proposed. The first one is referred to as coarse-to-fine (C2F). Using this strategy, we first predict the categories at the coarsest level (I) and use them to control the predictions at the finer levels. During inference, only the un-normalized scores of the sub-categories at a finer level which are derived from the predicted category at the coarser level are used as input of the softmax function to obtain probabilistic scores. During training, the ground truth labels of coarser levels are used to select the un-normalized scores at the finer level. The second strategy is referred to as fine-to-coarse (F2C). Here, we first predict the categories at the finest level (III). Then we select the category of which the category at the finest level is a subclass as its prediction at the coarser level. An illustration of the two approaches is shown in Fig. 4. Note that if the first predictions in the C2F approach are wrong, the subsequent predictions at the finer levels will be wrong as well. Nonetheless, in the F2C approach, there is still chance to obtain right predictions at the coarser levels if the first predictions are wrong. Relying on the two approaches, two network variants based on LuNet-MT are proposed. , are the un-normalized scores in level l consistent with the coarser level. Together with the class scores ( 1 |X) of the coarsest level, these variants of the class scores are plugged into eq. 4 for optimization. HierLuNet-F2C: this variant realizes the F2C strategy. First, the probabilistic scores of the finest level (III) are determined using eq. 3. For the coarser levels (I and II), softmax is not suitable to obtain the probabilistic scores, because the classes have to be the ancestors of the class at level III and, consequently, the predictions are known. Thus, we apply the sigmoid function to the corresponding un-normalized scores to generate normalized scores. During training, the objective function consists of two parts: for the finest level, it is the same as eq. 4, referred to as , and for the coarser levels (I and II, < ), the objective function is: where ̃, is 1 if the prediction of image .is class in level l and 0 otherwise. If the prediction matches the ground truth (i.e. , =̃, = 1) , the probabilistic score of class is to be maximized; otherwise, the probabilistic score of the referenced category is to be maximized and the others are to be minimized. The sum of + , is used for optimization. 
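A minimal sketch of the two consistency strategies described above, for a single patch and a two-level toy hierarchy: C2F restricts the finer-level softmax to the children of the predicted coarse class, while F2C predicts the finest level and reads the coarse label off the hierarchy. The score vectors, the parent map and all names are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy 2-level hierarchy: parent (coarse) index of each fine class.
PARENT = np.array([0, 0, 1, 1, 1])             # fine classes 0-4 -> coarse classes 0-1

def predict_c2f(scores_coarse, scores_fine):
    """Coarse-to-fine: fix the coarse label, then apply softmax only over its children."""
    coarse = int(np.argmax(softmax(scores_coarse)))
    children = np.where(PARENT == coarse)[0]
    fine = children[int(np.argmax(softmax(scores_fine[children])))]
    return coarse, int(fine)

def predict_f2c(scores_coarse, scores_fine):
    """Fine-to-coarse: fix the fine label, then read its ancestor off the hierarchy."""
    fine = int(np.argmax(softmax(scores_fine)))
    return int(PARENT[fine]), fine

coarse_scores = np.array([1.2, 0.4])
fine_scores = np.array([0.1, 0.3, 2.0, 0.2, 0.5])
print(predict_c2f(coarse_scores, fine_scores))   # may differ from predict_f2c ...
print(predict_f2c(coarse_scores, fine_scores))   # ... but both are hierarchy-consistent
```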
Inference at object level: The inference of the objects which are not split during tiling is straightforward by using the prediction of the related patches. The inference of objects which had to be split (termed as compound objects) differs in the different network variants. In variant LuNet-MT, for a compound object, the product of the probabilistic class scores of the patches in each individual semantic level is computed. Subsequently, the product is used for obtaining the predicted label. In variant HierLuNet-C2F, for a compound object, the prediction in the coarsest level (I) is made by a majority vote of the predictions of its patches. To guarantee hierarchical consistency, the predictions in the finer levels are sorted in a descending order according to their occurrences. Searching the predictions based on the order is undertaken and the best one which is a sub-category of the prediction in the coarser level is considered as the predicted label. Finally, in variant HierLuNet-F2C, for a compound object, the prediction of the finest level (III) is taken by majority vote of the predictions of the related patches. The prediction procedure of the coarser levels is similar to the one in HierLuNet-C2F, but in the opposite direction, so that hierarchical consistency is guaranteed. Implementation: all networks are implemented based on the tensorflow framework (Abadi et al., 2015). We use a GPU (Nvidia TitanX, 12GB) to accelerate training and inference. For the training of all network variants, the hyper-parameter of the focal loss (eq. 2) is set to = 1; the hyper-parameter for weight decay is 0.0005. We train all networks for eight epochs (an epoch consists of a set of iterations so that in one epoch all samples are used for training once. The number of iterations per epoch is the number of training samples divided by the mini batch size), using a base learning rate of 0.001 and reducing it to 0.0001 after four epochs. The mini batch size is set to 12. We apply data augmentation by vertical and horizontal flipping and by applying random rotations in certain intervals, where the interval and, thus, the amount of data augmentation depends the size of the polygons. When tiling is applied, the interval is 30° for polygons that have to be split because they do not fit into the input window of the CNN and 5° for all the other polygons. Consequently, after data augmentation, there are 354178 and 479978 patches for Hameln and Schleswig, respectively. Evaluation Tab. 2 presents the results of the land use classification of all network variants in the two test sites. In section 4.2.1, we compare the results of the three network variants described in section 3.3. After that, we take an exemplary closer look at the performance of one of the better variants (HierLuNet-F2C) in section 4.2.2. Comparison between the network variants: Comparing the network variants described in section 3.3, the multi-task learning (LuNet-MT) delivers better results in terms of OA in most cases in both test sites. First, we compare the two network architectures of multi-task learning (LuNet-MT) and its variant with hierarchical training and inference in a coarse-to-fine manner (HierLuNet-C2F). In both sites, LuNet-MT performs better than HierLuNet-C2F in all evaluation metrics of all category levels. In Hameln, compared to LuNet-MT, the drops of HierLuNet-C2F in terms of OAs are around 2.5% in level II and level III, whereas the OAs of level I are very similar close (-0.2%). 
Besides, there are larger drops in terms of average F1 scores in level II and III, which are around 4%. However, the results of HierLuNet-C2F in Schleswig are much worse than the ones of LuNet-MT: the drops in terms of OA are 3.5% (I), 4.2% (II) and 6.0% (III), whereas the drops in terms of average F1 score are 5.2% (I), 5.1% (II) and 4.9% (III). Like in Hameln, the drops of average F1 scores are a little larger than the ones of OAs. Second, we compare LuNet-MT with HierLuNet-F2C, the one with hierarchical training and inference in a fine-to-coarse manner. In Hameln, the OA of LuNet-MT outperforms the one of HierLuNet-F2C up to 1.8% over all levels. The difference in terms of average F1 score is much larger (5.4% at level II and 3.0% at level III). Nonetheless, there is an exception for the mean F1 score at level I where there is an increase of 1.2% in HierLuNet-F2C. Looking at the results in Schleswig, there is another picture in terms of OA: HierLuNet-F2C outperforms LuNet-MT by 2.5% at level II and 1.3% at level III, but with a drop of 0.4% at level I. There is a drop of average F1 scores with 1.9% at level I, but at the level II we find an improvement of 0.6% whereas at level III the average F1 scores are most similar. In conclusion, HierLuNet-F2C performs almost equivalent as LuNet-MT in Schleswig. The final comparison is between HierLuNet-F2C and HierLuNet-C2F, where in Schleswig the former outperforms the latter in terms of OA and average F1 score over all levels, and the largest difference of OA is the one at level III with 7.3%. In Hameln, HierLuNet-F2C delivers mostly better results except for the average F1 score at level II for which there is a drop of 1.6%. Thus, it seems that the hierarchical LU classification benefits more from a fine-to-coarse procedure. Over the three variants, it is clear that the multi-task learning (LuNet-MT) delivers better results in most cases. The big disadvantage of LuNet-MT, however, lies in the fact that their predictions do not guarantee a consistent hierarchical result. For instance, in Hameln, 9.1% of the predictions are non-consistent with the hierarchy, whereas in Schleswig the amount is 15.1%. These predictions are obviously not suitable for further processing. On the other hand, the drawback of HierLuNet-C2F and HierLuNet-F2C is that if the first prediction is wrong (level I in the former and level III in the latter), the successive predictions in the finer (coarser) levels would be wrong as well. Comparing the results achieved by all variants, the expected decrease of classification accuracy when increasing the semantic resolution is obvious. At the coarsest level (I), the OA is around 90% for all variants. It would seem that CNN-based classification at this level is better than the one of the CRF-based method (85%) reported in (Albert et al., 2016), although the class structures are not identical and, thus, a direct comparison is impossible. At the intermediate level, we observe a drop in OA of about 15%-20%. The fact that the drop in the average F1 scores is even larger indicates that a non-negligible number of classes can no longer be differentiated. Finally, the performance at the finest level is even lower, with a drop in the order of another 5%-10% in OA compared to level II. Again, the drop in the average F1 scores is larger. There are two main reasons for the problems at the semantic level II. First, the number of training samples of individual classes is much lower, leading to insufficient representation of this category (cf. 
Tab. 1). Second, in many cases, the properties of the objects in shape and composition of land cover types are quite similar among classes derived from the same ancestor category. For instance, class industry area in level II is very similar to residential area with dense buildings and sealed streets. Detailed analysis of HierLuNet-F2C: Tab. 3 presents the F1 scores and OA for all classes achieved by this network variant, which applies hierarchical training and inference in a fine-tocoarse manner. We analyse these results level by level. Level I: In this level, the four categories can be separated easily in both Hameln and Schleswig. However, in both cases, average F1 scores of less than 80% for the class water system indicate a problem with that class. This may partly be due to the fact that there are very few samples of that class (2.0% of all objects in Hameln and 3.3% in Schleswig). Furthermore, an analysis of the confusion matrix shows that about 30% of the samples of water system are confused with traffic in both sites. The reason could be that both kinds of object are very similar in shape and land cover components (e.g. both are surrounded by grass and trees, and they may be occluded by the latter), which, in combination with the lack of training samples for water, prevents the CNN from learning to differentiate these classes. Level II: the categories of level II are related to level I based on the semantic relationships shown in Tab. 1. We analyse the results according to the categories of level I. There are only three level II sub-categories of settlement achieving F1 scores over 50% in both data sets (residential area, industry area, recreation area). Samples of the other categories are very hard to be correctly recognized. The main source of errors is a confusion between mixed-used area and industry area. Again, this may be due to their similar appearance and compositions of land cover (cf. Fig. 5-a). Among the sub-categories of traffic, the road traffic and path are differentiated most easily (F1 scores > 65% in both sites). Parking lot is classified much better in Hameln than in Schleswig. It is most frequently confused with road traffic and industry area; in Schleswig, about 34% and 39% of the parking lot objects are classified as road traffic and industry area, respectively. This may be attributed to the similar appearance of these objects. Fig. 5-a shows an example for a confusion between parking lot and industry area. Among the sub-categories of vegetation, agriculture is particularly well classified (F1 > 70%) in both cases. In Schleswig, forest also achieves a high F1 score (84.5%), while there are problems in Hameln, where much fewer samples of that class are available (48, as opposed to 288 in Schleswig). The other sub-categories are not differentiated very well. The largest amount of confusion for grove occurs with recreation area and forest. These classes mostly consist of low and high vegetation, which makes them very similar to grove (cf. Fig. 5-b). The category undeveloped is mainly confused with agriculture. Level III: while in level III, some classes can be differentiated very well, e.g. residential in use or motor-road, in general it is more difficult to separate them than those of the other levels. More than half of the categories achieve F1 scores smaller than 50%. Again, a major reason is that the number of training samples for some class is quite small. 
In summary, as the number of categories increases from level to level, they become harder to classify correctly. While at the finer levels, the similarity in appearance and land cover composition of some categories (e.g. industry area vs. mixed-use area; grove vs. forest) may be problematic under all circumstances, it would seem obvious that in order to achieve satisfactory results, the number of training samples has to be increased. Given the fact that the number of objects is given by the database, the way to do so is to increase the size of the area that is processed. CONCLUSION In this paper, we have presented three CNN-based methods for the classification of LU at multiple hierarchical semantic levels: a multi-task network that predicts all levels simultaneously, and two variants that couple training and inference (coarse to fine vs. fine to coarse) in a manner that guarantees hierarchical consistency. All methods require a strategy for providing the CNN with an input of an appropriate size. The categories at the coarsest level are the easiest to discern: in both test sites, we achieved an OA around 90%. As the number of categories is increased, they become harder to classify correctly. The main reasons seem to be that the number of training samples per class is heavily reduced and that, at the finer levels, there are more and more categories with very similar appearance. Our experimental results also show that multi-task learning without applying hierarchical training and inference delivers good results in most cases, yet suffers from severe hierarchical inconsistency. For instance, there are 15.1% hierarchically inconsistent predictions in Schleswig. By introducing fine-to-coarse hierarchical training and inference into the CNN, hierarchically consistent predictions are guaranteed and the difference in terms of OA to the predictions of multi-task learning is less than 2% over all levels in both test sites, which is quite satisfactory. In the current results we have observed some overfitting when comparing the classification results on the training and the validation data set, which we will further investigate in the future by simplifying the network (and thus reducing the number of parameters to be learnt) and by increasing the amount of training data. In our future work, we want to improve the prediction procedure so that we obtain the most probable tuple of class labels that is consistent with the class hierarchy for every object, rather than fixing the class label at the coarsest or the finest level of the hierarchy as is done now in the C2F and F2C strategies. Second, similarly to (Albert et al., 2016) we will further analyse the class structures used for the classification based on the object catalogue. Finally, an increase of the number of training samples, which requires the availability of true annotations for larger areas, is a pre-requisite for reliable results (Kaiser et al., 2017). Such samples can be derived automatically from existing maps if a strategy to mitigate errors in the class labels of training samples (label noise) is developed, e.g. (Maas et al., 2019).
8,544.2
2020-08-03T00:00:00.000
[ "Computer Science" ]
Uptake of mandatory outcome measures in mental health services Aims and method The collection of results of a specific outcome measure, the Health of the Nation Outcome Scales (HoNOS), is mandatory for mental healthcare providers in the National Health Service in England. Not all providers collect HoNOS data and coverage varies widely. This paper explores, by means of interviews with clinicians and policy makers and econometric analysis of HoNOS data, the barriers and incentives to the uptake of HoNOS and outcomes more generally, and the key characteristics associated with providers who do undertake HoNOS. Results The main barriers to the collection of outcomes involve a lack of adequate feedback mechanisms, a lack of perceived clinical relevance and poor information technology infrastructure. Econometric results show HoNOS collection is associated with providers who produce high-quality data. Clinical implications Initiatives should focus on putting systems in place to encourage feedback mechanisms for clinicians. Qualitative We conducted 28 semi-structured interviews using two NHS providers as interview sites, one which had a relatively high HoNOS coverage and the other a lower HoNOS return.We conducted four to six interviews at each site.In addition, we interviewed policy makers and experts involved in outcome measurement, including the College Research Unit, the Department of Health, the National Institute for Mental Health in England (NIMHE), the Care Services Improvement Partnership and the Service User Research Group in England.We interviewed managers, commissioners, psychiatrists and other clinical staff involved in the collection of outcomes data. Quantitative We investigated the variation in HoNOS scores across mental health providers in order to determine the factors associated with good coverage of HoNOS, using as our dependent variable the percentage of HoNOS in properly integrated records from the MHMDS for 2004/5.The unit of analysis was the 84 mental health trusts and primary care trusts that provide mental health services.We constructed a database containing a variety of explanatory variables from the MHMDS, the Healthcare Commission Mental Health Survey, Hospital Episodes Statistics and Hospital Activity Statistics.The MHMDS data covered variables on care level, ethnic coding and care prevalence.The Healthcare Commission survey data covered variables on patient care and treatment, medications, contact with health professionals, support in the community and crisis care.Hospital Episodes Statistics data covered number of admissions and the proportions of these that are emergency and day cases, as well as data about waiting times and length of stay.The age profile of patients, bed occupancy and number of bed-days were extracted from the Hospital Activity Statistics data. From the interviews it emerged that Foundation Trust status was a key factor in the uptake of routine outcome measurement.This status is the voluntary application for greater autonomy from central control given to highperforming trusts.In the period leading up to application for Foundation Trust status, hospitals have to provide evidence of value for money, and outcome measurement was seen as having a key role.Mental health trusts only became Foundation Trusts from 2005/6 onwards, but would have been engaged in the preparation for application in 2004/5.Hence we included a variable to take account of trusts that would apply for Foundation status in 2005/6.This data came from the regulator Monitor. 
The model was also run with and without a set of dummy variables representing the strategic health authority codes of the providers included.The strategic health authority effects might pick up higher-level factors such as data quality and performance management.The final database contained a total of 236 variables. Statistical analysis A linear probability model was estimated using ordinary least squares analysis.The coefficients on the explanatory variables indicate how the probability of a mental healthcare provider having HoNOS scores in properly integrated records in the MHMDS changes with changes in the characteristics of the provider.We tested several explanatory models using variance inflation factors (VIF) of each, the Ramsey Regression Equation Specification Error Test (RESET) and the R 2 -test, in order to determine the most appropriate model.The VIF is a measure of the multicollinearity in a regression (whether variables tend to measure the same concept), whereas the RESET is a general test for specification. 11The R 2 -test is a measure of goodness-of-fit or how much of the variation in the model is accounted for by the explanatory variables in the model.Table 1 gives a description of the explanatory variables included in our final model as well as the descriptive statistics. Qualitative A number of themes emerged from the interviews, most notably concerning the barriers and incentives to the introduction of outcome measurement, and the advantages and disadvantages of HoNOS.Specific anonymised quotes from interviewees are included. Barriers Probably the most crucial barrier to the introduction of outcome measures is that clinicians are unable to see the clinical relevance of such measures.This is partly because they have not been given a clear rationale for their use, partly because they are simply told to complete scores, but primarily because they never receive any feedback on them.There needs to be clear communication about the clinical benefits, but these would best be appreciated if feedback were received in a clinically useful and timely manner. One-off measurements simply provide a snapshot of case mix or severity, thus undermining any useful clinical feedback as a proper outcome measure.Only with repeat ratings at times T 1 and T 2 can scores properly be used as an outcome measure.In one of the trusts with the highest HoNOS collection rates, only 6% of all episodes had a paired HoNOS rating (at two time points). If feedback is not received, even on one-off measurements, completion rates will be poor and the benefits will not be appreciated.Often clinicians produce data that are then used only at aggregate level; hence it becomes extremely difficult to maintain enthusiasm for what is seen as a paper-filling exercise for managers.Many view the completion of forms as akin to 'pouring valuable clinical information into a black hole'.There were examples of where feedback mechanisms had been introduced in services with varying degrees of success.However, for clinicians to derive the most clinical value from outcomes data, they were often required to 'design' their own computing systems.This was often done on clinicians' own initiative with no trust support.The lack of feedback is associated with the inability of information technology (IT) systems to provide appropriate, useful and timely clinical feedback.Many IT systems crash often, are unreliable and require substantial investment. 
'The networks are down around 20-30% of the time.People don't see IT as the solution, rather the problem.' Clinical teams often felt that informatics teams did not appreciate the clinical requirements that were deemed important and did not necessarily see the value of getting the most out of the HoNOS scores within the MHMDS. 'One can't even get one's caseload on the electronic system, so there is no belief that one could get nicely labelled graphs of routine outcome measures.' There was also concern that many clinical staff lack the necessary IT skills to interact effectively with informatics systems. In terms of other barriers to the uptake of HoNOS, most clinicians did not see the time element to completing HoNOS as a major barrier.It was felt that for the most part it was quick and easy to complete. In addition, clinicians did not see training in the use of HoNOS as a major barrier.However, it became clear during the course of the interviews that many staff were not formally trained in the use of this measure.This can have serious implications for the reliability of ratings, the validity of scores and their interpretation.However, with the many competing demands on resources, the cost of training staff to use HoNOS was not always seen as a high priority. Incentives Several possible incentive mechanisms to increase the uptake of outcomes were explored with interviewees. Ideally, commissioners should purchase healthcare according to the health outcomes achieved by providers and drive the process of getting routine outcome measures embedded in practice.However, comments around commissioners included the following: 'Extremely weak in mental health . . .nobody trusts them, therefore nobody will listen to them.'Although commissioners said that they were becoming more proactive in encouraging providers to use HoNOS, this was not the impression obtained from the providers and it was felt that until HoNOS was seen as a 'must-do' by commissioners, they were unlikely to help increase coverage.In contrast, the Healthcare Commission -now the Care Quality Commission (CQC) -was seen as driving the process of outcome collection, particularly if it became a biting target in the Commission's performance management regime.The general view was that a poor set of ratings would reduce the ability to apply for Foundation status and have serious repercussions. On the back of making an Foundation Trust application, many trust boards were strongly promoting the use of HoNOS because of its mandatory collection as part of the MHMDS and their desire to show compliance.One interviewee argued that their trust board had not been interested in outcome measurement prior to the application, but that it had suddenly become a priority.However, once Foundation status had been achieved, it was felt that trust boards had become less interested in continuing to push for outcome measurement implementation.If anything, the Foundation Trust status had weakened the incentives for mandatory routine outcome measurement in trusts. 
Payment by results is a prospective method of financing providers being introduced in the NHS whereby a fixed payment is associated with a particular set of case-mix adjusted activities.If payment by results is in place, healthcare could potentially be purchased by commissioners at a fixed tariff according to the outcomes being achieved.The extension of this method of financing to mental health services remains a key objective for the government.Interviewees felt that if HoNOS were attached to the payment by results tariff as an incentive to increase coverage, completion rates would be extremely high.Those enthusiastic about the widespread introduction of routine outcome measurement said that they would 'surf this wave [payment by results], however evil it might be'.However, others argued it might lead to perverse incentives in collecting accurate outcomes data, particularly given that HoNOS is a clinician-rated tool rather than a patient report measure.Therefore, outcome measurement must be introduced with the aim of improving clinical work and not just as an aid to a financial tool.Finally, one of the words that emerged often when asking about barriers and incentives to outcome measurement was 'culture'. 'People are afraid of others viewing their outcome measuresyou want a culture where people aren't afraid.We're a long way off that.'One trust was encouraging all their clinicians to publish their outcome measures on their website, although there was considerable resistance.It was suggested that peer pressure could also be an effective means of increasing uptake. 'When we make it mandatory for all clinicians to publish their outcome measures on the web, then we will see the ratings being used more consistently and more widely.' Incorporating outcome measurement into the consultant appraisal was seen as another way to encourage ratings.However, many clinicians had concerns over outcome measurement being used for performance management. Advantages and disadvantages of HoNOS Although HoNOS was viewed as one of the best-supported measures available, there were mixed views about its usefulness.Clinicians who used HoNOS were either told to do so by their trust and did not favour it as their instrument of choice, or alternatively were enthusiastic about it.They felt HoNOS was the best validated, tested and socially relevant outcome measure available, rooted in a medical model based on what clinicians regard as important aspects of care, covering symptoms but also other important domains such as occupation, relationships, accommodation and social inclusion.They felt that HoNOS picks up the patient's condition at an acute phase as well as at discharge and is reasonably sensitive.Others felt that HoNOS is a blunt measure and that it underrepresents the user's perspective.In addition, the growth in specialist services in mental health was seen to be at odds with the drive to use broad non-specific measures such as HoNOS, which were deemed to be too general and not sensitive enough to detect change. Ultimately, clinicians would always differ in their views on the instruments they liked or did not like, and it was argued by some that since no consensus could ever be reached, one might as well choose a reasonably wellvalidated instrument such as HoNOS. 'I don't see why we shouldn't draw a line in the sand, and get on with it.' 
Quantitative The MHMDS provided data on HoNOS completion rates for 2004/5 and 2005/6. Table 2 sets out descriptive statistics for providers who completed HoNOS. This shows that 44 providers returned HoNOS scores in 2004/5 but that this had dropped to 37 providers the following year. Coverage, for those who completed HoNOS, also dropped slightly, from 9.6 to 9.4%. Table 3 gives the results of the linear regression model. Results were robust to the various specifications tested. At a 5% level of significance only three variables are significant: admissions, ethnic coding overview and community psychiatric nurse (CPN) seen in previous 12 months. Providers with higher numbers of admissions have lower rates of collection of HoNOS data, suggesting that perhaps clinicians are too busy to complete this task. Higher admission rates may also coincide with larger trusts where size and coordination issues make for lower outcome collection rates. Ethnic coding overview is a variable that essentially measures data quality. It is one of the key targets in the CQC annual health check for mental health providers. The result suggests that providers who meet this performance standard also have a higher number of HoNOS returns. As we might expect, high data coverage on one item of the MHMDS is associated with high data coverage on another. Thus setting a performance target on HoNOS completion in the MHMDS as part of the CQC annual health check might be one way of increasing completion rates. This result concurs with the qualitative findings. The variable 'seen CPN in previous 12 months' indicates that if a patient has been in contact with a community psychiatric nurse in the previous 12 months, that provider will have a higher HoNOS coverage. This result could suggest that trusts with a stronger ability to follow up patients in the community have higher HoNOS coverage. The results of the RESET test indicate that the model is correctly specified and the mean VIF for the model is 1.57, which suggests low levels of multicollinearity. The strategic health authority effects did not alter the results and so are not included. Discussion [13,14] They have surveyed all mental health trusts in England to establish a compendium of outcome measures in mental health services, which outlines the available measures, their advantages and disadvantages, their psychometric properties and copyright issues, although no particular suite of measures has been mandated. 15 The only measure that has been mandated is HoNOS as part of the MHMDS, although collection rates are poor. Ironically, there was still some confusion among providers we spoke to as to whether HoNOS was mandatory or not. There is a lack of clarity within the service about what the minimum requirements actually are. Some trusts were using this perceived lack of clarity as a reason not to push for implementation. The guidance from the Department of Health on HoNOS within the MHMDS needs to be clearer. 
From the Department of Health's point of view the use of HoNOS is not as widespread as it would like.If indeed PROMs is to expand to mental health services, it is likely that HoNOS would receive a greater priority from the Department in the future, given that the measure is currently the most widely used tool in the service.Although HoNOS is clinician-rated rather than a patient report measure, it is conceivable that it will be the most likely contender for any mandatory expansion of outcome measurement; however, it is possible that other conditionspecific and possibly self-report measures might be encouraged for use alongside.If this were the case, HoNOS completion within the MHMDS as a CQC target would be one of the most effective ways to drive forward this process, as suggested by both our qualitative and quantitative results.With the introduction of PROMs and payment by results in acute services, there is a serious concern that commissioners might slowly divest themselves of mental health services because they can more readily see what value for money they are getting in other areas of healthcare. 16Mental health commissioning for outcome measurement needs to become a priority to ensure continued investment. Information technology systems in many trusts are poorly developed and cannot support any kind of routine outcome measurement. 17If the drive for routine outcome measurement is to be pursued in earnest, this needs to be addressed urgently, with additional resources and clear guidance.If trusts are expected to make investments in IT systems in weak local health economies with overspent primary care trusts and retracting budgets, no real progress will be made.At provider level, feedback mechanisms need to be found that can supply timely and useful clinical feedback.A dialogue needs to take place about clinical requirements and informatics capabilities.Management support is imperative to help provide the resources needed to embed outcome measurement into routine practice to support clinical decision-making in the first instance, rather than solely to support management decisions.Top-down drives to enforce routine collection will rarely be effective.From the clinician's point of view this would be seen as a form-filling waste of time, and it would be important to strike a balance by generating clinically useful patient-level feedback for clinicians alongside aggregate data for managers.Similar conclusions can be drawn from the body of literature investigating barriers to implementation of new models of working in healthcare, 18,19 which indicate the need for a shift in the work practices of both clinicians and managers.Finally, and most challengingly, an outcome-oriented culture needs to be developed, driven by the desire to learn about improving service quality rather than by the urge to benchmark, league-table and remove 'failing' services.This culture might, however, incorporate some aspects of mild coercion and peer pressure, provided this takes place in a developmental and learning culture. Funding This research was commissioned and funded by the Office for Health Economics Commission on National Health Service Productivity.R.J. also holds a fellowship from the Department of Health's Research and Development Programme examining performance measurement in mental health services. 
Table 1 Description, source and descriptive statistics of variables included in model CPA, care programme approach; CPN, community psychiatric nurse; HCMHS, Healthcare Commission Mental Health Survey; HES, Hospital Episode Statistics; max., maximum; MHMDS, Mental Health Minimum Dataset; min., minimum; obs., observations; s.d., standard deviation. Table 2 Descriptive statistics for providers who completed Health of the Nation Outcome Scales (HoNOS), pooled for both years and individually for 2004/5 and 2005/6
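For readers who want to reproduce the style of specification checks described in the Statistical analysis section (an OLS linear probability model with VIF, RESET and R² diagnostics), the following Python sketch illustrates the workflow. It is not the authors' code: the synthetic data, the column names and the variable list are hypothetical stand-ins for the MHMDS, Healthcare Commission survey, HES and Hospital Activity Statistics variables, and the `linear_reset` function requires a reasonably recent version of statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.diagnostic import linear_reset

# Synthetic provider-level data (84 trusts, as in the paper's unit of analysis);
# column names are hypothetical, not the variables used in the study.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "admissions": rng.poisson(2000, 84),
    "ethnic_coding_overview": rng.uniform(0, 1, 84),
    "cpn_seen_12m": rng.uniform(0, 1, 84),
    "foundation_trust_applicant": rng.integers(0, 2, 84),
})
df["honos_coverage"] = (
    0.2 - 0.00002 * df["admissions"] + 0.3 * df["ethnic_coding_overview"]
    + 0.2 * df["cpn_seen_12m"] + rng.normal(0, 0.1, 84)
)

X = sm.add_constant(df.drop(columns="honos_coverage"))
y = df["honos_coverage"]                  # share of records with integrated HoNOS

model = sm.OLS(y, X).fit()                # linear probability model via OLS
print(model.summary())                    # coefficients = change in probability

# Multicollinearity check: variance inflation factor for each regressor
vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
                index=X.columns)
print("Mean VIF (excl. constant):", vif.drop("const").mean())

# Specification (RESET) test and goodness of fit
print("RESET p-value:", linear_reset(model, use_f=True).pvalue)
print("R-squared:", model.rsquared)
```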
4,317.6
2010-08-01T00:00:00.000
[ "Psychology", "Economics" ]
Predicting Conserved Protein Motifs with Sub-HMMs Background: Profile HMMs (hidden Markov models) provide effective methods for modeling the conserved regions of protein families. A limitation of the resulting domain models is the difficulty to pinpoint their much shorter functional sub-features, such as catalytically relevant sequence motifs in enzymes or ligand binding signatures of receptor proteins. Results: To identify these conserved motifs efficiently, we propose a method for extracting the most information-rich regions in protein families from their profile HMMs. The method was used here to predict a comprehensive set of sub-HMMs from the Pfam domain database. Cross-validations with the PROSITE and CSA databases confirmed the efficiency of the method in predicting most of the known functionally relevant motifs and residues. At the same time, 46,768 novel conserved regions could be predicted. The data set also allowed us to link at least 461 Pfam domains of known and unknown function by their common sub-HMMs. Finally, the sub-HMM method showed very promising results as an alternative search method for identifying proteins that share only short sequence similarities. Conclusions: Sub-HMMs extend the application spectrum of profile HMMs to motif discovery. Their most interesting utility is the identification of the functionally relevant residues in proteins of known and unknown function. Additionally, sub-HMMs can be used for highly localized sequence similarity searches that focus on shorter conserved features rather than entire domains or global similarities. The motif data generated by this study is a valuable knowledge resource for characterizing protein functions in the future. Background The identification of functionally relevant features in protein sequences is an important task for gaining insight into their molecular and biological activities. Commonly used feature classification systems focus on protein regions of different lengths, ranging from single residues in active site representations and relatively short sequence motifs to much longer protein domains. The identification of these functional modules is often of immediate importance for guiding molecular and evolutionary studies of genes and genomes, such as experimental or computational discoveries of drug targets, catalytic residues and ligand binding sites [1-6]. Due to the greater evolutionary constraints, the functionally important regions in proteins tend to be more conserved among related sequences than their less relevant regions. 
As a result of this basic similarity-function principle, one can predict the functional features in proteins relatively reliably by identifying their conserved regions [7,8]. The same information is often useful to predict differences of the catalytic and substrate specificities within subgroups of protein families by identifying their specificity determining residues [9,10]. Profile hidden Markov models (profile HMMs) provide the basis of very efficient approaches for modeling longer conserved regions in protein families, which are referred to as protein domains [11][12][13][14]. These domain models usually co-align well with longer functional and structural units of proteins, such as protein folds [15,16]. The genome regions coding for protein domains, rather than entire genes, are often considered the functional base units of protein evolution. Because domain models are relatively complex by covering longer conserved sequence areas, the identification of essential sub-features within protein domains can greatly facilitate their functional characterization. Well known examples are the conserved protein motifs from the PROSITE database [17,18]. These much shorter patterns frequently map to residues within protein domains that are directly involved in the core functions of many proteins, such as the coordination of the catalytic centers of enzymes. The most specific and functionally insightful information about known or predicted active sites is provided by protein structure-based resources, such as the Catalytic Site Atlas (CSA), CASTp, ActSitePred, ConSurf and PDBSite [7,[19][20][21][22][23]. The utility spectrum of these structure-based resources is typically restricted to proteins that share sequence similarity with proteins of known 3D structure. This requirement of structure information makes these methods less suited for functional site predictions of many membrane proteins or other difficult to crystallize protein classes. Thus, it is important to develop additional tools that can be used for the prediction of functionally relevant features of all protein classes. Conservation analyses are widely used alternatives for this purpose [8,[24][25][26]. Typically, these methods aim to identify conserved residues in multiple sequence alignments of related proteins. Based on the above principle, these conserved sites tend to be functionally more important than more variable ones. More recently developed approaches incorporate additional information with conservation data, such as secondary structure predictions, solvent accessibility data and other parameters [27,28]. In addition, Mistry et al. [22] have developed a set of strict rules that allows the transfer of experimentally validated active site information to other sequences within the same enzyme family. A disadvantage of most conserved residue approaches is the difficulty of using their data sets without major modifications for search applications in order to identify novel proteins containing these features. The more information rich motif and domain models are usually more effective in this regard. This is also facilitated by the availability of many efficient motif or domain search algorithms in this area. Much of the information available in conserved sequence databases is the direct result of mining the available protein space with existing feature prediction tools. This includes very established databases on protein motif or domain information, such as PROSITE, InterPro and Pfam [2,4,18]. 
However, the annotation and curation process of the conserved features provided by these databases is still a very time consuming and largely manual curation processes by many experts in the field. Therefore, the development of additional functional feature prediction methods, that can facilitate the automation of various steps in this laborious annotation process, will be of great importance for the field. Here we propose an automated method for identifying conserved protein motifs by creating sub-HMMs from custom or existing profile HMM data sets, such as Pfam. The method builds on existing profile HMM domain models and expands their utility spectrum to motif discovery. The approach has many applications for studying protein functions. First, it is useful for predicting the most highly conserved and functionally relevant sequence motifs in protein families. Second, it provides an effective alternative for profile-based similarity searches to detect sequences with short similarities in any order. Finally, it can be used for the characterization of domains of unknown function by associating them with sub-HMMs from functionally characterized domains. The most closely related method for modeling protein families by a fragment-based approach was proposed by Plotz and Fink [29]. Their goal was to minimize the number of parameters used by the model in order to improve its performance on small training sets. To achieve this, the authors started with a signal-like protein sequence representation [30] and trained a new model on this data set. Their model consisted of Sub-Protein Units (SPUs) that were concatenated in an order learned from the data set. Each SPU of this method is an HMM by itself. In contrast to this, our method uses pre-calculated profile HMMs to discover functionally relevant motifs in protein domains. In addition, our method has the ability to allow any combination of sub-HMMs to occur in any order. Another related method is Meta-MEME [31]. This method also minimizes the number of model parameters. It accomplishes this by concatenating short PSSMs (Position Specific Scoring Matrix) instead of HMMs, which are generated by its sister tool MEME [32]. This approach is similar to the BLOCKMAKER program [33], which also models conserved regions with un-gapped PSSMs. Our method differs from these approaches significantly by retaining full HMMs of the most highly conserved sub-regions within protein domain families. This allows us to model more complex consensus regions containing gaps. The method developed by Sun and Buhler [34] attempts to speed up searching with profile HMMs by extracting un-gapped subsections (blocks) of HMMs and then modifying the match distributions in each position to make each block as sensitive as possible. These blocks are then used as pre-filters to eliminate sequences which would not match the whole HMM well. Our proposed protein sub-HMM method starts with a profile HMM that has been trained on the multiple sequence alignment of a protein family. We then extract the most conserved sub-HMMs from the original HMM. A robust scoring method is used to predict the presence of the sub-HMMs in any protein sequence of interest. The HMMs required for this approach can be easily generated from unaligned protein sequences of interest by aligning them with a multiple sequence alignment pro-gram and then generating an HMM for them with tools like HMMER [14,35,36] or SAM [37]. Alternatively, one can use existing protein family HMMs from databases like Pfam [38]. 
The latter approach is taken in this paper for benchmarking the proposed protein sub-HMM method. Results and Discussion A profile hidden Markov model of a sequence family is a statistical model over sequences whose structure consists of a number of states and transitions between states. For each state z there is a distribution, P(x|z), over a set of observations, x ∈ X. In our case, X is the set of amino acids. A transition matrix T(z_1|z_2) defines the probability of transitioning from state z_2 to state z_1. We can view this transition matrix as a graph in which a link exists from z_2 to z_1 if T(z_1|z_2) > 0. Figure 1 shows the structure used for aligning protein sequences [35]. For each nominal position i there are three possible states: a match state M_i, an insert state I_i, and a delete state D_i. P(x|M_i) is a distribution over amino acids occurring at position i. P(x|I_i) is a background distribution, which is the probability of each amino acid occurring given no other information. This state is used to model noise sections in the input sequence. The delete state does not have a real observation distribution; it requires that nothing be observed (an empty observation). This is used to model sections of the input sequence which have been lost. The parameters of an HMM can be learned using the Expectation Maximization (EM) [39] algorithm given a set of observed protein sequences (but not the hidden state sequence), producing a model tuned to this set of protein sequences. Once the model has been trained, we can take another protein sequence, S, and ask what is the most likely sequence of HMM states to generate S, and what is the probability of that combination of states and observations. This is done with the Viterbi algorithm [40]. To rank the results, it is common to calculate the log-odds score: score(S) = log [ P_HMM(S, Z) / P_back(S) ]  (1). In this equation, P_back(S) is the probability of S, assuming each amino acid has been drawn independently from the background distribution, while P_HMM(S, Z) is the probability that the HMM would generate the state sequence Z and the observed sequence S. A positive score means that S is more likely to be derived from the HMM than randomly generated from the background distribution. A more detailed description of profile HMMs can be found in [41]. Extraction of Sub-HMMs Our sub-HMM method is built on top of the well-established profile HMM framework described above. The algorithm consists of a simple but effective two-step procedure for extracting the most highly conserved regions from profile HMMs (compare Figure 2). First, the Kullback-Leibler divergence is calculated for all columns of a profile HMM [42]. Second, after a series of normalization and smoothing steps (see Methods section), the most information-rich HMM regions are excised from the original profile HMMs. The resulting sub-HMMs have the same structure as the original profile HMMs, but they are usually much shorter. Typically, the method will extract several non-overlapping sub-HMMs from a single domain model, especially when its most conserved regions are highly localized and discontinuous. A more detailed outline of the algorithm for extracting sub-HMMs and using them for scoring their presence in protein sequences is described in the Methods section. In the following outline we first describe our sub-HMM experiments and provide several performance comparisons to related tools. Subsequently, we use our tool to find sequences that share short sequence features encoded in our sub-HMMs. 
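As a small numerical illustration of the log-odds score in Equation 1, the Python sketch below compares a hypothetical Viterbi log-probability with the background log-probability of the same sequence. The function names and numbers are illustrative only and are not taken from the paper's Java implementation.

```python
import math

def background_log_prob(seq, background):
    """log P_back(S): each residue drawn independently from the background."""
    return sum(math.log(background[aa]) for aa in seq)

def log_odds(seq, viterbi_log_prob, background):
    """log [ P_HMM(S, Z) / P_back(S) ]; positive => better explained by the HMM."""
    return viterbi_log_prob - background_log_prob(seq, background)

# Toy usage with a uniform background over the 20 amino acids
background = {aa: 1.0 / 20 for aa in "ACDEFGHIKLMNPQRSTVWY"}
seq = "HDELVMK"
viterbi_log_prob = -18.2   # hypothetical value from a Viterbi decoder
print(log_odds(seq, viterbi_log_prob, background))
```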
Properties of Sub-HMMs Sub-HMMs were extracted from Pfam domain families using HMMER2 and HMMER3 models [43]. Pfam 22.0 was used for all experiments, whereas Pfam 24.0 was mainly used in the performance comparisons with HMMER3. This is because Pfam has adopted HMMER3 models only very recently, and at this point many of its families have not been as rigorously tested and curated by experts in the field as in the earlier HMMER2-based releases. Using our new sub-HMM method, we extracted 48,535 sub-HMMs (Table 1) from the Pfam 22.0 database (Pfam-A, Pfam_ls). This database consisted of 9,318 domain profile HMMs with 2,990,695 unique protein sequences associated with at least one domain. Due to the presence of multiple domains in many sequences, the data set contained a total of 4,070,949 family memberships. The length distributions of the original Pfam HMMs and our sub-HMMs for all families are shown in Figure 3. As expected, the sub-HMMs are much shorter than the original Pfam HMMs, with an average length of 17 residues compared to 210 residues, respectively. This has several advantages for the goals of this study. First, the sub-HMMs have a length distribution similar to the size of many known functional motifs, which is essential for predicting features with related properties [17,18]. Second, their shorter length reduces the computation time for scoring a protein. Finally, it reduces the number of parameters, which should improve the accuracy of the detector. Subsequently, we performed several benchmark tests to determine the performance of the new sub-HMM method in identifying functionally relevant sequence features and searching for sequences sharing them. For this, we determined the presence of each Pfam HMM and our sub-HMMs in all protein sequences from the Pfam database by applying the scoring system described in the Methods section. We found that the processing time of our method is comparable to HMMER2. The slightly better time performance of our method by a factor of 1.4 is most likely due to the lower complexity of its sub-HMM models. The sub-HMM method showed comparable time improvements when using it with the HMMER3 software. Cross-Validation with PROSITE and CSA Next, we determined how well the sub-HMM method performed in identifying known motifs that are likely to be of functional relevance. This was addressed by comparing the extracted sub-HMMs from the Pfam 22.0 database with the hand-curated conserved protein motifs from the PROSITE database. If the sub-HMMs are enriched in functionally relevant candidates, then one would expect a high degree of overlap with the motifs from the PROSITE database. This should be the case because the PROSITE motifs are derived from a comparable protein knowledge space as the sub-HMMs generated by this study. The overlaps were determined by comparing the matching positions of the two fragment data sets in their corresponding protein family sequences. For counting overlaps, we used relatively conservative filtering criteria (illustrated in the sketch below): the two fragment models had to have 50% of their matching protein sequences in common and the overlaps had to occur in at least 95% of the common protein members. In addition, we consider a sub-HMM to match only if it has a score of 0 or higher. Furthermore, we compute the probability of this event happening by chance and require that it be less than 0.01. According to these comparisons, 1,055 of the 48,535 sub-HMMs overlapped with 937 of the 1,303 (72%) PROSITE motifs by at least 10% of the length of the shortest fragment. 
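The following Python sketch spells out one way to implement the conservative filtering criteria described above for a single sub-HMM/PROSITE pair. It is an assumption-laden illustration, not the authors' implementation: the data structures, the interpretation of the 50% sharing rule (here taken relative to the smaller hit set) and the per-protein overlap fraction are all simplifications.

```python
def overlapping_pair(subhmm_hits, prosite_hits, min_shared=0.5,
                     min_overlap_members=0.95, min_frac=0.10):
    """subhmm_hits / prosite_hits: dict protein_id -> (start, end) match span."""
    shared = set(subhmm_hits) & set(prosite_hits)
    smaller = min(len(subhmm_hits), len(prosite_hits))
    if smaller == 0 or len(shared) / smaller < min_shared:
        return False                       # fewer than 50% of matching proteins shared

    def fraction_overlap(a, b):
        start, end = max(a[0], b[0]), min(a[1], b[1])
        shortest = max(1, min(a[1] - a[0], b[1] - b[0]))
        return max(0, end - start) / shortest

    n_overlapping = sum(
        fraction_overlap(subhmm_hits[p], prosite_hits[p]) >= min_frac
        for p in shared
    )
    # the overlap must occur in at least 95% of the common protein members
    return n_overlapping / len(shared) >= min_overlap_members

# Hypothetical match spans for three and two proteins, respectively
sub_hits = {"P1": (10, 28), "P2": (12, 30), "P3": (40, 58)}
pro_hits = {"P1": (15, 25), "P2": (14, 26)}
print(overlapping_pair(sub_hits, pro_hits))
```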
The probability of finding ≥937 matches just by chance was estimated to be < 1.6 * 10^-6 (see Methods section for details). Of these 1,303 PROSITE motifs, 958 were associated by Pfam with one or more of its protein families. The number of matching families for varying percent overlaps is shown in Table 2. An example of a matching pair is shown in Figure 4. The full result set is available in Additional file 1: prosite-comp.tar. A similar test was performed for the catalytic residue annotations from the Catalytic Site Atlas (CSA) [19]. This is a database of active site residues from enzymes represented in the Protein Data Bank (PDB). Due to their functional importance, most of these residues are highly conserved within protein families. In our tests, we considered only those sites which are supported by the literature and also mapped to protein domain regions in the Pfam data set. This left us with 4147 sites mapping to 642 proteins. Subsequently, we counted how many sub-HMMs overlapped with these sites and found that 847 sub-HMMs overlapped with CSA residues. These corresponded to 2903 active sites from 546 proteins. Thus, our sub-HMM data set contained 70% of these active sites. The probability of observing ≥2903 overlaps among the two data sets just by chance is < 1.5 * 10^-18. The complete result set of this analysis is available in Additional file 2: csa-comp. The considerable agreement of our method with the PROSITE and CSA data sets indicates that the sub-HMM method is efficient in identifying many of the known functionally important residues in protein families. Therefore, it is reasonable to assume that the novel conserved regions, identified by this study, are a useful resource for characterizing the functional hotspots in protein sequences of known or unknown function in the future. Search Performance Comparisons To compare the sensitivity and selectivity performance of the sub-HMM method with the widely used HMMER2 software, we tested how well each method could recover the members of each domain family from all proteins in the entire Pfam 22.0 database. We used the scores computed for each protein to generate an ROC (Receiver Operating Characteristic) curve for each method (Figure 5). This allowed us to compare the methods without choosing a fixed threshold, which is usually hard to define a priori. In this preliminary test, we used the original Pfam HMMs for the HMMER2 method, and the sub-HMMs extracted by our method from the same Pfam HMMs. As a test sample, all proteins in Pfam were used. This experimental design gives a slight advantage to both methods, because the Pfam HMMs are trained on a representative subset of proteins that overlaps with the total protein set in each family. Despite this limitation, the difference in performance is still meaningful due to the identical starting conditions for both methods. Figure 5 shows the resulting ROC curves for assembling all 9,318 families. The results show that the HMMER2 method has a higher sensitivity at false positive rates less than 0.02, but the sub-HMM method performs slightly better at higher false positive rates. Due to the much shorter profiles used by our method, it is expected to have a higher false positive rate when it is benchmarked against a test data set that is based on the family assignments of complete domain models. We also performed more rigorous comparisons of our method against HMMER2, HMMER3, SAM and PSI-BLAST [44]. Additionally, we tested our sub-HMM method with HMMER3 profile HMMs. 
In this case the sub-HMMs were excised from HMMER3 models and the HMMER3 search tool was used to map and score the individual sub-HMMs to the sequences. We then combined the scores as described in the Methods section. In the following text of this section, the sub-HMM experiments performed with HMMER2 and HMMER3 are referred to as sub-HMM-HMMER2 and sub-HMM-HMMER3, respectively. In all tests we trained the models ourselves by randomly selecting 20% of the members from each protein family, but the training data were not included in the test data sets. HMMER2, HMMER3 and SAM use a multiple sequence alignment for the model building step. Since it was not our goal to test the alignment quality, we used the curated domain alignments provided by Pfam as input to all methods. Although SAM can create its own alignments, we forced it to use the alignments we provided to make this method more comparable to HMMER2 and HMMER3. For PSI-BLAST, we first created multiple sequence alignments for all the training data sets using CLUSTALW. Subsequently, we built PSSMs to search the test data set with PSI-BLAST. For all methods, we compared how well they could recover the remaining 80% in each protein family from the combined set of all test sequences. Due to computational resource constraints, it was not possible to test these methods on all Pfam families. Instead we created two smaller subsets of families, one composed of smaller families and one composed of larger families. The small family set contained 933 families randomly selected from Pfam 22.0 with 10 to 100 members, while the large set contained 1002 families with more than 100 members. In addition, we tested the different methods on the HMMER3-based Pfam 24.0 data set. To maximize the comparability of the results, we selected only families that were available in both Pfam releases and fell into the same size categories. For the small set, we found 899 families in Pfam 24.0 but only 491 of them had fewer than 100 members. (Table 2 caption: The numbers of sub-HMMs that overlapped with PROSITE motifs are listed. The first column provides the relative overlap among the two feature types. The second and third columns contain the number of overlapping sub-HMMs and PROSITE motifs, respectively. The details of the filter settings used in these comparisons are given in the Results and Discussion section. The column TP contains the number of true positives that we identified out of the 958 PROSITE families annotated by Pfam 22.0. The last column TPR gives the corresponding true positive rate.) Since our method is designed to find short sequence similarities, it is expected to have a lower selectivity (higher false positive rate) than the other methods when reassembling family relationships that are based on longer domain similarities. In fact, such a performance characteristic on known family data sets is required for discovering novel conserved fragments in sequences that do not necessarily belong to the same domain family. The latter is the main utility feature of the sub-HMM method. Discovery of Conserved Fragments in Protein Families with Sub-HMMs To evaluate the utility spectrum of sub-HMMs for conserved feature discovery, we determined for each sub-HMM excised from Pfam 22.0 its matching profile against different domain families in the same Pfam release. To define a match, we required a sub-HMM to match at least 50% of the sequences in each Pfam family with a log-odds score of 0 or higher (see the sketch below). 
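A minimal sketch of this family-match criterion, assuming a list of per-sequence log-odds scores for one sub-HMM against one family; the score threshold of 0 and the 50% fraction follow the text above, everything else is illustrative.

```python
def subhmm_matches_family(scores, min_fraction=0.5, min_score=0.0):
    """scores: log-odds scores of one sub-HMM on every sequence of a Pfam family."""
    hits = sum(s >= min_score for s in scores)   # sequences matched with score >= 0
    return hits / len(scores) >= min_fraction    # at least 50% of the family matched

print(subhmm_matches_family([3.2, -1.0, 0.4, 7.9]))  # True: 3 of 4 sequences score >= 0
```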
Table 3 shows how many sub-HMMs from Pfam domains of unknown function (DUFs) matched Pfam families of known function (DKFs) and vice versa. A sub-DUF is defined as a sub-HMM that was extracted from a DUF, whereas a sub-DKF was extracted from a DKF. Interestingly, the sub-DKFs show considerable overlaps with the PROSITE data set, whereas the sub-DUFs do not overlap with PROSITE at all (last two rows in Table 3). The latter is expected because PROSITE focuses on motifs from functionally characterized proteins. This also indicates that our sub-DUF data set contains many novel conserved and potentially functional motifs that are not represented in PROSITE. (Figure caption: the first test (a) considers smaller families with 10 to 100 members, whereas the second one (b) considers large families with more than 100 members; compare Figure 5.) A similar approach was used for constructing networks of Pfam 22.0 families by their common sub-HMM matches. The obtained clusters in this network showed many similarities to the clusters from the Pfam clan database, but also significant differences [3]. The Variation of Information (VI) coefficient [45] for the two network sets was 0.275. This score has a range from 0 to log(9318) = 9.1, with lower scores indicating more similar clusterings. Two small sub-graphs of the sub-HMM-based domain network are shown in Figures 8 and 9. The box in Figure 8 encloses those families which are part of a clan according to the Pfam database. In this case the sub-HMM-based grouping of families agrees almost perfectly with the corresponding Pfam clan assignment. In contrast to this, Figure 9 gives an example of a new cluster of domains predicted by our method. Such differences in the results of the two methods are expected, because the Pfam clans are assembled with a profile HMM to profile HMM alignment method [46] that is fundamentally different from our sub-HMM method. The large number of sub-HMMs matching different Pfam domains indicates the usefulness of our sub-HMM approach for discovering short sequence features that are conserved among different protein domains. Due to their high conservation, an important functional role for many of these features can be expected. (Table 3 caption: The table lists the numbers of sub-DKFs and sub-DUFs which matched, in addition to their source families, other DKF and DUF families. A sub-HMM is considered to have matched a Pfam 22.0 family if it scores greater than 0 on more than 50% of its members. The last column contains the counts of sub-HMMs that also overlapped with PROSITE motifs.) Many of the sub-DKFs will be useful for assigning potential functions to DUFs. A much more comprehensive study on applying our sub-HMM approach to biologically relevant questions will be published in an experimental journal. Conclusions We have developed a simple but effective method for identifying the most highly conserved residues in protein sequences in a fully automated manner. Its design strategy is highly practical and versatile by making efficient use of a well-established bioinformatic infrastructure, such as existing domain databases and profile HMM search tools. In addition, the conserved patterns, identified by this study, are useful for characterizing proteins of unknown function by associating them with those of known function by their common sub-HMMs. Furthermore, the sub-HMM search method appears to be a very effective tool for finding sequences that share only very short sequence similarities with a sensitivity performance similar to HMMER2. 
The possibility to ignore the order of different sub-HMM matches in sequences is another advantage, which will allow the identification of more complex similarity arrangements among otherwise unrelated sequences. Extracting sub-HMMs from Profile HMMs To extract the desired sub-HMMs from a single profile HMM, H, with length H_l, we first compute the Kullback-Leibler divergence (or relative entropy) [42] of each position in the original HMM: h_i = D(M_i || B) = Σ_x P(x | M_i) log [ P(x | M_i) / B(x) ]. Here M_i is the observation distribution of the match state at position i, and B is the background distribution. We normalize h by dividing by the maximum value, so that each position has a value between 0 and 1, and then smooth the values. When s is increased, more positions with low relative entropy will be incorporated into sub-HMMs, resulting in more specific models. Such models will tend to only match very similar protein fragments. Small l values will increase the number of sub-HMMs, whereas the opposite trend is observed for larger l values. An example of these differences is shown in Figure 10. Once the consecutive regions of match states are identified from the original profile HMM, we convert each of them into a sub-HMM. Each sub-HMM has the same structure, transition probabilities, and observation distributions as the corresponding segments in the source HMM. Like the original HMMs, the sub-HMMs begin and end with looped insertion states. Typically, a sub-HMM obtained from this process is identical to a profile HMM trained on the corresponding region of a multiple alignment that was used for generating the original profile HMM. Scoring of Sub-HMM Matches Sub-HMMs can be matched and scored against protein sequences either as single models or as sets of models. When scoring a set of sub-HMMs against a protein sequence S, such as all sub-HMMs extracted from a Pfam HMM, we used a method based on a complete generative model. We hypothesize that the entire protein sequence can be generated according to the following sampling semantics: First, choose the length of the sequence. Then, for each sub-HMM y, sample the starting location from a uniform distribution, and then sample a sequence from y and place it at the chosen starting point. After this is done for all the sub-HMMs, fill in the gaps with samples from the background distribution. This assumes that each of the sub-HMMs generates a portion of the protein sequence, while their order is not important. In addition, we ignore possible overlaps among sub-HMMs. We use the Viterbi algorithm to find, for each sub-HMM, the most likely hidden state sequence and position in S, using a local-local alignment. Let M be the length of S and Y the set of sub-HMMs. Then the resulting score is: score(S) = Σ_{y ∈ Y} score_y(S) − |Y| log M. Here score_y(S) is the score from Equation 1 for HMM y. The term |Y| log M arises from the uniform distribution over positions at which any sub-HMM might begin. We implemented our method in Java and used code from HMMEditor [47] to run the Viterbi algorithm. This score can be computed in time linear in M and the combined lengths of the sub-HMMs. In the ROC performance tests, we scored sequences using sub-HMMs grouped by the Pfam families they were excised from. For all other tests, we scored individual sub-HMMs by using score_y(S) as the final score. PROSITE and CSA Comparisons The overlaps of sub-HMMs and PROSITE motifs were computed by matching them against the domain sequences in each Pfam family. The PROSITE matches were determined with ps_scan [48]. 
To minimize the compute time of these overlap comparisons, we considered only those Pfam and PROSITE sets (families) which had at least 50% of their sequences in common. Among these, at least 95% of the matches had to overlap by variable lengths specified in Table 3. The overlaps with the CSA data set were computed similarly. Due to the short length of the active sites, their positions had to be completely contained in the sub-HMM matches. The probability of a sub-HMM matching with a PROSITE motif by chance was computed as follows. We let q_ij be the probability that a sub-HMM match of length F_j overlaps a PROSITE match of length P_j on a protein of length S_j from a Pfam family i by a fraction of at least x: Then we compute the probability, D_i, that a certain number of overlaps occurs between a sub-HMM and a PROSITE motif within a given Pfam family i. Let F be the set of sequences in a Pfam family and P the set of sequences in a PROSITE family. We define P as the set of all subsets of F ∩ P which contain at least 95% of the intersection: where n = |F ∩ P|. Let p_ij = {q_ij | j ∈ F ∩ P}, then: Since the enumeration of every set in P is time-intensive, we approximate it with an upper bound. Let j* = argmax_j p_ij, then we have: In equation (9), we replace the sum from the previous equation with the sum over the possible sizes of R. For each size, the binomial term gives the number of sets of size k, and the last term gives the probability of a set of size k. However, this bound is often too loose in practice. This is because for large values of p_ij*, the last term in equation (7) makes that term very small, whereas the corresponding term in our bound would still be large. Therefore, we adopt a method of removing extreme outliers to obtain a tighter bound. In the end we have: where n' is the number of elements remaining in the intersection after the outliers have been removed. More details about this method are provided in Additional file 3: prosite_scoring.pdf. We use the Hoeffding bound [49] to upper-bound the likelihood of finding a certain number of PROSITE or CSA overlaps with our sub-HMM data set by chance (that is, if the sub-HMM data set had instead been chosen at random). The Hoeffding bound states that if the random chance of any single test matching is p, then the probability of m or more matches in M tests is less than exp(−2Mt²), where t = m/M − p. For the PROSITE comparisons, matches are only considered if the prior probability is less than 0.01. Therefore, we obtain (again with the Hoeffding bound) a p-value of less than 1.5 * 10^-18 for the probability of our sub-HMMs overlapping these CSA-annotated amino acids by chance. ROC Comparisons For the PSI-BLAST tests, the training sets were aligned with CLUSTALW [50] and then a PSSM was generated using blastpgp with just one round of searching. The test data was then scored by blastpgp using the trained PSSM as a starting point and running for up to 6 rounds. For each sequence, we recorded the maximum log-odds score from all the rounds. For the SAM tests, we extracted the aligned training data from the Pfam database and used them to train the models, forcing SAM to use the given alignments rather than create its own. These models were then used to classify the test data. In the case of HMMER2 and HMMER3, we trained models with hmmbuild and hmmcalibrate (HMMER2 only) using the same alignments as for the SAM tests. In all cases, HMMER2 tests were performed with HMMER2 models and HMMER3 tests with HMMER3 models. 
We then used these models to classify the test data with hmmsearch. If multiple domains were found in one sequence, the result from the best-scoring one was used. For the sub-HMM method, we used the aligned training data to build HMMER2 and HMMER3 models, and then extracted sub-HMMs from them. We then used our hmmsearch implementation to score each sequence according to our model. For all tests, the training sets consisted of a random selection of 20% of the sequences from each Pfam family, while the test database contained the union of the remaining sequences. The ROC curves were computed with the ROCR library [51] using the concatenation of all the scores for each method. Log-odds scores were used for all methods to obtain comparable results. In the case of SAM, we used reverse log-odds scores [52].
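To make the extraction and set-scoring steps described in the Methods above more concrete, the Python sketch below computes per-column relative entropy, normalizes and smooths it with a simple moving average, excises the high-information runs, and combines per-sub-HMM log-odds scores with a |Y| log M position term as described in the Scoring of Sub-HMM Matches section. The smoothing scheme, the threshold parameter s, the minimum-length parameter l and all names are assumptions made for illustration; this is not the paper's Java implementation.

```python
import math

BACKGROUND = {aa: 1.0 / 20 for aa in "ACDEFGHIKLMNPQRSTVWY"}  # uniform toy background

def relative_entropy(match_dist, background=BACKGROUND):
    """Kullback-Leibler divergence D(M_i || B) for one match-state column."""
    return sum(p * math.log(p / background[aa]) for aa, p in match_dist.items() if p > 0)

def extract_subhmm_regions(columns, s=0.5, window=2, l=3):
    """Return (start, end) column ranges to excise as sub-HMMs."""
    h = [relative_entropy(c) for c in columns]
    h = [v / max(h) for v in h]                              # normalize to [0, 1]
    smooth = [sum(h[max(0, i - window):i + window + 1]) /
              len(h[max(0, i - window):i + window + 1]) for i in range(len(h))]
    regions, start = [], None
    for i, v in enumerate(smooth + [0.0]):                   # sentinel closes the last run
        if v >= s and start is None:
            start = i
        elif v < s and start is not None:
            if i - start >= l:                               # l = minimum region length
                regions.append((start, i))
            start = None
    return regions

def combined_score(per_subhmm_scores, seq_len):
    """Sum of per-sub-HMM log-odds scores minus |Y| * log(seq_len) position term."""
    return sum(per_subhmm_scores) - len(per_subhmm_scores) * math.log(seq_len)

# Toy profile: two conserved stretches separated by near-background columns
conserved = {"H": 0.7, "D": 0.2, "K": 0.1}
noisy = dict(BACKGROUND)
columns = [conserved] * 6 + [noisy] * 5 + [conserved] * 6
print(extract_subhmm_regions(columns))          # [(0, 6), (11, 17)]
print(combined_score([12.4, 8.1, 5.0], 350))    # hypothetical sub-HMM scores, M = 350
```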
8,079.4
0001-01-01T00:00:00.000
[ "Computer Science", "Biology" ]
Efficacy of Osimertinib in NSCLC Harboring Uncommon EGFR L861Q and Concurrent Mutations: Case Report and Literature Review The efficacy of first-and second-generation epidermal growth factor receptor tyrosine kinase inhibitors (EGFR-TKIs) in NSCLC patients with the EGFR L861Q mutation has been studied previously. However, there is little evidence on the efficacy of osimertinib in NSCLC patients with uncommon mutations. Here, we report the case of a 68-year-old man with advanced NSCLC with concurrent EGFR L861Q mutation as well as TP53 and RB1 mutations. The patient was treated with osimertinib as first-line therapy and achieved a remarkable progression-free survival of 15 months. His symptoms were significantly alleviated and the dose was well tolerated. The findings of the present study indicate that osimertinib might be a good treatment option for NSCLC patients with the L861Q mutation. INTRODUCTION Epidermal growth factor receptor tyrosine kinase inhibitors (EGFR-TKIs) have revolutionized the therapeutic paradigm for advanced non-small cell lung cancer (NSCLC) with EGFR mutations. Common EGFR mutations account for approximately 75% to 80% of EGFR-mutant NSCLC cases, including exon 19 deletion and the L858R mutation, which has been reported to improve the efficacy of EGFR-TKIs in clinical trials. The remaining EGFR-mutant cases are a highly heterogeneous group of genomic alterations within EGFR exons 18-21 (1,2). Currently, with the widespread use of next-generation sequencing (NGS), an increasing number of rare or atypical EGFR mutations have been identified. However, further insights are required on the efficacy of EGFR-TKIs against advanced NSCLC harboring uncommon EGFR mutations, especially the efficacy of third-generation EGFR-TKIs such as osimertinib. Here, we present the case of a patient harboring the L861Q mutation, who maintained a sustained response to osimertinib. Written informed consent was provided by the patient to use case details and the accompanying images for publication. CASE REPORT A 68-year-old, nonsmoker, male presented with a history of back pain for two months; no family history of tumor was reported. Positron emission tomography-computed tomography (PET-CT) revealed a fluoro-2-deoxy-d-glucose (FDG)-positive lesion in the left middle lung lobe and metastases in multiple bones ( Figure 1A). CT-guided core needle biopsy of the tumor revealed adenocarcinoma with positivity for CK7 protein and TTF-1 staining ( Figure 1B). To identify potentially actionable mutations of the patient, paired NGS-based genetic testing of 1,021 cancer-related genes was performed with both circulating free DNA from plasma and DNA extracted from the leukocytes (Geneplus-Beijing Ltd., Beijing, China). EGFR L861Q mutations (allelic fraction, AF=6.1%) in exon 21 were identified by nextgeneration sequencing (NGS) of the plasma, with concurrent TP53 N239S mutation in exon 7 and RB1 mutations ( Figure 1C). First-line therapy with osimertinib (80 mg daily) was initiated. He achieved stable disease condition with decreasing primary lesions, confirmed based on the Response Evaluation Criteria in Solid Tumors 1.1. The patient also showed significant improvement in terms of back pain and quality of life, and the adverse events were well tolerated. He experienced progressive disease of the right frontal lung lobe, subcarinal lymph node, and brain metastases after a progression-free survival (PFS) of 15.0 months ( Figure 2A). 
Subsequently, the patient presented with severe cough, headache, and back pain. Due to the infeasibility of obtaining additional tissue biopsy, liquid biopsy assessing circulating tumor DNA (ctDNA) by NGS was performed. The AF of the L861Q mutation increased to 73.2%, with TP53 and RB1 mutations and absence of EGFR T790M ( Figure 2B). Subsequently, the patient was treated with pemetrexed and carboplatin plus bevacizumab as second-line therapy. After two cycles of chemotherapy, he experienced significant improvement in headache, cough, and back pain, but experienced fatigue. However, the patient refused to continue chemotherapy because of personal reasons. The last follow-up was in November 2020, after which the patient passed away, with an overall survival of 19 months ( Figure 3). DISCUSSION Here, we describe the clinical efficacy of first-line osimertinib in patients with advanced NSCLC harboring concurrent uncommon EGFR, TP53, and RB1 mutations. The patient benefitted from treatment with osimertinib, with a PFS of 15 months. To the best of our knowledge, this is the first report on the clinical effects of firstline osimertinib in Chinese NSCLC patients carrying the concurrent EGFR L861Q mutation as well as TP53 and RB1 mutations. Owing to the high heterogeneity and low prevalence of NSCLC with rare EGFR mutations, there is no prospective clinical trial data that directly compare different EGFR TKIs, or chemotherapy in advanced patients with uncommon EGFR mutations. Thus, optimal first-line therapy is still undetermined. The L861Q mutation is the second most common mutation have showed the recent advances on the role of EGFR-TKI in patients with uncommon EGFR mutations and considered afatinib or osimertinib as possible first-line treatment options for major uncommon EGFR mutations (15). An important implication of the present case study is the importance of NGS and liquid biopsy in detecting alterations in molecular abundance. Dynamic genetic changes can occur in lung cancers. Biopsy tissue can only provide limited information owing to heterogeneity of the tumor. Furthermore, performing a biopsy is relatively complicated, with some parts being difficult to access, and repeated sampling causes great pain to patients. A previous study showed that dynamic changes in mutation abundance can reflect the efficacy of EGFR-TKIs and that a rapid decrease in mutation abundance predicts a better EGFR-TKI response (16,17). Thus, dynamic monitoring of gene aberrances in ctDNA and generation of an integrated genomic profile from NGS can help tailor targeted treatment options for patients. EGFR-mutant NSCLC patients with TP53 mutations showed inferior response and poor prognosis for EGFR-TKI, especially those with exon 6-8 mutation (18,19). Besides, different categories of the TP53 status have been reported as a prognostic marker for patients with EGFR-TKI therapy (20). TP53 exon 8 mutations demonstrated a role in inferior clinical outcome in patients with the first and second generations of EGFR-TKI, which also confirmed the negative impact in patients with osimertinib (21). A phase III clinical trial evaluating the use of osimertinib in untreated advanced NSCLC patients with concurrent EGFR and TP53 mutations has been registered on the ClinicalTrials.gov website (Identifier: NCT04695925). Concurrent mutations in EGFR-mutant lung cancers may contribute to tumor heterogeneous outcomes and associate with resistance to EGFR-TKI treatment (22,23). 
A study showed that EGFR-mutant patients with concurrent RB1 and TP53 alterations are at particular risk of histologic transformation and inferior response (24). The patient in our case harbored a primary TP53 N239S mutation in exon 7 accompanying the EGFR L861Q mutation, which may have contributed to an inferior clinical outcome with osimertinib. There is little evidence regarding the effects of osimertinib in patients with EGFR mutations who also have concurrent mutations in RB1, TP53, and PTEN. Our patient, harboring concurrent EGFR L861Q, RB1, and TP53 mutations, received first-line osimertinib treatment and achieved a PFS of 15.0 months. Osimertinib may be a therapeutic option for EGFR-mutant patients with concurrent mutations, and further investigations are required in this regard. In conclusion, there is no established standard treatment for patients with uncommon EGFR mutations. Our case shows that osimertinib demonstrated favorable activity in a patient with NSCLC harboring concurrent uncommon EGFR, TP53, and RB1 mutations. In addition, dynamic ctDNA detection has implications for the development of treatment regimens, and the use of ctDNA as a biomarker is promising and may benefit both clinicians and patients. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding authors. AUTHOR CONTRIBUTIONS All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication. Publisher's Note: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. Copyright © 2021 Lin, Chen, Chen, Hu, Guo, Zhang, Lin and Chen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
1,939
2021-09-02T00:00:00.000
[ "Medicine", "Biology" ]
Roles for the IKK-Related Kinases TBK1 and IKKε in Cancer While primarily studied for their roles in the innate immune response, the IκB kinase (IKK)-related kinases TANK-binding kinase 1 (TBK1) and IKKε also promote the oncogenic phenotype in a variety of cancers. Additionally, several substrates of these kinases control proliferation, autophagy, cell survival, and cancer immune responses. Here we review the involvement of TBK1 and IKKε in controlling different cancers and in regulating responses to cancer immunotherapy. Introduction TANK-binding kinase 1 (TBK1) and the homolog IκB kinase (IKK) epsilon (IKKε, originally IKKi) have been studied extensively in relation to their functions in promoting the type I interferon response. Activation of TBK1 and IKKε promotes interferon regulatory factor (IRF3 and IRF7) phosphorylation and nuclear translocation, leading to transcriptional upregulation of type I interferons in the innate immune response [1,2]. More recently, these kinases have been linked with adaptive immunity, autophagy, and oncogenesis (Figure 1) [3][4][5]. TBK1 was discovered based on its interaction with TANK [6]. IKKε was identified as an inducible kinase (IKKi) related to the IκB kinases IKKα and IKKβ [7] and as part of a phorbol 12-myristate 13-acetate (PMA)-inducible kinase complex [8]. While TBK1 knockout is embryonically lethal due to liver apoptosis [9], IKKε knockout is viable but exhibits enhanced susceptibility to viral infection [10] and elevated obesity in response to a high-fat diet [11]. Downstream of cytokines, toll-like receptor (TLR) signaling, and activation of certain oncoproteins, the canonical IKK complex is activated to phosphorylate IκBα, leading to its proteasome-directed destruction and allowing the p50-RelA/p65 dimer to accumulate in the nucleus and drive expression of genes encoding cytokines, anti-apoptotic factors, and other regulatory proteins [12]. The canonical IKK complex is composed of two catalytic subunits, IKKα and IKKβ, along with the regulatory subunit IKKγ (or NEMO, NF-κB essential modulator). In the non-canonical nuclear factor kappa B (NF-κB) pathway, NF-κB inducing kinase (NIK) induces IKKα to phosphorylate p100/NF-κB2, which leads to p100/NF-κB2 processing into the p52 subunit. The p52 subunit forms a dimeric transcription factor with RelB to drive gene expression [12]. NF-κB signaling is strongly associated with cancer through numerous mechanisms [13,14]. IKKε and TBK1 are referred to as non-canonical IKKs as they have sequence homology with the canonical IKKs, IKKα and IKKβ (Figure 2). In the innate immune response, TBK1 and IKKε exhibit functional redundancies, although TBK1 appears to be more important than IKKε. In response to pathogen infection, cyclic GMP-AMP synthase (cGAS) binds cytoplasmic, pathogen-derived DNA to generate cyclic GMP-AMP (cGAMP), which engages STING to recruit and activate TBK1, driving IRF3 phosphorylation and interferon production (see below). Figure 1. Functional effects of the IκB kinase (IKK)-related kinases: in addition to immune responses, IKKε and TANK-binding kinase 1 (TBK1) are important signaling proteins for critical cellular processes associated with cancer; for more information see text (adapted from [19]). In non-cancer diseases, TBK1 has been shown to drive neuroinflammation and autoimmunity [20]. It has been discovered that mutations in TBK1 are associated with several central nervous system (CNS) diseases: amyotrophic lateral sclerosis (ALS), frontotemporal dementia (FTD), normal tension glaucoma (NTG), and childhood herpes simplex encephalitis (HSE) [21].
Gain-of-function mutations in TBK1 underlie NTG, while loss-of-function mutations correlate with ALS, FTD, and HSE. In the case of ALS, evidence suggests that mutant TBK1 leads to aberrant autophagy (see below) and neuroinflammation [22]. IKKε was reported to phosphorylate c-Jun associated with arthritis [23]. Relative to autoimmunity, TBK1 has been shown to promote immune tolerance, at least partly through controlling dendritic cell functions. Work from Sun and colleagues [24] showed that TBK1 functions in dendritic cells to regulate immune tolerance and to suppress autoimmunity. Dendritic cells are the major antigen presenting cells and are required for stimulating the T cell response in an infection. Mechanistically, TBK1 suppresses expression of certain interferon-responsive genes in dendritic cells, which could be one mechanism explaining the immune tolerance effect of TBK1. Interestingly, both TBK1 and IKKε have been shown to inhibit T cell functions [25,26]. In the case of IKKε, evidence is presented that it phosphorylates nuclear factor of activated T cells (NFATc1) to suppress T cell activation [26]. The deubiquitinase CYLD was recently shown to be phosphorylated by IKKε/TBK1 after T-cell receptor (TCR) stimulation [27]. Additionally, TBK1 deletion in dendritic cells enhances response to immunotherapy [24]. By affecting immune tolerance, TBK1 and IKKε could promote tumor progression (see below). Below we discuss mechanisms whereby TBK1 and IKKε drive an oncogenic phenotype and contribute to responses to immunotherapy.
Figure 2. Comparison of the non-canonical and canonical IKKs: TBK1 shares 49% identity and 65% similarity with IKKε; surface views of TBK1 and IKKβ show the corresponding domains, with the ULDs bridging between dimer SDDs in TBK1 but extending away from the opposite SDDs in IKKβ, and with the kinase domains in the IKKβ dimer differently oriented and not forming dimer contacts (the IKKβ structure is drawn from Protein Data Bank ID code 3QA8 [28] and the TBK1 structure from Protein Data Bank ID code 4IM0 [29]). TBK1, TANK-binding kinase 1; IKK, IκB kinase; KD, kinase domain; ULD, ubiquitin-like domain; SDD, scaffold dimerization domain; NBD, NEMO-binding domain. Adapted from [5,29,30]. Cancers Controlled by TBK1 and/or IKKε IKKε was reported to be amplified in breast cancer where its expression is important for survival [31]. Further, it was shown that a subset of triple-negative breast cancer requires IKKε for cytokine production and growth/survival [32]. IKKε is implicated in the glioma oncogenic phenotype, where its expression was found to be upregulated in around 50% of gliomas, independent of grade. Cell-based studies indicate that IKKε is involved in promoting glioma cell survival through upregulation of Bcl-2, via an NF-κB-dependent pathway [33]. Interestingly, it was found that IKKε expression is elevated in around 65% of pancreatic ductal adenocarcinomas and is correlated with poorer survival [34]. IKKε expression in ovarian cancer correlates with poor outcome and promotes resistance to several chemotherapies [35]. Hsu et al. [36] found that IKKε exhibits elevated expression in metastatic ovarian cancer, and the loss of IKKε reduces aggressiveness in ovarian xenograft models. IKKε expression has also been shown to correlate with poor outcome and advanced tumor grade in esophageal squamous cancer [37]. IKKε is reported to be important in KRAS-positive pancreatic models, in which oncogenic KRAS drives disease similar to human pancreatic cancer.
In this model, IKKε can promote growth and survival via Akt and drive nuclear translocation of the zinc finger protein, glioma-associated oncogene 1 (GLI1) [38]. In prostate cancer cell-based and xenograft models, IKKε promoted proliferation and tumor growth along with interleukin 6 (IL-6) expression in a manner dependent on the nuclear accumulation of the transcription factor C/EBPβ [39]. It was found that epidermal growth factor receptor (EGFR) directly phosphorylates IKKε (Y153, Y159) to promote the oncogenic phenotype of non-small cell lung cancer cells [40]. Studies suggest that TBK1 plays an important role in some KRAS mutant cells by promoting cell survival/proliferation [41]. Additionally, TBK1 has an oncogenic role in melanoma [42] and non-small cell lung cancer [43]. One melanoma study focused on cancer resistant to inhibitors of BRAF, a commonly mutated protein in melanoma, and showed that a subset is sensitive to TBK1/IKKε inhibition (compound II, along with other inhibitors) [42]. In a non-small cell lung cancer (NSCLC) study, a subset of cancer cells exhibited sensitivity to TBK1 inhibition, which was correlated with activation of Akt and mTORC1 (mechanistic target of rapamycin complex 1) signaling [43]. TBK1 and IKKε were shown to promote survival of HTLV-1 (human T-cell leukemia virus type 1) transformed T lymphocytes through the maintenance of STAT3 activity [44]. In breast cancer, TBK1 has been shown to phosphorylate the estrogen receptor (Ser305) to promote its activity and drive resistance to the ER antagonist tamoxifen [45]. Control of IKKε and TBK1, and Downstream Signaling The activities of IKKε and TBK1 are regulated by a number of posttranslational modifications. Phosphorylation of TBK1 at Ser172 is known to strongly promote TBK1 activity, and Ma et al. [46] described structural studies supporting a model whereby TBK1 activation involves trans-autophosphorylation (Ser172). Cohen and colleagues have proposed that there is a distinct upstream kinase for TBK1 [47]. Initially, it was found that IKKε (IKKi) is inducible by inflammatory cytokines and lipopolysaccharide (LPS) [7]. Interestingly, IKKε activity is regulated by K63-linked polyubiquitination controlled by the cellular inhibitor of apoptosis proteins cIAP1 and cIAP2, and the E3 ubiquitin-protein ligase, TNF receptor-associated factor 2 (TRAF2) [48]. The authors indicate that this is required for the transforming potential of IKKε. Tu et al. [29] and Larabi et al. [30] reported the structure of TBK1 dimers and the control of activity by K63-linked polyubiquitination, similar to that reported for IKKε [48]. Also, it was reported that TBK1 is sumoylated at K694, near the C-terminus, to promote its antiviral activity [49]. As discussed below, TBK1 is likely to exhibit activation and substrate-specific interactions via regulation controlled by subcellular localization via interaction with distinct adaptor proteins [19]. NF-κB signaling is mediated downstream of several IKKε/TBK1 substrates. For example, it was reported that IKKε phosphorylates TRAF2 on Ser11 and lead to its K63-linked ubiquitination to promote NF-κB activity and mammary cell transformation [50]. Hutti et al. [51] showed that IKKε phosphorylates the tumor suppressor CYLD (Ser418) to suppress its deubiquitinase activity and to promote oncogenesis. IKKε was proposed to phosphorylate and inhibit the transcription factor Foxo3a as an oncogenic mechanism [52]. 
TBK1 was shown to promote cell survival through its ability to activate NF-κB (controlling Ser536 phosphorylation of RelA), with subsequent upregulation of PAI-2/serpinB2 and transglutaminase 2 [53]. Jin et al. [54] showed that TBK1 can phosphorylate NIK to inhibit noncanonical NF-κB activation, in relation to the immunoglobulin A (IgA) class switch in B cells. Our group previously showed that IKKε controls cancer-associated RelA Ser536 phosphorylation [55]. Harris et al. [56] reported that TBK1/IKKε can phosphorylate the cRel NF-κB subunit to promote nuclear accumulation. While not studied in cancer, it was reported that IKKε can phosphorylate RelA at Ser468 to promote transactivation potential [57]. Subcellular Localization and Target Specificity As described above, innate immune signaling activates STING (bound by cGAMP) to recruit both TBK1 and IRF3 to drive phosphorylation of IRF3, such that loss of STING expression blocks this specific activity of TBK1. Thus, STING is a critical adaptor to appropriately direct TBK1 to the biologically relevant substrates. Goncalves et al. [58] analyzed scaffold proteins important for TBK1 and IKKε activity. TBK1 and IKKε are known to interact with three adaptors: TANK, Sintbad and NAP1. This group found that there is a mutually exclusive interaction between the kinases and these adaptors, and that the adaptors are found in distinct subcellular locations. Binding of TBK1 to these adaptors was mapped to the coiled-coil region 2 of the TBK1 C-terminus ( Figure 2). In response to viral infection or to cytoplasmic double-stranded DNA, it was found that TBK1 activation was dependent on its interaction with TANK. Specific roles for Sintbad and NAP1 as TBK1 and/or IKKε adaptors need further elucidation. The concept of distinct subcellular localization, due to interactions with critical adaptor proteins, should always be considered relative to regulation of TBK1 or IKKε activity in distinct cell types and under different stimulatory conditions. Helgason et al. [19] have reviewed the concept of subcellular localization in controlling TBK1 activity. Autophagy Regulation by TBK1 Autophagy promotes clearance of damaged cellular proteins and organelles and is a key source of energy and survival for cells under stress [63]. Importantly, autophagy is critical for clearance of pathogens in the innate immune response. Additionally, autophagy has been studied extensively for its involvement in a variety of diseases, including cancer where many (but not all) cancers appear to depend on autophagy for tumor cell growth [64][65][66]. TBK1 has been reported to be involved in autophagy at several levels, consistent with its role in the innate immune response. It was shown that phosphorylation of the autophagy receptor optineurin (Ser177) by TBK1 promotes clearance of intracellular bacteria [67,68]. Furthermore, phosphorylation of p62/SQSTM1 on Ser403 by TBK1 controls autophagy as well as autophagosomal engulfment of mitochondria [69,70]. Interestingly, p62/SQSTM1 has been linked with cancer development and progression [71]. Additionally, Prabakaran et al. [72] reported that the phosphorylation of p62/SQSTM1 on Ser403 by TBK1 limits the innate immune response through degradation of STING. In this role, activation of TBK1 through the STING pathway ultimately feeds back to limit the duration of the innate immune response. Kimmelman and colleagues have reported that autophagy is important in sustaining pancreatic cancer growth [73,74]. 
The latter study indicated both cell-autonomous and non-autonomous roles for autophagy in this cancer. Using a model of pancreatitis, it was found that loss of ATG5 to block autophagy promoted activation of TBK1 in vivo, which led to enhanced T cell infiltration and programmed death-ligand 1 (PD-L1) upregulation [3]. Upregulated TBK1 activation also led to enhanced expression of IL6, CCL5, and other neutrophil and T cell chemotactic cytokines. Treatment with the TBK1/IKKε/Janus kinase (JAK) inhibitor CYT387 inhibited autophagy and suppressed PD-L1 expression, blocking KRAS-driven pancreatic dysplasia. Thus, TBK1 appears to activate basal autophagy in this model; however, pTBK1 is also degraded by autophagy to prevent excessive TBK1 activity [3]. Others found that autophagy markers LC3 and p62 were elevated in a KRAS-positive pancreatic cancer model where TBK1 was deleted [75]. The interplay between TBK1 and autophagy is relatively clear in pathogenic responses but appears quite complex in cancer. Promotion of KRAS-Induced Oncogenesis and Control of Akt Activating KRAS mutations occur frequently in cancers and are usually drivers of tumor initiation and cancer progression [76]. White and colleagues reported that RalB/Sec5 activated TBK1 to promote cancer cell survival in a study focused on Ras-induced transformation [77]. Interestingly, RalB/Sec5 are involved in the innate response and TBK1 activation [77,78]. Barbie et al. [41] reported that TBK1 knockdown induced cell death in a panel of KRAS-positive cancer cells, with evidence that TBK1 drives pro-survival signaling through an NF-κB pathway involving c-Rel and BCL-XL. Subsequently, the same group showed that TBK1 and IKKε promote the KRAS-driven tumorigenic phenotype through regulation of CCL5 and IL6 [79]. The compound CYT387, a JAK/TBK1/IKKε inhibitor, blocked cytokine signaling and suppressed KRAS-driven lung tumor growth. Muvaffak et al. [80] reported that TBK1 knockdown/inhibition did not affect survival using a panel of KRAS-positive cells, even though in some cells TBK1 inhibition blocked IRF3 phosphorylation. One consideration relative to these studies is that potential redundancy with IKKε was not considered. Ou et al. [59] showed that TBK1 promotes Akt activation in cancers but not downstream of insulin, with evidence of direct phosphorylation of Akt. Additionally, Guan and colleagues reported that IKKε and TBK1 can activate Akt by direct phosphorylation on both Thr308 and Ser473 [81]. However, White and colleagues reported that the ability of TBK1 to support Akt activation is context dependent [43]. For example, TBK1 promotes Akt/mTORC1 in response to amino acid addition in starved cells. shRNA kinome screening identified TBK1 as a therapeutic target for human epidermal growth factor receptor 2 (HER2)-positive breast cancer [82]. In this case, TBK1 did not regulate Akt phosphorylation in HER2+ cancer cells. Instead, TBK1 was shown to regulate RelA/p65 phosphorylation, and TBK1 downregulation led to cell-cycle arrest and the upregulation of p16Ink4a. TBK1 inhibition plus lapatinib (an EGFR family inhibitor) strongly blocked xenograft tumor growth. In KRAS-positive cancers, TBK1 involvement in regulating Akt/mTORC1 was found in cells with a mesenchymal phenotype [43]. A recent paper potentially explains these findings, identifying TBK1 activation downstream of endogenous retroviruses that are activated in the mesenchymal state, in association with oncogenic KRAS and other contexts [83].
Based on this, it is likely that conclusions regarding TBK1 sensitivity in KRAS-positive cells, as well as other cancers, will be dependent on factors that relate to the differentiation status of the cells in question. TBK1 and IKKε Control of mTORC1 and Metabolism mTORC1 controls metabolic processes and promotes oncoprotein-induced cell proliferation and survival through phosphorylation of key substrates including factors that promote protein translation [84]. Yu et al. [25] established TBK1 as a regulator of the Akt-mTORC1 signaling axis. mTORC1 is activated downstream of Akt via the ability of Akt to phosphorylate and inactivate TSC2 (an inhibitor of mTORC1) and via direct phosphorylation of the mTORC1-associated protein PRAS40 (Figure 3). It was reported recently that TBK1 can directly phosphorylate mTOR, in the mTORC1 complex, at Ser2159 to positively regulate its activity [85]. This mechanism is involved in IRF3 nuclear translocation and IFNβ production. However, Hasan et al. [86] showed that chronic innate immune activation suppresses mTORC1 via TBK1 activation, and Kim et al. [87] showed that overexpression of TBK1 suppressed mTORC1. Consistent with these overall findings, TBK1 was shown to associate with components of the mTORC1 complex [43]. Given that mTORC1 is known to suppress autophagy and TBK1 is known to promote autophagy, one would argue that TBK1 should suppress mTORC1 [88]. It is interesting to speculate that, under conditions where TBK1 positively regulates mTORC1, mTORC1 may not function to suppress autophagy.
It is known that IL-1 activation of mTORC1 regulates Th17 cell survival and effector functions. Gulen et al. [89] found that IKKε phosphorylates Ser21 of GSK3α (glycogen synthase kinase 3-alpha), blocking the ability of GSK3α to suppress Akt activation. Thus, IKKε has a role in Th17 cell function to promote Akt and mTORC1 activation. IKKε is also critical in IL-17-dependent signaling by phosphorylating Act1 (a key interacting protein for the IL-17 receptor) on Ser311 [90], and it promotes the maintenance of Th17 cells by phosphorylating GSK3α at Ser21 [89]. More studies need to be performed to address the complex interplay between TBK1/IKKε, Akt, and mTORC1. A hallmark of cancer is that tumor cells maintain rapid growth by switching to glycolysis for a needed supply of energy and macromolecules [91]. Importantly, glucose uptake is elevated in cancer and this appears to be controlled by increased expression of glucose transporters. It was reported that upon activation of RalA, TBK1 phosphorylates the exocyst protein Exo84, which leads to translocation of the GLUT4 glucose transporter to the cell membrane [92]. Thus, TBK1 is involved in insulin-stimulated glucose uptake. Interestingly, TBK1 can phosphorylate the insulin receptor (Ser994) to block the activity of the receptor, potentially leading to insulin resistance [93]. Other data link TBK1 with regulation of cell metabolism. Zhao et al. [94] have studied the involvement of TBK1 in adipocytes in animals fed a high-fat diet, showing that knockout of TBK1 in adipocytes blocked high-fat diet-driven obesity. TBK1 is proposed to directly inhibit AMP-activated protein kinase (AMPK) activity via phosphorylation and thereby block respiration and increase energy storage. Interestingly, activation of AMPK under catabolic conditions was found to increase TBK1 activity via direct phosphorylation by ULK1 (AMPK-activated unc-51 like autophagy activating kinase 1). Thus, TBK1 could affect cellular respiration and the switch to the glycolytic pathway. Along a different line, AMPK was recently found to phosphorylate and stabilize the tumor suppressor TET2 (tet methylcytosine dioxygenase 2) [95]. In that study, elevated glucose blocked AMPK activity, leading to destabilization of TET2 and reduced 5-hydroxymethylcytosine, related to DNA methylation status. It is interesting to speculate that the ability of TBK1 to negatively regulate AMPK could lead to reduced TET2 and associated epigenetic changes. IKKε was shown to be overexpressed in around 80% of pancreatic tumors [96]. That study showed that loss of IKKε in pancreatic cancer cell lines reduced cell growth in association with reduced glucose consumption and reduced expression of genes associated with glucose metabolism, which may be related to the downregulation of c-Myc induced by suppression of IKKε.
It was proposed that the ability of IKKε to activate Akt (see above) leads to subsequent Akt-dependent phosphorylation of GSK-3β at Ser9, which is inhibitory. GSK-3β phosphorylation of c-Myc is proposed to destabilize c-Myc [97]. TBK1 and Antitumor Immunity As described above, TBK1 regulates autoimmunity, at least partly through control of dendritic cell function [98]. While immune tolerance is important for prevention of autoimmune disease, it is also thought to be important in overcoming immune responses against neoplastic lesions. Using both thymoma and melanoma models, it has been shown that specific deletion of TBK1 in dendritic cells leads to reduced tumor growth and increased survival. This is associated with enhanced T cell infiltration into tumors [24]. Barbie and colleagues used organotypic tumor spheroids (which retain lymphoid and myeloid components) to implicate both TBK1 and IKKε in promoting resistance to anti-PD-1 therapy [99]. In a colorectal tumor model, Jenkins et al. [99] showed that PD-L1 inhibition combined with TBK1 inhibition (compound 1) led to a stronger anti-tumor response than either treatment alone. This group argues that the effect of TBK1/IKKε is at the level of T cells, where IL-2 and IFNγ are increased with inhibition, and at the tumor cell level, where TBK1/IKKε inhibition leads to decreased C-C motif chemokine ligand 5 (CCL5) and IL-6. Consistent with this overall model, TBK1 was identified (among others) in a CRISPR screen as promoting resistance to immunotherapy in a melanoma tumor model [100]. The mechanisms for TBK1 and IKKε expression/activation in tumors are likely to be varied, and the link with STING is not clear. Findings by Cañadas et al. [83] suggest that TBK1 is activated downstream of retroviruses in mesenchymal subpopulations co-expressing STING (see above). Interestingly, STING expression is often reduced or lost in many cancers, suggesting that STING is not driving TBK1/IKKε activation widely in cancers [101]. STING is detected in human pancreatic ductal adenocarcinomas and in colorectal cancer, but there is a significant loss of STING in advanced disease [101,102]. STING is activated in antigen-presenting cells (APCs), as well as stromal T cells, and APCs are known to play a critical role in the effects of STING agonists being tested for antitumor effects [103]. DNA Damage and Cancer: Is TBK1 Involved? Cytoplasmic DNA is sensed as a danger signal related to pathogen infection, as described above. cGAS is bound by cytoplasmic DNA to generate cGAMP, which engages STING to recruit TBK1 to phosphorylate IRF3, leading to interferon production. Interestingly, cGAS can be activated by self-derived DNA, including mitochondrial DNA, endogenous retroviral DNAs, and micronuclei generated from DNA damage [104,105]. Importantly, chromosomal instability in cancer, which leads to cytoplasmic DNA, drives a cGAS-STING pathway to promote metastasis, although a link with TBK1 activation (as measured via phosphorylation of IRF3 and interferon production) was not detected [106]. The authors provided evidence that noncanonical NF-κB activation is associated with these mechanisms and is important for the metastatic potential. Interestingly, it is known that STING activation can lead to non-canonical NF-κB activation in a manner independent of TBK1 [1]. Studies on cancer-associated chromosomal instability and STING signaling need to include methods other than measurement of IRF3 phosphorylation in order to conclude that TBK1 is not involved.
Therapeutic Potential and Future Considerations While TBK1 and IKKε inhibitors have been developed and used in several cell-based studies and animal models, no clinical trials have been initiated related to cancer [20,[107][108][109]. Louis et al. [110] showed that TBK1 inhibition with a small molecule inhibitor (WEHI-112) blocked inflammatory arthritis in antibody-dependent models of this disease. A clinical trial was initiated with amlexanox, an inhibitor of IKKε and TBK1, in patients with diabetes and the results showed improved glucose control in the treated patients [111]. One preclinical cancer study involved the use of BX795, an established but not specific TBK1 inhibitor, in blocking oral squamous cell carcinoma xenograft growth [112]. Compound 1, a dual TBK1-IKKε inhibitor (with preference for TBK1) was shown to potentiate anti-PD-L1 therapy in a xenograft model [99]. Eskiocak et al. [42] found that melanomas that are resistant to BRAF/mitogen-activated protein kinase kinase (MEK) inhibitors are sensitive to a TBK1/IKKε inhibitor (compound II). Future cancer studies need to consider potential effects of TBK1/IKKε inhibitors on the likely suppressive effects related to pathogen clearance and on immune system function, such as immune tolerance. An additional concern relates to whether targeting TBK1 or IKKε uniquely would be more (or less) effective for a particular cancer. In this regard, more studies need to be performed to address redundant as well as unique functions for these two related kinases, for a better understanding of the biology associated with TBK1 and IKKε and also to consider potential adverse effects of drug intervention. Proteolysis-targeting chimera (PROTAC) has emerged as a technology that can be used to target an "undruggable" protein for degradation. PROTACs contain one moiety that binds an E3 ligase linked with another moiety that binds to the target protein. The induced proximity results in ubiquitination and subsequent degradation of the target. Recently, Crews and colleagues demonstrated that a PROTAC directed to TBK1 can specifically degrade TBK1 in cells while not affecting the IKKε [113]. Thus, utilization of a TBK1 PROTAC could functionally dissect roles of TBK1 from those of IKKε, and also may prove therapeutically beneficial. Funding: The authors' work is funded by NCI grants R35CA197684 (ASB) and 1R01CA211732 (QZ).
6,995.8
2018-09-01T00:00:00.000
[ "Biology" ]
An Analysis of the Network Selection Problem for Heterogeneous Environments with User-Operator Joint Satisfaction and Multi-RAT Transmission The trend in wireless networks is that several wireless radio access technologies (RATs) coexist in the same area, forming heterogeneous networks in which the users may connect to any of the available RATs. The problem of associating a user to the most suitable RAT, known as the network selection problem (NSP), is of capital importance for the satisfaction of the users in these emerging environments. However, the satisfaction of the operator is also important in this scenario. In this work, we propose that a connection may be served by more than one RAT by using multi-RAT terminals. We formulate the NSP with multiple RAT association based on utility functions that take into consideration both the user's satisfaction and the provider's satisfaction. As users are characterized according to their expected quality of service, our results exhaustively analyze the influence of the user's profile, along with the network topology and the type of applications served. Introduction and Motivation In the realm of wireless communications, a user nowadays has different possibilities of connectivity: 3G, 4G, WiFi, and WiMAX, among others. Very often, the access points (APs) or base stations (BSs) that provide these radio access technologies (RATs) coexist in the same location and, consequently, their connectivity services overlap. This scenario is known as a heterogeneous network (HetNet) and has received increasing attention in the past years [1,2]. On the other hand, the satisfaction a user perceives when using any application is directly related to the quality of service (QoS), which is consequently a decisive factor for successful network exploitation [3][4][5]. Within this paradigm, we are interested in the network (or RAT) selection problem (NSP) with QoS, in which each terminal (user) is assigned to the most suitable available network complying with the QoS requirements. A more complete analysis is to contemplate both the user's and the operator's satisfaction in multiuser scenarios, as in [9,11,13,20]. Nevertheless, this approach has received much less attention than those mentioned above. In [9], the authors address the NSP although they do not use a unified framework, as they study user's and operator's satisfaction by means of two separate algorithms. In [11], the NSP is formulated as the maximization of an objective function that includes the user's and operator's utility function, and the results show the sum-utility achieved. In [13], user's and operator's satisfaction is based on monetary criteria (bid and price). However, this scheme relies on the so-called "operator's reputation rating," which is provided by the operator itself and may be a source of unreliability. In [20], new metrics are proposed to evaluate user's satisfaction and system performance for multicast transmissions.
A common feature of all the abovementioned works is that the user terminal associates with only one RAT/AP. Recently, the multi-RAT technique has been proposed for heterogeneous scenarios so that the user equipment can transmit/receive its data over multiple RATs/APs [21,22]. These terminals are known as multimode terminals (MMT) [23]. Additionally, multi-RAT parallel transmission from more than one BS/AP naturally achieves some degree of load balancing at the expense of a theoretical increase in complexity. In this paper, we contemplate the possibility that MMTs have the capability to support multi-RAT parallel downlink communication (see, e.g., [21,24]). In this work, we incorporate the use of multi-RAT transmission into the scenario where both the user's and the operator's satisfaction are explicitly considered. In this context, we investigate how different user profiles may affect the performance of the system, not only in terms of utility but also in terms of metrics significant for heterogeneous networks such as bit rate and BS load. The NSP is solved following three different approaches to achieve some degree of load balancing in a natural way, so the use of specific convex/concave utility functions can be avoided [25]. Among the references mentioned above, this work is closest to [11]. Nevertheless, we here consider multi-RAT transmission and explore explicit metrics and results that show the user's and the operator's satisfaction, different performance according to the user-profile distribution, and different performance for distinct network topologies. The rest of this paper is organized as follows. Section 2 provides a review of the related literature and distinguishes the most usual approaches, namely multiobjective optimization and game theory. In Section 3, we present the system model, which includes the users' characterization and the definition of utility functions. In Section 4, we provide the mathematical formulation for the NSP. Performance evaluation of the developed options is presented and analyzed in Section 5. The paper ends with our conclusions. Related Work Network selection concerning users and providers for HetNets has been examined mainly from two viewpoints, namely, multiobjective optimization and game theory. While multiobjective functions allow for different (and frequently opposite) criteria of interest for the actors of the network (base station, access point, operator, and users), game theory resolves the conflicting interests under a selfish perspective of the participants. In this section, we explore the relevant literature in these two areas. Multiobjective Optimization for NSP. Multiobjective optimization (also known as multicriteria optimization) is a mathematical framework to solve problems with multiple objectives or criteria. As in multiobjective optimization (MOO) the objectives are incorporated into the objective function, the results are more aligned with the objectives than when these objectives are formulated as constraints in the optimization problem. MOO has been applied in many engineering and economics-related fields and is receiving increasing attention for the study of wireless communication problems with opposed criteria [26].
In [9], the NSP is formulated as the maximization of a new multiobjective function that includes several criteria that collect both user preferences and network status (link quality, cost, battery lifetime, mobile speed, and network load). Instead of constructing the multiobjective function as the weighted sum of the individual criteria utilities, the authors propose an exponential-weighted product utility function, as they consider that if a given criterion is set to zero, the total utility must reflect the importance of this fact. They also propose how to integrate the mechanism into the MIH (Media-Independent Handover) framework defined in the IEEE 802.21 standard. However, it is difficult to determine the benefits of this work, as user's and operator's satisfaction are evaluated separately. In their paper, Kosmides et al. [11] address the NSP from a service utility-based perspective, as the user's and the operator's utility are a function of the provided service (voice, video, or web). As video and web are considered soft services, in the sense that different rates can be accommodated, the utility function of these two services has two components, the user's utility and the operator's utility. The contribution of each component is weighted in a complementary way by means of the parameter λ, with λ ∈ [0,1]. In this model, users are characterized depending on their expected satisfaction level, and this characterization conditions the operator's utility in the form of the user's willingness to pay. To solve the NSP, the authors propose two heuristic algorithms, the Greedy algorithm and the strip packing algorithm. Both algorithms have the same asymptotic complexity, polynomial in the number of access points and the number of users, yet they perform differently depending on the chosen metric. In [27], the authors propose two models that include battery consumption, quality of service, and monetary cost. The first model is based on TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) and incorporates a vote mechanism to reduce user subjectivity. The second model relies on the data envelopment analysis (DEA) methodology with the goal of achieving a good user experience with no explicit requirements. This scheme accomplishes a reduction in complexity with respect to the TOPSIS scheme. In [28], the authors propose an access network selection mechanism based on the IEEE 802.21 standard. In this model, the user's utility evaluates fairness among users and the suitability of the currently selected network. The provider's utility is based on the total income. The overall network selection process is implemented by means of a genetic algorithm. Game-Theoretic Approach for NSP. Game theory is a mathematical tool that is suitable to model situations where the involved participants or players (users and base stations in our case) compete selfishly for the resources. The combination of strategies (or decisions) incorporating the best strategy for every player is known as an equilibrium, which is a Nash equilibrium if no player can increase his/her utility by unilaterally changing his/her strategy [29]. Hence, game theory can be straightforwardly applied to the NSP. For the interested reader, [30] provides an excellent survey of the literature on the general problem of network selection based on game theory prior to 2010.
In [31], the proposed game-theoretic network selection framework integrates service differentiation by considering distinct utility functions for brittle, partially elastic, and elastic traffic from the user's perspective, similarly to [11]. The objective is to maximize the sum-utility, defined as the sum of the users' utilities. To this end, a local improvement algorithm (LIA) is proposed. The key point of LIA is the use of localized cooperation; that is, two networks/APs whose coverage areas overlap exchange information. LIA takes the form of a user-network association game in which the pairs of overlapping networks are the players and the original problem is decomposed into a sequence of multiple subproblems. However, the provider's profit is not explicitly contemplated, as only social welfare is taken into consideration. In [13], a modified version of the first-price sealed-bid auction is designed to solve the NSP. In this auction, the users buy the connection service from the network operators that provide it. The users establish their preferences by means of the desired buying price, which is based on the operator's reputation rating (level of success in delivering service according to a given QoS). As the reputation rating is provided by the operator itself, it may be a potential source of unreliability. The operator's strategy is based on the price offered by the users, the bidding price, and a penalty according to the reputation rating, and the utility is equal to the difference between the bid and the cost. For the case of two operators, which is a relevant case in practical systems, they find a closed form for the equilibrium bidding strategy functions. Following also a game-theoretic approach, Cao et al. [32] present a two-layer framework that involves both the service provider's benefit and the user's satisfaction. The two-layer algorithm consists of an intranetwork game, where the service providers (SPs) determine their prices and rates for users, and an internetwork game for network selection, where users are associated with SPs according to the prices and rates resulting from the previous stage. The second game is addressed from a social-behavioral viewpoint, as the authors formulate it as a hedonic game which considers preference rules to implement behavioral restrictions. In the same line, the work of [24] proposes a two-level algorithm, where in the user-level game users compete for the providers' bandwidth and in the provider-level game providers establish bandwidth prices. For HetNets that include C-RAN (cloud radio access networks), inter-tier interference mitigation is studied in [33]. The key idea of this scheme is the use of contract-based utility functions for the base stations (macro base stations and remote radio heads, RRHs), with these utilities being a function of the sum-rate received by the users. A significant contribution of this paper is the consideration of imperfect channel state information (CSI) for the contract design.
System Model In this paper, we study the network selection problem (NSP) in heterogeneous networks, considering that one operator offers network connectivity services to a set of users by means of a set of different radio access technologies (RATs). We consider that the users' terminals are multi-RAT (also known as multistandard or multimode) [34], that is, terminals that can connect to more than one radio technology and support connectivity in integrated heterogeneous environments. Each user i reports his/her bandwidth (bit rate) requirement r_{i,max} when he/she requests a connectivity service for applications such as voice or video streaming. We assume that our model targets mobile communication networks where users have very reduced mobility, such as airport hot-spots, restaurants, and cafés, where no horizontal handover management is required. The operator offers connectivity services to the users from a finite set of data rates; that is, the solution to the NSP is to optimally assign rates from this set to users. As each RAT has distinct characteristics, each RAT j has its own rate set R_j, and we denote by R_j^max the maximum bit rate that RAT j can deliver to a user. User Profiles. For variable bit rate services (web and video), different user profiles are considered according to the user's expectation about the quality of service (QoS) he/she will receive and the benefit the provider may obtain for the service [11]. Some users have low expectations about the QoS, which is the case for users with low-quality service contracts or users who are not worried about the quality of the service. These users are labeled as low-expectation users (LEUs). On the other hand, other users who are paying for high-quality services are generally not tolerant to service degradation and can be characterized as high-expectation users (HEUs). Finally, the rest of the users, having expectations between these two cases, are considered medium-expectation users (MEUs). The utility functions corresponding to the different profiles are presented in the following subsection. For on-off services such as voice, where the service is provided only if the bit rate exceeds a given threshold rate, the utility function is the same irrespective of the user profile, as also indicated in the following subsection. Utility Functions.
In this section, we introduce the utility functions to be used, defined in [11]. Let f(r_{j,i}, r_{i,max}) denote the user's satisfaction function and g(r_{j,i}, r_i) the RAT's benefit function, where r_{j,i} represents the rate assigned by RAT j to user i and r_i stands for the data rate requested by user i. Three different utility functions based on f and g are defined, depending on whether the service requested by user i is voice, web, or video (equations (1)-(3)). In (2), r_{i,min} is the minimum data rate requested by user i for the web service, and the parameter λ weights the two components of the utility to favor one over the other. Note that for λ = 1 only the user's satisfaction is considered, while for λ = 0 only the operator's satisfaction is pertinent. In (3), u_min represents the minimum utility value for the video service. Mathematically, the user profiles correspond to different utility functions for both f and g. With respect to g, the general form is given in (4), where a profile parameter takes the values 1, 2, and 4 for LEU, MEU, and HEU, respectively [35]. Regarding f, the utilities for MEU and HEU are modelled using a sigmoid function because of its mathematical properties (monotonicity, convexity) and its capacity to implement the decreasing marginal utility of user satisfaction, whereas for LEU a logarithmic function reflects the desired behavior; the corresponding expressions are given in (5). In (5), steepness parameters control the sharpness of the utility curves for HEU and MEU, while center parameters determine the center of the utility curves. Note that (i) the larger the center value, the smaller the range of rates where users are risk-averse; and (ii) the larger the steepness, the sharper the curve. To meet the expected user behavior, the utility functions (1)-(5) have been fitted accordingly using the parameter values of Table 1. Problem Formulation The objective is to maximize the network total utility, which is the sum of the individual utilities of the network. Let u_{j,i}(r_{j,i}) be the utility function for the pair (j, i), where j is the access point index and i denotes the user; note that u_{j,i} will correspond to either (1) or (2). Let r_{j,i} ∈ R_j be the bit rate assigned to user i by RAT j, and let x_{j,i} be the binary assignment variable associated with (j, i), such that x_{j,i} = 1 if user i is assigned to RAT j and 0 otherwise. If each user can be associated with one and only one RAT, the NSP can be formulated as in [11]. However, a user may be willing to simultaneously execute two or more applications (for instance, voice and a web browser) and, in a practical implementation, each application data flow is constrained to a single RAT. To reflect this scenario, the problem is reformulated so that a total of L ≥ N connections (with N the number of users) are attended by the RATs, yielding problem (NSP1), where constraint (C.4) makes it possible for any application to be associated with more than one RAT. Note that (NSP1) is an integer problem. Therefore, by replacing (C.4) with its relaxed counterpart (C.4a), we perform an integer constraint relaxation and (NSP1) becomes the more tractable (NSP MultiRAT) problem [36]. Moreover, this relaxation has the advantage of achieving some degree of load balancing with no additional changes.
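As a compact reference, one possible LaTeX restatement of the relaxed (NSP MultiRAT) problem in the notation above is sketched below. The per-RAT capacity bound C_j and the exact constraint set shown are assumptions consistent with the surrounding description, not a verbatim reproduction of the paper's constraints (C.1)-(C.4a).

```latex
% Illustrative restatement of the relaxed multi-RAT NSP (assumed reading, not verbatim).
\begin{align*}
\max_{\{x_{j,i}\},\,\{r_{j,i}\}} \quad & \sum_{j=1}^{M}\sum_{i=1}^{L} x_{j,i}\, u_{j,i}(r_{j,i}) \\
\text{s.t.} \quad & \sum_{j=1}^{M} x_{j,i} = 1, \qquad i = 1,\dots,L
   && \text{(each connection is fully served)} \\
 & \sum_{i=1}^{L} x_{j,i}\, r_{j,i} \le C_{j}, \qquad j = 1,\dots,M
   && \text{(assumed per-RAT capacity $C_{j}$)} \\
 & r_{j,i} \in \mathcal{R}_{j}, \qquad 0 \le x_{j,i} \le 1
   && \text{(relaxed counterpart of $x_{j,i}\in\{0,1\}$)}
\end{align*}
```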
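To make the relaxation concrete, the sketch below builds a small made-up instance, solves the relaxed assignment with SciPy's linprog (HiGHS) and then rounds the fractional solution, mirroring the Multi and MultiR procedures used later in the performance evaluation. All numerical values (capacities, peak rates, requested rates, profile mix, λ, steepness constants) and the exact shapes of the user-satisfaction and provider-benefit functions are assumptions for illustration only; they are not the parameterization of Table 1 or the paper's equations (1)-(5).

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance (all numbers are made up for illustration).
M, L = 3, 6                                          # RATs / connections
capacity = np.array([4.0, 3.0, 6.0])                 # assumed per-RAT capacity, Mbps
rat_peak = np.array([10.0, 6.0, 2.0])                # assumed per-connection peak rate of each RAT
req_rate = np.array([2.0, 1.5, 0.5, 3.0, 1.0, 2.5])  # requested rate r_{i,max} per connection
profile  = ["LEU", "MEU", "HEU", "MEU", "LEU", "HEU"]
lam = 0.5                                            # weight between user and provider terms

def user_satisfaction(r, r_max, prof):
    """Stand-in for f: logarithmic for LEU, sigmoid-shaped for MEU/HEU (shapes only)."""
    x = np.clip(r / r_max, 0.0, 1.0)
    if prof == "LEU":
        return np.log1p(9.0 * x) / np.log(10.0)
    steep = 8.0 if prof == "MEU" else 14.0           # assumed steepness values
    return 1.0 / (1.0 + np.exp(-steep * (x - 0.5)))

def provider_benefit(r, r_max, prof):
    """Stand-in for g: profile parameter 1, 2, 4 for LEU, MEU, HEU used here as an exponent."""
    a = {"LEU": 1, "MEU": 2, "HEU": 4}[prof]
    return np.clip(r / r_max, 0.0, 1.0) ** a

# Rate each RAT would actually serve, and the utility u_{j,i} of full association.
r_served = np.minimum(req_rate[None, :], rat_peak[:, None])      # shape (M, L)
u = np.zeros((M, L))
for j in range(M):
    for i in range(L):
        u[j, i] = (lam * user_satisfaction(r_served[j, i], req_rate[i], profile[i])
                   + (1.0 - lam) * provider_benefit(r_served[j, i], req_rate[i], profile[i]))

# LP relaxation ("Multi"): variables x_{j,i} flattened row-major (index j*L + i).
c = -u.flatten()                                     # linprog minimizes, so negate the utility
A_eq = np.zeros((L, M * L)); b_eq = np.ones(L)       # each connection is fully served
for i in range(L):
    A_eq[i, [j * L + i for j in range(M)]] = 1.0
A_ub = np.zeros((M, M * L)); b_ub = capacity         # per-RAT capacity constraint
for j in range(M):
    A_ub[j, j * L:(j + 1) * L] = r_served[j]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0.0, 1.0), method="highs")
x_frac = res.x.reshape(M, L)

# Rounded solution ("MultiR"): keep the largest fraction per connection.
x_round = np.zeros_like(x_frac)
x_round[np.argmax(x_frac, axis=0), np.arange(L)] = 1.0
load = (x_round * r_served).sum(axis=1)

print("relaxed sum-utility :", -res.fun)
print("rounded sum-utility :", float((u * x_round).sum()))
print("per-RAT load (Mbps) :", load, "feasible:", bool(np.all(load <= capacity)))
```

Keeping the largest fraction per connection coincides with the nearest-integer rounding described for MultiR whenever one fraction exceeds 0.5; the exact OPT solution would instead solve the integer problem, for example with branch-and-bound, and rounded solutions should always be re-checked against the capacity constraints, as done in the final line.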
Performance Evaluation This section shows the performance results obtained by simulation. Our proposal is compared with a Greedy state-of-the-art algorithm [11]. In the following figures, we depict the results corresponding to (i) the optimal solution to (NSP1), labeled as OPT: the solution is obtained using a standard branch-and-bound (B&B) algorithm, as the integer problem is linear [37]; (ii) the optimal solution to (NSP MultiRAT), labeled as Multi: the NSP MultiRAT problem is linear, so a standard interior point algorithm is employed [38]; (iii) a rounded solution of (NSP MultiRAT), labeled as MultiR: the solution is obtained by rounding the NSP MultiRAT solution to the nearest integer (e.g., if we have x_{1,i} = 0.7 and x_{2,i} = 0.3, the rounded solution is x_{1,i} = 1 and x_{2,i} = 0); and (iv) the Greedy algorithm, labeled as Greedy, which solves the (NSP) problem (equivalent to NSP1). We first describe the simulation scenario and afterwards we provide the results in terms of utility and bit rate. Simulations Setup. This subsection describes the simulation setup. The network consists of 3 base stations (BSs) or access points whose coverage areas overlap, and each of them can correspond to a different RAT, namely, HSDPA, HSDPA+, and WiMAX 802.16e. The bit rates available for each RAT are given in Table 2. The results reflect the case when the probability of blocking is zero. This is accomplished by providing the RATs with sufficient capacity to accommodate all user demands, as the objective is to analyze the possible differences among user and service profiles and RAT combinations. The capacity of each RAT has been set to 16.8 Mbps (WiMAX), 9.4 Mbps (HSPA+), and 6 Mbps (HSPA), considering one sector per BS [39]. For a fair comparison, the channel model is similar to that described in [11]. The BS-terminal communication can find three conditions, namely, propitious, balanced, and ominous. In our simulations, we assume the channel condition is the same for all users and BSs. The channel model is as follows. For propitious conditions, the channel is good with a probability of 70% and poor with a probability of 30%. For balanced and ominous conditions, the probabilities are, respectively, (50, 50) and (30, 70). Whether the channel is good or poor, the available transmission rate is given in Table 2. To evaluate the influence of user profiles, we have simulated scenarios with different percentages of user profiles and with different percentages of application types. Table 3 details the simulated configurations. We have also considered different combinations of RATs to assess the effect of this factor on the system performance. The network topologies corresponding to the RAT combinations are listed in Table 4. Utility Performance. Figures 1 and 2 represent the variation of the sum-utility with the λ of (2). We observe that the higher the value of λ, the higher the sum-utility obtained, which implies that the term associated with the user's utility dominates the provider's utility. Regarding the percentage of user types, we notice that an increase in HEUs (CONFIG2 compared with CONFIG1 and CONFIG6 compared with CONFIG4) implies an increase in sum-utility while keeping the same percentage of applications.
In Figures 3 and 4, the sum-utility is plotted for different combinations of BSs, as indicated in Table 4. As expected, the sum-utility grows as the capacity offered by the set of base stations does. We note that, for the network topologies with the highest sum-capacity (combinations 1 to 6), the presence of a higher percentage of high-expectation users (CONFIG2 and CONFIG6) increases the sum-utility, while this effect is almost negligible, and even negative, for network topologies 7-10. For other values of λ, the curves exhibit a similar performance.

Bit Rate Performance. The performance in terms of sum-rate is depicted in Figures 5-8, where we observe that an increase in the number of high-expectation users (CONFIG2 and CONFIG6) does not affect the total rate, except for the Greedy algorithm. Moreover, the sum-rate is almost constant irrespective of the value of λ, with the exception of the Greedy algorithm (see Figures 5 and 6). Figures 7 and 8 display the sum-rate as a function of the network topology (see Table 4), showing that the lower the capacity, the lower the sum-rate, as expected.

Impact of User Profiles. We now evaluate the impact of the user profiles. The figures plot data obtained with the OPT solution, as the other solutions exhibit similar performance. In Figure 9, we see how the network topology affects the performance in terms of mean utility per user: while for topologies with high capacity HEUs and MEUs clearly outperform LEUs, LEUs outperform MEUs for low capacity, and both MEUs and HEUs for network topology #10. In Figure 10, we see that the value of λ impacts the mean utility of LEUs, while its effect is almost negligible for MEUs and HEUs. In Figures 11 and 12, we notice that there is no appreciable difference when the percentage of user types is significantly modified (CONFIG1 versus CONFIG2, and CONFIG4 versus CONFIG6) for the different network topologies.

With respect to bit rate, since the optimization function does not directly target bit rate, variations in utility performance across user profiles may or may not translate into differences in bit rate performance. As Figure 13 shows, there is no appreciable difference when the percentage of applications is similar (CONFIG1 and CONFIG2). However, if the percentage of application types is highly unbalanced (CONFIG4 and CONFIG6), the percentage of user types influences the obtained bit rate. In this case (see Figure 14), the performances for LEUs, MEUs, and HEUs are dissimilar, especially for the low-capacity network topologies.

BS Load. We now evaluate the BS load. We have observed that the variation of the user profiles has a negligible effect (for instance, in the case of CONFIG1 and CONFIG2, or CONFIG4 and CONFIG6), so CONFIG1 and CONFIG4 have been plotted as representative cases. We see that, depending on the network topology, the mean load varies (see Figure 15). The value of λ does not affect the mean BS load, regardless of the configuration or the network topology, as Figures 16 and 17 show.

Discussion. The results show that our proposal (OPT, Multi, and MultiR) outperforms the Greedy algorithm and that the OPT, Multi, and MultiR solutions achieve in practice the same performance, not only in terms of utility but also in terms of bit rate.
Regarding the number of users, we have run simulations for a number of users (applications) ranging from 4 to 18. We have observed that the number of users does not impact any of the metrics shown in the performance evaluation section, so we only present results for the most unfavourable case (i.e., 18 applications). We mention in the introduction of Section 5 that three scenarios have been considered for simulation (propitious, balanced, and ominous). However, the results for the balanced and ominous cases are akin to those of the propitious scenario, except that the utility and bit rate values are lower. For this reason, we have not included results for the balanced and ominous scenarios.

Conclusions
In this work, we have formulated the NSP taking into account the fact that user terminals support multiple RATs and can be served simultaneously from diverse BSs. Since not only the user's satisfaction is important, the provider's satisfaction has been included in building the objective utility function. We have seen that by means of the parameter λ we can balance the user's and the provider's satisfaction. As the NSP addresses the optimization of the sum-utility, sum-rate performance is not meaningful from the viewpoint of user profiles. Nevertheless, the individual utility and rate are significantly influenced by the user profiles and by the percentages of user types and application types. Our simulations also show that the network topology impacts the system performance and that the BS load can vary according to the user profiles. In summary, we can conclude that, to design an effective association for the NSP, different factors must be taken into account, such as user profiles, application characteristics, the number and type of BSs that form the HetNet, and the operator's satisfaction.

Figure 10: Mean utility per user with different user profiles, 18 users, CONFIG1, scenario propitious, network topology #1, as a function of λ.
Figure 11: Mean utility per user with different user profiles, CONFIG1 and CONFIG2, 18 users, scenario propitious, λ = 0.5, for the network topologies of Table 4.
Figure 12: Mean utility per user with different user profiles, 18 users with CONFIG4 and CONFIG6, scenario propitious, λ = 0.5, for the network topologies of Table 4.
Figure 13: Mean bit rate per user with different user profiles, 18 users with CONFIG1 and CONFIG2, scenario propitious, λ = 0.5, for the network topologies of Table 4.
Figure 14: Mean bit rate per user with different user profiles, 18 users with CONFIG4 and CONFIG6, scenario propitious, λ = 0.5, for the network topologies of Table 4.
Table 3: Configurations of users and applications.
Table 4: Network topology: number of BSs of each type.
5,881.4
2017-01-18T00:00:00.000
[ "Computer Science" ]
Specifics of Using Image Processing Techniques for Blood Smear Analysis The process of medical diagnosis is an important stage in the study of human health. One of the directions of such diagnostics is the analysis of images of blood smears. In doing so, it is important to use different methods and analysis tools for image processing. It is also important to consider the specificity of blood smear imaging. The paper discusses various methods for analyzing blood smear images. The features of the application of the image processing technique for the analysis of a blood smear are highlighted. The results of processing blood smear images are presented. Introduction An important stage in the treatment of patients is the diagnosis of their state of health. The tasks of diagnosing the patient's body have different directions for their implementation. It depends on the stage of diagnosis, the severity of the disease. For this, various studies are carried out that allow you to determine a certain set of parameters. These parameters characterize the general health of the patient. At the same time, we can use various tools and methods to conduct diagnostics (Rabotiahov et al., 2018;Lyashenko, Babker & Kobylin, 2016). Among the methods for diagnosing the state of human health, one should single out the method when the patient's blood parameters are analyzed. It can be a complete blood count or CBC. This can be a biochemical, immunological, hormonal blood test. We can look at the leukogram, assess the hemoglobin content, and determine the number of erythrocytes or leukocytes (Li et al., 2019;Andrade et al., 2019;Grill et al., 2020). At the same time, it is possible to diagnose the state of human health based on the processing of the blood smear image (Lyashenko, Matarneh & Kobylin, 2016). Here we can also evaluate the different components of a blood smear. For this, various methods of image processing are used. The use of such methods is determined by the task of analyzing a blood smear, the method of obtaining an image of a blood smear. This imposes certain restrictions on the use of imaging The main task of blood smear image analysis is to identify individual components: plasma, erythrocytes, leukocytes, platelets. Researchers are also solving problems such as counting erythrocytes, leukocytes, and platelets. For example, S. N. M. Safuan, M. R. M. Tomari and W. N. W. Zakaria discuss a procedure for counting white blood cells in blood smears (Safuan, Tomari & Zakaria, 2018). This analysis is carried out based on the color segmentation method. The first step is the identification of white blood cells; the second step is counting white blood cells in blood smears. At the same time R. Tomari, W. N. W. Zakaria, M. M. A. Jamil, F. M. Nor and N. F. N. Fuad are investigating an automated system for counting red blood cells in blood smears (Tomari et al., 2014). For such counting of red cells, the authors use special techniques for processing and analyzing blood smear images. For this analysis, an artificial neural network classifier is used. The system of counting red blood cells in a blood smear is considered by V. Acharya and P. Kumar (Acharya & Kumar, 2018). For this, the authors use morphological operations and the edge detection method. But the more difficult task is to identify and classify the components of a blood smear, identifying the constituent parts of each component of a blood smear. In their research, D. C. Huang, K. D. Hung and Y. K. 
Chan offer their own method for recognizing leukocyte nuclei in blood smear images (Huang, Hung & Chan, 2012). For this, the authors perform segmentation of the initial image taking into account the morphological features of images of leukocyte nuclei. The preliminary stage of the analysis is the use of image enhancement techniques. The detection of nuclei and cytoplasm is also discussed in (Tran, Ismail, Hassan & Yoshitaka, 2016). An original method for the identification of leukocyte nuclei is considered in the work of the authors R. B. Hegde, K. Prasad, H. Hebbar and B. M. K. Singh (Hegde, Prasad, Hebbar & Singh, 2019). For this, the authors have developed an appropriate neural network that adjusts depending on the type of input image of blood smears. For this, the brightness level of the input image and the brightness level around the nucleus of leukocytes are taken into account. Various methods of segmentation and classification of leukocytes in blood smears are considered in the work of S. Sapna and A. Renuka (Sapna & Renuka, 2017). The authors provide a critical review of the various approaches that are used to analyze blood smear images. Thus, we see that different methods and approaches can be used to analyze images of blood smears. Moreover, the arsenal of such methods and approaches is large and varied. One of the reasons for such a variety of methods and approaches for studying images of blood smears is the specificity (features) of these images. Below we provide examples of some of these features of blood smear image analysis. Some features of the image processing technique of blood smears The preliminary step in analyzing blood smear images is to improve the quality of the original image. In this case, you can use such procedures as: changing the contrast of the original image, filtering procedure. However, in each case, changes to the original image are possible, which may lead to errors. The filtering procedure can remove significant components from the blood smear image. For example, if these are platelets, then they can be removed (barely visible for further processing) after filtering the original image. In Figure 1 shows an example of the original image where there are platelets (Figure 1a) and the images after the filtration procedure, where the platelets are poorly displayed (Figure 1b). Also procedure filtering may change the visualization of other components of the original image ( Figure 1c). Figure 1. Examples of the initial image with platelets and images after the filtration procedure Therefore, it is important to select the necessary parameters of the filtration procedure for solving a specific problem of processing a blood smear image. The same remark applies to the peculiarities of using the procedure for contrasting the original image. In Figure 2 shows an example of the original image ( Figure 2a) and the image after applying various procedures for changing the contrast (Figure 2b and Figure 2c). Figure 2. Examples of the original image with platelets and images after the contrasting procedure of the original image We see that different contrasting procedures give different results. Special attention should also be paid to the color segmentation method. To implement the color segmentation method, it is necessary to know the morphological parameters of each component of the blood smear image. But you definitely can't do it (Dey et al., 2015). 
This is due to the fact that it is possible to use various systems for staining a blood smear; some components overlap when coloring them. Some examples of color segmentation are shown in Figure 3. . Some examples of color segmentation for blood smear images We see that for different images of a blood smear, we have different images of different visualization quality, to which the color segmentation procedure is applied. Moreover, for blood smear images (Figure 3b), the selected components of the blood smear overlap or merge. This complicates further analysis. Therefore, image preprocessing procedures are required. But as we said earlier, these procedures can also give a false result. We also draw attention to the application of procedures for identifying components of a blood smear. To solve this problem, edge selection methods are used. But these methods also give different results. We see that different methods of edge extraction emphasize in different ways the structure of individual components of the blood smear image. This fact should also be taken into account when developing automated procedures for analyzing blood smear images. Conclusion The paper deals with the use of image processing techniques for the analysis of blood smear images. In particular, special attention is paid to the preliminary processing of images: changing the image contrast and filtering functions. Color segmentation and identification of components of a blood smear using an edge extraction procedure are also discussed. Attention is paid to the specifics of using different procedures for processing blood smear images. Specific examples are given. Possibility of influence of such procedures on the general process of diagnostics of human health is indicated.
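As a concrete illustration of the pre-processing, color segmentation and edge-extraction steps discussed in this paper, the following Python/OpenCV sketch applies a median filter, CLAHE contrast enhancement, an HSV color threshold and the Canny detector to a smear image. The paper does not name the exact operators or thresholds it used, so the choices below (and the file names) are placeholders that would have to be tuned per staining protocol, exactly the specificity the text emphasizes.

```python
import cv2
import numpy as np

img = cv2.imread("blood_smear.png")                     # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Filtering: a larger kernel smooths noise but can also wash out small
# structures such as platelets (cf. the discussion of Figure 1).
filtered_small = cv2.medianBlur(gray, 3)
filtered_large = cv2.medianBlur(gray, 11)

# Contrast adjustment: two different procedures give different results
# (cf. the discussion of Figure 2).
equalized = cv2.equalizeHist(gray)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
clahe_img = clahe.apply(gray)

# Color segmentation: keep dark purple/blue pixels (typical of stained
# leukocyte nuclei); the range depends on the staining system used.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
nuclei_mask = cv2.inRange(hsv, np.array([110, 50, 50]), np.array([170, 255, 255]))

# Edge extraction: different detectors emphasize component boundaries
# differently; Canny is shown here as one option.
edges = cv2.Canny(gray, threshold1=50, threshold2=150)

cv2.imwrite("filtered.png", filtered_large)
cv2.imwrite("clahe.png", clahe_img)
cv2.imwrite("nuclei_mask.png", nuclei_mask)
cv2.imwrite("edges.png", edges)
```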
1,974
2020-11-07T00:00:00.000
[ "Medicine", "Computer Science" ]
A Brief Survey of Methods for Analytics over RDF Knowledge Graphs : There are several Knowledge Graphs expressed in RDF (Resource Description Framework) that aggregate/integrate data from various sources for providing unified access services and enabling insightful analytics. We observe this trend in almost every domain of our life. However, the provision of effective, efficient, and user-friendly analytic services and systems is quite challenging. In this paper we survey the approaches, systems and tools that enable the formulation of analytic queries over KGs expressed in RDF. We identify the main challenges, we distinguish two main categories of analytic queries (domain specific and quality-related), and five kinds of approaches for analytics over RDF. Then, we describe in brief the works of each category and related aspects, like efficiency and visualization. We hope this collection to be useful for researchers and engineers for advancing the capabilities and user-friendliness of methods for analytics over knowledge graphs. Introduction To leverage large scale data for gaining new insights, a recent and promising practice in various domains (environment, health, economy, culture, economics and others), adopted by both academia and industry, is to construct a Knowledge Graph (KG) [1] that aggregates and integrates data from several datasets, as illustrated in Figure 1. The value of such KGs is that they provide a unified view of the domain and enable unified browsing, querying, question answering and analytics. Indeed, there are several KGs expressed in the W3C standard RDF (Resource Description Framework), including general purpose KGs, like DBpedia [2] and Wikidata [3], domain-specific KGs [4], like Europeana [5] for culture, DrugBank [6] for drugs, GRSF [7] for stocks and fisheries, ORKG [8] and OpenAIRE [9] for scholarly work, WarSampo [10] and SeaLiT [11] for historical research, recently also for research related to COVID-19 such as [12], COVID-19 Open Research Dataset (https://github.com/allenai/cord19, accessed on 1 January 2023) and CORD-19 Named Entities Knowledge Graph (https://zenodo.org/record/3827449, accessed on 1 January 2023), and finally KGs from enterprise relational databases [13]. However, the analysis of big and complex KGs is still challenging, as it is also stated in [14]. In particular, users have difficulty in analyzing complex KGs since this requires knowledge of the data terminology (which is wide in case of KGs that integrate data from several datasets) and the syntax of query language. From a system perspective, efficiency is hard to achieve for big KGs, while from an application/domain perspective users usually face completeness and freshness issues [14]. To better understand the situation, in this paper, we review the work that has been done in this area, i.e., by focusing on KGs expressed in RDF. The rest of this paper is organized as follows: Section 2 provides the required background and refers to past surveys, while Section 3 identifies challenges and provides a categorization of the existing works. Subsequently, Section 4 surveys particular works and systems, whereas Section 5 discusses related aspects, including efficiency and visualization. Finally, Section 6 concludes the paper and identifies directions for further research. 
Background and Related Surveys
This section provides a background for RDF (in Section 2.1), for SPARQL (in Section 2.2), for the possible access methods over RDF (in Section 2.3), for OLAP (in Section 2.4), and finally it discusses related surveys (in Section 2.5).

Resource Description Framework (RDF)
The Resource Description Framework (RDF) [15,16] is a graph-based data model proposed for the realization of the Semantic Web vision and a key format of the Linked Data publishing method. It uses triples, i.e., statements of the form (subject, predicate, object), where the subject corresponds to an entity (e.g., a product, a company, etc.), the predicate to a characteristic of the entity (e.g., the price of a product, the location of a company) and the object to the value of the predicate for the specific subject (e.g., "300", "US"). The triples are used for relating Uniform Resource Identifiers (URIs) or anonymous resources (blank nodes) with other URIs, blank nodes or constants (Literals). Formally, a triple is considered to be any element of T = (U ∪ B) × U × (U ∪ B ∪ L), where U, B and L denote the sets of URIs, blank nodes and literals, respectively. Any finite subset of T constitutes an RDF graph (or RDF data set).

RDF Schema. RDF Schema (https://en.wikipedia.org/wiki/RDF_Schema, accessed on 1 January 2023) (RDFS) is a special vocabulary that comprises a set of classes with certain properties based on the RDF extensible knowledge representation data model. Its intention is to structure RDF resources, since even though RDF uses URIs to uniquely identify resources, it lacks semantic expressiveness. It uses classes to indicate where a resource belongs, as well as properties to build relationships between entities in a class and to model constraints. For example, a KG with information about products is shown in Figure 2 (for reasons of brevity, namespaces are not shown). The upper part illustrates the schema, while the bottom part illustrates the data.

SPARQL
RDF data are mainly queried through structured query languages, i.e., SPARQL (https://www.w3.org/TR/rdf-sparql-query/, accessed on 1 January 2023), which is the standard query language for RDF data. From version 1.1, SPARQL also supports complex querying using regular path expressions, grouping, aggregation, etc. In particular, as regards analytic queries, SPARQL supports the GROUP BY modifier and various aggregate functions including COUNT, SUM, AVG, MIN, MAX, and GROUP_CONCAT. For example, the query "total quantities of products released by company" over the KG of Figure 2 can be expressed in SPARQL as shown in Figure 3. We should note that, apart from SPARQL, there are a few other languages for querying knowledge graphs, such as Cypher [17] (a declarative language implemented as part of the Neo4j graph database), Gremlin [18] (a combination of SQL, SPARQL and Cypher, which focuses on navigational queries rather than matching patterns), PGQL [19] (an SQL-like pattern-matching query language) and G-CORE [20] (a graph query language that integrates the features provided by the graph query languages Cypher [17] and PGQL [19]) for querying property graphs.

Access Methods over RDF
Apart from structured query languages (i.e., SPARQL), we have Keyword Search systems over RDF (like [21]) that enable users to search using the familiar method they use for Web searching.
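As a concrete illustration of the aggregate SPARQL query discussed above ("total quantities of products released by company"), the following Python sketch runs a GROUP BY/SUM query with rdflib over a tiny toy graph. The property names (:releasedBy, :quantity) and the data are assumptions made for illustration; the actual schema of Figures 2 and 3 is not reproduced in this excerpt.

```python
from rdflib import Graph

# Toy product graph (placeholder schema and values)
ttl = """
@prefix : <http://example.org/> .
:p1 :releasedBy :CompanyA ; :quantity 300 .
:p2 :releasedBy :CompanyA ; :quantity 150 .
:p3 :releasedBy :CompanyB ; :quantity 500 .
"""

g = Graph()
g.parse(data=ttl, format="turtle")

# Aggregate query: total quantity per company (GROUP BY + SUM)
query = """
PREFIX : <http://example.org/>
SELECT ?company (SUM(?q) AS ?total)
WHERE { ?product :releasedBy ?company ; :quantity ?q . }
GROUP BY ?company
"""

for row in g.query(query):
    print(row.company, row.total)   # CompanyA 450, CompanyB 500
```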
We can also identify the category Interactive Information Access that refers to access methods beyond the simple "query-and-response" interaction, i.e., methods that offer more interaction options to the user. In this category, there are methods for RDF Browsing (plain or similarity-based like [22]) methods for Faceted Search over RDF [23], as well as methods for Assistive (SPARQL) Query Building (e.g., [24]). Finally, in the category natural language interfaces, there are methods for question answering, dialogue systems, and conversational interfaces (e.g., see [25] for a survey). Figure 4 illustrates the above methods and the distinctive characteristics of each one. OLAP (OnLine Analytical Processing) OLAP is a special case of materialized data integration [26], where the data are described by using a star-schema, while "data are organized in cubes (or hypercubes), which are defined over a multidimensional space, consisting of several dimensions" [27]. Especially, in the era of big data, data is often produced faster than it can be consolidated and analyzed, and the data cube was designed to avoid slow processing times for complex data analysis, since it aggregates relevant data, speeding thus data queries. Essentially, a data cube is used to understand and analyze, fast and easily, large amounts of data that is too complex to be understood or interpreted by a table of columns. It enables consolidating or aggregating relevant data for easier handling and fast retrieval since there is no need for many time-consuming calculations when an end-user query is processed. The preaggregated values within the cells of a cube are called measures and they are the values of interest. The measures are aggregated according to dimensions , i.e., attributes of data, and they show the relationship between dimensions. The data into the cube can be viewed from different angles. A number of OLAP data cube operations exist to demonstrate these different views, allowing interactive queries and search of data at hand. Hence, OLAP supports a user-friendly environment for interactive data analysis. The basic OLAP operations are: roll up (aggregate data by ascending concept hierarchy), drilldown (navigate from less detailed data to more detailed data), slice (perform a selection on one dimension of the given cube), dice (describe a subcube by operating a selection on two or more dimensions), and pivot (provide an alternative presentation of the data). Related Work: Past Surveys There are several surveys available for RDF KGs. In particular, ref. [28] surveys approaches for large scale semantic integration of linked data, by giving emphasis on how to integrate multiple RDF datasets. Moreover, ref. [29] offers a survey of the RDF dataset profile features and methods, by also mentioning vocabularies for publishing RDF statistical data (which are also described later in this survey). Furthermore, ref. [30] surveys techniques and systems for querying RDF datasets, by mainly focusing on storage, indexing and query processing techniques for evaluating SPARQL queries, while [31] surveys RDF graph generation approaches from heterogeneous data, by focusing on existing mapping languages for schema and data transformations. Moreover, ref. [26] surveys and categorizes OLAP approaches that leverage semantic web technologies according to several criteria, including materialization, transformations and extensibility. 
Finally, there are also available surveys [32,33] that describe visualization approaches for RDF KGs and surveys for summarization for semantic RDF graphs, e.g., see [34]. All the mentioned surveys can be of primary importance for generating, integrating, querying and visualizing RDF KGs, which are usually prerequisite steps for producing analytics over RDF KGs. On the contrary to the best of our knowledge, there is no survey yet which provides an overview on analytics over RDF KGs, i.e., which is the core objective of this survey. RDF and Analytics: Challenges and General Approaches Section 3.1 identifies the major challenges that are related to analytics over RDF, Section 3.2 provides a categorization of the existing works on this topic, and Section 3.3 presents the different types of analytic queries by providing indicative examples. Challenges A KG that integrates data from several datasets tends to have a complex structure, in comparison to multidimensional data, since: (i) different resources may have different sets of properties (from different schemas), (ii) properties can be multivalued (i.e., there can be triples where the subject and predicate are the same but the objects are different) and (iii) resources may or may not have types. We should note here that the typical methods for analytics (i.e., over multidimensional data), are not adequate since they presuppose a single homogeneous data set, something that is not the case for RDF data, e.g., as it is stated in [14]: "Analytic tasks would be straightforward, using SQL or SPARQL queries and data-science tools, if the underlying data were stored in a single database or knowledge base. Unfortunately, this is not the case". Furthermore, the analysis of RDF graphs should leverage the semantics of RDF(S), i.e., the inference based on rdfs:subClassOf and rdfs:subPropertyOf, and in many cases quality, completeness and freshness issues should be tackled. Categories of Works (Related to RDF and Analytics) We categorize the related works in five basic categories, illustrated in Figure 5. In brief, there are works that focus on the formulation of analytic queries directly over RDF (they will be described in Section 4.2), works that first define Data Cubes over RDF (more in Section 4.3), and works that define domain-specific Pipelines that produce RDF and provide analytic services (will be described in Section 4.4). Finally, there are works that focus only on the publishing of statistical Data in RDF (more in Section 4.5), and approaches that combine data from multiple sources for producing quality analytics (see Section 4.6). Categories of Analytic Queries Here, we present the two main categories of analytic queries, by providing some indicative examples: (B) Quality-related analytics (e.g., connectivity, data uniqueness, data verification) of one or more KGs, e.g., through statistics or specialized metrics. They are mainly used in categories C4-C5. Examples of such queries are given below: -Coverage of a dataset: "How many unique triples DBpedia offers for the entity Aristotle?" -Connectivity between Datasets: "Give me the number of common entities among DBpedia, Wikidata and National Library of France" -Distribution of specific elements, such as properties, classes, namespaces, for detecting power-law cases in a KG or at the whole Linked Open Data (LOD) Cloud: "Is there a power-law distribution for the ontologies that are used from the LOD Cloud datasets?". 
-Dataset Discovery: "Which dataset is the most relevant for the entity Socrates (e.g., offering the most triples)?". -URI Quality: "What is the percentage of URIs that are dereferenceable and not broken?"

Survey of Works and Systems
In this section, we provide some details about the methodology that we followed for finding relevant papers and statistics about these papers (in Section 4.1), and we survey the existing works (in Sections 4.2-4.6) based on the categorization of Section 3.2.

Methodology and Statistics
For finding the related approaches, we used Google Scholar in the period June 2022-November 2022, without any restrictions on the publication date. We used the following queries: (i) "RDF analytics tool", (ii) "Interactive RDF analytics", (iii) "RDF Data cube analytics", (iv) "Efficiency of RDF data analytics", (v) "Knowledge graph analytics" and (vi) "LOD Cloud analytics". For each query, we manually analyzed papers (from the first pages of Google Scholar results), i.e., by checking their title, abstract and body. Moreover, we found relevant papers from past surveys, e.g., for analytics over multiple datasets belonging to the LOD Cloud [28]. Concerning the selected papers, Figure 6 shows some statistics about the number of surveyed papers for each category and Figure 7 the year of publication of these papers. As we can see, the majority of works that we survey concern the categories C1 and C2, and most of the papers have been published between 2013 and 2017 (i.e., the most common case for the two mentioned categories). At the same time, we also survey some more recent approaches (i.e., between 2018 and 2022) that mainly concern domain-specific pipelines (i.e., category C3) and approaches over multiple datasets at LOD scale (i.e., category C5).

C1. Formulation of Analytic Queries Directly over RDF
Table 1 lists approaches for the formulation of analytic queries directly over RDF, for enabling the execution of analytical queries of category A. Since both the size of the datasets and the need to process aggregate queries produce challenges for the standard SPARQL query processing techniques, some of the works propose techniques to overcome these limitations. Below, we provide more details for each of the presented approaches of Table 1 (in chronological order). • Ref. [38] proposes techniques to efficiently handle SPARQL queries with aggregate operators over dynamic RDF datasets. It stores RDF data as a large graph and represents a SPARQL query as a query graph. To achieve efficient and scalable query processing, it implements pattern matching queries with the help of two index structures: a VS*-tree, which is a specialized B+-tree, and a trie-based T-index. • Ref. [39] proposes a set of query processing strategies for executing aggregate SPARQL queries over federations of SPARQL endpoints by materializing the intermediate results of the queries. However, participating sources in a federation might be unavailable at some point. Data and schemata of the sources might have evolved since the federation was created; thus, integration rules might no longer be valid, or the history of the data will be lost. • Ref. [40] shows how to process aggregate queries by using materialized views, that is, named queries whose results are stored in a system (since they are typically much smaller in size than the original data and can be processed faster). These results are then used for answering subsequent analytical queries. • Ref.
[41] describes a possible extension of SemFacet [46] to support numeric value ranges and aggregation. The focus is on theoretical query management aspects, related to faceted search; however, it lacks an interface and implementation. From the mockups of the GUI, it seems that no count information is provided, whereas explicit path expansion is not supported. On the contrary, the authors use the notion of "recursion" to capture reachability-based facet restrictions. Since this approach is not implemented, no evaluation results are available. • Ref. [42] presents Spartex, a vertex-centric framework for complex RDF analytics, that extends SPARQL to combine generic graph algorithms (e.g., PageRank, Shortest Paths, etc.) with SPARQL queries. It employs graph exploration and uses intervertex message passing during the query evaluation. • Ref. [43] mentions that the existing federated RDF systems support only basic queries in SPARQL 1.0 and cannot be compatible with complex queries in SPARQL 1.1 well, such as aggregate queries. For this reason, proposes a query decomposition optimization method, which allows combine triple patterns with the same multisources into one subquery. The schema can reduce the number of remote requests to improve the query efficiency by reducing the number of subqueries. • Ref. [44] proposes an approach for guided query building that supports analytical queries in natural language and can be applied over any RDF graph. The implementation is over the SPARKLIS editor [47], and it has been adopted in a national French project (http://data.persee.fr/explore/sparklis/?lang=en, accessed on 1 January 2023). During the query formulation, no count information is provided, reducing in this way the exploratory characteristics of the process. The authors report positive evaluation results as regards the expressive power of the interactive formulator which works well on large datasets and is easier to use than writing SPARQL queries. • Ref. [45] describes how a high-level functional query language, called HIFUN [48], can be exploited for applying analytics over RDF data. Rules for translating analytical HIFUN queries to SPARQL are presented. However, the interactive formulation of such queries and the evaluation part are missed from that study. To the best of our knowledge, there is limited work regarding analytics directly over RDF graphs in a user-friendly and interactive environment. We managed to find only two such works [37,44] that let users formulate analytical queries directly in such graphs by specifying the attributes of analysis (i.e., dimensions, measures) and the operations using drop-down menus or natural language and defining their values via checkboxes. The rest of the works [35,36,[38][39][40][41][42][43]45] propose methods entangled with lower-level technicalities, preventing novice users from exploiting them, and this can be time-consuming and burdensome for experts. C2. Definition of Data Cubes over RDF To gap the mismatch between the relational data model and the graph data model, there are approaches that define a data cube over existing RDF graphs and then apply OLAP. According to [44], one weakness of this approach is that it requires someone with technical knowledge to define the required data cube(s). Table 2 lists such approaches, whose target is also to enable the execution of analytical queries of category A. Below, we describe them in chronological order. -2015 Jakobsen et al. 
[54] -2015 CubeViz [55] Various charts, e.g., pie, bar, column, line 2015 Benetallah et al. [56] -2016 Microsoft Power BI [57] Various charts e.g., bar, column, pie, area, treemap ect. 2016 Tableau [58] Various charts, e.g., column, bar, pie, line, area, map etc. 2019 • Ref. [49] introduces Graph Cube to support OLAP queries effectively on large multidimensional networks. However, it usually ignores semantic information in heterogeneous networks. The experimental studies conducted shows that this tool supports decisions on large multidimensional networks, effectively. • Ref. [50] introduces Linked Data Query Wizard, a Web-based tool for displaying, accessing, filtering, exploring, and navigating Linked Data which are expressed in data cube format and stored in SPARQL endpoints. The main innovation of the interface is that it turns the graph structure of Linked Data into a tabular interface and provides easy-to-use interaction possibilities. It supports filtering of the columns (e.g., by a keyword or a numeric value) and simple aggregations. However, the tables are limited to the presentation of the direct neighborhood of entities (columns are entity properties, and column values are the objects of those properties) rather than results of arbitrary queries. Table cells can contain sets of values but not multicolumn tables. The results of the conducted user study showed that the tool had a few weak spots that could be improved, but in general it is usable, both for experts and nonexperts in computer science. • Ref. [51] presents Payola, a framework for Linked Data analysis and visualization. The goal is to provide end users with a tool enabling them to analyze Linked Data in a user-friendly way and without knowledge of SPARQL query language. This goal can be achieved by populating the framework with variety of domain-specific analysis and visualization plugins Although it encourages collaboration between users, e.g., experts can edit visualizations and SPARQL queries and lay-users can consume a result, it neglects to provide a complete representation of the dataset that is necessary for expressing the queries. At the same time, the amount of manual configuration and the necessary transformation steps between different abstractions might be considered a shortcoming by nontechnical users. Regarding the evaluation of this tool, there is a concise report where the test users asked a couple of questions regarding usability of it and concludes that work on the usability is needed. • Ref. [52] presents Vis-Wizard, a Web-based visualization system able to analyze multiple datasets using brushing and linking methods i.e., combining different visualizations to overcome the shortcomings of single techniques. The tool was designed for two different tasks: (i) explore endpoints like DBpedia and (ii) explore datasets that contain statistical data. Vis-Wizard allows users to group data and aggregate values providing multiple interactive widgets. According to [59], the online version reports a multitude of errors that prevented users to analyze the different visualizations that the tool offers. In fact, console errors rose and no charts appeared. Regarding endpoints like DBpedia, the tool works fine, but the tabular layout they implemented results to be a little messy at first. The evaluation conducted regarding the usability of the Vis-Wizard shows that while several usability issues still need to be fixed, the overall advantage is observable. • Ref. 
[53] proposes algorithms that use the materialized result of an RDF analytical query to compute the answer to a subsequent query. The answer is computed based on the intermediate results of the original analytical query. However, the approach does not propose any algorithm for view selection. It is applicable for the subsequent queries and not to an arbitrary set of queries [40]. In addition, no evaluation is reported. • Ref. [54] studies the improvement of SPARQL queries over QB4OLAP [60] (an extension of the RDF Data Cube Vocabulary https://www.w3.org/TR/vocab-data-cube/, accessed on 1 January 2023) to fully support OLAP multi-dimensional models and operators) data cubes. The idea behind the proposed approach is to directly link facts (observations) with attribute values of related level members. Although preliminary results in an evaluation study show an improvement in queries performance, this approach prevents level members from being reused and referenced, breaking the Linked Data nature of QB4OLAP data instances. • Ref. [55] proposes CubeViz, a user-friendly exploration and visualization platform for statistical data represented adhering to the RDF Data Cube vocabulary. If statistical data is provided adhering to the Data Cube vocabulary, CubeViz exhibits a faceted browsing widget allowing to interactively filter observations to be visualized in charts. However, it does not support aggregate functions, such as SUM, AVG, MIN and MAX, and blank nodes. According to [61] if the created RDF Data Cube is sparse, it is possible to receive an empty result set after using the data selection component of CubeViz. As a consequence, CubeViz is not able to process all kinds of valid Data Cubes. In a domain-agnostic tool such as CubeViz, it is not feasible to integrate static mappings between data items and their graphical representations. Most of the chart APIs have a limited amount of predefined colors used for coloring dimension elements or select colors completely arbitrarily. Finally, this paper does not provide any information about the evaluation of this tool. It contains only a link to an online demonstrator letting users evaluate it. • Ref. [56] presents multidimensional and multiview graph data using MapReducebased graph processing. The goal is to facilitate the analytics over the ER graph through summarizing the process graph and providing multiple views at different granularities. The technique, however, always materializes the result as paths with respect to a single entity identifier. The experiments conducted over real-world data sets, showed that the proposed approach performs well. • Ref. [57] introduces Microsoft Power BI, a business intelligence platform that provides nontechnical business users with tools for aggregating, analyzing, visualizing and sharing data. Power BI's user interface is intuitive mainly for users familiar with Excel. It assumes that the ingested data has been cleaned up well in advance, while there is also a limit on its size (cannot import large data sets). After the data hit the limit, you have to upgrade to the paid version of Power BI. [64,65]. All of these systems follow common techniques in the formulation of the analytical queries. They let users specify the attributes of analysis (i.e., dimensions, measures) and the operations interactively using drop-down menus and define their values via check-boxes. C3. 
Domain-Specific Pipelines over RDF There are numerous works that focus on defining specific pipelines for constructing the desired KG, from various structured and unstructured sources, and then offer particular analytic queries and visualizations to support domain-specific research purposes, e.g., for supporting analytical queries of category A. Since there is a large number of such available cases, e.g., ref. [4] surveys more than 140 papers on KGs from seven different domains, below, we present a few number of indicative works, from the medical, publications and cultural domain (presented according to their domain): • Medical Domain. PhLeGrA [66] has integrated data from several large scale biomedical datasets, for analyzing associations between drugs, i.e., for improving the accuracy of predictions of adverse drug reactions. Moreover, ref. [67] collects both structured and unstructured data for creating an aggregated KG about cancer data. The objective is to provide cancer data analytics through several services, such as treatment sequence analysis, data discrepancy analysis and others. Moreover, ref. [68] created a KG, from over 50,000 articles related to coronaviruses, by using linked data techniques. The produced RDF dataset can be used for producing analytics through several extraction and visualization tools, e.g., it is feasible to analyze the number of articles that comention cancer types and viruses of the corona family. Finally, ref. [69] describes a framework called Knowledge4COVID-19, that integrates several RDF sources of COVID-19 related data. The resulting KG is exploited from machine learning methods for providing analytics and visualizations that are used for discovering adverse drug effects and for evaluating the effectiveness and toxicity of COVID-19 treatments. • Publications Domain. OpenAIRE [70] is a Research KG that aggregates a collection of metadata and links, which are offered within the OpenAIRE Open Science infras-tructure, and provides several analytics and visualizations, such as for usage data (https://usagecounts.openaire.eu/analytics, accessed on 1 January 2023). Moreover, Open Research Knowledge Graph (ORKG) [8] exploit manual and automated techniques for creating and processing a scholarly KG. The mentioned KG can be used for further analysis through visualizations that are produced by the offered data science environments (e.g., see https://orkg.org/visualizations, accessed on 1 January 2023). • Cultural Domain. FAST CAT [71] is a collaborative system for data entry and curation in Digital Humanities, and it can be exploited for performing historical analysis over aggregated data. Moreover, ref. [72] describes BiographySampo, an approach that provides analytics for biographical and prosopographical research, by first transforming textual resources (from the National Biography of Finland) to RDF data. Afterward, even users that are nonfamiliar with SPARQL, can perform custom-made complex data analysis through the offered tools. C4. Publishing of Statistical Data in RDF This category of works is not for formulating analytic queries but for exchanging statistical results, and they mainly focus on providing analytical queries of category B. However, they can be also used for analytical queries of category A, i.e., for publishing domain-specific statistical data. 
In particular, we provide two different subcategories, i.e., works that publish statistical data as linked data through either the RDF data cube vocabulary (https://www.w3.org/TR/vocab-data-cube/, accessed on 1 January 2023), or the "Vocabulary of Interlinked Datasets", i.e., VoID [73]. All the approaches are listed in Table 3 and are described below. • Works with RDF data cube vocabulary. To foster the exchange and intelligibility of statistical results (expressed in csv and other formats), approaches such as [74,75], focus on publishing statistical data as linked data through RDF data cube vocabulary. Such statistical data can be visualized and analyzed through the framework Payola [51] (which has been described in category C2). • Works with VoID vocabulary. VoID can be exploited for expressing metadata about one or more RDF datasets, i.e., for representing and publishing several simple statistics, such as the number of triples, properties or classes of each dataset and the number of links between different datasets. Several tools have been published for measuring such statistics for RDF datasets through VoID including Aether [76] for generating, browsing and visualizing statistics, by using SPARQL queries. Furthermore, ref. [77] describes the tool Loupe, which provides summaries and an analysis of vocabulary information about each RDF dataset, e.g., the classes and properties used in each dataset. There have been proposed extensions of VoID, such as [78], for publishing and analyzing connectivity analytics of semantic data warehouses. On the contrary, approaches such as SPORTAL [79] and SPLENDID [80] compute and publish such statistics, for aiding the process of source selection for federated queries. Finally, the application KartoGraphI [81] publishes statistical data through VoID (and extensions of VoID), for SPARQL endpoints and provides several types of visualizations for the results. Table 4 introduces approaches that produce quality analytics, i.e., analytical queries of category B, over single and multiple RDF datasets (even at LOD-Scale). As we can observe from Table 4, most approaches of category C5 produce analytics either for measuring distributions (e.g., power-law cases) or for dataset discovery, i.e., as they are divided (and described below). • Works that measure distributions (e.g., power-law). Ref. [82] measured and analyzed the graph features of Semantic Web (SW) schemas with focus on powerlaw degree distributions, and the main finding was that the majority of SW schemas (at that time 2008) with a significant number of properties (resp. classes) approximate a power-law for total-degree (resp. number of subsumed classes) distribution. Furthermore, LOD-a-LOT [85] is an approach where 28 billion RDF triples from thousands of RDF documents have been collected, for enabling the analysis and the querying of combined data from multiple data sources, e.g., for analyzing the distribution of URIs and triples. Moreover, ref. [86] presents algorithms for computing analytical queries over Linked Open Data, by aggregating the results of queries from running SPARQL endpoints, i.e., for producing analytics over multiple LOD datasets, e.g., they measure the property and class usage on the LOD cloud, and they estimate the number of the available triples in the LOD Cloud. Finally, ref. [87] presents an empirical analysis of linkage among all the datasets of the LOD cloud, by focusing on automated methods for analyzing different link types at scale. 
The objective was to analyze the availability and discoverability of LOD datasets, i.e., the most commonly used ontologies, namespaces and classes, and many others, e.g., for discovering powerlaw distributions, and to analyze the quality of URIs, e.g., broken links, deferenacable URIs, etc. • Works for Dataset Discovery. LODVader [83] is a system that produces LOD analytics over 491 RDF datasets, for supporting dataset exploration, analysis and dataset discovery. Moreover, LODstats [84] is a service including some basic metadata and statistics for over 9000 RDF datasets, e.g., for measuring the number of datasets of specific property and class elements. Furthermore, LODsyndesis [16] is a suite of services that provides analytics for measuring the connectivity among hundreds of RDF datasets. The target is the produced connectivity analytics to be exploited for improving the discoverability and reusability of the underlying datasets, and for answering coverage queries. Finally, LODChain [88] is a research prototype the computes connectivity analytics for a new RDF dataset at real time to the rest of LOD Cloud through LODsyndesis, and produces several visualizations (including graph visualizations, bar and pie charts, etc.) and dataset discovery measurements. In particular, the target is the analytics to be used for enriching and verifying the content of the input dataset. Efficiency and Visualization This section discusses related aspects for the surveyed papers, i.e., efficiency (in Section 5.1) and visualization (in Section 5.2). Efficiency First, for the category C1, in [36], the authors measure the efficiency of joining star patterns with grouping operators for executing aggregating queries. They indicate that for complex analytical tasks that combine generic graph processing with SPARQL, vertexcentric graph processing frameworks are at least an order of magnitude faster than existing alternatives [42], whereas they demonstrate significant performance improvements for analytical processing of RDF data over existing Map-Reduce based techniques [35]. They show that decomposing the analytical queries and materializing the intermediate results [39,40] improve the query response time by more than an order of magnitude, and that in these cases, the average query time increases linearly with the increase of dataset size [43]. Concerning the category C2, in [56] the authors show that the size of the dataset as well as the number of function operations in an analytical query influence the execution time of such a query. They prove that running queries on Virtuoso over data cubes in the star pattern is faster than over cubes in the snowflake pattern, which is particularly interesting since the snowflake pattern is the pattern in which most RDF data cubes are available [54]. As regards category C3, in many cases, the authors measure the execution time of the SPARQL queries that produce the analytics [67,69], which are executed over the resulting KG. Generally, these queries are executed quite fast, even in a few milliseconds. On the contrary, the most time-consuming task of such domain-specific approaches is usually the creation of the KG, which requires huge human effort [89]. Regarding the approaches of category C4, which produce statistics usually through SPARQL queries [76,84], their performance highly depends on the underlying SPARQL endpoints, and the size of the datasets (number of triples, URIs, etc). 
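To make the quality-related (category B) queries of Section 3.3 concrete, the following Python sketch issues a coverage query of the kind quoted there ("How many unique triples does DBpedia offer for the entity Aristotle?") against the public DBpedia endpoint using SPARQLWrapper. Counting the entity in both subject and object position is one possible interpretation of such a coverage query, and running the sketch requires a live endpoint.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
PREFIX dbr: <http://dbpedia.org/resource/>
SELECT (COUNT(*) AS ?triples)
WHERE {
  { dbr:Aristotle ?p ?o }   # Aristotle as subject
  UNION
  { ?s ?p dbr:Aristotle }   # Aristotle as object
}
""")

result = sparql.query().convert()
print(result["results"]["bindings"][0]["triples"]["value"])
```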
Concerning the category C5, for enabling the fast computation of analytics, in several cases, specialized indexes are created, e.g., see LODsyndesis [16] and LOD-a-Lot [85]. Indicatively, the indexes of LODsyndesis aggregated KG [16] (which contain more than 2 billion triples), are constructed once in approximately 7 hours. On the contrary, the connectivity analytics are produced quite fast, i.e., even in a few seconds, by accessing the mentioned indexes. Regarding LODChain, it can produce the analytics for hundreds of thousands of triples in a few minutes (indicatively less than a minute for 50,000 triples), by also exploiting the indexes of LODsyndesis. A complementary topic is that of ranking, in the sense that if the KG is big, or the results are big, then methods that can rank and reveal the more important elements are useful also for visualization purposes. Such ranking methods can be leveraged at both schema and data level, just indicatively, ref. [90] proposed methods for ranking RDF Schema elements (and their applications in visualization), ref. [91] described ranking-induced top-k diagrams for reducing the information overload. Concluding Remarks The analysis of big and complex KGs in RDF is challenging. In this brief survey, we reviewed the work that has been in this area. In brief, we identified two main categories of analytic queries (domain specific and quality-related), and five kinds of approaches for analytics over RDF. Then, we described the related works that fall in these categories. In total, we surveyed 45 papers (including more than 15 systems). In general, we observe an increasing trend for analytics over RDF KGs, for both domain-specific (e.g., for medical and publications domain) and domain-independent tasks. In particular, we identified 11 works for applying domain-related analytic queries over general-purpose KGs, whereas we surveyed 10 works that first define data cubes over RDF and then use them for analysis. We have also described indicatively 8 works on domain specific pipelines for analytics from various domains, including health (drugs, cancer and Covid-19), research publications, and digital humanities (historical analysis). Finally, we mentioned 8 works for publishing statistical data through RDF vocabularies and 8 works for quality-related analytics over single and multiple RDF datasets (or LOD scale) for fostering connectivity. Figure 10 summarizes the categories identified, the number of works of each category and the main challenges. We hope this collection to be useful for researchers and engineers for advancing the capabilities and user-friendliness of methods for analytics over knowledge graphs.
8,575
2023-01-17T00:00:00.000
[ "Computer Science" ]
Coupling of the Localized Wind Wall at High Latitudes to the Lower Thermosphere by Neutral Cells The recently observed Wall in the daytime zonal winds in the thermosphere from O (S) and O (D) emissions by the WINDII instrument on the UARS satellite in the high latitudinal region during 1994 to 1996, has been interpreted in terms of NCAR-TIGCM models. The strong westward polar wind (convergence) and weaker eastward winds equator wards of it (divergence), potentially generating localized vertical flows, overlap the dayside high density and equatorward of it low density neutral Cells’ regions in the models. The models indicate that the Cells and the Wall separating them exist at all solar and geomagnetic activities. These Cells in the thermosphere can transport neutral gas vertically down in the convergence region and up in the divergence region thus moving the associated emissions as observed in the data. Since the diameter of these Cells can reach up to 2000 km, the resulting enhanced emissions may have scale size of about 20° in latitude and longitude. The idealized transport time is under 8 minutes for up to 100 km for these observations during quiet solar and geomagnetic conditions. Once the transporting Cell’s temperature / density reaches that of the ambient atmosphere they disappear and other Cells will partake in this process at these latitudes and times. Introduction In a recent work [13], a zonal wind reversal in the 140 to 250 km range from westward polar cap winds to equatorward weaker eastward winds was observed in the 60°-70° geographic latitudinal range and the 100° -200° longitudinal range in the southern hemisphere in O ( 1 S) daytime observations from the Wind Imaging Interferometer (WINDII) on Upper Atmospheric Research Satellite (UARS). The corresponding observations in the northern hemisphere were in the same latitudinal range, but in the 200° -300° longitudinal range. In southern hemisphere, the observational period from March, 11 to 20, 1996 was during low solar activity (F 10.7 ~ 70 x 10 -11 w/m2/c/s) and low geomagnetic activity (Ap<15), being unsettled on March, 11 and 21 (23<Ap<28). In the northern hemisphere, the observations were from October 2 to 10, 1995. The solar activity was also low (F 10.7 ~ 71 x 10 -11 w/m2/c/s), but the geomagnetic activity was moderately disturbed (16<Ap<58). The upper atmospheric observation in summer solstice, January, 1995 in Red line emission [O( 1 D) at 630 nm] from WINDII in the southern hemisphere for low solar and mild geomagnetic activity have also been recently reported by Shepherd et al [14]. They observed an increase in the emissions at around 200 km in the 100 -150°E longitudinal zone at high latitudes. The regions observed lie below the cusp region which resides at about 78° geomagnetic and ultimately merges with the auroral oval. This sharp reversal region was termed as a 'Wind wall'. This reversal created a convergence on the west side wind and a divergence equatorward of the wall on the east side wind. As a result of these convergence / divergence regions and the presence of air Cells in this region, vertical downward / upward motion in the neutral gas will occur as observed in the observations. The emissions were enhanced to lower altitudes on the west side wind (convergence) region and increased in altitude on the east side wind (divergence) of the wall. 
Schoendorf et al and Crowley et al have investigated neutral density at polar latitudes in both hemispheres in the 130 -350 km range during high and low solar activity periods, utilizing the Thermospheric Ionospheric General Circulation Model (TIGCM) of the National Center for Atmospheric Research (NCAR) [3,11,12]. The model simulations were carried out to predict the density perturbations observed by S85-1 satellite near 200 km. They found high density cells in the thermosphere near local noon around 70° and low density cells equatorward around 60° geographic for low geomagnetic conditions. The Cells moved to lower latitudes for high geomagnetic activities and the number of Cells gradually increased to 3 (a high Cell near midnight), then to 4 (a low density Cell near dusk) and the winds are strengthened. Cells of enhanced temperature and pressure were also predicted. These high and low density Cells are the result of the velocity structure as a result of the combination of the joule heating, ion-drag, Coriolis and pressure-gradient forces. Below about 170 km, there are only 2 Cells. Above about 170 km, the neutral velocity is cyclonic (anticlockwise) in a low density Cell. In the high density Cell, it is anticyclonic (clockwise). This will imply a westward wind in the high density Cell and an eastward wind in the low density Cell near the Wall separating the two. The Cell structures in the north and southern hemispheres are similar except that they are located at slightly lower latitudes in the southern hemisphere and slightly offset from the positions in the northern hemisphere, for similar solar and geomagnetic activity. This difference was attributed to the fact that the Cell structures are fixed to the magnetic poles in their hemispheres which are symmetrically offset. In the WINDII observations, the Wall in the southern hemisphere was between 100 -200° longitude for relatively quiet geomagnetic conditions, and between 200 -300° longitudinal ranges in the northern hemisphere for active geomagnetic conditions. The density structure of the Cells extends from about 140 km to about 300 km and possibly below 400 km [6] for all solar and geomagnetic activities. These Cells have been observed by many satellites: ESRO-4 and DE2 [1], SETA-1 [8], S85-1 [3]. The diameters of the Cells may range upward from about 1000 km, depending on the solar and geomagnetic activity. Figure 1(a, b, c) depicts the Cells' contours with local time in neutral mass density at 200 km arising from the wind structure for the low, medium and high geomagnetic activity respectively in the northern hemisphere [3]. Geomagnetic quiet, moderate and active conditions were represented with Cross polar cap potentials of 30, 60, and 90 kv respectively. If we take the longest contour line in the polar plots depicted by arrows, separating the high density and low density Cells to be the possible representative of the Wall position, then it can be seen that its position changes from roughly East-West aligned to more latitudinally North-South aligned as the geomagnetic activity represented by the cross polar cap potential difference increases (Figure 1b, c). In the lower thermosphere at 140 km (Figure 1d), the Wall seems to be aligned approximately East-West for all solar and geomagnetic conditions (Figure 1 in [11]). 
A reversal of the wind direction close to the edge of the auroral zone, indicating a cellular type of thermospheric circulation, was indicated as early as 1956 by DeVries from Logacs, the low-altitude Air Force satellite experiment [9]. Up to 40% density variation can occur from low- to high-density Cells within 1-2 hours, depending on the solar and geomagnetic activity [3]. Well above the Cells' altitudes, such as seen by CHAMP (at 400 km), enhanced electron density and temperatures and enhanced ion and neutral velocities are predicted due to soft electron precipitation (~500 eV). This particle precipitation in the cusps inputs energy to the electrons, which move upwards creating ambipolar electric fields; these in turn pull ions upwards [15], with speeds exceeding even 500 m/s, thus dragging neutrals with them, all within 10-15 minutes [10]. The electron temperature is enhanced within about an hour. The increased ion upwelling increases the neutral density to maintain charge neutrality [7]. Direct energy input to the neutrals from Joule heating and ion drag alone gives only small enhancements in the neutral densities [2]. The dependence of the thermospheric wind and the field-aligned currents on the By and Bz components of the interplanetary magnetic field at CHAMP's altitude has been studied recently by Huang et al. [4] and Kersvalishvili and Luhr [5]. In light of this existing morphology of the Cells at high latitudes, a simplified linearized theoretical treatment is used in the next section to see whether the observed rise and descent of emissions in the thermosphere seen by WINDII-UARS can be explained.

Simple Linearized Theoretical Treatment
Here we examine the simple dynamical behavior of idealized 'narrow' Cells in high-latitude regions of the thermosphere, transporting neutral gas downwards in the convergence region and upwards in the divergence region, in a linearized fashion. We ignore for the time being the Coriolis and ion-drag forces during the transport process, which are especially important for large spatial sizes of these Cells and for which one has to deal with the appropriate momentum, continuity, and energy equations; this will be the subject matter of a future study. The ambient atmosphere in hydrostatic equilibrium can be represented, for unit mass, by the equation:

0 = -g - (1/ρ_a) dp/dz (1)

where ρ_a is the ambient density, g the gravitational acceleration, and p the pressure at a height z. If a Cell is not in hydrostatic equilibrium with the ambient atmosphere, it is 'unstable' and its vertical acceleration can be represented by the equation:

d²z/dt² = -g - (1/ρ_c) dp/dz (2)

assuming that the pressure acting on the Cell is always adjusted to the ambient one at the same level. Here ρ_c is the Cell's density. Let the Cell that has formed have density ρ_c and temperature T_c, in pressure equilibrium with the ambient thermosphere, whose corresponding density and temperature at height z are ρ_a and T_a. Now, eliminating dp/dz from (1) and (2) above, or simply by applying the Archimedean principle and Newton's second law, the Cell's equation of motion can be written as:

d²z/dt² = g (ρ_a - ρ_c) / ρ_c (3)

In the case of the ideal gas law, equation (3) can also be written in terms of temperature as:

d²z/dt² = g (T_c - T_a) / T_a (4)

As we can see from equations (3) and (4), in the case of convergence the Cell in the high-density structure is such that ρ_c > ρ_a (T_a > T_c), so it will accelerate downwards, transporting neutral gas to a lower height where it will encounter a density or temperature equal to that of the ambient atmosphere and may disappear.
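The transport-time estimate quoted in the abstract (roughly 100 km in under 8 minutes) follows from integrating equation (4) twice with constant T_c and T_a, which is done as equation (5) in the next paragraph. A minimal numerical check is sketched below; the ambient temperature of 800 K and the 10% temperature contrast are the values used in the text, while the value of g near 200 km altitude is an assumption introduced here.

```python
import math

# Assumed inputs: T_a and the 10% contrast follow the text; g at ~200 km is an estimate.
g = 9.2          # m/s^2, approximate gravitational acceleration in the lower thermosphere
T_a = 800.0      # K, ambient temperature
dT = 0.10 * T_a  # K, |T_c - T_a| for a cell 10% cooler (or warmer) than the ambient gas
dz = 100e3       # m, vertical transport distance

# Equation (5), dz = [g * |T_c - T_a| / T_a] * t^2 / 2, solved for t:
t = math.sqrt(2.0 * dz * T_a / (g * dT))
print(f"transport time ~ {t:.0f} s ~ {t / 60:.1f} min")  # ~466 s, i.e. just under 8 minutes
```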
Such downward transport will increase the airglow emission at lower heights. Similarly, in the case of divergence, the Cell is in a low-density structure, ρ_c < ρ_a (T_c > T_a), so it will move upwards, transporting neutral gas and thus emissions, and when it encounters an equal ambient density or temperature it may, again, disappear. If the Cell's velocity dz/dt is zero at the Cell height, say z_0, and assuming constant T_c and T_a with height, then integration of equation (4) with time t yields:

∆z = [g (T_c - T_a) / T_a] t²/2 (5)

Equation (5) shows that the transport time depends on the ratio (T_c / T_a) and on ∆z, the height travelled by the Cell to transport the neutrals. Adopting T_a ~ 800 K and assuming T_c ~ 10% less than T_a in the thermosphere during low solar and geomagnetic activity, a part of the high-density Cell can transport neutral gas, and thus emissions, by about 100 km downwards in about 8 min in an idealized case. Similarly, assuming T_c ~ 10% greater than T_a, a low-density Cell can transport neutral gas and thus emissions by about 100 km upwards in about the same time. For the WINDII observations in 1996, with the same low solar activity but highly disturbed geomagnetic activity (Ap ~ 57, Kp ~ 7), and neglecting the latitudinal variations of the geomagnetic-activity effects at high latitudes, the change in the thermospheric temperature can be represented as [9]:

∆T_a = 21.4 Kp + 0.03 exp(Kp) (6)

This also yields about 7 min for the Cell to transport the neutrals under otherwise identical conditions. The Cells responsible for the transport of the neutrals can belong to Stable or Unstable types. From equation (5), if (T_c / T_a) stays > 1 or < 1 along its path, the Cell will keep moving up or down in the same direction, respectively, before it disappears; this is called the Unstable type. If during its path (T_c / T_a) changes from > 1 to < 1, or vice versa, the Cell will reverse its path back to where it started and is of the Stable type. After these Cells disappear during transport, other Cells will keep the process going at the latitudes and times of the high-density or low-density Cells, as these are permanent features. Different Cells start at different heights and end at different heights. If the Cell's initial velocity is non-zero, equations (5) and (3) take the forms:

∆z = v_0 t + g [(T_c - T_a) / T_a] t²/2 (7)

∆z = v_0 t + g [(ρ_a - ρ_c) / ρ_c] t²/2 (8)

In other words, ∆z will also depend on the initial velocity of the Cell, v_0.

Comparison of the WINDII Observations with the Models
The Wall observed around local noon in the winds during October 1995 in the northern hemisphere (magnetic pole at ~80°N, -70°W geographic), for low solar activity and moderately disturbed geomagnetic periods, is around 65° geographic latitude and about 250°E longitude on average [13]. In Figure 1b, for the moderately disturbed period, the Wall in the atmospheric density is shown by an arrow at around 9 LT and at 5 LT local time. Local noon at 250°E puts the observed Wall in the zonal wind at about 320°E, or around 9 LT in the figure, close to the low-density Cell area. That means that these low-density Cells are capable of transporting neutrals, and thus emissions, to higher altitudes, probably close to the observed enhancement in zonal wind velocity in the 200-250 km altitude range.
As far as the southern hemisphere in March, 1996 is concerned (magnetic pole -74.5°S, 127° geographic), the Wall observed for low solar activity and mild geomagnetic activity is around 65°S and 150°E [13] Though the model polar diagram for low solar and geomagnetic activity in the southern hemisphere is not available at the time, we can roughly compare these observations with the ones in figure (1b) for mild geomagnetic activity in the northern hemisphere as the structures of the Cells are similar in both the hemispheres. The Wall at 150°E close to the longitude of the magnetic pole, will put it around 13 LT in the high density Cell area. At this time the observed emissions are enhanced to much lower altitudes by high density Cells reaching to about 100 km, lower by up to 100 km than in the enhancements of the zonal and meridional winds. The upper atmospheric observation in summer solstice, January, 1995 in Red line emission [O( 1 D) at 630 nm] from WINDII in the southern hemisphere have also been recently reported by Shepherd et al [14]. They observed an increase in the emissions at around 200 km in the 100 -150°E longitudinal zone at high latitudes in the southern hemisphere for the period of low solar activity (F10.7~ 72 x 10 -11 w/m2/c/s) and mild geomagnetic activity (7<Ap<30). We can see from the figure (1b) that this area lies again in the domain of the high density Cells around 13 LT assuming the Cell structures are similar in both the hemispheres. The Cells in the high density region will transport neutrals to lower altitudes, this leads to a deficit of neutrals in upper thermosphere leading to an increase in Red line intensity there since it depends inversely on the densities of molecular oxygen and nitrogen. Results and Discussions The O ( 1 S, 1 D) observations by WINDII on the UARS satellite have shown a convergence region in the westward zonal winds and a divergence region down the Wall in the east side zonal winds in the high latitudinal region during daytime implying vertical flows of the neutral gas. As a result depending on the overlap of the Wall on the high or low density sides, the emissions were enhanced and moved down on the west side, and moved up on the east side for the existing low solar and mild geomagnetic conditions. It is shown that in the thermospheric region of 130 -300 km, the Cells in high / low density regions almost overlap the convergence / divergence structure in the density adjacent to the Wall, and are able to move neutrals from higher to lower and lower to higher altitudes respectively. These Cells were deduced by the NCAR-TIGCM models and other satellite observations below 300 km. These cells seem to occur for all solar and geomagnetic conditions in both hemispheres, with additional low density dusk and high density night Cells for higher solar and geomagnetic activities. The Cells are not observed above about 300 km. The WINDII observations have been compared with these models and discussed. Following a simplified treatment more suitable for 'narrow' Cells by neglecting the non-linear effects like ion-drags, Coriolis forces which are important for large spatial thermospheric Cells, the vertical transport due to these or part of these Cells in the convergence region enhancing the emission even to the lower thermosphere, and in the divergence region raising the emission to higher altitudes is shown. No such Cells transporting neutrals will exist once their temperature or density equalizes with that of the ambient atmosphere. 
After that, other neutral Cells available at other heights will transport the neutrals at the time and latitude of these Cells. The distance travelled by these Cells in transporting the neutral gas vertically is larger, the larger the temperature difference between the Cell and the ambient displaced atmosphere. The ideal travel time of the Cells, from equation (5), to transfer neutrals from the upper to the lower thermosphere or vice versa is around 8 minutes for heights of, say, about 100 km during solar and geomagnetic quiet periods. Here the implications of the thermospheric Cells in the density and winds have mainly been discussed for the vertical movement of emissions under low-to-mild solar activity and geomagnetic conditions. As mentioned before, in the TIGCM models the Wall separating the low- and high-density Cells tends to become more north-south aligned, rather than east-west, as the solar and geomagnetic activity increases. In addition, a high-density nighttime Cell and a low-density dusk Cell start appearing. It will therefore be worthwhile in future studies to look into WINDII and ISIS data on these emissions and their response under these conditions.

Conclusions
It has been shown here, on the basis of a simplified treatment of the vertical motions of narrow high- and low-density neutral Cells at high latitudes in the Earth's thermosphere, that these Cells contribute to the vertical transport of the emissions from atomic oxygen as observed by the WINDII instrument on the UARS satellite. The emissions are transported downwards in the region of high-density Cells and upwards in the region of low-density Cells, in the daytime and morning sectors respectively. Once the densities of these Cells equalize with that of the ambient atmosphere, the transport ceases and other Cells may partake in this transport process. An idealized time is about 8 min for a 100 km transport under quiet solar and geomagnetic conditions. A further analysis of the data is intended to be carried out to look for such emission transport in the region of high-density Cells during nighttime and of low-density Cells during dusk, respectively.
4,531.8
2020-11-19T00:00:00.000
[ "Environmental Science", "Physics" ]
A Preconditioned Conjugate Gradient Based Algorithm for Coupling Geomechanical-Reservoir Simulations In this article, we introduce a new coupling algorithm between the reservoir multiphase Darcy flow simulator and the geomechanical code accounting for the compaction of the porous medium. The coupling is defined on time periods in such a way that the reservoir unknowns are computed for time steps which are small enough subdivisions of the time period whereas the mechanical problem is solved at the end of the period. Our new approach is based on a nonlinear preconditioned conjugate gradient method which is applied to the mechanical displacement variable. This algorithm is compared, on a one dimensional example, to the staggered coupling algorithm with a porous medium compressibility as relaxation parameter. The main conclusion is that the preconditioned conjugate gradient method is much more robust and converge much faster than the staggered algorithm, while the additional cost per iteration should remain in practical situations very small. INTRODUCTION The production of hydrocarbon reservoirs involves pressure and temperature variations over long periods of time.These variations trigger off a modification of the reservoir geomechanical behavior during the production.A wellknown example of the stress equilibrium changes in and around the reservoir is the subsidence phenomenon that has been observed on different fields.The most famous case is the Ekofisk field in the North Sea, Norway, where a sea floor subsidence rate of 42 cm/year has been reached at the end of 1993 (Sylte et al., 1999).The cases of the Valhall field in Norway (Pattillo et al., 1998) and the Bachaquero (Merle et al., 1976) and Tia Juana (Mc Lendon and Sawyer, 1991) fields in Venezuela also illustrate the importance of the subsidence phenomenon in oil reservoir. The subsidence phenomenon strongly depends on the reservoir compaction.In conventional reservoir simulators, the reservoir compaction is directly deduced from the reservoir pore pressure and temperature changes using porous medium compressibility with pressure and porous medium dilatability with temperature coefficients.However, several authors (Tortike and Farouq, 1993;Settari and Mourits, 1994;Ruisten et al., 1996;Gutierrez and Lewis, 1998) demonstrate that porous medium compressibility and dilatability parameters (even pressure-dependent) are not sufficient to reproduce pore volume changes induced by pressure and temperature changes.Therefore an accurate modeling of stress dependent reservoir compaction and other related geomechanical effects require the coupling of a geomechanical model accounting for the rock deformation with a reservoir model describing the fluid flow in the porous medium.The porosity change due to strain variation and the total stress change associated with the pressure variation are the reservoir-geomechanical couplings considered in this paper. 
There are two different approaches for solving the geomechanical-reservoir problem.The first one consists in simultaneously solving all the equations in one simulator, which is referred to as the fully coupled algorithm (Gutierrez and Lewis, 1998;Chin et al., 1998), whereas the second one uses two different simulators in order to solve the two sets of equations (mechanical equilibrium and flow problem); each simulator solves its own system independently, and information is passed in both directions between the simulators.This second technique, which is referred to as the simulators coupling method, can be performed with two conventional reservoir and geomechanical simulators (Tortike and Farouq, 1993;Settari and Mourits, 1998;Settari and Walters, 1999). The objective of this paper is to define and compare two algorithms coupling the reservoir simulations performed over time periods to the mechanical computations at the end of each period.The reservoir simulation over each period is computed for time steps which are small enough subdivisions of the time period. The first coupling algorithm defined in Section 3 is the so called staggered algorithm.It is a fixed point method which performs a sequence of reservoir simulations with the porosity field fixed by the previous iteration, followed by an update of the mechanical displacement and porosity field.In the single phase linear case, this algorithm amounts to a Gauss-Seidel type iterative method. In Section 4, a new coupling algorithm is introduced which is based on a preconditioned conjugate gradient approach to solve the coupled system.Each iteration of the conjugate gradient method involves one mechanical computation which plays the role of the preconditioning step, and two sets of reservoir computations on the whole time period to compute the conjugate gradient residual. In Section 5, these algorithms are compared in terms of robustness and convergence rate on a one dimensional example. For the sake of simplicity, throughout this article, the reservoir is assumed to be an isothermal dead-oil model whereas the geomechanics is considered as a linear poroelastic model with an uncompressible rock matrix.Both models are introduced in Section 1, and their discretizations, by a finite volume scheme for the reservoir equations and by a finite element method for the mechanical equilibrium, are briefly outlined in Section 2. COUPLING GEOMECHANICAL AND RESERVOIR SIMULATIONS The reservoir is modeled as a porous nonlinear elastic medium with an instantaneous deformation response coupled with an unstationary compressible two phase Darcy flow.Assuming small deformations of the medium, both the fluid flow and the geomechanical equilibrium equations are written in Euler coordinates, and the computational geometrical domain is a fixed domain Ω.The coupling between the geomechanics and the Darcy flow is given by a simplified form of Biot's law connecting the variation of the porosity of the medium to its mechanical deformation, which amounts to assume that the solid matrix is incompressible.Conversely, we suppose that the medium is submitted to the fluid pore pressure.For the sake of simplicity we do not take into account the gravity forces.In addition, the reservoir is supposed to have zero permeability outside the subdomain ω ⊂ Ω.In other words, the subdomain ω represents the flow computational domain of the reservoir code, while the whole domain Ω is the geometrical domain of the geomechanical code (Fig. 
1).Note that Ω is much larger than ω since the mechanical effects can be observed on a much larger scale than the scale of the reservoir domain; in particular the boundary conditions for the mechanical unknowns must be set sufficiently far away from ω.The coupled system consists in the following set of equations governing the time evolution of the fluid pore pressure p and the displacement field u of the porous medium. The geomechanical equations are given by: (1) where σ and ε are the stress and the strain tensors for the porous medium with: λ and µ and are the Lamé constants in drained conditions, and the function b (x) is defined by: The reservoir equations state the conservation of each phase with fluxes defined by the two phase Darcy's law: (2) where we have neglected the capillary effects so that the water and oil pressures are equal. The simplified form of Biot's law corresponding to the case of an incompressible rock matrix is given by: (3) In Equations ( 2) and (3) the following notations have been used: S denotes the saturation of the water phase, and 1 -S is the saturation of the oil phase; p is the pressure; is the porosity of the porous medium; κ is the intrinsic permeability; kr i,j = kr i,j (S) is the relative permeability of the phase i in presence of the phase j ≠ i; µ j is the dynamical viscosity of the phase i; ρ i = ρ i (p) is the density of the phase i. THE DISCRETE PROBLEM In order to approximate the fluxes and ensure the mass conservation, the reservoir set of equations is discretized using a finite volume method.As in most mechanical codes, the geomechanical problem is discretized using a finite element method.The displacement u is computed at times T 0 = 0, T 1 , ..., T k , ..., T p = T f .Also we suppose that T k+1 -T k = T, k = 0, 1, ..., p -1 and refer to T as the period.The unknown functions p, S, and φ are computed at times t k 0 , ..., t k n , ..., t k q = t 0 k+1 , where In fact, ∆t and T vary in the course of the computations; for the sake of simplicity, we shall consider them as fixed in Sections 2, 3, 4. Furthermore, for the sake of conciseness, whenever it is not ambiguous, we shall use the notations p n K , S n K and φ n K rather than p K n,k , S K n,k and φ K n,k for the approximations of p, S, and φ at time t k n on the control volume K. Similarly let ρ n i, K , i = o or w, denote the cell values of the oil and water phase densities at time t k n . The reservoir and mechanical geometrical domains. Finite Volume Scheme for the Flow Problem Let τ h be the set of control volumes, m (K) the measure of the cell K, and N (K) the set of its neighboring cells.The transmissibility between two neighboring cells K and L is defined by: with m (e K, L ) denoting the measure of the common face e K, L between the cells K and L, XK being a point associated to each cell K, and d (XK, XL) denoting the distance between the points XK and XL.The reservoir conservation Equations ( 2) are integrated on each space time element with a semi-implicit integration in time to obtain the following discrete system: with the "phase by phase upwinding" defined by the relations: The porosities , depend on the mechanical displacement through the Biot's law (3) and are defined in Subsection 2.3 below. 
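For concreteness, the transmissibility between two neighbouring cells combines exactly the quantities named above (the face measure, the cell points, and the intrinsic permeability). The helper below uses the usual two-point flux approximation form; it is an illustration consistent with those quantities, not necessarily the exact expression used in the paper, and the numerical values are invented.

```python
def transmissibility(face_measure, kappa, d_KL):
    """Two-point flux approximation transmissibility between cells K and L.

    face_measure : measure m(e_K,L) of the shared face between K and L
    kappa        : intrinsic permeability (assumed constant across the face here)
    d_KL         : distance d(X_K, X_L) between the cell points X_K and X_L
    """
    return face_measure * kappa / d_KL

# Example: a 1 m^2 face, ~100 mD permeability (~1e-13 m^2), cell points 10 m apart.
T_KL = transmissibility(1.0, 1e-13, 10.0)  # -> 1e-14; the Darcy flux also carries mobility kr/mu
```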
Finite Element Computation of the Geomechanical Equilibrium
Let us define the approximate pressure function by a piecewise constant interpolation. The geomechanical equation is discretized by a finite element Galerkin approximation of Equation (1), with a continuous piecewise quadratic finite element space on a given mesh of the domain Ω. Let u_h,T(., T_k) denote this finite element approximation of u(., T_k); it is then defined by the Galerkin variational approximation (6), which holds for all finite element test functions v_h, where 〈f_h, .〉 is a right-hand side accounting for the load at the boundary of the domain Ω.

Finite Volume Discretization of Biot's Law
The relation between the discrete displacement and the discrete porosity φ_K^{n+1} is given by Equations (7) and (8). For c_r = 0, ∆φ_K^k denotes the variation of porosity during the period [T_k, T_{k+1}], which is distributed linearly over the subdivision t_n^k, n = 1, ..., q. Furthermore, the definitions (7) and (8) of φ_K^{n+1} and ∆φ_K^k include the usual porous medium compressibility correction c_r of reservoir simulations. For q = 1, this correction cancels out between Equations (7) and (8) and only plays the role of a relaxation parameter which stabilizes and speeds up the convergence of iterative coupling schemes (see Section 3). For q > 1, it is in addition expected to provide a better interpolation of the porosity within the period.

STAGGERED COUPLING ALGORITHM
The staggered algorithm performs iteratively, until convergence, a sequence of one reservoir simulation over the period T = T_{k+1} - T_k on the subdivision t_1^k, ..., t_q^k, with porosity variations ∆φ_K^k, K ∈ τ_h, given by the previous iteration, followed by an update of the geomechanical displacement and of the porosity variations ∆φ_K^k, K ∈ τ_h, using the computed pressure at time t_q^k = T_{k+1}. Figure 2 sketches this staggered coupling iterative algorithm between the reservoir simulation over one period and the mechanical computation. Let us denote the vector of the pressure, saturation, and porosity unknowns at time t_n^k, the vector u^k of the displacement node values at time T_k, and the vector of the porosity-variation unknowns; the subscript l is used to denote the staggered algorithm iterations. The algorithm for one period reads:

Initialization of the period.
Do l = 1, ... until convergence
  Do n = 1, ..., q
    Reservoir simulation over the period: p_l^{n,k} and S_l^{n,k} are computed from (4) and (5) using ∆φ_{l-1}^k
  End Do
  Geomechanical simulation: u_l^{k+1} is computed from (6) using p_l^{q,k}; ∆φ_l^k is computed from (8)
End Do
End of the period.

The convergence criterion is based on the maximum relative variation of the pressure between two successive iterations. In the case of single-phase linear flow and linear elasticity, this algorithm can be seen as a block Gauss-Seidel iterative method for solving the coupled pressure-displacement system. In that case a convergence analysis can be carried out, and the convergence rate is shown to depend strongly on the compressibility ratio between the fluid and the rock. Furthermore, the porous medium compressibility c_r can be interpreted as a relaxation parameter which is required to ensure the convergence of the algorithm when the compressibility of the porous medium is greater than the compressibility of the fluid (Bévillon and Masson, 2000; Bévillon, 2000).
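Read as pseudocode, the staggered loop above maps naturally onto a small driver routine. The sketch below is only illustrative: the reservoir sub-step, the mechanical solve, and the Biot porosity update are abstract callables standing in for the finite-volume and finite-element solvers of the paper, and the state layout is invented.

```python
import numpy as np

def staggered_period(state_k, u_k, dphi_k, reservoir_step, mech_solve, biot_update,
                     q=4, tol=1e-6, max_iter=50):
    """One coupling period [T_k, T_{k+1}] of the staggered algorithm (illustrative only).

    reservoir_step(state, dphi) -> reservoir state after one sub-time-step (holds 'pressure', 'saturation')
    mech_solve(p_end)           -> displacement vector at T_{k+1} for the end-of-period pressure
    biot_update(u_new, u_k)     -> porosity variation over the period (simplified Biot's law)
    """
    dphi = dphi_k      # porosity variation frozen during each reservoir pass
    p_prev = None
    u_new = u_k
    for l in range(max_iter):
        # Reservoir simulation over the whole period with the current porosity variation
        state = state_k
        for n in range(q):
            state = reservoir_step(state, dphi)
        p_end = np.asarray(state["pressure"])

        # Geomechanical solve at the end of the period, then porosity update via Biot's law
        u_new = mech_solve(p_end)
        dphi = biot_update(u_new, u_k)

        # Convergence: maximum relative pressure variation between two successive iterations
        if p_prev is not None and np.max(np.abs(p_end - p_prev)) <= tol * np.max(np.abs(p_end)):
            break
        p_prev = p_end
    return state, u_new, dphi
```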
It is well known that Gauss-Seidel type algorithms converge slowly and that their convergence rate can be considerably improved by using acceleration techniques like conjugate gradient methods.The definition of such an algorithm that can apply to nonlinear fluid flow models as well as to the subdivision of the geomechanical periods (q > 1) is the purpose of the next section. PRECONDITIONED CONJUGATE GRADIENT METHOD We consider throughout this section that the porous medium compressibility parameter c r is set to zero.The geomechanical Equation ( 6) are rewritten in matrix form as: (9) where G h denotes the stiffness matrix, B h the discrete gradient matrix, and F h the vector of the right hand side node values.With these notations, Equations ( 7) and ( 8) can be rewritten as follows: (10) where t B h denotes the transpose matrix of B h .From Equations (4), ( 5), (10), it is clear that the reservoir simulation over the period [T k , T k+1 ] relates the pressure p q,k to t B h u k+1 and to other quantities depending only on the previous period (like p q,k-1 , S q,k-1 , φ q,k-1 , and u k ).Let us denote by: (11) this nonlinear operator, where the superscript k of R k h stands for all variables computed at the previous period Conjugate gradient algorithms apply to symmetric positive definite linear equations.They can be extended to nonlinear equations with symmetric or slightly non symmetric positive definite Jacobian matrices.In order to be as close as possible to these properties, we have chosen to apply the conjugate gradient algorithm to the Schur complement system defined on the displacement variable obtained by computation of the pressure from Equation ( 11) and substitution in equation ( 9): (12) The nonlinear system ( 12) is clearly symmetric positive definite for single phase flow with q = 1 and is expected to Reservoir computations at each time Mechanical computations during the period y T = period ∆t = time step remain slightly non symmetric and positive in more general practical simulations.A natural symmetric positive definite preconditioner for the system ( 12) is the mechanical inverse operator: The residual gradient of ( 12) along the descent direction will be computed by finite difference of step denoted by ε.All these choices lead to the following preconditioned conjugate gradient algorithm, where l denotes the conjugate gradient iterative subscript, and (X, Y), X, Y ∈ IR N , the canonical scalar product of IR N . Initialization: Compared with the previous staggered algorithm, each iteration involves one additional reservoir simulation in order to compute the gradient along the descent direction, and two mechanical residual computations.The evaluation of the mechanical residual is always negligible compared to the inversion of the mechanical operator.Furthermore, in practical situations for which the geomechanical model is a nonlinear plasticity model, the cost of a reservoir simulation over one period is much smaller than the cost of one mechanical computation.It results that the additional cost of each conjugate gradient iteration will be very small.Note also that this algorithm readily extends to nonlinear mechanical models with no significant additional cost. 
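Under the same caveats, the preconditioned conjugate gradient coupling described above can be sketched as a short routine. Here `residual(u)` is assumed to evaluate F_h − G_h u − B_h R(ᵗB_h u), so each call entails one reservoir simulation over the period, and `mech_solve` applies G_h⁻¹ as the preconditioner; the finite-difference step follows the text, while the Fletcher-Reeves-type update for the descent direction is an illustrative choice rather than necessarily the paper's.

```python
import numpy as np

def pcg_coupling(u0, residual, mech_solve, eps=1e-6, tol=1e-8, max_iter=30):
    """Nonlinear PCG on the displacement unknown of the Schur complement system (illustrative).

    residual(u)   -> F_h - G_h u - B_h R(tB_h u); one reservoir simulation per call
    mech_solve(r) -> G_h^{-1} r, i.e. one mechanical solve, used as the preconditioner
    """
    u = u0.copy()
    r = residual(u)                # initial residual (one reservoir simulation)
    z = mech_solve(r)              # preconditioning step (one mechanical solve)
    d = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        # Directional derivative of the residual by finite differences (second reservoir simulation)
        y = (residual(u + eps * d) - r) / eps          # approximates -(Jacobian) @ d
        alpha = -rz / (y @ d)                           # may fail if the Jacobian loses positivity
        u += alpha * d
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        z = mech_solve(r)
        rz_new = r @ z
        d = z + (rz_new / rz) * d                       # Fletcher-Reeves-type direction update
        rz = rz_new
    return u
```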
The conjugate gradient algorithm exhibits a lack of robustness to non symmetry and non positivity of the system.In particular, although it has not been observed in the following numerical experiments, the algorithm could blow up if (y l , d l ) = 0 which is not excluded when the Jacobian of the system is not positive definite.More robust related algorithms will be investigated in a future work to improve the robustness of our method especially for highly nonlinear operators. NUMERICAL EXAMPLE In this section, the staggered algorithm presented in Section 3 is compared with the conjugate gradient algorithm of Section 4. The comparison is performed for a onedimensional test case dealing with an isotropic porous cylinder of radius and length (Fig. 3). Water is injected at the bottom (incoming phase) and oil is extracted at the top (outcoming phase).There is no lateral movement in the x-and y-directions and we assume that the strains follow the z-direction (uniaxial strains).The total deformation u is fixed to zero, and the pressure p is imposed at the outcoming phase.At the incoming phase, the water flux Q w is prescribed to a given value and the oil flux Q o is set to zero. The relative permeabilities are given by the following quadratic laws: The water and oil phases are assumed to be compressible, and we set: w (1 + c w p) where c w and c o denote the water and oil compressibility, and ρ 0 w and ρ 0 o the reference water and oil densities.The mechanical and fluid flow physical parameters used for the simulations are given in Table 1. The meeting times between the reservoir and the mechanical simulators (the T k 's which are not equally spaced here) are given in minutes by: T 1 = 0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 16, 18, 20, 25, 30, 45, 60, T 20 = 120 The saturation equation is a Buckley-Leverett type problem where capillary effects have been neglected.It satisfies a first order nonlinear conservation law whose exact solution may develop chocks.The saturation plots displayed Figure 4 exhibit a steep front which approximates the discontinuity of the exact solution.The extraction of oil, which is five times more viscous than water, requires a very high pressure gradient decreasing in time as it is illustrated on Figure 4. The reference solution is obtained with a fully coupled code and a computation of the geomechanics at each reservoir time step (q = 1). For both the staggered and the conjugate gradient algorithms, the errors plotted Figures 5 and 6 between the computed solutions and the reference solutions are small, showing that the choice of the successive periods provides enough accuracy. Note that the main differences on the saturation profiles are located at the discontinuities of the solution of the continuous problem, which could be expected.The order of magnitude of the saturation error for the staggered algorithm is lower than that of the conjugate gradient algorithm (Fig. 5).This is explained by the use of the exact porous medium compressibility for the staggered algorithm, which helps to decrease the effects of the pressure increase as water is injected.On the contrary, the magnitude of the pressure error for the conjugate gradient method is lower than that of the staggered algorithm (Fig. 
6).This is due to a poor convergence of the Gauss-Seidel like method for such slightly compressible fluids.This lack of robustness appears more clearly in Tables 2 and 3, where j denotes a convergence criterion of 10 -j on the maximum relative variation of pressure between two successive iterations.Note in particular on Table 2, the nonconvergence of the staggered algorithm at the 18th period over a total of 20 periods and for j = 3. TABLE 2 Total number of mechanical computations and reservoir period simulations for the staggered algorithm over the 20 periods for a given accuracy 10 -j on the maximum relative variation of pressure between two successive iterations Total number of mechanical computations and reservoir period simulations for the conjugate gradient algorithm over the 20 periods for a given accuracy 10 -j on the maximum relative variation of pressure between two successive iterations The pressure error between the previous reference solution and, on the left, the staggered algorithm solution, and, on the right, the conjugate gradient algorithm solution. CONCLUSION In this paper we have introduced a new coupling algorithm between the geomechanical and the reservoir simulators.The coupling is defined on time periods in such a way that the reservoir unknowns are computed for time steps which are small enough subdivisions of the time period whereas the mechanical problem is solved at the end of the period. Our new approach is based on a nonlinear preconditioned conjugate gradient method which is applied to the mechanical displacement variable.For each period and each iteration of the conjugate gradient algorithm, two reservoir simulations over the period are performed to compute the residual of the conjugate gradient algorithm and the mechanical computation plays the role of the preconditioning step. This algorithm is compared to the staggered coupling algorithm with a porous medium compressibility as relaxation parameter.The main conclusion is that the preconditioned conjugate gradient method is much more robust and converge much faster than the staggered algorithm, while the additional cost per iteration should remain in practical situations very small. Figure 3 Figure 3Cylinder with no lateral movement.
5,040
2002-09-01T00:00:00.000
[ "Geology" ]
Probing CO2 Reduction Pathways for Copper Catalysis Using an Ionic Liquid as a Chemical Trapping Agent

Abstract
The key to fully leveraging the potential of the electrochemical CO2 reduction reaction (CO2RR) to achieve a sustainable solar-power-based economy is the development of high-performance electrocatalysts. The development process relies heavily on trial-and-error methods due to poor mechanistic understanding of the reaction. Demonstrated here is that ionic liquids (ILs) can be employed as a chemical trapping agent to probe CO2RR mechanistic pathways. This method is implemented by introducing a small amount of an IL ([BMIm][NTf2]) to a copper foam catalyst, on which a wide range of CO2RR products, including formate, CO, alcohols, and hydrocarbons, can be produced. The IL can selectively suppress the formation of ethylene, ethanol, and n-propanol while having little impact on the others. Thus, reaction networks leading to various products can be disentangled. The results shed new light on the mechanistic understanding of the CO2RR and provide guidelines for modulating the CO2RR properties. Chemical trapping using an IL adds to the toolbox for deducing the mechanistic understanding of electrocatalysis and could be applied to other reactions as well.

Introduction
The electrochemical CO2 reduction reaction (CO2RR) provides a promising solution to offset the increased atmospheric CO2 concentration, and also represents an excellent option for storing intermittent renewable electricity (e.g., solar and wind energy) by producing value-added chemicals. [1] However, poor energy conversion efficiency and a broad product spectrum are major barriers to achieving economic viability of the CO2RR. Intensive effort has been spent searching for high-performance electrocatalysts. [2] Copper (Cu) is identified as the only metal that produces hydrocarbons and alcohols with appreciable Faradaic efficiency (FE), [3] owing to its moderate binding strength with key intermediate species. [4] Despite its unique catalytic properties, the mechanistic understanding of the reaction pathways, which provides the basis for steering the CO2RR toward desired products, remains controversial. Although the adsorbed *CO species is well accepted as a key intermediate leading to various C2+ products, it remains an open challenge to elucidate the mechanistic pathways from *CO to C2+ products on Cu. In particular, the formation mechanisms of ethylene and ethanol have long been the subject of debate in both experimental and theoretical studies. [5] Mechanistic understanding of the CO2RR is derived almost exclusively through in situ/operando spectroscopic techniques (e.g., IR, Raman). [6] Early in situ spectroscopic studies of Cu electrodes suggest that hydrogenation of *CO to *CH2 would be the precursor to ethylene and ethanol, [6c,7] while others suggest that the formation of these C2 species would mainly proceed through a *CO dimer (*C2O2−), which is subsequently protonated to *CO-COH. [8] These discrepancies may stem from the inherent limitations of spectroscopic techniques. The limitations include interference from the solvent or spectator species, [8b,9] limited temporal/spatial resolution due to the low coverage and short residence time of key intermediates, [6c,d] and ill-defined background signals that are sensitive to electrode pretreatment history and cell configuration, [6d,10] all of which may add to the uncertainty of the measurement and make interpretation of the resultant spectra a non-trivial task.
[6d] Complementary ways of analyzing the CO2RR mechanism are highly desirable. Chemical trapping is regarded as an effective way to study reaction mechanisms.I to riginated in organic chemistry and was widely applied in catalysis. [11] Thereaction mechanism is deduced using ac ompound (trapping agent) that reacts specifically with one or more reaction intermediate(s) to form as table product(s). Thet rapping agent stops/decelerates specific reactions,a nd reaction mechanisms can then be deduced by examining the products.Bell et al. demonstrated in their exemplary works that the production of hydrocarbons from CO hydrogenation involved adsorbed methylene species as akey intermediate,asshown by the suppressed formation of hydrocarbons in presence of methylene scavengers. [12] This chemical trapping method has not yet been applied to electrocatalysis,l argely due to the lack of suitable chemical trapping agents that can selectively interact with specific intermediates without being oxidized/reduced under electrochemical conditions.I nspired by previous works where ionic liquids (ILs) were employed as surface modifiers to modulate the catalytic properties of avariety of electrocatalysts,anILis used here as achemical trapping agent to analyze the CO2RR pathways in Cu catalysts.This idea is realized by analyzing the IL-induced perturbation in the product spectrum. Therationales for choosing ILs also include their coordination ability with CO2RR intermediates and good stability over aw ide potential window. [13] ILs have been used as either pure electrolyte or electrolyte additive to change the CO2RR properties in various metal catalysts (e.g.,Ag, Pb). [14] ILs are reported to lower the overpotential and explicitly favor the formation of CO,p resumably through coordinating with reduction intermediates (e.g., CO 2 À C)byeither stabilizing the intermediates or preventing their spatial approach. [13c, 15] In the current study,the IL is introduced by immobilizing as mall amount of 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide ([BMIm][NTf 2 ]) on aC u-Foam catalyst (see Figure S1 in the Supporting Information). This method follows the concept of "solid catalyst with ionic liquid layer (SCILL)", which was first invented in heterogeneous catalysis, [16] and was soon successfully transferred to electrocatalysis particularly in improving electrocatalysts for the oxygen reduction reaction (ORR). [17] Thehydrophobic nature of the IL and capillary force ensure the confinement of the IL within the catalysts even in aqueous electrolytes. [17d,e] We demonstrate that IL can act as ac hemical trapping agent in the CO2RR. Its presence significantly alters the product spectrum by selectively suppressing the formation of ethylene,ethanol, and n-propanol, without disturbing either FE or partial current density of the others.T hese findings demonstrate selective interactions between the IL and one or more reaction intermediate(s), while the altered product distribution provides au nique perspective to track the CO2RR pathways.T his work may represent as imple approach to gaining mechanistic insights into the CO2RR, and also paves anew way in modulating the CO2RR activity and selectivity. Results and Discussion Cu foams were chosen because of the unique catalytic property of Cu and the abundance of porous structure which is beneficial to IL immobilization. 
Cu-Foams were prepared using ah ydrogen evolution reaction (HER) assisted electrodeposition method, [18] with aC up late as the substrate and copper sulfate as the precursor ( Figure S2). [BMIm][NTf 2 ] was chosen because of its hydrophobic nature and ability to coordinate with CO 2 and/or its anion radical. [15,19] Figure 1a Figure S7). Both samples exhibit amajor XPS peak at abinding energy (BE) of 932.5 eV,which associates with Cu 0 /Cu + .The Cu LMM Auger spectra confirm that the surface Cu on both samples mainly exists as Cu + (i.e. Cu 2 O), [20] which is not surprising since the oxidation of Cu to Cu 2 Oo ccurs immediately upon air exposure. [21] Am inor shoulder peak at aB Eo f9 34.7 eV,w hich associates with Cu(OH) 2 , [20] can also be observed on pristine Cu-Foam, indicating that asmall portion of Cu 2 OinCu-Foam are prone to further oxidation to form Cu(OH) 2 .T his consequent oxidation process was also reported by Tannenbaum et al. when studying the initial oxidation behavior of Cu in air. [21] Intriguingly,t his shoulder peak is absent on Cu-Foam-IL, implying that the IL can help suppress surface oxidation, which is in line with our previous study on Pt-based catalysts. [17e-g] Notwithstanding this difference,c onsidering the well documented readiness of copper oxide reduction under CO2RR conditions, [10,22] the presence of asmall portion of Cu(OH) 2 species on initial Cu-Foam is not expected to play as ignificant role in altering the product distribution. The CO2RR performance on Cu is sensitive to surface facets of Cu. [23] To find out whether the IL can change the Cu surface by selectively blocking certain facets,w ep erformed Pb UPD stripping experiments on both samples ( Figure S8). The comparable integrated areas of Pb UPD stripping peaks verify that (selective) blocking of Cu facets by the IL can be ruled out. TheC O 2 electrolysis experiments were performed in ag as-tight electrochemical cell with anode and cathode separated by an anion exchange membrane ( Figure S9). Figure S10). Despite the fluctuation, the electrolysis current densities are more or less comparable at the beginning and end of the electrolysis on both samples. This result indicates that Cu foams are stable during the electrolysis regardless of IL modification, which is also evidenced by the intact dendritic structures of both Cu foams after the electrolysis ( Figure S11). Thes tability of the IL on Cu foams during the CO2RR was also probed by performing post-reaction analyses of Cu-Foam-IL using both XPS and diffuse reflectance infrared Fourier transform spectroscopy techniques.T he characteristic signals of IL can be clearly resolved using both techniques after the electrolysis (Figure S12), implying that the IL can be well-maintained within the Cu foams during electrolysis.The overall current densities are comparable between these two samples,d espite as light current increase in Cu-Foam-IL at potentials of À0.7 and À0.8 Vv ersus reversible hydrogen electrode (vs.R HE). These results verify that the presence of the IL has not induced any change in mass transport properties of reactant molecules (CO 2 )f rom bulk electrolyte to Cu-Foam surfaces, and also imply that the majority of the CO 2 molecules may approach the catalyst surface in afree form instead of an ILcoordinated form. Figure 2b compares the FEs of various products on both samples at À0.7 V. 
Av ariety of products, including CO,f ormate,e thylene glycol (EG), ethylene, ethane,e thanol, n-propanol, methane,a nd acetate can be detected, with CO,f ormate,a nd EG identified as the major products (in addition to hydrogen). Va rious CO2RR products can usually be observed in Cu foams,while the major product depends on their morphology,a ctive surface area, and foam thickness. [2b,18, 24] Intriguingly,h erein we observe that EG, which is usually identified as aminor product in Cu catalysts, is produced with impressively high FEs ( % 20 %) on both samples.These results showcase that Cu foams are aversatile platform in producing value-added CO2RR products. Thep otential dependent FEs of various products on pristine and IL-modified Cu-Foams are compared in Figure 3. TheH ER, am ajor competing reaction of the CO2RR, still dominates the product spectra on both catalysts.Asurge in H 2 production is observed at electrode potentials lower than À0.7 V, relating to the liberation of surface sites from adsorbed *CO. [25] Meanwhile,t he HER is promoted by IL modification. This may stem from the inherent acidity and superior proton transfer capability of the IL being used, which offers greater proton availability for the HER. [14a, 26] TheH 2 production rates on both samples converge at lower electrode potentials (< À0.85 V), indicating that at higher reaction rate, the HER is mainly limited by the diffusion of proton (or proton source) from bulk solution to the catalyst surface,and the influence of IL modification is not pronounced. Similar potential-dependent FEs for major CO2RR products,including formate,E G, CO,e thylene,a nd ethane,c an be observed on both samples despite some minor difference in FEs for EG and formate at around À0.7 V, due to the liberation of strongly adsorbed *CO intermediate from Cu surfaces. Different from other studies of the CO2RR on Cu catalysts, on which methane is am ajor product, in the current work, methane is produced with ar ather low FE (< 1%)o nb oth catalysts.S imilarly,B roekmann et al. also observed that C 1 pathway to methane was almost completely suppressed on Cu foams. [18] Themorphology or surface faceting of Cu catalysts plays ac rucial role in determining the product selectivity of CO2RR. [27] Fori nstance,C u(100) facets favor the formation of ethylene while Cu(111) facets facilitate the formation of methane. [27b] This structure sensitive behavior of the CO2RR on Cu catalysts originates from the differences in binding energy of *CO and/or energetic barrier for the CÀCcoupling or hydrogenation step between different Cu facets. [8b, 28] Herein, the low FEs of methane on both catalysts imply that the Cu foams after the initial reduction of surface Cu 2 O species during the CO2RR might be enclosed by abundant Cu(100) facets as suggested by Broekmann et al. [18,24] The comparable FEs and onset potentials for major products such as CO and formate on Cu-Foam and Cu-Foam-IL also verify that the presence of IL has not induced any fundamental structural change on the Cu foam itself,and at the same time, the possible blockage or surface rearrangement of specific faceting by IL molecules during the CO2RR can be excluded. 
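The product selectivities compared above and below are reported as Faradaic efficiencies. For reference, such values are obtained in the standard way, FE_i = z_i·n_i·F/Q, i.e., the fraction of the total charge passed that ends up in a given product; the short sketch below is illustrative (it is not the authors' analysis script, and the example numbers are invented), with the usual electron counts for CO2RR products.

```python
F = 96485.0  # C/mol, Faraday constant

# Electrons transferred per product molecule in the CO2RR (standard stoichiometry)
Z = {"CO": 2, "formate": 2, "methane": 8, "ethylene": 12, "ethanol": 12, "n-propanol": 18, "H2": 2}

def faradaic_efficiency(n_mol, product, charge_C):
    """FE_i = z_i * n_i * F / Q, returned as a percentage of the total charge passed."""
    return 100.0 * Z[product] * n_mol * F / charge_C

# Hypothetical example: 5 micromol of ethylene detected after 120 C of charge
print(f"{faradaic_efficiency(5e-6, 'ethylene', 120.0):.1f}% FE")  # ~4.8%
```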
Them ost striking effect induced by IL modification is that ethanol and n-propanol, giving amaximum FE of 7%and 5% on pristine Cu-Foam, respectively,a re completely absent on Cu-Foam-IL (Figures 3h and i).M eanwhile,t he FE of ethylene is solely suppressed in the high overpotential region (< À0.7 V), with the highest FE decreasing from 10.2 %t o 5.2 %after IL modification (Figure 3d), while little difference can be observed in the low overpotential region (i.e.f rom À0.6 to À0.7 V). Thes ame conclusion can also be drawn by comparing the partial current densities of CO2RR products ( Figure S13). TheI Lh as selectively slowed down the production rate of ethylene in the high overpotential region and ceased the production of ethanol and n-propanol. These results demonstrate that the feasibility of the IL as achemical trapping agent, which provides the basis for deducing the CO2RR pathways by analyzing the altered product spectrum in presence of the IL. Despite understandings of reaction pathways on Cu catalysts are rife with controversy,s ome consensus has been reached, which enables discussion of the observed chemical trapping results.T ransferring the first electron to CO 2 to form CO 2 À C anion is considered as the rate-determining step for CO 2 activation because of the high reorganization energy needed to activate alinear CO 2 molecule to form CO 2 À C anion with bent coordination geometry. [8d, 15, 29] Moreover,C Oi s identified as akey intermediate during the reduction of CO 2 to various C 2+ products,since CO is the only C 1 molecule that gives similar product spectrum as CO 2 on aC uc atalyst. [3a,d] However,i tr emains elusive how the adsorbed CO intermediate is further converted into various products.Intrigued by the altered product spectrum after IL modification, we clarify several elusive reduction pathways by referring to the widely reported yet controversial mechanism in literature,as summarized in Figure 4. Among various products,e thylene shows the most interesting response to IL modification. Its formation is only suppressed at high overpotentials,while at low overpotentials both FE (Figure 3d)a nd the partial current density of ethylene ( Figure S13d) are almost the same regardless of IL modification. This result strongly suggests that ethylene could form by two separate pathways at high and low overpotentials.Adual pathway mechanism for ethylene production was proposed by Koper et al. when studying CO reduction on Cu. [28b] One pathway (Pathway II) involves the dimerization of two adjacent CO at low overpotentials,w hich is later reduced and protonated to form ethylene.T he dimerization would proceed by forming ah ydrogenated CO dimer (*CO-COH) as confirmed by spectroscopic and theoretical studies. [8b] On the other pathway (Pathway I), CO is converted into either *CHO [4a, 28b] or *COH [30] at high overpotentials, which is then reduced to carbene-like *CH 2 species,followed by either C À Ccoupling between two *CH 2 ,orCOinsertion as in the Fischer-Tropsch process,t op roduce ethylene. [30] The dual pathway mechanism may also hold its validity for ethylene production in Cu-Foams.T he IL could selectively quench one or more intermediates in Pathway I, which eventually suppresses the formation of ethylene at high overpotentials,w hile it appears that Pathway II, which starts at relatively low overpotentials and involves the C À C coupling through CO dimerization, is undisturbed by the IL. Ethane is not atypical CO2RR product on Cu catalysts. 
[31] Thep roduction of ethane with as ignificant FE is explicitly observed on nanostructured porous Cu catalysts. [31,32] Ethane can be seen as areduction product of ethylene after two more protonation steps.The porous structure of Cu catalysts seems to increase the retention time of pre-formed products in aconfined space.Therefore,for along time,the formation of ethane has been attributed to the re-adsorption and reduction of pre-formed ethylene on Cu catalysts (Pathways Ia nd IIA). [31,32] However,b oth FE and partial current density of ethane are actually insensitive to IL modification (Figures 3e and S13e). Thee ntirely different responses of FEs for ethylene and ethane to IL modification imply that ethane is formed via an independent pathway.Recent works report that ethane is produced by the CO dimerization pathway involving ethoxy intermediate, [33] which reconciles with our observation that the pathway involving CO dimerization is undisturbed by the IL. These findings suggest that production of ethane would mainly proceed through Pathway IIB (Figure 4). Ethanol is considered to share the similar formation mechanism as ethylene. [3d,8d] Twor eaction pathways,w hich involve either formation of carbene intermediate (*CH 2 ) (Pathway I) or dimerization of two adjacent CO (Pathway II), are usually proposed (Figure 4). We found that formation of ethanol is completely suppressed on Cu-Foam-IL, which suggests that IL traps the key intermediate(s) leading to the formation of ethanol. Similarly,n -propanol is not produced on Cu-Foam-IL. It is generally accepted that the formation of n-propanol undergoes intramolecular C À Ccoupling between CO and hydrogenated carbon (e.g.,carbene *CH 2 ), followed by proton/electron transfer to form propionaldehyde,a n intermediate that is further reduced to n-propanol (Figure 4). [3d, 8d, 34] It can be seen that the formation of both ethanol and n-propanol involves carbene species (*CH 2 ), which is also the intermediate to produce ethylene at high overpotentials. TheI L-induced suppression of ethanol, n-propanol, and ethylene (at high overpotentials,Pathway I) infers that these products likely share one or more common intermediate(s) selectively trapped by the IL. Regarding the other CO2RR products including CO and formate,their differences in FEs and partial current densities are quite minor or within measurement error between two catalysts,d etermining their pathways conclusively becomes challenging.Nevertheless,some inspiring information can be deduced. Fori nstance,t he formation of CO and formate is insensitive to IL modification, indicating that starting from the adsorption of CO 2 on Cu surfaces to the formation of adsorbed CO,t he IL seems to play an egligible role,o ri n other words,the IL does not take effect through coordinating with CO 2 molecules which are more likely transported to the catalysts surface in af ree form. Moreover,E Gi su sually detected as am inor product of the CO2RR. [31,35] However, herein both Cu foam catalysts exhibit fairly high FE of EG: up to 25 %a nd 19 %o nC u-Foam and Cu-Foam-IL, respectively.T he formation of EG is double-checked by analyzing the liquid products using GC-MS ( Figure S14). Consensus on the reaction pathway to EG has not yet been reached, although it is inferred that EG formation might proceed through aC Od imerization mechanism. 
[31,35] Herein, EG formation is always accompanied by formate,a nd their FEs exhibit similar potential-dependent behavior,t hat is,h igher FEs obtained at lower overpotentials and maximum FEs obtained at around À0.7 V. These results imply that these two products probably share the same intermediate,for example, *CO 2 À ,w hich has been experimentally confirmed as ak ey intermediate to produce formate. [36] Brennecke et al. suggested that CÀCc oupling could also take place between two adsorbed CO 2 À to form oxalate species. [19a] Theh ypothesis here is that EG is produced via dimerization of two adsorbed *CO 2 À species,i nstead of *CO,f ollowed by multistep reduction and protonation to give EG (Figure 4). The predominant product at À0.7 Vs witches from EG on Cu-Foam to formate on Cu-Foam-IL. TheI Lm ay inhibit the dimerization process of the co-adsorbed CO 2 À C species by preventing their close approach. [19a] It is also intriguing to observe that IL modification exhibits little impact on the methane formation. Tw or eaction pathways are usually proposed for the methane formation. One pathway involves carbene (*CH 2 )a sa ni ntermediate,w hich is further reduced to *CH 3 and finally to CH 4 .T he other pathway is through hydrogenation of *CO to form *CHO,followed by amultiple electron-proton transfer process to produce CH 4 (Fig-Figure 4. Proposed reaction roadmaps of CO2RR on Cu catalysts. Selected intermediates are presented for clarity.U nfeasible pathways are marked by red crosses. ure S15). Considering that Pathway I( carbene pathway) has been significantly suppressed by the IL, herein, comparable FEs of methane on both Cu foams leads us to hypothesize that methane is mainly produced through the latter pathway ( Figure 4). Analyzing the IL-induced change in CO2RR product distribution provides au nique perspective to gain some unprecedented mechanistic insights into the Cu catalyzed CO2RR which actually bypasses the necessity of explicit understandings about the chemical identity of surface intermediate(s). Based on the above results,asimplified overview of the reaction pathways that lead to varied CO2RR products is summarized in Figure 5, where the IL suppressed products and pathways are highlighted in yellow.I ti s intriguing to observe that the bifurcation of intermediates leading to the suppressed products starts right after the formation of adsorbed carbene (i.e.* CH 2 ) ( Figure 4). This hints that the key intermediate(s) are either *CH 2 or other species (e.g., *COH, *CHO,* C, *CH) that can further be converted to *CH 2 ( Figure S16). Another key question is how the IL molecules can trap the surface intermediate(s). IL molecules are reported to adopt ac harge-separated layered structure with alternating cation-/anion-rich layer at electrified surfaces. [17f,37] Accordingly,[ BMIm] + cations should be enriched at the innermost (Stern) layer of the electrodeelectrolyte interface when the electrode is negatively polarized (i.e.t he CO 2 electrolysis conditions). Therefore,u nderstanding of how [BMIm] + cations can possibly interact with other species would be crucial to extrapolate the role of ILs during the CO2RR. It is well documented that an imidazolium cation can easily be deprotonated at its C2-site,t hus converting the C2-site into ar eactive center due to its nucleophilicity. 
[38] Accordingly,t oc larify whether [BMIm] + interacts with surface intermediate(s) via its C2-site,a n imidazolium-based IL on which the C2-site at the imidazolium cation ring is "neutralized" by amethyl group (denoted as [BMMIm] + ,F igure S17a), was used for modifying Cu foams.Itturns out that the chemical trapping effect of the IL is not pronounced. Both ethanol and propanol can be detected, and the formation of ethylene at high overpotentials is not suppressed ( Figure S18). Furthermore,a nother IL, [HMIm][NTf 2 ]w hich shares structural similarity with [BMIm][NTf 2 ]b ut features al onger cationic chain, was also tested. Although both ethanol and propanol can still be detected, their FEs are much lower than those on unmodified counterpart, and ethylene formation is also suppressed (Figure S18). Twomore common ILs (i.e.[MTBD][NTf 2 ], [P 1444 ]-[NTf 2 ]) were also tested for comparison. Not surprisingly,no pronounced chemical trapping effect can be identified using either IL ( Figure S18). Their product spectra are comparable to that of the unmodified Cu-Foam, except for as lightly higher FEs of H 2 on Cu-Foam modified with [MTBD][NTf 2 ], probably due to the protonic nature of this IL. These results lead us to hypothesize that the IL traps the surface key intermediates through bonding with carbene (or other hydrogenated carbon species) on Cu surfaces.T his process may involve deprotonation and following alkylation reactions at the C2-site of the imidazolium ring. [39] Conclusion This work outlines an ew strategy to probe CO2RR pathways.T he IL alters the product spectrum during the CO2RR on Cu foams.A nalyzing the responses of CO2RR products to IL modification is au nique way to gain new insights into CO2RR pathways:1 )Ethanol and n-propanol form explicitly through a" carbene" mechanism, while formation of ethylene could proceed through two independent pathways which involve carbene and dimerized CO as key intermediates at high and low overpotentials,r espectively; 2) Thep resence of the IL can selectively suppress the formation of those products involving carbene intermediates, likely by forming stable imidazolium-carbene compound(s); 3) Ethane,w hich has long been considered ar eduction product of re-adsorbed ethylene during CO2RR, is identified as proceeding with an independent pathway that involves CO dimerization process.C onsidering the great structural flexibility in ILs,i dentification of reaction pathways for CO 2 products by carefully designing task-specific ILs to selectively interact with intermediate species may be feasible.T he success of this will bring IL modification closer to being ag eneric strategy for analyzing complicated CO 2 reduction pathways.T his approach is transferable to other electrocatalytic reactions and materials.This work demonstrates the possibility of moderating the CO2RR product spectrum by rationally leveraging the IL modification effect, which can be key to finely tuning the catalytic properties of aC O 2 reduction catalyst at am olecular level.
5,776.4
2020-09-03T00:00:00.000
[ "Chemistry", "Environmental Science", "Engineering" ]
Boundary conditions in hydrodynamic simulations of isolated galaxies and their impact on the gas-loss processes Three-dimensional hydrodynamic simulations are commonly used to study the evolution of the gaseous content in isolated galaxies, besides its connection with galactic star formation histories. Stellar winds, supernova blasts, and black hole feedback are mechanisms usually invoked to drive galactic outflows and decrease the initial galactic gas reservoir. However, any simulation imposes the need of choosing the limits of the simulated volume, which depends, for instance, on the size of the galaxy and the required numerical resolution, besides the available computational capability to perform it. In this work, we discuss the effects of boundary conditions on the evolution of the gas fraction in a small-sized galaxy (tidal radius of about 1 kpc), like classical spheroidal galaxies in the Local Group. We found that open boundaries with sizes smaller than approximately 10 times the characteristic radius of the galactic dark-matter halo become unappropriated for this kind of simulation after about 0.6 Gyr of evolution, since they act as an infinity reservoir of gas due to dark-matter gravity. We also tested two different boundary conditions that avoid gas accretion from numerical frontiers: closed and selective boundary conditions. Our results indicate that the later condition (that uses a velocity threshold criterion to open or close frontiers) is preferable since minimizes the number of reversed shocks due to closed boundaries. Although the strategy of putting computational frontiers as far as possible from the galaxy itself is always desirable, simulations with selective boundary condition can lead to similar results at lower computational costs. INTRODUCTION Differential equations are widely used in different astrophysical contexts, from physical phenomena involving our solar system to the large-scale structures in the universe, as clusters and superclusters of galaxies. In particular, fluid-dynamic problems are formulated through differential equations that represent the conservation of mass, momentum, and energy of a fluid with a certain equation of state (e.g., Landau & Lifshitz 1987). Corresponding author: Anderson Caproni<EMAIL_ADDRESS>To find a particular solution of any differential equation, it is necessary to provide some initial condition and/or some boundary conditions (BCs; e.g., Arfken & Weber 2005). However, because of high complexity involving hydrodynamic (HD) problems in astrophysical systems, analytical solutions for the temporal behavior of a fluid are rare, leading to applications of numerical methods to solve the HD equations (e.g., Toro 2009). There are several numerical codes dedicated to dealing with astrophysical gas/particle dynamics (e.g., Stone & Norman 1992;Fryxell et al. 2000;Raga et al. 2000;Teyssier 2002;Gammie et al. 2003;Anninos et al. 2005;Springel 2005;Mignone et al. 2007;Bryan et al. 2014), adopting different strategies to numerically evolve arXiv:2302.04825v1 [astro-ph.GA] 9 Feb 2023 gas/particle flows. Distinct BCs are usually available in those codes, which are chosen according to the specific physical situation to be studied. In this work, we focus on HD simulations planned to study the time evolution of the gas content inside an isolated galaxy under the influence of a dark-matter distribution and supernova feedback (e.g., Silich & Tenorio-Tagle 1998;Mac Low & Ferrara 1999;Fragile et al. 2003;Wada & Venkatesan 2003;Marcolini et al. 
2006;Stinson et al. 2007;Revaz et al. 2009;Ruiz et al. 2013;Recchi 2014;Caproni et al. 2015;Emerick et al. 2016;Caproni et al. 2017;Emerick et al. 2020), aiming to verify the influence of the BCs on the gas removal efficiency. This paper is structured as follows. In Section 2, we describe the three BCs analyzed in this work. Initial setup and the general results from the HD simulations performed in this work are presented in Section 3 and discussed in Section 4. The main conclusions obtained in this work are listed in Section 5. THE SELECTIVE BOUNDARY CONDITION Before introducing our selective boundary condition (SBC), it is useful to present the main characteristics of the additional boundary conditions (BCs) used in this work. Let ρ, P , and v be the mass density, the thermal pressure, and the velocity of a fluid element at a position r measured in a given reference frame. The former three quantities are usually referred to as primitive variables in HD problems. For grid numerical simulations, the region of interest is discretized on computational cells, where the HD equations are evolved in space and time. The region of interest, or simply the computational domain, is enclosed by numerical boundaries. Boundary conditions are implemented numerically by the usage of guard or ghost cells adjacent to the boundaries of the computational domain. Let us also definen as the unit vector orthogonal to the boundaries of a computational domain, always pointing outwards by convention. In the case of the open boundary condition (OBC), also known as outflow BC, the gradient of any primitive variable across the boundary alongn is set equal to zero (e.g., Mignone et al. 2007). Caproni et al. (2015) and Caproni et al. (2017) adopted closed boundary conditions (CBC) in their HD simulations. It differs from open boundaries only in terms of the values of v at the boundaries: all velocity components were set to zero in Caproni et al. (2015), while the null value was set only for the velocity component parallel ton, v n = v ·n, in Caproni et al. (2017). Those authors adopted such boundary condi-tions to avoid that the frontiers of the computational domains in their simulations behaved as an infinity reservoir of matter due to the dark-matter gravitational potential (see Section 3 for further discussion). A similar BC was also adopted by Fragile et al. (2003), where density and temperature at static boundaries (v = 0) is kept fixed at their initial values. The SBC (also known as diode BC; e.g., Fryxell et al. 2000;Zingale et al. 2002) is a variant of the CBC adopted in Caproni et al. (2017), in the sense that if the fluid element that reaches the boundary is moving outwards, as well as it having a speed higher than a predefined threshold value, v th , the CBC is switched to OBC at that location. In other words, the selective boundaries allow those fluids that are moving fast enough to leave the computational domain; otherwise SBC blocks their passage, keeping them inside the domain. Thus, the SBC can be defined as follows where SBC is the boundary condition to be used for a given position at the boundaries in a given time step, and |v ·n| (= |v n |) is the the absolute value of v n in a given cell adjacent to the boundary. Initial setup Aiming to test the impact of the boundaries on the evolution of the gas content inside an isolated (dwarf spheroidal) galaxy, we decided to use in our simulations a similar initial gas configuration found in Caproni et al. (2017). 
In a few words, an isothermal gas is put in hydrostatic equilibrium with a cored, static dark-matter gravitational potential (e.g., Equation 6 in Caproni et al. 2017), so that its density distribution is peaked at the center of the gravitational potential well, decreasing radially as the galactocentric distance increases. Adopting a dwarf galaxy as a proxy for an isolated galaxy in our simulations avoids working with large computational domains, since dwarf galaxies are relatively small in size, with a tidal radius roughly below of some thousands of parsecs (e.g., Mateo 1998). Consequently, it helps to conduct high numerical resolution experiments without a large number of computational cells, decreasing substantially the involved execution times. We show in Table 1 the main physical characteristics of the isolated galaxy used in our numerical simulations. These values are compatible with those inferred for the classical dwarf spheroidal galaxy Ursa Minor. c At the center of the galaxy. Perturbing the galactic gas: types Ia and II supernovae feedback The initial gas distribution is perturbed by supernova (SN) blasts in our simulations. We followed basically Caproni et al. (2017) for the SN feedback recipe, even though the new version of our code used in this work follows independently types Ia and II supernovae 1 . In a few words, the rates of types Ia and II SNe in our simulations were constrained by the chemical evolution model for Ursa Minor galaxy (Lanfranchi & Matteucci 2004, a typical classical dwarf spheroidal galaxy in the Local Group. The imposed types Ia and II SNe rates are strictly respected during the whole of the simulations, telling to the code when an SN event must occur. On the other hand, where a SN event must take place depends on its type: denser regions are more prone to be selected for harboring a type II SN blast, while type Ia SNe are distributed randomly inside the galaxy. Independent of the type of supernova, an internal energy of 10 51 erg is added into the computational cell elected as an SN site. The SN feedback injects momentum into the interstellar medium, producing a net motion of the gas that is directed outward the galaxy. These galactic winds drive the gas losses that the simulated galaxy will experience as the time evolves. A portion of this galactic wind can reach the boundaries after a given interval, so that we 1 Further details of this new approach will be provided in a future paper in preparation (Lanfranchi et al. 2023). must be concerned about the influence of the chosen boundaries on it. Boundary conditions and instantaneous gas-loss rates All the numerical HD simulations performed in this work made use of the PLUTO code 2 (Mignone et al. 2007) in its version 4.2. The classical hydrodynamic differential equations are evolved in time by a third-order Runge-Kutta algorithm, while the primitive variable reconstruction is done by a piecewise parabolic method (Colella & Woodward 1984). The flux computation among numerical cells was done by the advection upstream splitting method (AUSM+; Liou 1996). We also assumed that gas respects the ideal equation of state, and it is under influence of a cored, dark-matter gravitational potential and cooling processes (see Caproni et al. 2017 for additional details). We performed 16 HD numerical simulations to study the impact of the boundaries on the gas-loss rates. The main characteristics of these simulations are listed in Table 2. 
They include two simulations adopting OBC (OBL60N170 and OBL3N100), one simulation with CBC (CBL3N100), and the remaining 13 simulations used to test the behavior of the SBC. Comparing open, closed and selective boundary condition simulations Following (Caproni et al. 2015), we estimated the instantaneous gas mass inside a galactocentric radius R gal = 950 pc (compatible with the tidal radius of the Ursa Minor dSph galaxy; e.g., Irwin & Hatzidimitriou 1995), after integrating numerically the mass density distribution obtained in the simulations where M gas is the total gas mass at a time t inside a spherical volume V gal with a radius of R gal . The instantaneous mass fraction of the gas inside R gal , f gas , is calculated from where M gas,0 is the initial gas mass inside R gal (see Table 1). We show in Figure 1 the behavior of f gas for three distinct simulations: OBL3N100, CBL3N100, and V2L3N100 (see Table 2 for further details). They show similar decreasing rates in the gas mass fraction due to SN feedback considering the first 600 Myr of evolution. After this interval, the situation changes dramatically: the OBC induces the breaking of the previous monotonic trend due to the rising of strong inflows of matter, which leads to extremely high (and nonphysical!) masses in comparison with what would be expected for a dwarf galaxy. This issue was already found by Caproni et al. (2015) in their simulations: the OBC acts as an infinite reservoir of matter, which provides gas whenever the pressure equilibrium within the computational domain is broken due to the domain discretization (e.g., Zingale et al. 2002). On the other hand, the decrease of the amount of gas still remains after 600 Myr for both CBL3N100 and V2L3N100 runs, even though the loss rates lower gradually until they become almost null after ∼1.6 Gyr. There is also a systematic offset between the gas mass fractions from CBL3N100 and V2L3N100 runs, the former presenting a substantially higher value after an elapsed time of 3 Gyr (∼0.40 5 against ∼0.18 for V2L3N100). The reason is that gas flows reaching boundaries with speeds higher than v th = 2 km s −1 are allowed to leave the computational domain, while the CBC retains them, independent of how fast the gas flow is. The quasi-saturation in the gas removal is detected in both CBL3N100 and V2L3N100 runs. The same result was also found by Caproni et al. (2017) (see their Figure 4). Caproni et al. (2017) also pointed out that reverse shocks generated when the SN-driven gas reaches the computational boundaries could decrease the inferred gas losses due to the gas retention. From a simple analytical calculation using the escape velocity associated with the dark-matter (DM) halo, these authors estimated that the CBC decreased the gas losses by a factor of ∼2.5 after 3 Gyr of evolution. An alternative way to estimate the influence of the computational frontiers on the results is to put them farther and compare the amount of gas left inside the galaxy after a certain elapsed time. Thus, we run two additional simulations, V64L6N200 and V64L12N200, in which the computational domain was extended, respectively, by a factor of 2 and 4 in relation to the original box size. These two simulations were compared with the V64L3N100 run, in which v th was set to 64 km s −1 (the DM escape velocity used by Caproni et al. (2017) in their analytical calculations). The time evolution of the gas mass fraction inside R gal for these three simulations is shown in Figure 2. 
Until ∼500 Myr, these three simulations produced the same instantaneous gas mass fractions, as it can be seen in Figure 2. After that, the discrepancy between V64L3N100 and the other two runs increases monotonically until approximately 1.5 Gyr, when such differences stabilize approximately around a factor of about 2.7. Note this factor is compatible with the value estimated previously by Caproni et al. (2017) from arguments based on the comparison between the escape velocity of the dark-matter halo and the velocities in the adjacent cells at the computational boundaries. Simulations V64L6N200 and V64L12N200 agree with each other, indicating that a computational domain with a size of 6 kpc (∼6 times the tidal radius of our fiducial galaxy) is enough to minimize boundary effects on the gas losses in similar simulations performed in this Figure 1. Instantaneous gas mass fraction inside a galactocentric radius of 950 pc (tidal radius of the galaxy) for the simulations using OBC (OBL3N100, blue circles), CBC (CBL3N100, green triangles), and SBC with v th = 2 km s −1 (V2L3N100, red squares). These three simulations were made considering a cubic domain of 3 3 kpc 3 . Figure 2. Instantaneous gas mass fraction inside a galactocentric radius of 950 pc (tidal radius of the galaxy) for SBC simulations with different sizes of the computational domain: L = 12 kpc (V64L12N200, blue circles), L = 6 kpc (V64L6N200, green triangles), and L = 3 kpc (V64L3N100, red squares). All these three runs adopt v th = 64 km s −1 . Dashed black line represents the results from an OBC simulation with L = 60 kpc (OBL60N170). work. Besides, a low-amplitude oscillatory behavior in the curves concerning these two simulations is seen in Figure 2. We have interpreted this as the result of the competition between outward galactic winds driven by SNe and the DM halo's gravity, which tries to push gas back into the galaxy. In other words, these oscillations could be the numerical realization of the "keeping the gas spread and heated" suggested previously in the literature (e.g., Read et al. 2006;Caproni et al. 2017). Even though Figure 2 indicates some convergence in the gas mass fraction derived from the simulations V64L6N200 and V64L12N200, their domain dimensions are small in comparison to R 200 (∼30 kpc; see Table 1) of the dark-matter halo of our fiducial galaxy. To verify whether these results are indeed representative of the gas losses driven by supernovae, we run an additional OBC simulation, OBL60N170, with a length of 60 kpc (∼2R 200 ) in each Cartesian direction. Its derived instantaneous mass fraction is shown by a dashed black line in Figure 2, indicating a quite similar behavior inferred from the simulations V64L6N200 and V64L12N200. Thus, a domain size of about 6 kpc seems to be enough for HD simulations of isolated galaxies with similar properties listed in Table 1. The impact of the threshold velocity on the selective boundary condition simulations As it was discussed in the previous section, the usage of v th = 64 km s −1 in SBC simulation V64L3N100 Figure 3. Instantaneous gas mass fraction inside a galactocentric radius of 950 pc (tidal radius of the galaxy) for SBC simulations with different threshold speeds (from 64 to 1 km s −1 ). Simulation V64L12N200 is also plotted (purple circles). . Instantaneous gas mass fraction inside a galactocentric radius of 950 pc for the SBC simulations V2L3N200 (blue circles), V2L3N100 (green triangles), and V2L3N50 (red squares). 
All these simulations use the same value for v th (= 2 km s −1 ), but differing in terms of numerical resolution. The solid black line shows the results from the simulation V64L6N250. increased the amount of gas left after 3 Gyr of evolution by a factor of 2.7 in comparison to the simulations V64L6N200 and V64L12N200, which made use of larger computational domains. A question that arises is whether it is possible to recover the results found in larger box simulations just varying the value of v th in the SBC simulations. Thus, we run seven additional simulations with the same initial setup and resolution of V64L3N100, but decreasing the value of v th mostly by multiples of 2. A comparison among these simulations, as well with the larger box simulation V64L12N200 can be seen in Figure 3. Again, no apparent differences among all simulations in Figure 3 are noted until approximately 500 Myr of evolution. After this interval, the instantaneous amount of gas left inside R gal decreases systematically as v th is lowered from 64 to 1 km s −1 . The largest differences occur when v th 4 km s −1 , indicating that only a small portion of the gas that is pushed away by the SNe reaches the boundaries with speeds higher than ∼4 km s −1 . It can be seen in Figure 3 that simulations V2L3N100 and V1.5L3N100 led to gas mass fractions closer to that obtained in the simulation V64L12N200, indicating that the appropriated value of v th must be roughly between 1.5 km s −1 and 2.0 km s −1 for simulations with a cubic domain of 3×3×3 kpc 3 . Effects of the numerical resolution Adopting simulation V2L3N100 as a reference, we multiplied (divided) by a factor of 2 the number of computational cells, but keeping all additional parameters fixed, generating simulation V2L3N200 (V2L3N50). It means a change in the numerical resolution from 30 pc cell −1 to 15 pc cell −1 in the case of V2L3N200, while a resolution of 60 pc cell −1 is attained for the simulation V2L3N50. We show in Figure 4 the influence of the numerical resolution on the instantaneous amount of gas left inside the galaxy. No difference in the mass fraction among these three simulations is seen during the first 200 Myr, when V2L3N50 begins to show a higher massloss rate in comparison to the other ones. This trend is inverted after about 1 Gyr and remains so until the end of the simulations. Concerning simulations V2L3N100 and V2L3N200, there is no significant difference between them until ∼1 Gyr, but after 1.5 Gyr, f gas decreases slowly in V2L3N200, in contrast with simulation V2L3N100 that presents a small-amplitude oscillations around f gas ∼ 0.185. The increment in the numerical resolution from 30 to 15 pc cell −1 is allowed to solve the snowplow transition radius for number densities as low as 1 cm −3 (e.g., Cioffi et al. 1988;Ostriker & McKee 1988), avoiding over cooling issues that weaken the kinetic feedback from supernovae (e.g., Creasey et al. 2011;Simpson et al. 2015;Caproni et al. 2017). Thus, a larger fraction of gas reached speeds higher than the threshold speed of 2 km s −1 in simulation V2L3N200, leaving definitively the computational domain. At this point, it is interesting to verify whether the monotonic decrease of f gas in V2L3N200 is due to a rather low value of v th in SBC. For this aim, we run an additional simulation, V64L6N250, where we doubled the size of the computational domain but keeping the numerical resolution of 15 pc per cell inside a cubic subdomain of 3 kpc in size (see Table 2 for further details). 
The behavior of f gas as a function of time is shown in Figure 4 by the black solid line. No monotonic decrease of f gas after 1.5 Gyr is seen but there are smallamplitude variations around f gas ∼ 0.185 instead, as in simulation V2L3N100. It suggests that v th = 2 km s −1 adopted in V2L3N200 is subestimated somehow. Based on the results shown in Figure 3, an increment of about 0.5-1.0 km s −1 in v th might be enough to reconcile simulations V2L3N200 and V64L6N250. Finally, we can also note that the increment of numerical resolution led to a low amount of gas left inside the galaxy after 3 Gyr of evolution, a factor of ∼2.5 between the lowest and highest numerical simulations in Figure 4 (V2L3N50 and V2L3N200, respectively). This difference is not too big if we consider the usual uncertainties regarding the estimates of the mass in stars, gas and dark matter in galaxies, as well as a relatively poor knowledge concerning the individual efficiencies of the feedback mechanisms to remove gas in those systems. As it was already mentioned, a small fine tuning in the value of v th can diminish or even eliminate those discrepancies. DISCUSSION Our results showed that no influence of the BCs on the instantaneous gas-loss rates is observed until ∼600 Myr in the simulations discussed in Section 3.3. It suggests that noncosmological grid-based HD simulations involving isolated galaxies will not be substantially influenced by the choice of a particular BC if the simulated time is less or of the order of some hundreds of Myr, as it is the case of several previous works involving different types of galaxies (e.g., Mac Low & Ferrara 1999;Fragile et al. 2003;Wada & Venkatesan 2003;Fragile et al. 2004;Melioli & de Gouveia Dal Pino 2015;Emerick et al. 2019Emerick et al. , 2020. For analogous HD simulations but involving longer timescales of evolution, the usage of SBC may be a useful alternative without sacrificing the numerical resolution and/or increasing the computational costs in the case of putting the numerical frontiers far from the galaxy (e.g., Marcolini et al. 2006;Mori & Burkert 2000;Emerick et al. 2016). To provide a sense of the gain in CPU time, we run four additional simulations evolved during 200 Myr in a workstation equipped with 128 2.2 GHz processors. Two of these simulations adopt SBC with v th = 2 km s −1 , while the two complementary ones use OBC. Besides, domain volumes of 3×3×3 and 60×60×60 kpc 3 were built for both SBC and OBC. In the case of the small volume domain (3×3×3 kpc 3 ), 60 computational cells per Cartesian axis were generated, implying a numerical resolution of 50 pc per cell. For the larger volume simulations, we kept the same numerical resolution of 50 pc per cell between -1.5 and 1.5 kpc, but decreasing nonmonotonically this resolution until it reaches the numerical boundaries at -30 and 30 kpc, leading to 102 cells per Cartesian direction. The results can be summarized as follows: • The size of the computational domain is fixes, and the elapsed time to complete a simulation does not depend strongly on the assumed BC: in the case of a domain size of 3 kpc, ∼12.8 and 13.1 hours for SBC and OBC, respectively; for a 60-kpc box, the elapsed times were ∼48.7 and 48.6 hr for SBC and OBC, respectively; • However, a larger domain implies in a substantial longer time for the completion of the simulation. For instance, a larger computational domain with OBC led to a longer execution time by a factor of ∼3.7. 
Even though this factor is smaller than the ratio between the total number of the cells used in the simulations, (102/60) 3 ∼ 4.9, it shows that larger domains imply higher computational costs that could become prohibitive if high-performance computational resources are not accessible in practice. Besides the avoidance of the frontiers of the computational domain behaving as an infinity reservoir of matter in simulations with gravity, the SBC can decrease (or even eliminate) the occurrence of reversing flows of mat-ter due to a pure CBC. As any flow colliding with a CBC will have its normal velocity reversed, it induces spurious backflows of matter that could modify the previous gas motions at interacting zones, as well as the physical conditions (density and temperature) of the gas (mainly if the created backflows become strong shocks). Note also that these reversing flows are expected to occur even in the absence of gravity forces. The choice of a particular BC may also influence on the stability of the initial gas configuration under hydrostatic equilibrium with a gravitational potential. To quantify this effect on grid-based simulations of isolated galaxies, we rerun simulations OBL3N100 and V2L3N100 for 500 Myr turning off the SN feedback during the whole simulation. The results are shown in Figure 5. We note in the case of SBC (upper panels in Figure 5) that the initial gas distribution is well preserved during the whole simulation, with spurious speeds being lower than 0.25 km s −1 (∼50 percent of the cells have speeds lower than 20 m s −1 ). On the other hand, the usage of OBC in a relatively small computational domain (lower panels in Figure 5) induces catastrophic inflows of gas that destroyed the initial spherically symmetric cored distribution of the gas, producing spurious speeds as high as some tens of kilometers per second. To reduce such spurious motions using OBC, larger computational domains are needed. The magnitude of the spurious accretion also depends on the numerical resolution, as pointed out previously by Zingale et al. (2002). We analyzed the impact of the numerical resolution on the time stability of the initial gas configuration rerunning simulations V2L3N50, V2L3N100, V2L3N200 without SN feedback, including also an extra simulation with a lower numerical resolution in comparison with the previous ones (l = 150 pc cell −1 ). We show in Figure 6 the behavior of the spurious speeds in terms of numerical resolution after 500 Myr of evolution considering SBC. Trends of the increase of the mean and maximum spurious speed with the decrease of the numerical resolution are clearly seen in Figure 6, even though their values are always very small ( 1 km s −1 ) in comparison to the OBC simulation shown in Figure 5. These results suggest that SBC may be also useful in numerical problems somehow involving the hydrostatic equilibrium condition. CONCLUSIONS In this work, we studied the influence of the computational frontiers on the gas removal process in (small) galaxies. The option for using an initial configuration compatible with a typical dwarf galaxy (tidal radius of about 1 kpc) is justified by keeping the computational domain as small as possible without sacrificing substantially the numerical resolution, and keeping the computational costs relatively low as well. Three different boundary conditions were employed in this work: open (or outflow), closed, and selective boundary conditions. 
The 16 hydrodynamic simulations with types Ia and II supernovae feedback performed in this work adopted a cubic domain where the galactic center coincides with the center of the computational box. The majority of these simulations have frontiers put at a galactocentric distance corresponding to ∼ 1.6R gal . Our main results are summarized as follows. • No difference in the gas mass fraction left inside the galaxy is noted until about 600 Myr of evolution, independent of the three boundary conditions analyzed in this work. It suggests that similar simulations involving short periods of time can adopt open boundary conditions without any loss of integrity of the results; • After 600 Myr of evolution, open boundary conditions for a relatively small computational box (sizes smaller than ∼ 3R gal or about 10 times the characteristic radius of the galactic dark-matter halo) act as an infinity reservoir of gas due to darkmatter gravity whenever the pressure equilibrium within the computational domain is broken due to the domain discretization (e.g., Zingale et al. 2002). In this case, closed or selective boundary conditions are preferable if the increase of the computational edges are somehow unfeasible; • As it was already expected (e.g, Caproni et al. 2015), closed frontiers tend to retain more gas in comparison to the selective boundary condition, impacting on the amount of mass left inside the galaxy: a factor of 2 approximately (see Figure 1); • Concerning the influence of the value of v th used in the selective boundary condition simulations, no difference in f gas is seen until approximately 500 Myr of evolution. It remains true until 3 Gyr for the simulations using v th 8 km s −1 , coinciding with the results from the closed boundary simulation. For the simulations with v th 4 km s −1 , the instantaneous amount of gas left inside the galaxy decreases systematically as v th is lowered; • For v th 1.5 km s −1 , f gas decreases with time, in contrast with SBC simulations with higher v th that present a plateau-like behavior after ∼1.5 Gyr of evolution. Numerical simulations with larger computational domains ( 6R gal ) show similar plateau-like behavior, but showing also a smallamplitude oscillation around f gas ∼ 0.185 possibly produced by the competition between the pull from the dark-matter gravitational potential and the push due to the supernova feedback; • In terms of numerical resolution, our results show no difference in the mass fraction during the first 200 Myr when l is varied from 60 to 15 pc cell −1 . This interval is extended to about 1 Gyr considering simulations with 30 and 15 pc cell −1 only. The monotonic decreasing of f gas seen in V2L3N200 is not present in V64L6N250 with a larger computational domain, indicating that v th = 2 km s −1 adopted in V2L3N200 is subestimated somehow. Based on the results shown in Figure 3, a small increment of about 0.5-1.0 km s −1 in v th might be enough to reconcile simulations V2L3N200 and V64L6N250. • Although the strategy of putting computational frontiers as far as possible from the galaxy is al-ways desirable, our simulations with a selective boundary condition can lead to similar results but at less expensive demands regarding computational resources. 
As a final remark, even though we have analyzed the influence of the boundary conditions over the gas-loss rates using a dwarf spheroidal galaxy, the SBC strategy can be adopted for any type of galaxy or astrophysical system that demands closed numerical frontiers (e.g., see Lanfranchi et al. 2021 for an application involving SBC in the context of intermediate-mass black hole feedback in dwarf spheroidal galaxies).
7,352.8
2023-02-01T00:00:00.000
[ "Physics" ]
Biological Activity of Some Magnesium(II) Complexes of Quinolones A new magnesium complex of quinolone antibacterial agent was prepared. This new complex as well as a previously isolated complex of magnesium with ciprofloxacin were tested against various Gram positive and Gram negative microorganisms. Antimicrobial activities were evaluated using the agar diffusion test. The results have shown that all magnesium complexes are significantly less active than the parent quinolone drugs. It was also found that the activity of quinolones is reduced when the solutions of quinolones are titrated with magnesium ions. Introduction The quinolones represent a big family of synthetic antibacterial agents which are in broad use to cure several infectious diseases. Ciprofloxacin (cf l-cyclopropyl-6-fluoro-4-oxo-7-(1-piperazinyl)-1,4-dihydroquinoline-3-carboxylic acid) is one of typical members of the group whereas the second quinolone used in this study (FCl=l-cyclopropyl-7-chloro-6-fluoro-4-oxo-l,4-dihydroquinoline-3-carboxylic acid) is only a precursor of cf and is not used in clinical practice (Scheme 1). It was found that the absorption of the quinolone drugs is lowered when they are consumed simultaneously with magnesium or aluminium antacids (Polk, 1989; Lomaestro and Bailie, 1991). Many other ions (Ca, Fe, Zn...) found in pharmaceuticals exert similar effects to quinolones (Shiba et al., 1994). The proposed reason for such behaviour could be the chelate bonding of the quinolone to the metal. This was one of the reasons that many authors started to study the interactions of metal ions and quinolones. On the other hand, it seems that the role of metal ions is essential for the mode of action of these drugs. The activity of quinolones is due to the inhibition of the supercoiling of DNA catalyzed by the metalloenzyme DNA gyrase and it is also assumed that copper or iron complexes enable the bonding of the quinolone to the DNA (Shen and Pemet, 1985;Crumplin et al., 1980). Conflicting reports have appeared in the literature on the molecular details of drug-DNA and drug enzyme interactions. The first drug-DNA models have been proposed by Shen and co-authors (Shen et al., 1989) and have included hydrogen bond type interactions between the DNA unpaired bases and the quinolone, as well as a stacked dimerization of the drug. These models have been modified and imply a possible interaction between the C-7 substituent and the quinolone pocket on the B subunit of DNA gyrase (Morrissey et al., 1996). It was also discovered tlmt metal ions, especially magnesium, are involved in such interactions (Palu et al., 1992). It was suggested that magnesium acts as a bridge between the quinolone and the phosphate groups of the DNA. Recently, another model based on the intercalation of quinolone into the double helix of DNA was proposed (Llorente et al., 1996). The structure is stabilized by the binding of the magnesium ion with oxygen atoms present in quinolone, a phosphate and a purine base of the DNA. Our aim was to isolate new magnesium compounds of quinolones apart from those already prepared (ZupanO6 and Bukovec, 1996; Turel et al., 1996) and to compare the biological activity of these compounds against different microorganisms. The quinolone (FC1) (154 mg) was dissolved in 35 mL of acetone. 70.1 mg of Mg(NO3)26HzO was added and the solution was stirred and slightly heated. After few minutes the white product precipitated. After few hours the product was filtered out and washed with water and acetone. 
Finally the product was left in a drier at 50 C for one hour. It was found that the isolated sample corresponds to formula Mg(FCI-)z H20. 4.03 %. The same product was isolated if MgCI'6HO or Mg(CIO)z xH20 were used as reactants. All attempts to prepare the crystals suitable for X-ray analysis have failed. The product is only sparingly soluble in water and in different organic solvents. [Mg(cf)2(HzO)212HzO (abbreviation Mgcf) The product Mgcf was prepared as reported elsewhere (Zupan6i6 and Bukovec, 1996). Analyses and Physical Measurements The analyses of carbon, hydrogen and nitrogen were carded out on a Perkin-Elmer 204C microanalyzer. The magnesium ion content has been determined by a titration with the ethylenediaminetetraacetic acid (EDTA). First the decomposition of the complex was performed as follows. 50.0 mg of sample MgFC1 was dissolved in the mixture of 1.32 mL of concentrated nitric acid and 0.33 mL of sulphuric acid in a Kjehldahl flask. The mixture was heated till all the liquid evaporated. The residue was dissolved in the 100.0 mL of distilled water. The aliquot of the sample (25.0 mL) was transferred to the Erlenmeyer flask and the buffer solution (pH 10) was added. The sample was titrated with 0.010 M EDTA solution and the indicator Eriochrome black T was used for the determination of the end point (Welcher, 1958 Antimicrobial activities were evaluated using the agar diffusion test. The tested bacteria were allowed to grow overnight and their concentration was then determined. Bacterial culture was incorporated to Lauria Broth nutrient agar which was previously cooled to 42 C. The final concentration of bacteria was approximately 5 105 CFU/mL (CFU colony forming unit). Twenty millilitres of inoculated medium was poured into petri dishes and kept at 4 C until use. Circles of agar ( 1 cm) were cut out from the cooled medium. The MIC (minimal inhibitory concentration) values of Mgcf, FCI and MgFC1 were determined, using ciprofloxacin and ciprofloxacin hydrochloride (cfHCl) as a reference substances. MIC represents the lowest concentration of an antibiotic that will inhibit the growth of a tested organism. For estimating MIC, the antibacterial substances were diluted gradually in 10 nM potassium phosphate buffer pH 7.4, containing 2 % DMSO. Hundred millilitres of each dilution were poured into the holes cut in the inoculated medium, after that the system was kept at 37 C for 24 h. Finally, the diameters of inhibition zones were measured. Additionally we wanted to check if there was any difference in the biological activity between the magnesium complexes and the samples in which we added free magnesium ions to the solutions containing ciprofloxacin. The effect of magnesium ions on the antibacterial activity of ciprofloxacin was tested as follows: ciprofloxacin was diluted in 10 mM potassium phosphate buffer pH 7.4, containing 2 % DMSO. The final concentration of cf was 30 tg/mL. Fifty microlitres of ciprofloxacin were mixed with the same volume of different MgC12 dilutions in the same buffer. Agar diffusion test was used to determine the antibacterial activity, as described above. The tested microorganism was S. aureus. Results and Discussion The proposed structure of the complexes The infrared spectra of quinolones are quite complex and we only compared the most indicative vibrations of the samples used also in our previous studies (Turel et al., 1994;Turel et al., 1997;Turel et al., 1999). 
In the infrared spectrum of the free quinolone FCI we can assign the valence vibration of the carboxylic group v(C=O)c .group at 1723 cm " and the valence vibration of the ring carbonyl group at position 4 v(C=O)p at 1607 cmt. In the magnesium complex MgFCI a strong broad band appeared at 1626 cm. The absence of the v(C=O)c and the shift of v(C=O)p vibrations could be the evidence that these groups are involved in the bonding to the metal. The changes in the IR spectra of Mgcf are similar and were described before (Zupan6i6 and Bukovec, 1996). A similar infrared spectrum was found for the copper(II) complexes of ciprofloxacin (Turel et al., 1994;Turel et al., 1999). The X-ray structure of this copper complex-[CtI(cf)2]CI2"6H20 revealed that copper is bidentately bonded to the quinolone through oxygen atom of the carboxylic group and oxygen atom of the ring earbonyl group. Such bonding was found also in some other metal complexes of quinolones (Baenziger et Ruiz et al., 1998;Turel et al., 1999). According to these facts we assume that the bonding in magnesium complex of MgFC1 is similar (Scheme 2). /\ Scheme 2: The proposed bonding of magnesium to the quinolone. The Synthesis and Biological Activity of Some Magnesium(ll) Complexes of Quinolones The same effect is observed when cf is titrated with increasing concentration of MgCI2 (0-500 mM) ( Figure 1). Magnesium decreases the antibacterial activity of ciprofloxacin even at very low concentrations. The loss of activity is especially notable at the MgCI2 concentrations above 1 mM. We can thus conclude that the activity of quinolones is remarkably lowered in the presence of magnesium ion both in the complexed or free form. MgCl2(mM)
1,930.6
2000-01-01T00:00:00.000
[ "Medicine", "Chemistry" ]
Optimization of the Convolutional Neural Networks for Automatic Detection of Skin Cancer Abstract Convolutional neural networks (CNNs) are a branch of deep learning which have been turned into one of the popular methods in different applications, especially medical imaging. One of the significant applications in this category is to help specialists make an early detection of skin cancer in dermoscopy and can reduce mortality rate. However, there are a lot of reasons that affect system diagnosis accuracy. In recent years, the utilization of computer-aided technology for this purpose has been turned into an interesting category for scientists. In this research, a meta-heuristic optimized CNN classifier is applied for pre-trained network models for visual datasets with the purpose of classifying skin cancer images. However there are different methods about optimizing the learning step of neural networks, and there are few studies about the deep learning based neural networks and their applications. In the present work, a new approach based on whale optimization algorithm is utilized for optimizing the weight and biases in the CNN models. The new method is then compared with 10 popular classifiers on two skin cancer datasets including DermIS Digital Database Dermquest Database. Experimental results show that the use of this optimized method performs with better accuracy than other classification methods. I ntroduction Skin cancer involves abnormal changes in the outer layer of the skin. This cancer is the most prevalent cancer in the world and contains about 75% of the world's cancer. Although most people with skin cancer are healed, it is still a major concern due to its high prevale nce [1]. Most skin cancers grow only locally and invade adjacent tissues, but some of them, particularly melanoma (cancer of the pigment cells), which is the rarest type of skin cancer, may spread through the circulatory system or lymphatic system and reach the farthest points of the body [2]. Melanoma forms the highest percentage of probability among different types of skin cancer [3][4][5][6][7]. On average, 4740 males and 2490 females died in 2019 due to melanoma [8]. Melanoma is more prevalent in some areas, especially in western regions and countries. According to the findings, the diagnosi s of melanoma in the initial stages can significantly reduce the mortality due to this cancer; but since the diagnosis of this disease at an early stage, even by specialists, is difficult, it will be very helpful to provide a method to early diagnosis of the melanoma or skin cancer [9][10][11][12]. In recent years, with the advancement of technology, particularly artificial inte lligence, suitable methods have been developed for this issue. In the meantime, image processing techniques are pr ogressing as successful methods [9,[13][14][15][16]. The application of image processing and computer vision for automatically identifying the patterns like cancer from images reduces human errors and increases the speed of detection. In addition, the importanc e of medical image processing can be considered as it helps physicians and radiologists to more easily diagnose the disease, thus protecting the patient against irreparable risks that will co me about. Artificial Neural Networks (ANNs) are one of the popular methods used in image processing. ANN is inspired by the intricate structure of the human brain, in which millions of neurons (cells) communicate with e ach other (synapse) to solve problems or store information. 
These networks are a collection of different models that are proposed by mathematicians and engineers to simulate a part of brain function [15][16][17][18][19][20][21]. The system is made up of a large number of extravagant processing elements called neurons that work together to solve a problem. Learning in natural systems occurs adaptively; this means that there is a change in the synapse as a result of learning. Recently, important developments are proposed based on new kinds of neural networks for analyzing visual systems. CNN is a trail of deep neural networks which is usually used on image or speech analyzes in machine learning [22][23][24]. In addition to different applications of CNN's in image processing, they have especially promising performance in different medical image problems like lesion classification [25], breast cancer [26], tumor diagnosis [26], brain analysis [27], panoptic analysis [28], and MR images fusion [29]. In the mentioned examples of CNN applications, the image has to be first divided into a lot of small superpixels and then the methods have to be performed on all of the superpixels. From the literature, it is observed that using CNN models improves the diagnosis system performance [30]. A part of the training step in the neural networks is to find the optimal solution to fit the target problem based on internal weights which is usually established by the back propagation (BP) algorithm. BP is a classic method that evaluates the error on each training pairs and adjusts the neurons weights to fit the desired output [31]. The error minimization is established by the gradient descent algorithm as it minimizes the cross-entropy loss in the image. This is a complicated optimization problem which needs high cost for solving it. Recently, the utilization of meta-heuristics in different applications is extensively increasing. One of these applications can be to use them for the cross-entropy loss minimization [32]. In the recent years, several kinds of meta-heuristic algorithms have been introduced. In 2016, Mirjalili and Lewis proposed a new meta-heuristic method called whale optimization algorithm [33]. The whale optimization algorithm is an inspiration of the bubble net hunting strategy of the humpback whales. Despite being new, it has good results for different applications [34][35][36][37][38]. Here, this algorithm is employed for the cross-entropy loss minimization of skin cancer images to improve the method efficiency. In the present study, the whale optimization algorithm is employed for the diagnosis of cancer images. The main purpose here is to optimize the weights of any layer of CNN. The proposed optimization algorithm shows suitable improvements about optimal training of the CNN. The main structure of the paper is given in the following. Section "Materials and Methods" describes comprehensive explanations about materials and methods including convolutional neural networks (CNN) and whale optimization algorithm (WOA). In Section "The Proposed WOA based CNN", the new optimized convolutional neural network based on a whale optimization algorithm is presented. Section "Dataset Description" briefly presents the dataset which is considered for performance analysis. Section "Implementation Results" investigates the experimental results by performing a comparison between the proposed method and 10 popular cancer detectors, and the conclusions of the paper are given in section "Conclusions". 
Materials and Methods In the following, a general description about the CNN and how to optimize them will be described. Convolutional neural networks Here, the membranous neurons respond to the motive in bounded areas called receptive fields. The receptive field for each neuron partially overlaps until the visual field is tiled. The reply to the every single neuron to the motive can be mathematically approximated by a convolution operation [39,40]. Convolutional neuron layers contain the most principal part of a CNN. For classification applications like image classification, multiple 2D matrices can be considered as the input and the output of the convolutional layer. It is important to note that there is no restriction about the equality in the number of the input and the output matrices. In this step, local feature extraction has been applied to extract the regional characteristics of the original image. The main purpose of the learning procedure is to obtain some kernel matrices to get better prominent features to be utilized in image classification. The BP algorithm can be used here for optimizing the network connection weights. The convolution in this layer is performed by a sliding window. Afterward, a vector has been generated based on the sliding window and the dot product and the weights are added up. Then, an activation function is utilized for each neuron which is often a rectified linear unit (ReLU), with a function f(x) = max(x, 0) [41]. This process has been implemented on the original image. For more scale reduction of the output, another process called max pooling has been employed; here, only the highest value is reported to the subsequent layer of the sliding grid. After in itializing the structure of a CNN, an optimization method will be required to fit the target problem based on internal weights. This process is usually applied by the BP algorithm. In BP, the err or on each training pairs is evaluated and then it is employed to adjust the weights of the neurons to fit the desired output [6,31]. BP uses a gradient descent algorithm for the error minimization. The gradient descent is a method based on minimizing the cross-entropy loss as the fitness function [42]. The proposed fitness function is given in the following. (1) where, describes the desired output vector and is the obtained output vector of the m th class. The softmax function is illustrated in the following formula: (2) where, N is the number of samples. The function L can be modified by the weight penalty to include a value, to keep the values of the weights from getting larger: where, W k describes the connection weight, k in layer la and L and K are the total number of layers and the layer l connections, respectively. Although CNN has been introduced as a powerful classification tool, designing an optimal structure for its layout is a significant problem: most of the designed layouts are based on trials and errors. Recently, there are some new works which have introduced modifications based on meta-heuristic algorithms [43,44]. Fig. 1 shows a simple skin cancer detection using ordinary CNN. In the fig. 1 the convolution layer evaluates the output of the neurons that are connected to the local area at the input. The calculation is performed by the point multiplication between the weights of each neuron and the area they are connected to (the activation mass). 
Although the CNN has been introduced as a powerful classification tool, designing an optimal structure for its layout is a significant problem: most designed layouts are based on trial and error. Recently, some works have introduced modifications based on meta-heuristic algorithms [43,44]. Fig. 1 shows a simple skin cancer detection pipeline using an ordinary CNN. In Fig. 1, the convolution layer evaluates the output of the neurons that are connected to a local area of the input; the calculation is performed as the dot product between the weights of each neuron and the area they are connected to (the activation volume). The main purpose of the pooling layer is to subsample the input image in order to reduce the computational load, the memory use, and the number of parameters (and thus overfitting). Reducing the size of the input image also makes the neural network less sensitive to image displacement (position independence). The WOA is a recent stochastic optimization method derived from the hunting behavior of whales [33]. Like any evolutionary method, the WOA starts with a random population of candidate solutions and searches for the global optimum (maximum or minimum) of the problem. The algorithm keeps improving and updating the solutions until the stopping criterion is satisfied. The basic difference between the WOA and other meta-heuristic methods is how the WOA rules develop and update the solutions. The WOA is inspired by the whales' trap-and-attack hunting strategy; the use of bubbles released along a spiral movement around the prey, with which the trap is formed, is known as "bubble-net feeding behavior". The bubble-net feeding process is shown in Fig. 2. From Fig. 2, it is clear that the humpback whale first creates bubbles around the prey through a spiral motion and then attacks it. This behavior is the main inspiration for the WOA. The bubble-net system is mathematically defined as X_i(t+1) = X*(t) - A·D_i if p < 0.5, and X_i(t+1) = D'_i · e^(bl) · cos(2πl) + X*(t) if p ≥ 0.5, where p and r are random constants bounded in [0, 1], l is a random constant in the interval [-1, 1], t denotes the present iteration, A = 2a·r - a is a random coefficient in the interval [-a, a], D_i (and D'_i) describe the distance of the i-th whale from the prey (the best solution), b defines the logarithmic shape of the spiral motion, and a descends linearly from 2 to 0 over the iterations. In the above equation, the first term models the encircling process and the second term models the bubble-net process; these two terms comprise the exploitation and exploration components of the algorithm [33]. As mentioned before, the WOA starts with a random population. The solutions are then updated in each iteration according to the mathematical models of bubble-net hunting and prey encircling. Here, to ensure the convergence of the algorithm, the best solution guides the position updates of the agents when p < 0.5; otherwise, the best solution plays the role of the pivot point of the spiral motion. A sketch of the general WOA procedure is given below.
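Since the original pseudocode is not reproduced in this text, the following is a minimal Python sketch of the update rules described above (random initialization, prey encircling, the bubble-net spiral, and random search). It is an illustrative simplification with scalar coefficients, not the authors' implementation, and a simple sphere function stands in for the CNN training cost.

import numpy as np

def woa(cost, dim, n_agents=30, n_iter=200, lb=-10.0, ub=10.0, b=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_agents, dim))          # random initial population
    fits = np.array([cost(x) for x in X])
    best, best_fit = X[fits.argmin()].copy(), fits.min()
    for t in range(n_iter):
        a = 2.0 * (1.0 - t / n_iter)                       # decreases linearly from 2 to 0
        for i in range(n_agents):
            r, p, l = rng.random(), rng.random(), rng.uniform(-1.0, 1.0)
            A, C = 2.0 * a * r - a, 2.0 * rng.random()
            if p < 0.5:
                if abs(A) < 1.0:                           # exploitation: encircle the best solution
                    D = np.abs(C * best - X[i])
                    X[i] = best - A * D
                else:                                      # exploration: move relative to a random whale
                    rand = X[rng.integers(n_agents)]
                    D = np.abs(C * rand - X[i])
                    X[i] = rand - A * D
            else:                                          # bubble-net: spiral around the best solution
                D = np.abs(best - X[i])
                X[i] = D * np.exp(b * l) * np.cos(2.0 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
        fits = np.array([cost(x) for x in X])
        if fits.min() < best_fit:                          # keep the best solution found so far
            best, best_fit = X[fits.argmin()].copy(), fits.min()
    return best, best_fit

print(woa(lambda x: float(np.sum(x ** 2)), dim=5))         # toy quadratic objective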
The Proposed WOA based CNN In the present study, a different strategy is used to specify the hyper-parameters: not only can the most appropriate hyper-parameters for the CNN at a given moment be considered, the time required to run each configuration can also be taken into account. As mentioned before, the primary purpose of this research is to design an optimization-based technique for skin cancer classification; the main idea is to use an optimized method to improve the system accuracy. Candidate solutions in the proposed optimized classification problem are sequences of integers. In this method, at first, the minimum (min) and maximum (max) limits for the algorithm are determined to prevent system errors. In this problem, max describes the size of the sliding window and min is 2; the constant 2 is the minimum value acceptable for max pooling, below which no smaller size exists. A further point to consider is that the size of the input data should be greater than the sliding window. Afterward, a group of solutions is randomly generated. In this problem, the initial population is set to 150, where the hyper-parameter settings of the CNN are described by the individuals, each encoded as 10 integer values. The search agent vector for the proposed CNN is shown in Fig. 2. Afterward, the solutions are evaluated. Here, the half-value precision of the proposed optimized CNN on a skin cancer validation process is considered as the cost function. It is important to note that this general strategy has a high computational cost: each member of the population, describing a CNN, requires training on the skin cancer training data. After generating the initial population and evaluating the initial cost, the positions of the search agents are updated based on mechanisms like prey encircling and bubble-net hunting, and the process repeats until the stopping criteria are reached. The designed optimized system was tested on the DermIS and Dermquest databases based on the minimization of the MSE value for validation and testing; more details are given in the following. Weights and biases are the two important parameters used to optimize the structure of the CNN, so these two features have been selected for optimization, such that each search agent collects the weights of all layers, W_n = [w_1n, w_2n, ..., w_Ln], n = 1, 2, ..., A (11), where A and L are the total number of agents and the total number of layers, respectively, l describes the layer index, n describes the index of the agent, and w_ln describes the weight values in layer l. In other words, the complete set of parameters to be optimized can be described by the vector X_n = [W_n, b_n] (12), where b_n collects the corresponding biases; Fig. 2 shows these assignments. A simplified measure of the error between the reference and the system output is E = (1/T) Σ_{j=1}^{T} Σ_{i=1}^{k} (d_ji - o_ji)^2 (13), where T is the number of training samples, k is the number of output layers, and d_ji and o_ji are the desired value and the output value of the CNN, respectively. Gradient descent forms the main part of the ordinary BP algorithm; this technique can easily be trapped in a local minimum, a shortcoming that can lead to wrong results in some complicated pattern recognition problems [45][46][47]. Another advantage of using the WOA rather than BP for error minimization is that the WOA-based method does not require the backward phase, which is a computationally expensive process. Fig. 3 shows the flowchart of the proposed method. Dataset Description Two different dermoscopy databases have been employed for testing and analyzing the proposed method: 1) DermIS Digital Database [48]: an image atlas of different kinds of skin cancers with differential diagnoses, launched for medical image processing applications; it is the largest online dermatology information service available on the Internet. 2) Dermquest Database [49]: an online medical atlas for dermatologists and dermatology-oriented healthcare professionals. Implementation Results The experimental simulations were implemented using Matlab R2017® on an Intel Core i7-4790K processor with 32 GB of RAM and two NVIDIA GeForce GTX Titan X GPU cards with a scalable link interface (SLI). The simulations were run on the two standard skin cancer databases to analyze the system performance. 70% of the data was used as the training set and 10% as the validation set; the remaining 20% was used as the test set.
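A short sketch of the random 70/10/20 split just described (index-based and purely illustrative; image loading and the 640×480 resizing are omitted):

import numpy as np

def split_indices(n_images, seed=0):
    # Shuffle image indices once, then cut them into 70% train, 10% validation, 20% test.
    idx = np.random.default_rng(seed).permutation(n_images)
    n_train, n_val = int(0.7 * n_images), int(0.1 * n_images)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(1000)
print(len(train), len(val), len(test))                     # 700 100 200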
This 70/10/20 division recalls the Pareto principle (80/20 rule), which states that, for many events, roughly 80% of the effects come from 20% of the causes [50]. The images assigned to the training, validation, and test parts were selected randomly. For consistent image processing, all images in the datasets were resized to 640×480. The proposed CNN was trained with the WOA method. In the presented experiment (Fig. 5), the learning rate was varied between 0.2 and 0.9; since the radius and the number of neuron cells differ, almost 100% of the training pixels will be included in the prototype neurons. The best case is to select a neural network with the smallest neuron volume. Based on [51], a proper learning rate can be selected from the performance ratio. Fig. 5 shows that increasing the learning rate increases both the performance ratio and the training time. Although the performance ratio is important, to make a trade-off between performance ratio and training time, the learning rate was set to 0.9. As explained before, DermIS and Dermquest were employed as two widely used databases for testing the proposed method. 30,000 iterations were performed to train the proposed network. To obtain a correct and independent analysis of the images, the training step was repeated 60 times and the final results are reported as mean values. To assess the performance of the proposed system, five performance metrics were employed. Various methods have been introduced for skin cancer detection [52][53][54], each with its own difficulties and shortcomings. Since introducing all of these methods is not possible, 10 methods have been selected for comparison with the proposed method. The method of [55] is based on a commercial tool. The method of [56] is a framework based on a semi-supervised system; for a fair comparison, its automatically extracted descriptors are employed. Some deep learning based systems, namely an ordinary CNN, AlexNet [57], VGG-16 [58], ResNet [59], LIN [60], and Inception-v3 [61], are also used in this comparison. Table 1 presents a performance comparison between the proposed system and the aforementioned methods. As can be observed, the CNN/WOA method is the most accurate compared with the other 10 methods. This is due to the combination of the CNN with the whale optimization algorithm: applying this optimization algorithm allows the CNN to escape from local minima and approach a global minimum for the BP problem, which leads to better performance for the proposed method. The results show the effect of using the WOA optimization algorithm within the deep learning framework. For clarity, the distribution of the classification performance in the table is also shown in Fig. 6 as a bar chart. The proposed detection system has two classes: the background region and the cancerous region. The input layer of the proposed CNN/WOA network contains 3 × n pixel feature vectors which describe the R, G, and B information of the image. As explained before, the rectified linear unit (ReLU) is used as the activation function of the network. The output layer presents a two-labeled image with values 0 (background region) or 1 (cancerous region). Fig. 7 shows the results for some samples processed by the proposed CNN/WOA skin cancer detection method.
In Fig. 7, the first and the third columns show the original images, and the second and the fourth columns show the masks detected by the optimized CNN/WOA method. The experimental results show the high efficiency of the presented method for the diagnosis of skin cancer regions. Conclusions In this paper, a new method was proposed for skin cancer detection. The proposed method uses a meta-heuristic algorithm to optimize the convolutional neural network, training the biases and the weights of the network that are normally adjusted by back propagation. To do so, the half-value precision on a skin cancer validation process, which includes a simplified measure of the error between the reference and the system output, is considered as the cost function for the proposed optimized CNN. In this study, a recently introduced algorithm called the whale optimization algorithm is used to minimize the error of the learning step of the convolutional neural network. The proposed method is called CNN/WOA. It was tested on images from two well-known databases, the DermIS Digital Database and the Dermquest Database, and compared with 10 popular classification methods. The final results show the superior accuracy of the proposed system compared with these classifiers.
4,953.2
2019-01-01T00:00:00.000
[ "Computer Science" ]
Multistage Models of Carcinogenesis The simple multistage model of carcinogenesis is outlined. It provides a satisfactory explanation of the power law for the age incidence of many forms of epithelial carcinoma, for the effects in human populations of changing exposures to supposed carcinogenic agents, and for many of the observed effects of applied carcinogens in animal experiments. In particular, the evidence on the effects of starting and stopping cigarette smoking suggests that both an early and a late stage may be affected. In the absence of direct evidence on the nature of the cellular changes there is some reluctance to accept a model with more than two stages, and several forms of two-stage models provide good general explanations of observed phenomena. Such a model has recently been applied to breast cancer; another approach to this disease, effectively involving transformations of the time scale, is discussed. Introduction Multistage and related models of carcinogenesis have been discussed for about 30 years, and the growth in the literature has been almost as rapid as the rise of cancer incidence with age. In a short paper I cannot attempt a comprehensive review, and I shall aim to outline the topic in a general way, making more specific comments about some of the points which happen to have interested me over this period. More comprehensive reviews of the mathematical theory have been given by Armitage and Doll (1), Whittemore (2), and Whittemore and Keller (3); Peto (4) has provided a stimulating general review. Early Work The flurry of work in the early 1950s, which led to the formulation of a number of related models, was probably motivated by evidence from various sources. First, there was the epidemiological evidence that the mortality or incidence rates for many forms of human cancer increase rapidly with age. This might be a general effect of aging, the body becoming more susceptible to insults of various sorts, or it might be because carcinogenesis is a complex process requiring time and perhaps involving several qualitatively different stages. Two considerations favored the second explanation: people exposed to a high but short-lived carcinogenic risk (for example from irradiation or industrial hazards) often acquire cancer only after a long period of time; and animal experiments, such as those of Berenblum and Shubik (5), showed that some chemicals are especially effective either early or late in the induction process (the present terms for these being "initiators" and "promoters"), suggesting that qualitatively different processes were at work during the early and late phases. I shall discuss later some more recent work on the distinction between age per se and duration of exposure to carcinogens. In the reviews mentioned earlier, fuller descriptions of some of the early models are given than can be presented here. They include the "multicell" theory of Fisher and Holloman (6) (requiring a mutation-like change in a specific number of neighboring cells in a tissue, and inconsistent with the unicellular nature of most tumors), and the "multistage" or "multihit" models of Stocks (7) and Nordling (8) (in which a specific number of changes is required, in any order).
The very similar model of Armitage and Doll (9) introduced the idea of a specific ordering of the changes, so as to accommodate the evidence from initiation-promotion experiments and also a number of features of the epidemiology of human cancer. There was also a series of papers by Iverson and Arley, starting with one (10) which postulated a randomly occurring initiating event followed by a randomly distributed induction period. This rather general formulation encompasses most of the other models, since in a multistage model the first stage can be taken as the initiating event while all subsequent events are subsumed into the induction period (1). Derivation of the Basic Model It will be useful to outline the theory of the Armitage-Doll model in slightly different terms to those of the original paper. Suppose that, in a particular tissue, there are N cells (or cell lines, if they divide) that can potentially experience carcinogenic transformation. The final development of cancer is the k-th and last of a series of sudden and irreversible changes (or stages) which must take place in a specific order. The clinical detection of the disease may be delayed by the period required for the tumor to grow to a detectable size: we shall assume that this is a relatively short lag and shall not consider it in any detail. (Many writers systematically replace the current time t by t - w, where w is the assumed lag.) Suppose that, for any cell which has experienced i - 1 changes [which we shall call an (i - 1)-cell], the event rate for the next change is λ_i, independent of time; that is, the probability that the i-th change takes place in (t, t + dt) is λ_i dt + o(dt). This defines a time-homogeneous birth process. We should like to know f(t), the event rate for the k-th change at time t (the process starting at time 0). General and particular solutions for this problem are well known (11)(12)(13)(14) but are algebraically cumbersome. Fortunately, an approximation is adequate for almost all purposes. Consider the position for values of t small enough to make the probabilities of any of the changes in (0, t), in any one cell, very small. We can either take the limit of the general expression as t → 0 (13,14), or use a straightforward argument (9), to show that f(t) ≈ λ_1 λ_2 ... λ_k t^(k-1) / (k - 1)! (1). The cumulative probability, F(t), that the k-th change has taken place by time t is then F(t) ≈ λ_1 λ_2 ... λ_k t^k / k! (2). Clearly, Eq. (2) cannot hold indefinitely as t increases. However, in most cases lifetime values of t will still be sufficiently small for the limiting assumption (which concerns single cells) to be adequate. For the particular tissue with N cells, the probability that cancer (i.e., the k-th change) has not appeared by time t is {1 - F(t)}^N ≈ exp{-(α/k) t^k}, with α = N λ_1 λ_2 ... λ_k / (k - 1)! (3). Thus, the distribution function G(t) for the time to appearance of the first cancer is a Weibull distribution, with density function g(t) = G'(t) = α t^(k-1) exp{-(α/k) t^k} and hazard function h(t) = g(t)/{1 - G(t)} = α t^(k-1) (4). We have here the familiar power law. The limiting approximation on which it depends seems reasonably secure, since it assumes small rates per cell, but Moolgavkar (15) has pointed out that, for some values of the parameters which are plausible for human cancer, it may appreciably overestimate the hazard to be expected at high ages.
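As a numerical check (not part of the original paper; the rate values are purely illustrative), the following sketch compares the exact probability that all k changes have occurred by time t, obtained from the hypoexponential distribution of a sum of independent exponential waiting times with distinct rates, against the small-t approximation of Eq. (2):

import numpy as np
from math import factorial

lam = np.array([0.001, 0.002, 0.0015, 0.0025, 0.003])      # illustrative stage rates, k = 5

def exact_F(t, lam):
    # Hypoexponential CDF: F(t) = 1 - sum_i [prod_{j != i} lam_j / (lam_j - lam_i)] exp(-lam_i t)
    k = len(lam)
    coef = [np.prod([lam[j] / (lam[j] - lam[i]) for j in range(k) if j != i]) for i in range(k)]
    return 1.0 - sum(c * np.exp(-l * t) for c, l in zip(coef, lam))

def approx_F(t, lam):
    # Small-t approximation of Eq. (2): F(t) ~ (lam_1 ... lam_k) t^k / k!
    return np.prod(lam) * t ** len(lam) / factorial(len(lam))

for t in (20.0, 50.0, 80.0):
    print(f"t = {t}: exact = {exact_F(t, lam):.3e}, approximation = {approx_F(t, lam):.3e}")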
Human Cancer The age-specific mortality rates for cancers at a particular site, or more directly the age-specific incidence rates obtained from cancer registries, can be regarded as roughly analogous to the hazard function of Eq. (4), since the denominators of the rates are the numbers of people alive at the ages in question. From Eq. (4), log h(t) = log α + (k - 1) log t (5), and this linear log-log relation has been widely observed for a wide range of sites and human populations (16,17). It seems to be the usual finding in most epithelial carcinomas, but a variety of quite different relationships is seen for many nonepithelial tumours and for epithelial tumours at sex-specific sites (4). The slope in Eq. (5) is commonly in the range four to six, suggesting there may be around five to seven discrete stages. However, there are several reasons for caution. In the first place, several other diseases show rapidly increasing age-incidence curves, and one would not seek to explain them all by models of this sort. Secondly, a power law with a slope of k - 1, or something very close to it, could be obtained with fewer than k stages. Suppose some of the stages had rates increasing as powers of the time elapsing since the previous stage. Then the slope k - 1 would be the sum of (power + 1) for all stages before the last, plus the power for the last; for instance, k - 1 = 4 would arise from five constant rates, or two linearly increasing rates followed by a constant rate, or a quadratic rate followed by a linear rate. A reductio ad absurdum is to postulate a single stage with λ_1 ∝ t^(k-1); the model then becomes purely tautological. Two-stage models are discussed below. Third, a similar effect (of a high slope with a small number of stages) will be obtained if one or more of the event rates increases with age (rather than with time since the last event). Fourth, the Weibull hazard, Eq. (4), can be obtained more generally, on the argument (18) that the time to first tumor in a tissue is the minimum of N random variables (the times to tumor in the N cells), and that Eq. (4) is a standard limit of the distribution of minima in large samples. However, for this limiting form to be valid there are restrictions on the shape of the extreme left-hand tails of the distributions of the cell-specific times, namely, that they are power functions like Eq. (1), and this might be taken to provide at least weak support for the multistage theory. Confidence in a multistage model must clearly depend on wider considerations than the power law. In particular, we need to consider the effects of external carcinogenic agents, data from animal experiments, and biological plausibility. These and other topics are taken up in later sections of the paper. As already noted, the cancers of sex-specific organs tend not to follow the power law. This is understandable since many of these organs are subject to changes in their hormone dependence at various periods throughout life or, like the uterine cervix, are affected by changes in sexual habits. Some tentative explanations of their age-incidence can often be given in qualitative terms (9). Some recent quantitative modeling for breast cancer, in terms of a two-stage model, is discussed in a later section.
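To illustrate Eq. (5) (with synthetic numbers only, not real registry data), the slope k - 1 can be estimated by an ordinary linear regression of log incidence on log age:

import numpy as np

ages = np.array([40.0, 45.0, 50.0, 55.0, 60.0, 65.0, 70.0])
# Hypothetical age-specific incidence rates following an approximate t^5 power law with noise.
noise = 1.0 + 0.05 * np.random.default_rng(2).normal(size=ages.size)
rates = 1e-9 * ages ** 5 * noise

slope, intercept = np.polyfit(np.log(ages), np.log(rates), 1)
print(f"estimated k - 1 = {slope:.2f}")                     # close to 5 for this synthetic example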
Animal Experiments and the Effects of Applied Carcinogens In experiments in which animals receive continuous application of a carcinogenic agent, the time to first tumor commonly follows a distribution close to the Weibull (19,20). Such experiments not only provide a measure of support for the general theory, but also enable one to study the dose-response relationship. In the simple multistage model, suppose that m of the k stages are affected by the carcinogen, so that, for these values of i, λ_i = β_i d, where d is the dose intensity. Then, from Eq. (4) and the definition of α in Eq. (3), the hazard function should be proportional to d^m. It is common to find m < k, suggesting that some but not all of the stages are affected by a particular carcinogen. Carcinogenic agents may, of course, not be applied at constant rates, and the question arises how the hazard function h(t) is affected if a particular rate constant, say λ_i, is an arbitrary function of time λ_i(t), which in the simplest case might be proportional to the dose intensity d(t) of an applied carcinogen. The answer (9) is that h(t) is proportional to a weighted mean of λ_i(τ) in (0, t), the weight at time τ (0 < τ < t) being proportional to τ^(i-1) (t - τ)^(k-i-1). This means that, for small values of i (early stages affected), what matters is the value of λ_i(τ) at low τ, whereas for high values of i (say k or k - 1) the more recent values of λ_i(τ) carry most weight. These effects are explored more fully by Whittemore and Keller (3) and by Day and Brown (21). In this context one could broadly explain an initiator-promoter experiment by saying that the initiator affects primarily the first step and the promoter primarily a later step (perhaps the second of two). However, Stenback et al. (22)(23)(24) have shown that the interpretation of these experiments may be complicated by aging and other effects. Earlier, Peto et al. (25) had carried out some experiments with regular benzpyrene applications to mice, which showed that under these circumstances the incidence of tumors depended on the time since start of exposure and not on age. This result is consistent with the view that the first stage is affected and that its enhanced event rate in the presence of benzpyrene is much greater than the natural background rate. Thus, whether or not it also affects some late stage(s), benzpyrene appears at least to "initiate" the first stage. In contrast with the age-independent effect of an initiator, however, Stenback et al., in experiments similar to (but much longer than) those of Berenblum and Shubik (5), found that the "promoting" effect of TPA declined with age, suggesting a systemic aging effect in the response to TPA. Finally, to illustrate that the opposite effect is possible, Gray et al. (26), in experiments on radon inhalation by rats, found the incidence at a fixed time after start of exposure to increase with age. This is what might be expected if radon affected the second or a later stage, since with increasing age at start of exposure there would be more cells that had already undergone one or more of the early stages spontaneously. We return to the question of age effects in the next section.
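A small numerical sketch (illustrative values, not from the paper) of the weighting result quoted above, showing that the hazard weights the stage-i rate by τ^(i-1) (t - τ)^(k-i-1), so that early exposure matters most for small i and recent exposure matters most for large i:

import numpy as np

def stage_weight(tau, t, i, k):
    # Weight applied to lambda_i(tau) in the hazard at time t for a k-stage model.
    return tau ** (i - 1) * (t - tau) ** (k - i - 1)

t, k = 70.0, 5
tau = np.linspace(5.0, 65.0, 5)                             # illustrative exposure times
for i in (1, k - 1):                                        # an early stage versus the penultimate stage
    w = stage_weight(tau, t, i, k)
    print(f"stage i = {i}, normalized weights:", np.round(w / w.max(), 3))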
Human Data and Exposure to Carcinogens The considerations outlined in the first two paragraphs of the section titled "Animal experiments and the effects of applied carcinogens" would be expected to apply to human exposures as well as to animal experiments. One of the most illuminating examples is provided by the effects of starting and stopping smoking at different ages (4,27,28,29). Data from prospective studies, such as the British and American data analyzed by Doll (16), show that nonsmokers have a log-log relationship for lung cancer with a slope k - 1 of about four. For cigarette smokers, the same slope is obtained if time is measured not from birth but from the start of smoking. This is reasonable if smoking enhances one or more of the λ_i to such high levels that the naturally occurring changes are very much less frequent than those induced by smoking. Consider now the effect of stopping smoking. Smokers who stop retain their high rates, but at a constant level, perhaps until the nonsmokers' rates rise to that level. This is precisely what would be expected if smoking affected the (k - 1)-th of k stages, for there would be a pool of ex-smokers with (k - 1)-cells, waiting for the final change which would occur at a constant rate. In due course, the pool will be augmented by naturally occurring (k - 1)-cells and the rate will start to rise. Consider, secondly, the incidence rate at a fixed time after start of smoking, as a function of age at starting. The data are sparse but seem to indicate either little effect of age at starting or at most a rather modest positive effect. This would be consistent with an early stage being affected (if the first stage were affected, so that λ_1 was increased dramatically by smoking, the process would effectively start at that point, but if the second stage were affected, the number of 1-cells available for further transformation would increase approximately linearly with age at starting). Moreover, general considerations about the delay in the effect on a population of a marked increase in smoking suggest that an early stage is affected. Thus, different arguments support effects on both early and late stages. Both effects could, of course, be present. Some skin-painting experiments with benzpyrene on mice (19) have suggested an incidence proportional to (dose)^2, in turn suggesting that two stages are affected by benzpyrene or that there is one stage with a quadratic effect. A preliminary analysis by Whittemore and Altshuler (30) of the study on British doctors (31) suggested that the incidence rate was proportional to the number of cigarettes, which provisionally implied that one stage was affected proportionally to dose, i.e., that m = 1. However, an analysis (32) of a "reliable" subset of the doctors' data suggests a response more than proportional to dose; the estimate of m may be reduced by errors of measurement of smoking habits; and the effect of smoking on a particular λ_i may be less than proportional to the daily consumption of cigarettes. The evidence thus points, somewhat loosely, toward the involvement of two stages. A useful discussion of the effects of removal of carcinogenic exposure in a range of human cancers, as well as in animal experiments, is given by Day and Brown (21). The concept that a carcinogen may affect only some of the rate constants helps us to understand some of the observed interactions between different carcinogens. In some instances, as in the interaction between smoking and asbestos exposure (33), the effects of the two agents appear to be multiplicative. This would be expected if they acted, with proportionate effects on the rate constants, on two different stages, say the i-th and j-th, since the hazard function, Eq. (4), involves the product λ_i λ_j. On the other hand, if both agents affect the same rate constant λ_i, their effects could well be additive. Two-Stage Models: Breast Cancer In the absence of direct biological evidence about a succession of stages, models with several (five to seven) stages have often been regarded as implausible. A two-stage model with exponential proliferation of the 1-cells has been discussed (34).
The exponential growth in the rate constant for the second stage has much the same effect as a low-order polynomial, and it is not surprising that the two-stage model with proliferation mimics fairly closely the multistage model with constant rates. Other two-stage models are detailed elsewhere (35)(36)(37). Moolgavkar and Venzon have studied a generalization of the model (39) permitting growth also of the 0-cells and have been able to fit data for a wide range of human cancers. The model has been adapted for breast cancer by Moolgavkar, Day, and Stevens (40), who postulate growth in the rate constant for the first initiation during puberty (with menarche following a logistic curve), subsequent proliferation of 1-cells with an enhanced rate during pregnancy, a reduced rate after menopause, and a protective effect of first birth through a subsequent reduction in 1-cell proliferation. They provide extremely impressive fits to data. Pike and his colleagues (41,42) have obtained equally impressive fits with a model conceptually different from, but very similar in its consequences to, that of Moolgavkar et al. Pike et al. adopt a power law with an index k - 1 of 4.5, but assume that "time" (as used in the formula) is effectively expanded or contracted during a woman's life. Exposure starts at menarche (for which again a logistic curve is assumed), "time" moves more rapidly during reproductive life, with a temporary spurt during pregnancy, a fall after the first birth and a further fall after menopause. These authors suggest that the constancy of breast cancer rates in postmenopausal Japanese women (in contrast to the rise in other population groups) may be an effect of their low weights and low estrogen levels. The changes in the rate of passage of "time" are equivalent, in the simple multistage model, to the multiplication of all the rate constants by some factor varying throughout a woman's life, and may be motivated by the view that the rate constants depend on the rate of metabolism of stem cells, which may vary in the way indicated. It would, of course, be a rather strong assumption that all the rate constants should remain in the same ratios to each other although varying greatly with time. Low-Dose Extrapolation Considerable interest has been expressed in recent years in the assessment of low-dose carcinogenicity on the basis of extrapolation toward zero dose from the results of animal experiments in which high doses of test substances are used (43). Setting aside the important questions of the extrapolation from laboratory animal to man, there are serious problems about downward extrapolation within one animal species. The results depend heavily on the assumed nature of the dose-response curve at very low doses (44)(45)(46)(47). One plausible and helpful assumption is, however, suggested by many multistage models. Suppose, as before, that m stages are affected by the carcinogen. Since we are dealing with very low doses it will be inappropriate to assume the λ_i to be proportional to the dose intensity d, because there may well be background effects, but a linear relation seems reasonable. At fixed t, therefore, from Eq. (3), the cumulative incidence at dose d will be P(d) = 1 - exp{-∏_i (α_i + β_i d)} (6), where the parameters α_i and β_i absorb the constants and terms involving t in Eq. (3). Expanding the product gives a polynomial in dose, P(d) = 1 - exp{-(θ_0 + θ_1 d + θ_2 d^2 + ... + θ_m d^m)} (7), where all the θ_i are nonnegative, and this model has been studied in detail (48)(49)(50)(51). In Eq. (7), if θ_1 > 0, the response curve is essentially linear at low doses.
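A brief numerical sketch of Eq. (7) (with purely hypothetical θ values), showing that when θ_1 > 0 the excess risk over background is essentially proportional to dose at low doses:

import numpy as np

theta = np.array([1e-3, 5e-3, 2e-2])                        # hypothetical theta_0, theta_1, theta_2 (all >= 0)

def P(d):
    # Multistage dose-response: P(d) = 1 - exp{-(theta_0 + theta_1 d + theta_2 d^2)}
    return 1.0 - np.exp(-np.polyval(theta[::-1], d))

background = P(0.0)
for d in (0.001, 0.01, 0.1):
    print(f"d = {d:g}: excess risk = {P(d) - background:.2e}, linear term theta_1*d = {theta[1] * d:.2e}")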
This low-dose linearity will give more "conservative" assessments (i.e., a given excess risk will be reached at lower doses) than most or all other models proposed. Now, a maximum likelihood estimate (50) of θ_1 may be 0, in which case a steeper curve will be fitted at low doses; however, some nonzero value of θ_1 will always be consistent with the data, and so linear extrapolation can scarcely be excluded as a reasonable procedure. Other models have been proposed. Hartley and Sielken (52,53) generalize Eq. (7) to include time. Cornfield and his associates (54)(55)(56) have studied a multihit model (i.e., one involving hits occurring in an arbitrary order), which, with certain assumptions about background effects, has similar consequences to Eq. (6). However, Van Ryzin (57) points out that the low-dose linearity of Eq. (6) depends essentially on the assumption that the λ_i are asymptotically linear in d. This linearity would follow if the background incidence were due to a carcinogenic agent whose dose combined additively with the applied dose. This need not be true. Biological Evidence and Conclusions In the construction of mathematical models for biological phenomena it is not uncommon to find that theories of quite disparate types provide good fits to the same data. Discrimination between models must then depend partly on general biological plausibility and partly on the ability of the models to explain new data. This is essentially the position with our present topic. As a statistician I can offer no authoritative guide to biological mechanisms. There seems little doubt, though, that the multistage theory, in some form or another, has provided a useful framework for hypothesis formation and for the design of observational and experimental studies. A number of experimental biologists maintain that carcinogenesis is a multistage process (the term 'multistep' is often used) (58,59), with perhaps an initial mutation-like stage of initiation being followed by one or more steps of a different nature (such as the activation of an oncogene). Evans and DiPaolo (60,61) have identified a number of specific stages in the progression of guinea pig fetal cells to neoplasia, such as morphological transformation, anchorage-independent growth, colony formation in agar, etc. Until and unless we obtain direct evidence about the presence and nature of intermediate stages, any statistical theory is likely to remain largely unfalsifiable, particularly if it is allowed to be modified with the flexibility to which we have become accustomed. The main contenders for generally applicable theories seem to be (a) the multistage theory, (b) some form of two-stage theory, and (c) the time-transformation theory of Pike and his colleagues. Until we have clear evidence for more than two stages, it seems best to regard the multistage theory, like the dogmas of certain religions, as permitting either a literal or a figurative interpretation. That is, one can either assume that there really are k > 2 separate stages or one can regard some of the intermediate stages as being fictional shorthand for a single proliferative stage. There does seem a need to preserve at least two stages, so that we can distinguish between "early" and "late" effects of carcinogens. The explicit two-stage models, with appropriate assumptions about proliferation, seem to explain many of the known facts.
However, the observations on the effects of starting and stopping smoking, described above, suggest that at least three stages are involved for lung cancer (two affected by smoking and a final stage). Moreover, the multiplicativity of the effects of asbestos and radiation with that of smoking suggests at least a third stage. In any proliferative model, the precise nature of the proliferative parts of the process is likely to remain indeterminate until and unless direct biological observations become available. The time-transformation model is relatively new and its full consequences have not, as far as I know, been explored. In one sense, it avoids some of the assumptions of the other models, in that the power law can be invoked as an empirical observation without any reference to stages. On the other hand, the particular way in which the response function is modified by changing circumstances (which we have seen would be equivalent to changing the rate for each of a number of stages by the same multiple) seems more specific than is required by other models, and it is unclear whether the model provides a suitable explanation for initiator-promoter data or other data in which an early or a late effect is indicated. In many areas of biomathematics the ingenuity of the mathematician often seems to run ahead of the ability of the biological scientist to provide the data needed to validate the mathematical models. In the study of carcinogenesis it is encouraging to see, to an increasing extent, close cooperation between mathematicians and statisticians on the one hand, and biologists on the other, and I believe that in this sort of collaboration lies the key to the solution of some of the problems I have discussed in this paper. I am grateful to Mr. Richard Peto for helpful comments on the first draft of this paper.
6,045.2
0001-01-01T00:00:00.000
[ "Biology", "Mathematics", "Medicine" ]
Blind photovoltaic modeling intercomparison: A multidimensional data analysis and lessons learned The Photovoltaic (PV) Performance Modeling Collaborative (PVPMC) organized a blind PV performance modeling intercomparison to allow PV modelers to blindly test their models and modeling ability against real system data. Measured weather and irradiance data were provided along with detailed descriptions of PV systems from two locations (Albuquerque, New Mexico, USA, and Roskilde, Denmark). Participants were asked to simulate the plane-of-array irradiance, module temperature, and DC power output from six systems and submit their results to Sandia for processing. The results showed an overall median mean bias (i.e., the average error per participant) of 0.6% in annual irradiation and −3.3% in annual energy yield. While most PV performance modeling results seem to exhibit higher precision and accuracy as compared to an earlier blind PV modeling study in 2010, human errors, modeling skills, and derates were found to still cause significant errors in the estimates. because their accuracy depends on the analysis and modeling pipeline, which commonly includes irradiance transposition, module temperature, and power output modeling. Different models and their combinations may result in varying accuracies, and different assumptions for performance loss factors (derates) will significantly affect the energy yield estimations. These may also depend on the PV plant configuration, module type, and geographical location. Furthermore, the modeler's skills and experience can affect the resulting accuracy. New PV performance models are continuously being developed, and existing models are frequently updated. However, only a limited number of them have been evaluated independently, from multiple aspects, against high-quality field datasets (e.g., in previous studies [2][3][4][5][6][7]). When an approach is tested against known datasets, the modeler might introduce a bias, which directly influences the approach's validity, reproducibility, and applicability to different systems. In such cases, blind intercomparisons are useful for benchmarking analysis pipelines and establishing the state of the art in PV performance modeling. The PV Performance Modeling Collaborative (PVPMC) was founded based on the outcomes of the blind PV modeling study in 2010. 8,9 Previous intercomparisons of PV modeling approaches include that of Friesen et al., 10 wherein time-series plane-of-array irradiance (Gpoa) and module temperature (Tmod) data were circulated to eight European institutions. These participants were asked to simulate the module-level performance of five PV technologies in seven climates, and it was found that the group's energy yield predictions agreed within ±5%. Moser et al. analyzed the long-term yield predictions of six expert modelers for a PV system at an Italian and an Australian site. 11 These modelers were required to independently obtain meteorological data for their simulations, which, for the Italian site, led to 6% differences in global horizontal irradiance (GHI), 20% differences in Gpoa, and ultimately nearly 30% differences in AC energy. Most recently, Vogt et al. 12 conducted an intercomparison of energy rating calculations per IEC 61853-3 13 with nine European institutes. Energy rating differences of 14% were found in the first blind comparison round. It took five rounds of calculations, and discussions among the participants, for the nine participants' calculations to agree within 0.1%. Ultimately, Vogt et al.
12 demonstrated how user-induced variability can be reduced when modelers have clear procedures for implementing key steps of the PV model chain. To provide an opportunity for PV modelers to test their models and modeling ability against high-quality, real system data, and to help provide a baseline quantifying the variability of different models and modelers, PVPMC organized a new blind PV performance modeling comparison in 2021. | METHODOLOGY The blind PV modeling comparison was announced in July 2021 through the PVPMC email list (https://public.govdelivery.com/accounts/USDOESNLEC/subscriber/new?topic_id=USDOESNLEC_185). The data and the document describing the exercise were downloaded >600 times. Sandia received 29 submissions from 28 participants with various modeling pipelines, including new commercial software. This effort represents 26 institutions from 12 different countries. Air temperature was measured using two Climatronics Aspirated Shield Temperature Sensors. Module temperature was measured using back-of-module resistance temperature detectors (RTDs) on one module of each string. Voltage and current were measured at the string level for all systems using voltage dividers and Manganin shunts. The Roskilde systems and measurement setup in scenarios 3-6 are described by Riedel-Lyngskaer et al. 14 The participants had access to general instructions, hourly averaged weather data from the locations (Gpoa was not included), module and inverter spec sheets, system designs, and test reports. | Scenarios and data The test reports were only available for the systems in Albuquerque (i.e., S1 and S2) and included IEC 61853-1 15 matrix data, IEC 61853-2 16 incidence angle modifier (IAM), nominal module operating temperature (NMOT), and PAN (Panneau Solaire) files. A frequently asked questions (FAQ) section was regularly updated on the PVPMC website to ensure everyone had access to the same information. Modeling results were collected and handled by Sandia, ensuring anonymity. The participants know their "participant number" only, and they had the option to exclude their name from any publication. This paper's author numbers and order are unrelated to the participant numbers in the figures. The participants were asked to simulate Gpoa (in W/m²), module temperature (in °C), and system DC power output (Pmp in W). Some participants resubmitted their estimates to correct "minor" mistakes such as modeling 48 modules instead of 12, submitting incorrect units (e.g., kWh/kWp or kW instead of W), reporting direct irradiance instead of global, and so forth. To ensure that non-physical irradiance values (i.e., sun below the horizon, below/above physical minimum/maximum, static measurements, and inconsistent irradiance components) are ignored, the datasets were filtered based on version 2 of the Baseline Surface Radiation Network (BSRN) Global Network-recommended quality control tests. 17 Furthermore, datapoints during days with snow were filtered out from both locations. The data from Roskilde include additional filters to ensure the proper operation of the single-axis tracking (by comparing the tilt angles of neighboring trackers) and that the data acquisition system was online. All values lower than 100 W/m² in frontside Gpoa, lower than 50 W in output DC power, and outside −5 °C to 45 °C in ambient temperature were also filtered out.
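A minimal pandas sketch of the simple range filters just described (the column names are assumptions for illustration; the BSRN and snow-day filters are not shown):

import pandas as pd

def apply_range_filters(df: pd.DataFrame) -> pd.DataFrame:
    # Keep only rows with frontside Gpoa >= 100 W/m2, DC power >= 50 W,
    # and ambient temperature between -5 and 45 degC.
    mask = (
        (df["gpoa_front"] >= 100.0)
        & (df["dc_power"] >= 50.0)
        & df["temp_air"].between(-5.0, 45.0)
    )
    return df.loc[mask]

example = pd.DataFrame({"gpoa_front": [50.0, 600.0], "dc_power": [10.0, 900.0],
                        "temp_air": [20.0, 25.0]})
print(apply_range_filters(example))                         # keeps only the second row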
The validation datasets are available online, in open access, at two locations. The first is the website of the PV Performance Modeling Collaborative at https://pvpmc.sandia.gov/. The second is the Duramat Data Hub at https://datahub.duramat.org/dataset/pvperformance-modeling-data (doi: https://doi.org/10.21948/1970772). 18 | Statistics Based on the participants' affiliations, they were grouped into the following categories: (1) Commercial, (2) Research, (3) Software, and (4) Student. The commercial category includes consulting and engineering companies, independent engineers, owners, utilities, and producers. Figure 1 shows the percentage breakdown per category. Some models/software can reveal who the participant is when used only once, and other models did not achieve an adequate statistical sample. To ensure anonymity and to focus on approaches with significant participation, the following categories were created: 1. "Other model" refers to known models used by a single participant (e.g., pvlib-pvwatts from pvlib-python 19 ); 2. "Custom model" refers to "in-house" customized models that are not available to the public (these are models developed and used by some independent engineers); 3. "Other software" includes software used by a single participant (e.g., Archelios 20 or PVSol 21 ). Figure 2 shows statistics on the models used in this study. With respect to the transposition models, the majority used the Perez model, 23 whereas in the case of temperature modeling, more than 60% of the participants used the PVsyst (Tcell), 24 Sandia Array Performance Model (SAPM), 25 and Nominal Operating Cell Temperature (NOCT) 26 models. It should be noted here that "PVsyst (Tmod)" refers to the temperature model in PlantPredict 27 ; this model is the same as in PVsyst, but it is then converted to module temperature using the equation developed by Sandia. 25 Finally, close to 50% of the participants used the PVsyst 28 and System Advisor Model (SAM) 26 software packages for PV performance modeling. As mentioned in subsection 2.1, the S1 and S2 systems included PAN files, IEC 61853-1 matrix, IAM, and NMOT data. As seen in Figure 3, most participants used the PAN files, mainly due to the high percentage of PVsyst users. Only 24.6% of the participants used the provided IEC 61853-1 data. The IAM and NMOT data were used by half of the participants. The percentages in Figure 3 are also broken down by individual models and categories. | RESULTS To rank the participants, the mean absolute percentage error (MAPE) was used to compare the annual irradiation (Figure 4A) and energy yield (Figure 4B) estimates. The median MAPEs in annual irradiation and energy yield estimations were close to 2% and 5.5%, respectively. Interestingly, the participants with the lowest MAPE (<<1%) in the annual irradiation estimation (i.e., P23, P2, and P22) exhibited high MAPE in the annual energy yield estimation (ranging from 8.2% to 68.7%; the y-axis limits were truncated to 7% for clarity). Note that not all participants modeled all six scenarios.
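For reference, a short sketch of the mean absolute percentage error used for the ranking (the numbers are made up for illustration):

import numpy as np

def mape(modeled, measured):
    modeled, measured = np.asarray(modeled, dtype=float), np.asarray(measured, dtype=float)
    return 100.0 * np.mean(np.abs((modeled - measured) / measured))

# One hypothetical participant's annual energy yield (kWh) for three scenarios.
print(f"MAPE = {mape([4100, 5150, 3020], [4200, 5000, 3100]):.2f}%")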
| Irradiance modeling A PV performance modeling pipeline always begins with irradiance. In this study, the participants had the measured global horizontal, direct normal, and diffuse horizontal irradiances and were asked to apply transposition models to estimate Gpoa. Figure 5 shows the diurnal front (top row) and rear (bottom row) Gpoa estimates by all participants. One of the participants in S3 appears to have simulated a fixed tilt system instead of a tracking one. As expected, front Gpoa is not as difficult to predict, whereas problems arise when modeling the rear Gpoa, where minimum and maximum differences above 100% were observed. It is worth mentioning that despite these high differences in rear Gpoa, this component represents <10% of the total irradiance. It is worth noticing that all but one of the PVsyst (Tcell) users exhibit nearly identical distributions, demonstrating consistent calculations within the most popular software. However, it should be noted that while the median bias of this model is small, the comparison is against module temperatures, while PVsyst (Tcell) only calculates cell temperature. Minimum and maximum percentage differences from the measured front Gpoa at noon ranged from −11% (S5) to +61.3% (S3); the latter was due to a participant who simulated fixed tilt rather than tracked Gpoa. If this participant is excluded, the minimum and maximum differences range from −11% to +1.99%. Minimum and maximum percentage differences from the measured rear Gpoa at noon ranged from −99.7% (S4) to +149.4% (S6). | OBSERVATIONS AND LESSONS LEARNED For the systems in Albuquerque (i.e., S1 and S2), the participants had module information that is not commonly available. PAN files might be available in databases but, in this study, the PAN files were specific to the modules in S1 and S2, that is, not generic representative PAN files. The objective here is to observe how the various assumptions or the usage of additional information affected the results, and to describe the lessons learned from this study. It should be expected that as module temperature increases, efficiency will decrease. To examine whether this holds true for all participants' temperature coefficient inputs, these trends were reverse calculated. This was done by taking a subset of data for modeled Gpoa from 800 to 1200 W/m² and wind speed lower than 10 m/s and fitting a regression model of modeled power against the module temperature modeled by each participant (see Figure 9). Qualitatively, it can be observed that some participants assumed lower temperature dependency, while others assumed positive temperature coefficients. The latter might be due to an error in applying the sign in the equation; another speculation is that the participant(s) may have used the temperature coefficient for current instead of the one for power. Furthermore, some participants miscalculated the system size by either over- or under-sizing the number of PV modules. Therefore, human errors are not uncommon in PV performance modeling. Another common confusion observed during this blind PV modeling comparison was that many participants interchangeably used the U0 and U1 values of the Faiman model and the Uc and Uv values of the PVsyst (Tcell) model. Although these models are similar, they are not the same, and therefore the U parameters should not be used interchangeably. Methods for parameter translation between temperature models (e.g., translating U0, U1 to Uc, Uv) have recently been published elsewhere, 30 and functions are available in pvlib-python under pvlib.temperature.GenericLinearModel.
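To make the distinction concrete, the following sketch implements the published Faiman and PVsyst cell-temperature equations directly (the coefficient values shown are common defaults used only for illustration). Feeding Faiman's U0/U1 into the PVsyst form, or vice versa, changes the predicted temperature rise, because the PVsyst form scales the irradiance by absorptance and efficiency terms while the Faiman form does not.

def faiman(poa, temp_air, wind, u0=25.0, u1=6.84):
    # Faiman: module temperature rise = poa / (u0 + u1 * wind_speed)
    return temp_air + poa / (u0 + u1 * wind)

def pvsyst_cell(poa, temp_air, wind, u_c=29.0, u_v=0.0, alpha=0.9, eta=0.20):
    # PVsyst: only the absorbed irradiance not converted to electricity heats the cell.
    return temp_air + poa * alpha * (1.0 - eta) / (u_c + u_v * wind)

poa, temp_air, wind = 1000.0, 25.0, 2.0
print(faiman(poa, temp_air, wind), pvsyst_cell(poa, temp_air, wind))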
Figure 10 shows the modeled temperature rise (i.e., the difference between the modeled module temperature and the measured ambient temperature) as a function of modeled Gpoa; robust regression was used to de-weight outliers (dashed lines), and the black dashed lines correspond to module measurements (only available for S1-S3) for wind speed <2 m/s and module temperature >0 °C. The heat loss coefficients of the respective models were frequently set too high. This resulted in maximum temperature differences between participants and field data of up to ≈15 °C at 1000 W/m² in Albuquerque and ≈10 °C in Roskilde, which would produce an error in simulated module power reaching ≈6% at those times. Recent work 32 shows that all these named temperature models can be improved to account for radiative losses, and a modified Faiman model was made available in the open-source pvlib-python library as pvlib.temperature.faiman_rad (after this study took place). Nevertheless, the improved models still require the appropriate empirical heat loss coefficients for the system being simulated. The plots in Figure 12 show the bias in annual irradiation (A) and energy yield (B) for all participants. Although the irradiation bias for most participants was positive (i.e., showing overestimation) and the overall median was very close to zero (see red dashed lines), the energy yield was underestimated by most of the participants, with a median value of −3.3%. This behavior raised the question of the derate assumptions made by the participants. Quantifying or setting the derates is a critical step in PV performance modeling. Derates (or performance loss factors) describe the losses that can occur within a system, for example, due to conductor resistance, soiling, module degradation, and so forth. After comparing the derate assumptions of individual participants, it was found that the highest underestimations were exhibited by participants that over-budgeted for derates. In contrast, the participants applying modest derate assumptions achieved biases much closer to zero. This is interesting because the modeling community is often concerned with the accuracy of model equations and their parameter values, 33,34 whereas in this study the errors were driven largely by the initial assumptions made by the modelers. It should be mentioned, however, that these scenarios include data from lab-scale systems that were built for research purposes. As such, these are likely to experience lower losses than utility scale power plants. To further examine the impact of the derate assumptions, the annual bias of each participant, for each scenario (i.e., Figure 12B), was applied as an adjustment to their corresponding hourly time-series (i.e., by multiplying the hourly modeled power time-series by a factor that removes the annual bias). The low bias of the software category could be expected, since software companies know their products better than anyone else. It is also worth noting that the commercial sector, which deals mostly with larger power plants, assumed higher derate values, resulting in a higher bias spread (see orange boxplot). This is another indication that modeling different system sizes will require appropriate derate budgeting. The adjusted annual energy yield bias shows that all sectors but the student category exhibited distributions very close to zero.
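A small sketch of the bias adjustment described above (variable names assumed; the correction factor removes each participant's annual energy-yield bias from their hourly series):

import pandas as pd

def remove_annual_bias(modeled_power: pd.Series, measured_power: pd.Series) -> pd.Series:
    # Fractional annual bias, then rescale the hourly modeled series to cancel it.
    bias = modeled_power.sum() / measured_power.sum() - 1.0
    return modeled_power / (1.0 + bias)

modeled = pd.Series([100.0, 200.0, 150.0])
measured = pd.Series([110.0, 210.0, 160.0])
adjusted = remove_annual_bias(modeled, measured)
print(adjusted.sum(), measured.sum())                        # annual totals now match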
| CONCLUSIONS PVPMC's blind PV modeling intercomparison found that: 1. The irradiance transposition models seem to perform well, except the isotropic one. 2. Modeling the rear Gpoa is still challenging, with errors exceeding ≈±100%. However, it should be mentioned that rear Gpoa represents ≈10% or less of the total irradiance. Unfortunately, the bifacial PV time-series in this study contained only a handful of rear Gpoa days. As such, no further analysis has been conducted to investigate the impact of their variations. Depending on data availability, future PVPMC blind modeling intercomparisons will include larger systems, subhourly time-series, investigations of rear Gpoa, and an iterative submission process that would enable a more detailed determination of the uncertainties involved at each step of a PV performance modeling pipeline. To quantify the variability of different models and modelers, PVPMC organized a new blind PV performance modeling comparison in 2021. Measured weather and irradiance data and detailed descriptions of six PV systems from two locations (Albuquerque, New Mexico, USA, and Roskilde, Denmark) were provided. Participants were asked to simulate the systems' plane-of-array irradiance, module temperature, and DC power output and submit their results back to Sandia for processing. This work compares system-level performance modeling considering all DC-side loss factors. Rather than independently obtaining meteorological data for the simulations, participants were provided with measured meteorological and irradiance data as a starting point. This provision enabled the propagation of the sources of uncertainty within the modeling pipeline, instead of the results being affected by the uncertainty of the input data. Furthermore, this study was open to anyone (i.e., industry, research, and academia), rather than inviting specific individuals. As such, this article presents the multidimensional data analysis of the PVPMC blind modeling intercomparison, providing the results for each modeling step. Finally, it summarizes the lessons learned and areas where improvements are needed. For this comparison, six scenarios of practical interest to the community were identified, including (a) fixed and tracking systems, (b) monofacial and bifacial modules, (c) modules representative of the current PV market and upcoming technologies, and (d) distinctly different geographical locations/climates (see Table 1). In Albuquerque (S1, S2), GHI was measured using a Kipp and Zonen CMP-21 pyranometer. Kipp and Zonen CH1 and Eppley normal incidence pyrheliometers (NIP) were used to measure DNI. To measure the diffuse horizontal irradiance (DHI), two Eppley precision spectral pyranometers (PSP) were used, one having a shade disk and the other a shade band. The Gpoa was measured using a Kipp and Zonen CMP-11 pyranometer. Wind speed was measured at 10 m above ground level using a Climatronics Wind Mark III Wind Sensor. Figure 2: Statistics on the models used in this study; with respect to the transposition models, the majority used the Perez model.
Another observation is an apparent time-shift in the estimates of some participants. It seems that there is confusion between instantaneous and time-averaged measurements, especially when sun position is involved. This study reported the hourly averaged irradiance data at the end of the hour. Therefore, most models should assume a sun position calculated 30 min before the hourly timestamp as being the most representative. On the other hand, other data sources commonly place timestamps at the beginning of the interval. As such, some software properly account for this by calculating the solar position and other time-sensitive values at the center of the interval (i.e., +30 min). Modeling software, such as PVsyst, make an exception for timestamps that span sunrise or sunset to pick a sun position halfway between the horizon and the sun position at the neighboring daylight timestamp. In this study, some participants seemed to adjust by shifting their time-series back by 30 min, while others kept it at the end of the hour. This is clearly an area where procedures could be standardized.

Empirical cumulative distribution functions (ECDF) present residuals in ascending order to observe how they are distributed across the datasets. The ECDF plots in Figure 6 show the residuals between modeled and measured Gpoa, grouped by the transposition models. The off-pink lines are the individual participants, and the black dashed lines indicate the median residuals per model. A steep rise near zero suggests that there are mostly small model errors and relatively few large ones. Most models except the isotropic indicated good accuracy (median values close to zero). The isotropic model underestimated Gpoa by 11.25 W/m². Although the distributions of residuals for most Perez users cluster together, some outlying distributions still exist, indicating errors in the implementation of solar position algorithms, errors in system configuration, and the possibility of applying Perez model coefficient sets other than the most commonly used "All sites composite 1990" set. 29 When comparing residuals against system configuration, there was a slight over-estimation in the single-axis tracking system in S3 (median residual of 6.5 W/m²) as compared to the fixed-tilt systems in S1 and S5, with −1.7 W/m² and 0.77 W/m², respectively.

Module temperature modeling

First, it should be mentioned that although the accuracy of resistance temperature detector (RTD) sensors is typically within 0.1°C, it is still not possible to know what the representative temperature for a PV array is unless the array is equipped with multiple sensors (e.g., one for each solar cell). This is practically and economically not feasible. Therefore, although this study compares with an average module temperature value from four different sensors, the differences reported in this work should not be taken in a strictly quantitative manner.

(Figure 1 caption: Categorization of participants, including commercial entities, researchers, software companies, and students; the commercial category has the subcategories consulting and engineering, independent engineer, owner, utility, and producer.)
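Before comparing the temperature models, here is a minimal sketch of how one of them, the Faiman model, can be evaluated with pvlib-python; the inputs and the heat loss coefficients (pvlib's defaults u0 = 25.0 and u1 = 6.84) are illustrative assumptions, not values from the intercomparison.

import pandas as pd
import pvlib

# Hypothetical inputs (W/m^2, deg C, m/s); a real study would use measured data.
poa = pd.Series([200.0, 600.0, 1000.0])
temp_air = pd.Series([10.0, 20.0, 30.0])
wind = pd.Series([1.0, 2.0, 4.0])

# Faiman model: Tmod = Tair + poa / (u0 + u1 * wind).
t_mod = pvlib.temperature.faiman(poa, temp_air, wind, u0=25.0, u1=6.84)

# Temperature rise as analyzed in Figure 10.
print(t_mod - temp_air)

The radiative variant pvlib.temperature.faiman_rad mentioned later in the text adds a sky radiative loss term, but, as noted, it still requires heat loss coefficients appropriate for the specific system.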
Figure 7 shows the ECDF plots of the module temperature residuals for three out of six scenarios (due to availability), grouped by all models. Figure 8 shows the ECDF plots of the normalized power residuals for all scenarios, grouped by all model categories. The power residuals were normalized by the systems' nominal capacities to allow a meaningful comparison among the different scenarios. Overall, all models underestimated the normalized power by up to 43.3 W/kWp (or 4.33%), whereas the SDM category demonstrated superior performance with a bias close to zero (−1.07 W/kWp). PVsyst users, who comprise 33% of the participants, group well together except in one instance. This mass underestimation raises the question of whether this is because of a modeling issue, input selection, or any other assumptions involved within the pipeline. This is further examined in Section 4.

(Figure 6 caption: Empirical cumulative distributions of the plane-of-array irradiance residuals, in W/m², for all scenarios, grouped by transposition model. Figure 7 caption: Empirical cumulative distributions of the module temperature residuals, in °C, for all scenarios, grouped by temperature model. Figure 8 caption: Empirical cumulative distributions of the normalized power residuals, in W/kWp, for all scenarios, grouped by model. In each, participants are shown in different colors, dashed black lines indicate the median residuals within a modeling category, and the "Other" and "Custom" categories include models or software that differ within the same category.)

The modeled temperature rise as a function of modeled Gpoa is shown in Figure 10. Robust regression 31 (colored dashed lines) was used to de-weight outliers. The black dashed lines correspond to module temperature measurements (only available for S1-S3) for wind speed <2 m/s and module temperature >0°C. Negative temperature differences were measured in Albuquerque, where low sky temperatures led to significant radiative cooling of the modules. Only one custom model separately accounts for radiative losses and correctly predicts such negative values. All other models lump radiative losses together with convective losses and represent the combined heat loss with one or two empirical heat loss coefficients. In Driesse et al., 30 it was shown that all of the named models in Figures 7 and 10 are essentially equivalent; therefore, underprediction of module temperature in the simulations is a clear indication that the heat loss coefficients used for the respective models were set too high.

(Figure 9 caption: Regression model fits for modeled power versus module temperature; the scatter was removed to improve clarity, the fits consider datapoints with modeled Gpoa from 800 to 1200 W/m² and wind speed <10 m/s, and participants are shown in different colors.)
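Since the underprediction is attributed to heat loss coefficients that were set too high, the following sketch (assuming SciPy) shows one way such coefficients could be fitted to field data using the Faiman form Tmod = Tair + Gpoa/(u0 + u1*wind); the synthetic data below merely stand in for the filtered measurements behind Figure 10.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
# Synthetic field data standing in for filtered module measurements.
gpoa = rng.uniform(200, 1000, 500)          # W/m^2
wind = rng.uniform(0, 5, 500)               # m/s
t_air = rng.uniform(0, 35, 500)             # deg C
u0_true, u1_true = 25.0, 6.84
t_mod = t_air + gpoa / (u0_true + u1_true * wind) + rng.normal(0, 0.5, 500)

def faiman_rise(X, u0, u1):
    gpoa, wind = X
    return gpoa / (u0 + u1 * wind)

# Fit u0 and u1 to the observed temperature rise.
(u0_hat, u1_hat), _ = curve_fit(faiman_rise, (gpoa, wind), t_mod - t_air, p0=(25.0, 6.0))
print(u0_hat, u1_hat)

Coefficients fitted this way are, of course, only valid for the system and conditions they were derived from, which is exactly the point made in the text about derived versus assumed heat loss coefficients.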
The relative efficiency across the modeled Gpoa plots (Figure 11) can provide information on the electrical modeling assumptions made by the participants, based on whether the provided module data were used or not. The relative efficiency was calculated by taking the ratio of the modeled efficiency and each participant's "nominal" efficiency. The latter was calculated by taking a subset of modeled Gpoa from 950 to 1050 W/m² and assuming the median temperature-corrected (to 25°C) efficiency as the "nominal" efficiency for each participant. The plots in Figure 11 were categorized based on the participants' responses on PAN or IEC 61853-1 matrix data usage; the top and bottom rows correspond to S1 and S2, respectively, while the red circles are the measured values. The data in Figure 11 represent conditions where the AOI < 70° and the modeled Tmod is from 20°C to 30°C. IEC 61853-1 data for all modules show lower efficiencies at low irradiance. Many participants' results matched these data trends, whereas others calculated flat efficiencies with Gpoa or showed higher efficiencies at low irradiance.

The adjusted annual energy yield bias is shown in Figure 12C, where the overall median bias is nearly zero, indicating that the majority of bias errors in Figure 12B were indeed linear. This reinforces the potential conclusion that input assumptions matter more than the model, at least for the climates and systems investigated in this study. How do these results compare to the original PVPMC blind modeling study in 2010? 8,9 Figure 13 (left) shows the bias in annual energy yield from the 2010 blind modeling study, whereas the plot on the right exhibits the bias from this study; these are categorized by the models. Overall, there is a large shift from overestimating energy yield (as high as ~20% in 2010) to being very conservative in the present study.

The remaining conclusions of the intercomparison were:
3. Standardization is needed for handling sun position calculations when using time-averaged irradiance measurements.
4. Incorporating a radiative loss term in module temperature modeling appears to improve accuracy.
5. There is confusion around the U values for the Faiman and PVsyst temperature models. Uc and Uv (PVsyst) values should not be used in place of U0 and U1 (Faiman) values.
6. Most software and models showed similar results, indicating good reproducibility among participants, especially when compared with the 2010 blind modeling study. For example, the spread in estimated energy yield among PVsyst participants is now ~6% compared with ~33% in 2010.
7. Uncertainty and large variation in derate factors between participants appear to explain most of the differences; it was observed that modelers overestimated the derates, resulting in significant power underestimation.
8. Human errors are not uncommon. The intercomparison highlighted several errors related to the temperature coefficients and the efficiency across irradiance. There is an opportunity to develop screening tests that can detect such errors, thus assuring stakeholders of the accuracy of the modeling results.
9. Modeler skill at understanding, choosing, and using the models and their parameters correctly, together with accumulated experience observing various derate mechanisms in operational systems, seems to be more important than the PV model itself (see 7 and 8 above).
6,057.8
2023-07-21T00:00:00.000
[ "Engineering" ]
A Bayesian Singular Value Decomposition Procedure for Missing Data Imputation Abstract Missing data are common in empirical studies. Multiple imputation is a method to handle missing values by replacing them with plausible values. A common imputation method is multiple imputation with chain equations (MICE). MICE defines a series of conditional distributions to impute missing values. Although MICE is relatively easy to implement, it may not converge to a proper joint distribution. An alternative strategy is to model the variables jointly using the general location model, but this model can become complex when the number of variables increases. Both approaches require integration of prior information when there are more variables than cases. We propose a Bayesian model that is based on the singular value decomposition components of a continuous data matrix to impute missing values. The model assumes that the matrix is of low rank by applying double exponential prior distributions on the singular values. We describe an efficient sampling algorithm to estimate the model’s parameters and impute the missing data. The performance of the model is compared to current imputation methods in simulated and real datasets. Of all the methods considered and in most of the simulated and real datasets, the proposed procedure appears to be the most accurate and precise. Supplementary materials for this article are available online. Introduction In many biological and social sciences studies, values for some units are missing. A dataset may have missing values for a variety of reasons. For example, missing data arise when survey respondents refuse to answer questions (Brick and Kalton 1996;Jannach et al. 2016), or when participants are lost to follow up in clinical trials (Hogan, Roy, and Korkontzelou 2004;Morita, Thall, and Müller 2008). Statistical analyses that are applied only to units with complete data may result in biased estimates and misleading conclusions (Little and Rubin 2019). Multiple imputation methods are a set of statistical procedures to handle missing data (Rubin 1996). These methods explicitly fill-in missing data entries with plausible values. The main attraction of these methods is that once a dataset with imputed values has been constructed, standard statistical methods for complete data can be used for analysis. Methods that rely on a single imputation of the missing values fail to adjust for the uncertainty that is inherent in the imputation of unobserved values (Rubin 1996). Multiple imputation methods generate H > 1 complete datasets to account for the uncertainty in the imputation process (Rubin 2004). Multiple imputation procedures can be broadly classified into two general strategies: Fully Conditional Specification (FCS) and Joint Modeling (JM). FCS methods, also known as multiple imputations by chained equation (MICE) (van Buuren et al. 2006), posit a series of univariate conditional models for variables with missing values. These procedures begin with a set of initial values. Using these values, for each variable with missing data, FCS methods estimate a corresponding conditional model. Estimates from this conditional model are used to predict and impute the variable's missing values. Repeating the estimation and prediction steps for each variable is called a cycle (Carpenter and Kenward 2012). These cycles are repeated until "convergence" is achieved and the set of imputed values is used to complete the first dataset. 
FCS algorithms continue for additional cycles to obtain the required number of imputed datasets. Continuous outcomes are typically imputed with either a set of univariate regression models or the predictive mean matching method (Rubin and Schenker 1986;Little 1988). The FCS approach is straightforward and widely adopted in many applications (White, Royston, and Wood 2011), but its theoretical properties are not well understood. Specifically, the stationary distributions of FCS methods may not correspond to inference on multivariate joint distributions of all the variables (Liu et al. 2013). Another limitation arises when the number of cases is smaller than the number of variables. Selecting the explanatory variables for each of the conditional models or defining prior distributions to identify all of the parameters in these conditional models can be complex (Carpenter and Kenward 2012). Recently, recursive partitioning algorithms were proposed as possible solutions to this problem (Burgette and Reiter 2010;Doove, Van Buuren, and Dusseldorp 2014;Shah et al. 2014). However, these methods still suffer from the same theoretical limitations (Xu, Daniels, and Winterstein 2016). JM procedures propose joint models for the entire dataset. Commonly, these procedures assume that the rows in the dataset are independent and follow a joint multivariate distribution. When all of the variables with missing values are quantitative, a commonly used model assumes that each row in the dataset independently follows a multivariate Normal distribution (Honaker, King, and Blackwell 2011b;Goldstein, Carpenter, and Browne 2014). The multinomial and log-linear models were proposed when all of the variables are categorical (Schafer 1997). The general-location and the restricted general location models were proposed to simultaneously describe quantitative and categorical variables (Little and Schluchter 1985;Schafer 1997). These models are statistically valid and are computationally efficient, but they lack the flexibility to model various data features such as skewness or complex dependencies that may arise when the rows of the dataset are dependent ( Van Buuren 2018). Moreover, when the number of rows in a dataset is smaller than the number of columns, the multivariate model is not identifiable without prespecified constraints or prior distributions for model parameters (Carpenter and Kenward 2012). A different JM approach that relaxes the assumption that rows of the dataset are independent is based on the Bayesian Principal Component Analysis (PCA) model (Audigier, Husson, and Josse 2016). This procedure assumes that each entry in the data matrix has independent Normal distribution with an expectation calculated from the singular value decomposition (SVD) components with a known rank k of the entire matrix, and an unknown variance σ 2 . Missing values are imputed using a data augmentation algorithm that iterates between sampling σ 2 and imputing the missing values. This procedure ignores the sampling variability of the rank of the matrix and the estimated SVD components. Low-rank matrix recovery algorithms have been proposed as single imputation procedures for missing values (Laurent 2009). Theses algorithms view missing values estimation process as an optimization problem under certain sparsity constraints on the singular values (Hastie, Tibshirani, and Wainwright 2019). These algorithms do not assume that the rank of the data matrix is known and estimates its rank within the optimization process. 
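As background for the low-rank matrix recovery algorithms cited above, the following is a minimal NumPy sketch of an iterative soft-thresholded SVD imputation in the spirit of those methods; it is a simplified single-imputation stand-in, not the Bayesian procedure proposed in this article, and the penalty value and iteration count are arbitrary illustrative choices.

import numpy as np

def soft_impute(Y, lam=1.0, n_iter=100):
    """Iterative soft-thresholded SVD imputation (single-imputation sketch)."""
    Y = np.asarray(Y, dtype=float)
    mask = np.isnan(Y)                             # True where entries are missing
    X = np.where(mask, np.nanmean(Y, axis=0), Y)   # start from column means
    for _ in range(n_iter):
        U, d, Vt = np.linalg.svd(X, full_matrices=False)
        d_shrunk = np.maximum(d - lam, 0.0)        # Lasso-style shrinkage of singular values
        M = (U * d_shrunk) @ Vt                    # low-rank reconstruction
        X = np.where(mask, M, Y)                   # keep observed entries fixed
    return X

# Toy example: a 20 x 40 matrix of rank 5 with roughly 15% of entries removed.
rng = np.random.default_rng(1)
M_true = rng.normal(size=(20, 5)) @ rng.normal(size=(5, 40))
Y = M_true + rng.normal(scale=0.5, size=M_true.shape)
Y[rng.random(Y.shape) < 0.15] = np.nan
Y_filled = soft_impute(Y, lam=2.0)

The shrinkage of singular values toward zero in this sketch is the optimization-based analogue of the double exponential prior introduced below, which performs a similar adaptive shrinkage within a fully Bayesian model.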
However, the variability of model parameters and the variability of imputed values are commonly ignored when reporting estimates that are based on these procedures. Bayesian matrix completion algorithms can provide the variabilities associated with model parameters and imputed values by generating samples from their joint posterior distribution. Yang et al. (2018) devised a Bayesian matrix completion algorithm which assumes that the columns of the matrix follow a multivariate Normal distribution with mean zero and a common precision matrix. Low-rank matrix is induced by assuming a Normal-Inverse Wishart prior distribution. To improve the sampling speed, variational Bayesian algorithm is implemented (Tzikas, Likas, and Galatsanos 2008). A different Bayesian approach to sample from a low-rank matrix is the Bayesian SVD model (Hoff 2007). In the absence of missing values, the Bayesian SVD model was proposed as a possible procedure to estimate the variabilities of the rank of the matrix and its SVD components by considering them as random variables. The Bayesian SVD algorithm requires sampling on a Stiefel manifold (Chikuse 2003), which can be complex. To overcome this issue, a relaxed SVD factorization that relies on orthogonal low-rank matrix factorization was proposed (Song and Zhu 2016). This algorithm assumes that the eigen vectors are orthogonal, but not orthonormal. Sampling from this relaxed model does not require sampling on a Stiefel manifold, but this factorization may result in multiple equal modes. We propose a Bayesian low-rank matrix completion algorithm by modifying the Bayesian SVD model in three major ways. First, we introduce a set of double exponential prior distributions on the singular values to adaptively shrink the singular values toward zero. These prior distributions mimic the Lasso penalty used in low-rank matrix recovery algorithms (Laurent 2009). Second, we propose an efficient MCMC procedure that reconstructs the double exponential prior distributions as a mixture of Normal distributions (Park and Casella 2008;Griffin and Brown 2010). Third, we modify the Bayesian SVD model to handle missing values using the Data Augmentation algorithm (Tanner and Wong 1987). The proposed Bayesian SVD multiple imputation model does not assume that the rows of the dataset are independent and it can handle datasets with more variables than cases. Using simulated data we examine the operating characteristics of the proposed model and demonstrate that it is superior to existing JM and FCS methods. Lastly, we demonstrate the accuracy and robustness of the new procedure by applying it to data on fine needle aspiration biopsy procedure for breast cancer patients. The article proceeds as follows: Section 2 describes the Bayesian SVD model and the proposed model with double exponential prior distributions. Section 3 presents simulations to examine the propriety of the proposed method and Section 4 compares the proposed method to existing imputation methods. Section 5 examines the performance of existing imputation methods and the proposed methods on a real dataset, and Section 6 provides discussion and conclusions. Bayesian SVD Multiple Imputation Model Let Y = {Y ij } be a m × n data matrix of continuous entries such that Y ij ∈ R. Matrix decomposition procedures involve partitioning matrix Y into constituent elements (Hoff 2007;Tuncer, Tanik, and Allison 2008;Audigier, Husson, and Josse 2016). 
Formally, let Y = M + E, where M = {M_ij} is an m × n mean matrix and E is a matrix of independent and identically distributed Normal random variables with mean zero and variance 1/φ. The mean matrix M can be decomposed as UDV^T, where U is an m × k matrix with orthonormal columns, V is an n × k matrix with orthonormal columns, and D is a k × k diagonal matrix with nonnegative diagonal elements d_1 ≥ · · · ≥ d_k, where k ≤ min{n, m} is the rank of M. The sets of orthonormal matrices U and V are Stiefel manifolds, denoted as V_{k,m} and V_{k,n}, respectively. For a matrix M of rank k, Hoff (2007) proposed a Bayesian matrix decomposition model that assumes uniform prior distributions for U and V over the spaces V_{k,m} and V_{k,n}, respectively; this model is formally defined in Equation (1). For computational simplicity, the singular values are not restricted in the model to be nonnegative. Hoff (2007) assumed a set of Normal prior distributions for the singular values, d_1, . . . , d_k iid ∼ N(μ, 1/ψ), and derived the conditional posterior distributions for U, D, and V when the data matrix is fully observed. We introduce a set of double exponential prior distributions for the singular values, and extend the sampling scheme to incorporate a step for missing value imputation. The double exponential prior distributions produce adaptive shrinkage of the singular value estimates, which is similar to the Lasso penalty in the low-rank matrix recovery algorithm (Park and Casella 2008).

Double Exponential Prior Distributions for Singular Values

Consider a set of double exponential prior distributions for d_1, d_2, . . . , d_k. In addition, let the marginal prior distribution of φ be a Gamma distribution with shape parameter ν_0/2 and rate parameter ν_0 σ_0²/2, φ ∼ Gamma(ν_0/2, ν_0 σ_0²/2). Park and Casella (2008) noted that it is important for the double exponential prior distribution to condition on φ because it guarantees that the posterior distribution p(φ, λ|Y) is unimodal. Sampling the singular values with this prior distribution can be computationally complex. A possible solution to improve computational efficiency is to represent the double exponential distribution as a scaled mixture of Normal distributions, as in Equation (3) (Andrews and Mallows 1974; Park and Casella 2008). Combining (1) and (3), the posterior distribution is given in Equation (4), where π(λ) and π(φ) are the prior distributions for λ and φ.

Sampling Algorithm in the Absence of Missing Values

To sample from the posterior distribution p(U, D, V, φ, λ, τ_1, . . . , τ_k | Y, k), we developed an efficient Markov chain Monte Carlo (MCMC) procedure that is based on the original Bayesian SVD sampling procedure (Hoff 2007). Formally, let U_[,j] and V_[,j] be the jth columns of U and V, respectively. In addition, let U_[,−j] and V_[,−j] be the matrices of all the remaining columns of U and V, respectively. Lastly, let E_{−j} denote the data matrix with the contribution of all components other than the jth removed. The likelihood function can then be expressed as in Equation (5). Conditional on U_[,−j], U_[,j] lies in the null space of the columns of U_[,−j]. Specifically, U_[,j] can be expressed through an orthonormal basis of this null space together with coordinates u_j that are uniform on the (m − (k − 1))-dimensional sphere. Equation (5) shows that the conditional posterior distribution of u_j given Y, D, U_[,−j], V, and φ is a von Mises-Fisher distribution. Thus, given U_[,−j], U_[,j] can be sampled by generating u_j from the von Mises-Fisher (vMF) distribution on the m − (k − 1) sphere with parameter φ d_j E_{−j} V_[,j] and setting U_[,j] accordingly; V_[,j] can be generated similarly from its corresponding conditional distribution.
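The scale-mixture representation mentioned above can be checked by simulation: if τ ∼ Exponential(rate = λ²/2) and d | τ ∼ N(0, τ), then marginally d follows a double exponential (Laplace) distribution with density (λ/2) exp(−λ|d|). A small NumPy sketch of this check follows; it ignores the conditioning on φ for simplicity.

import numpy as np

rng = np.random.default_rng(0)
lam = 1.5
n = 200_000

# Mixture representation: variance tau ~ Exponential(rate = lam^2 / 2),
# then d | tau ~ Normal(0, tau).
tau = rng.exponential(scale=2.0 / lam**2, size=n)
d = rng.normal(loc=0.0, scale=np.sqrt(tau))

# Compare sample moments with the Laplace(0, 1/lam) values:
# Var(d) = 2 / lam^2 and E|d| = 1 / lam.
print(d.var(), 2.0 / lam**2)
print(np.abs(d).mean(), 1.0 / lam)

This is the representation that allows the Gibbs sampler to use conjugate Normal and inverse-Gaussian updates for the singular values and their mixing variances.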
Based on Equation (4), the full conditional distributions for 1/τ_j and d_j are the inverse-Normal distribution and the Normal distribution, respectively. To complete the Bayesian model, a prior distribution for the hyper-parameter λ is required. One possible prior distribution for λ is the conjugate Gamma distribution with shape parameter α and rate parameter β. To generate S samples from the joint posterior distribution p(U, D, V, λ, φ|Y), the steps in Algorithm 1 (Bayesian SVD Sampling) are repeated S times. Because the true rank of the mean matrix M is unknown, we can treat it as a fixed constant or assume that k follows a prior distribution and estimate its posterior distribution in the sampling process (see Hoff (2007) for further details). In some applications of matrix decomposition, the goal is to represent the n variables in the data using a matrix M of rank k << min{m, n} that displays the main patterns in the data. In such applications, estimating the posterior distribution of the rank can be important when describing the k factors that represent the main data patterns. In other applications of matrix decomposition, the goal is to impute missing values in order to implement downstream analysis with statistical tools for complete data. When imputing the matrix, the posterior distribution of the rank is less critical, because researchers are mainly concerned with improving the prediction accuracy of the missing values. Moreover, when the imputed matrices are used in downstream analyses, it has been suggested to rely on saturated models rather than restricted models to avoid uncongeniality (Meng 1994; Murray 2018). When setting the rank to a fixed constant in the Bayesian SVD procedure, k cannot be set at its maximum value min{m, n}, because when k = min{m, n} the conditional distributions of U_[,j] or V_[,j] are Bernoulli distributions over the support of two unit vectors, both orthogonal to either U_[,−j] or V_[,−j]. In the MCMC sampling procedure, U_[,j] or V_[,j] can then only rotate by π degrees from their initial values. This prevents the MCMC sampling procedure from generating meaningful posterior samples. Setting k to min{m, n} − 2 avoids the extreme case of k = min{m, n}, and it provides maximal flexibility for estimating the singular values, because the prior distribution shrinks the nonsignificant singular values. This shrinkage effect on the singular values will be demonstrated in Section 3. However, when k = min{m, n} − 2, the Bayesian SVD algorithm requires sampling min{m, n} − 2 eigenvectors, which increases the computation time significantly. For large matrices with limited computational power, a smaller k may be selected. When k is set significantly smaller than min{m, n} − 2, the estimates of M may be biased, because significant positive singular values are set to zero. In the Gamma prior distribution of φ, the shape parameter is set at ν_0 = 2. This prior distribution was derived based on guidelines for effective sample size computations, which ensure that inference is not dominated by the prior distribution (Morita, Thall, and Müller 2008). For the rate parameter, we use the "Empirical Bayes" method to calculate σ_0² by averaging the residual variances over different ranks of M (Hoff 2007; Casella 2001). Algorithm 2 summarizes the steps for estimating σ_0² using the "Empirical Bayes" method. In this algorithm, we ignore the correlations between φ and the prior distribution of d_j to gain computational efficiency.
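A minimal NumPy sketch of the Empirical Bayes computation of σ_0² described in Algorithm 2: for each candidate rank l, the data matrix is projected onto its best rank-l approximation via the SVD and the residual variance is recorded, and σ_0² is taken as the average of these residual variances. The residual variance here is the plain mean squared residual (the article may use a degrees-of-freedom adjustment), and the toy matrix is only for illustration.

import numpy as np

def empirical_bayes_sigma0_sq(Y):
    """Average residual variance of the best rank-l approximations, l = 1..min(m, n)."""
    m, n = Y.shape
    U, d, Vt = np.linalg.svd(Y, full_matrices=False)
    resid_vars = []
    for l in range(1, min(m, n) + 1):
        M_l = (U[:, :l] * d[:l]) @ Vt[:l, :]   # least-squares rank-l projection of Y
        resid_vars.append(np.mean((Y - M_l) ** 2))
    return float(np.mean(resid_vars))

rng = np.random.default_rng(0)
Y = rng.normal(size=(20, 40))
print(empirical_bayes_sigma0_sq(Y))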
Ignoring these correlations leads to underestimation of σ_0²; however, we find that this underestimation of the hyperparameter does not have a significant impact on the convergence rate of the MCMC algorithm.

(Algorithm 2, Empirical Bayes Algorithm to Estimate σ_0²: for each l ∈ {1, . . . , min{m, n}}, let ÛDV^T be the least-squares projection of Y onto the set of rank-l matrices, record the corresponding residual variance, and average these residual variances to obtain σ_0².)

Sampling of the Bayesian Lasso Parameter

The model parameter λ governs the magnitude of the Lasso prior distributions on the singular values, because integrating over the mixing variances recovers the double exponential prior with rate λ. In contrast to the low-rank matrix recovery algorithms (Laurent 2009), in the Bayesian SVD model λ can be estimated by imposing a prior distribution and sampling from the corresponding posterior distribution as proposed in Section 2.2, or by using an Empirical Bayes procedure that relies on marginal maximum likelihood (Park and Casella 2008; Casella 2001). In the Empirical Bayes procedure, λ is estimated at each iteration of the MCMC sampling procedure. Formally, λ^(r) is approximated at iteration r from the current posterior draws. The starting value λ^(0) is computed from the elements along the diagonal of D̂^(0), where Û^(0) D̂^(0) V̂^(0)T is the least-squares projection of Y onto the set of matrices with rank K_0 = min{m, n}. This λ^(0) is the empirical Bayes estimate of λ conditional on σ_0² (see Appendix for further derivation). Because σ_0² is underestimated, λ is also underestimated in this procedure. We find in the simulations that this choice of λ^(0) does not impact the convergence rate significantly.

Sampling Procedure with Missing Values

Let W = {W_ij} denote the matrix of missing data indicators, where Y_mis denotes the entries of Y with W_ij = 1 and Y_obs denotes the entries of Y with W_ij = 0. Let θ be the population parameters governing Y, and η be the parameters governing the missing data mechanism. We assume that the missing data mechanism is ignorable if it fulfills two conditions (Rubin 1976):
• Missing at random (MAR): p(W = w | Y_obs = y_obs, Y_mis, η) = p(W = w | Y_obs = y_obs, η) for all η, where w and y_obs are realized values of W and Y_obs, respectively.
• θ and η are distinct (e.g., their joint parameter space is the product of the parameter spaces of θ and η). In Bayesian inference, distinctness implies that θ and η are a priori independent.
The data are said to be missing not at random (MNAR) when the first condition is violated. Under MAR, the MCMC scheme in Section 2 can be extended to include a step for missing data imputation using the Data Augmentation algorithm (Tanner and Wong 1987). The Bayesian SVD procedure with missing entries is summarized in Algorithm 3.

(Algorithm 3, Bayesian SVD Matrix Completion Algorithm. Initialization: impute Y_mis with the column means of the observed entries; initialize ν_0 and σ_0² as proposed in Section 2.2 and λ^(0) as proposed in Section 2.3; initialize {τ_1, . . . , τ_k} by sampling from their prior distributions. The Hadamard, i.e., element-wise, product is used in the algorithm's update steps.)

Evaluating Convergence and Model Fit

To examine the validity and the performance of the proposed Bayesian SVD model, we perform two sets of simulation analyses. The first examines the convergence of the proposed sampling procedure and the fit of the proposed model. This evaluation was performed on complete matrices. The second set of simulations examines the performance of the proposed procedure in comparison to other imputation methods on matrices with missing values. To evaluate Algorithm 1 without missing data, we drew a total of S = R + L posterior samples, where R is the number of burn-in samples and L is the number of samples that were used to make inferences on the singular values.
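The overall structure of the data-augmentation scheme in Algorithm 3 can be sketched as follows (assuming NumPy). To stay self-contained, the parameter-update step below uses a simple truncated-SVD refit plus Gaussian noise as a stand-in for the full Gibbs updates of U, D, V, φ, and λ described in the article; only the alternation between a parameter step and an imputation step is meant to be illustrated.

import numpy as np

def da_impute(Y, k=5, n_iter=200, seed=0):
    """Toy data-augmentation loop: alternate a parameter update and an imputation step."""
    rng = np.random.default_rng(seed)
    Y = np.asarray(Y, dtype=float)
    W = np.isnan(Y)                                    # missing-data indicators
    X = np.where(W, np.nanmean(Y, axis=0), Y)          # initialize with column means
    draws = []
    for _ in range(n_iter):
        # Parameter step (stand-in): low-rank mean matrix and residual variance from completed data.
        U, d, Vt = np.linalg.svd(X, full_matrices=False)
        M = (U[:, :k] * d[:k]) @ Vt[:k, :]
        sigma2 = np.mean((X - M) ** 2)
        # Imputation step: redraw the missing entries given the current mean matrix and variance.
        X = np.where(W, M + rng.normal(scale=np.sqrt(sigma2), size=Y.shape), Y)
        draws.append(X[W].copy())                      # store imputed values for later pooling
    return draws

Multiple imputed datasets would then be formed by keeping thinned draws after burn-in, mirroring the H = 100 imputations used later in the article.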
To ensure that the MCMC algorithm is sampling from the posterior distribution, we set R = L = 10,000 for a total of 2×10^4 MCMC iterations.

Simulation Configurations

The data matrices in these simulations have m = 20 rows and n = 40 columns. The true rank of these matrices is set as k* = 5, and the singular values are set to D* = diag(16, 14, 10, 2, 1). In addition, we assume that all elements of E = {e_ij} are independently generated from a standard Normal distribution.

Competing Methods and Evaluation Metrics for Complete Matrices

We compare the proposed Bayesian SVD model with k = min{m, n} − 2 to least-squares estimation, which relies on the true rank, k*, of M. The least-squares estimate is based on principal component analysis and is derived by calculating the rank-k* least-squares projection of Y.

Figure 1(1) displays the shrinkage effects on the singular values when the data matrix is fully observed. As λ increases, more singular values are estimated closer to 0. The three largest singular values, {16, 14, 10}, are shrunk to 0 for values of λ ≥ 5. The two smallest singular values, {2, 1}, are shrunk to 0 for values of λ ≥ 1. This shrinking effect is similar to the one observed for the low-rank matrix recovery algorithm with the Lasso penalty, except that here the singular values are not shrunk to exactly 0. Figure 1(2) displays the MSE of the mean matrix for different values of λ. The vertical lines represent the posterior median and 95% credible intervals for λ when it is updated in each MCMC step by the proposed MCEM method. The MSE for the median λ is close to the smallest MSE obtained when λ is set to a constant. Even when the true rank is correctly specified, the least-squares estimation results in a larger MSE than the proposed Bayesian SVD model with λ updated using MCEM, where the true rank is unknown and k = min{m, n} − 2. Figure 2 shows a trace plot of λ for each iteration of the MCMC algorithm described in Section 2.3. As expected, the initial λ^(0) underestimates λ. However, the convergence is not significantly influenced by the choice of λ^(0), as the sampling algorithm converges to the approximate posterior distribution of λ after 2000 iterations.

Simulation Configurations

Similar to Section 3, the mean data matrix M is simulated by sampling U ∼ uniform(V_{m,k*}) and V ∼ uniform(V_{n,k*}). The singular values are simulated by random sampling, where k* is the rank of the mean matrix and μ_mn = n + m + 2√(mn). The error matrix E = {e_ij} is generated such that the e_ij are independently sampled from a standard Normal distribution, N(0, 1), or a t-distribution, t(df = 5). The simulations examined several sizes of matrices. One matrix size included more columns than rows, Y^N_{20×40}, one included more rows than columns, Y^N_{40×20}, and one set of matrices had an equal number of rows and columns, Y^N_{100×100} and Y^t_{100×100}. The true rank of Y^N_{20×40} and Y^N_{40×20} was set at k* = 5, the true rank for Y^t_{100×100} was set at k* = 20, and the true rank for Y^N_{100×100} was set at either k* = 20 or k* = 30. Table 1 depicts the parameters and configurations of the simulations.

Missing Data Mechanisms

We examine three types of missing data mechanisms. The first is the missing completely at random (MCAR) mechanism (Rubin 1976). Under this mechanism, the missing indicators for each cell of the data matrix are simulated independently. Formally, W_ij ∼ Bernoulli(0.15), subject to Σ_{i=1}^m W_ij < m for all j and Σ_{j=1}^n W_ij < n for all i, to avoid completely missing rows and completely missing columns in the data matrix.
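A small NumPy sketch of the MCAR mechanism as described: each cell is flagged missing with probability 0.15, and the mask is redrawn whenever a row or column would end up completely missing. The matrix dimensions are the 20 × 40 case from the text.

import numpy as np

def mcar_mask(m, n, p=0.15, seed=0):
    """Bernoulli(p) missingness with no fully-missing row or column."""
    rng = np.random.default_rng(seed)
    while True:
        W = rng.random((m, n)) < p                 # True marks a missing entry
        no_empty_cols = W.sum(axis=0).max() < m    # every column keeps at least one value
        no_empty_rows = W.sum(axis=1).max() < n    # every row keeps at least one value
        if no_empty_cols and no_empty_rows:
            return W

W = mcar_mask(20, 40)
print(W.mean())   # realized missingness rate, close to 0.15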
The next two mechanisms are nonignorable mechanisms. These mechanisms are challenging because they can be hard to specify correctly, and they may be based on non- or weakly identified parameters. Both mechanisms are MAR, but they assume that the parameter spaces of θ and η are not distinct. The first nonignorable mechanism relies on the column-wise correlations of the mean matrix, M. Consider the column-wise correlations of M, and let i(j) be the column that has the largest correlation with column j. Missing data indicators are simulated independently, conditional on the column with the largest correlation. Formally, W_ij ∼ Bernoulli(p_ij), where logit(p_ij) = α + β M_{i,i(j)}. The slope coefficient, β, is set at 10, and α is chosen such that the average missingness rates are either 15%, 30%, 50%, or 70%. This mechanism generates missingness based on parameters that may be of interest when analyzing the complete dataset, and we denote this mechanism as NIG_CORR. The largest column-wise correlation ranges between 0.67 and 0.87 for Y^N_{20×40}, between 0.47 and 0.71 for Y^N_{40×20}, and between 0.43 and 0.56 for Y^N_{100×100} when the true rank is k* = 20. The corresponding column-wise correlations of the M matrix range from 0.94 to 0.99 for Y^N_{20×40}, 0.68 to 0.86 for Y^N_{40×20}, and 0.87 to 0.99 for Y^N_{100×100}. The second nonignorable mechanism is based on the largest singular value of M and is denoted as NIG_SV. This mechanism assumes that missing values are generated based on parameters that are estimated during the imputation process, which can lead to biased estimates of the parameters and therefore imprecise imputed values. Formally, let M* = d_s U_[,s] V_[,s]^T be the matrix that is generated by setting all of the singular values to 0 except the largest one, where d_s is the largest singular value, U_[,s] is the corresponding left-singular vector, and V_[,s] the corresponding right-singular vector. We generate the missing value indicators W_ij from Bernoulli(p_ij), where logit(p_ij) = α + β M*_ij, such that β = 10 and α is chosen so that the average missingness rates are either 15%, 30%, 50%, or 70%.
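A sketch of the NIG_CORR mechanism in NumPy: for each column j, the most-correlated other column i(j) of the mean matrix is found, missingness probabilities follow a logistic model with slope β = 10 in that column's values, and the intercept α is searched on a grid so that the average missingness probability is approximately 15%. The indexing convention M[i, i(j)] is an assumption about the formula, which is garbled in the extracted text.

import numpy as np

def nig_corr_mask(M, beta=10.0, target_rate=0.15, seed=0):
    rng = np.random.default_rng(seed)
    corr = np.corrcoef(M, rowvar=False)       # column-wise correlations of M
    np.fill_diagonal(corr, -np.inf)           # exclude a column's correlation with itself
    partner = np.argmax(corr, axis=0)         # i(j): most-correlated column for each j
    X = M[:, partner]                         # covariate driving missingness of each column
    # Choose alpha so the mean missingness probability is close to the target rate.
    alphas = np.linspace(-50, 50, 2001)
    rates = np.array([np.mean(1 / (1 + np.exp(-(a + beta * X)))) for a in alphas])
    alpha = alphas[int(np.argmin(np.abs(rates - target_rate)))]
    p = 1 / (1 + np.exp(-(alpha + beta * X)))
    return rng.random(M.shape) < p

rng = np.random.default_rng(1)
M = rng.normal(size=(20, 5)) @ rng.normal(size=(5, 40))
W = nig_corr_mask(M)
print(W.mean())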
The performance of the proposed Bayesian SVD algorithm was compared to the following four existing imputation methods: Marginal Mean Imputation imputes missing values with the average of the observed values in each column (Carpenter and Kenward 2014). This is a single imputation method that may result in biased estimates of the correlations between variables (Carpenter and Kenward 2014). MICE is an FCS procedure that assumes a univariate regression model for each variable with missing values. For continuous outcomes the most commonly used models are linear regression models or predictive mean matching (Van Buuren 2018). Recently, classification and regression trees (CART) and random forests were proposed as possible imputation methods (Doove, Van Buuren, and Dusseldorp 2014). All of these method can result in more reliable inferences by modeling nonlinear relationships between variables (Doove, Van Buuren, and Dusseldorp 2014). These methods are implemented in the R mice package ( van Buuren and Groothuis-Oudshoorn 2011). Here, we implemented the random forest model as the univariate model because it can be implemented without manual adjustments across all simulation configurations. Because there are large number of columns compared to the number of rows, CART, linear regression and predictive mean matching tend to fail in some simulation replications and result in improper outputs. When using the random forest algorithm, the set of initial predictor variables comprised of variables that have observed marginal correlations with the response variable which are greater than or equal to 0.3. In our numerical studies we observed that the imputed values were relatively stable after five iterations. Following the recommendations in the literature (White, Royston, and Wood 2011;van Buuren and Groothuis-Oudshoorn 2011), we implemented the algorithm for 5 burnin cycles followed by 100 cycles to generate the imputed datasets. Amelia is a JM procedure that assumes a joint multivariate Normal distribution for all of the continuous variables. The mean vector and the covariance matrix are estimated using the Expectation Maximization with Bootstrap (EMB) algorithm (Honaker and King 2010). Using these estimates and their corresponding sampling variances, the missing data are imputed by sampling from the corresponding multivariate Normal distributions. The method is implemented in the R Amelia package (Honaker, King, and Blackwell 2011a). When some variables are highly collinear or when the number of variables is larger than the number of observations, the model is unidentifiable. To overcome this issue, the package employs the Ridge prior distribution to estimate model parameters (Honaker, King, and Blackwell 2011a). Similar to MICE we have implemented this procedure using 100 imputed datasets. missMDA is another JM procedure that assumes Y = UDV T + E, where UDV T are the SVD components of the mean matrix and E is an error matrix composed of independent Normally distributed entries from N (0, σ 2 ). The algorithm assumes that the true rank of the matrix is known and use it as an input. In practice, the rank k is commonly determined using cross-validation (Audigier, Husson, and Josse 2016). missMDA initializes missing values using column means of the observed data matrix, then it iteratively estimates σ 2 and the missing values using principal components methods that are based on the first k principal components (Audigier, Husson, and Josse 2016). The method is implemented in the R missMDA package . 
The multiple imputation procedure was implemented with the Bayesian option and 100 imputed datasets. Evaluation Metrics Point estimates for the mean matrix from the proposed Bayesian SVD model and the methods described in and the four other imputation methods were recorded. To estimate M with existing imputation method, we useM LS assuming that k = k * in each complete dataset h. We have also examined the performance of the existing methods assuming that M is full rank. The results with both of these ranks were similar. Thus, we only report the results with k = k * in the manuscript, and the results assuming full rank are provided as supplementary material. In the Bayesian SVD imputation method, the estimates of M are obtained from the sampling algorithm. Figure 3 shows the paths of the singular values for different λ values when 15% of the entries are missing in the data matrix Y N 20×40 . For all three types of the missing data mechanisms, the proposed Bayesian SVD procedure results in λ values with MSE W of the mean matrix that are close to the corresponding minimal MSE W of the mean matrix when λ is set to a constant. The trends in shrinkage of the singular values are similar to those observed in Section 3.3, where none of the singular values are significantly bigger than 0 when λ ≥ 5. The MSE of the mean matrix is bigger under NIG_CORR and NIG_SV than under MCAR. This is because recovering the missing entries is more complex in nonignorable configurations. Table 2 compares the proposed Bayesian SVD multiple imputation model with the four multiple imputation methods described in Section 4.3 for different matrix dimensions and error distributions when the rate of missing values is 15%. Under all simulation configurations, the largest average MSE W is observed for marginal mean imputation, because this method ignores the correlations between variables in the matrix. The average MSE of the mean matrix from the proposed Bayesian SVD model is the smallest compared to all other methods. This suggests that the proposed model provides more accurate estimation of the mean matrix. In addition, the standard deviation of MSE W across replications from the proposed model is the smallest for Y N 20×40 and Y N 40×20 , and it is similar to Amelia for Y N 100×100 and Y t 100×100 . The JM procedure in Amelia generally has the lowest MSE W among the existing procedures described in Section 4.3. Estimation of the Mean Matrix The averages MSE W of Marginal Mean imputation, MICE, Amelia and missMDA are smaller for Y N 40×20 compared to Y N 20×40 . This probably stems from the ability to estimate model parameter more precisely when there are more cases than variables. The proposed Bayesian SVD imputation has similar MSE W for both matrices, because the eigenvalues are similar for the original and the transposed matrix. Figure 4 depicts the distribution of MSE W across 100 replications when Y N 100×100 is simulated with different missingness rates for each of the missing data mechanism described in Section 4.2. The results are similar to the ones observed when 15% of the values are missing. The proposed Bayesian SVD method has the smallest MSE W under all of the examined simulation settings. Across all methods, the variability of MSE W generally increases when the overall missingness rate increases. For low missingness rates, Amelia performs better than Marginal Mean imputation, MICE, and missMDA. As the missingness rates increase, Amelia performs similarly to these methods. 
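The MSE_W criterion used throughout these comparisons can be written in a couple of lines. A plausible reading of the W subscript is that the error of the estimated mean matrix is restricted to (or weighted by) the missing-entry indicator W; the sketch below uses that reading, and the arrays are placeholders.

import numpy as np

def mse_w(M_hat, M_true, W):
    """Mean squared error of the estimated mean matrix over the missing cells (W == True)."""
    diff = np.asarray(M_hat) - np.asarray(M_true)
    return float(np.mean(diff[np.asarray(W, dtype=bool)] ** 2))

# Placeholder example.
rng = np.random.default_rng(0)
M_true = rng.normal(size=(20, 40))
M_hat = M_true + rng.normal(scale=0.1, size=M_true.shape)
W = rng.random(M_true.shape) < 0.15
print(mse_w(M_hat, M_true, W))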
With Y N 100×100 , when the number of singular values is k * = 30, similar trends to k * = 20 are observed (data not shown). Specifically, the Bayesian SVD method has the smallest MSE W in all configurations and bigger differences are observed for nonignorable missing data configurations. Table 2. Estimation of the mean matrix: average, standard deviation, and 2.5% and 97.5% quantiles of the mean squared error of the mean matrix (MSE W ) for 15% missingness rate over 100 replications. Y N 20×40 are simulated data matrices with m = 20, n = 40, k * = 5, and e ij ∼ N (0, 1); and Y N 40×20 are simulated similarly with m = 40, n = 20. Y N 100×100 are simulated data matrices with m = 100, n = 100, k * = 20, and e ij ∼ N (0, 1). Y t 100×100 are simulated data matrices with m = 100, n = 100, k * = 20, and e ij ∼ t(df = 5). Bayesian SVD model with double exponential priors for the singular values. Table 3 compares the MSE of the proposed multiple imputation model and the four multiple imputation methods. For Y N 20×40 and Y N 40×20 , the proposed Bayesian SVD multiple imputation model has the smallest MSE for the column-wise correlation compared to the four methods and for all three types of missing data mechanisms. For Y N 100×100 and Y t 100×100 , the MSE for all four methods are relatively small. This is because the column-wise correlations are calculated using both the imputed values and the observed values. The effects of poor Table 3. Estimation of the column-wise correlation of the mean matrix: average, standard deviation, and 2.5% and 97.5% quantiles of the mean squared error (MSE ) of the column-wise correlation of the data matrix over 100 replications. imputation on the correlation is attenuated when the dimension of the data matrix is large and the proportion of missing entries is small. Effect of t-distributed Error The average MSE W and MSE are larger for Y t 100×100 compared to Y N 100×100 for MICE, Amelia, missMDA and the Bayesian SVD method (Tables 2 and 3). This is because the t df =5 -distributed error matrices have more extreme values than standard Normal distributed error matrices. The Bayesian SVD multiple imputation provides the smallest average MSE W for the mean matrix even when the distribution of E is misspecified. This suggests that the proposed Bayesian SVD model for multiple imputation is robust in terms of component-wise symmetric error matrix. For the column-wise correlation, all four methods have similar MSE because the column-wise correlations are based on the imputed values and the observed values, which attenuates the effects of inaccurate imputation. Computation Time We examine the computation time over 100 simulated datasets using one core of an Intel(R) Xeon(R) CPU E5-2650. The average time to complete 10 4 imputation using the Bayesian SVD with double exponential prior is 560 sec for Y N 20×40 and 2807 seconds for Y N 100×100 (Table 4). Using the Bayesian SVD algorithm with Normal prior distributions increases the average computation time by 14 seconds for Y N 20×40 , and it is approximately similar for Y N 100×100 (Table 4). The imputation methods Amelia and missMDA require significantly less time to complete because they require smaller number of iterations to converge. Compared to the Bayesian SVD methods MICE requires significant more time to impute small matrices, but its running time is shorter for larger matrices. 
Comparing the number of seconds that are required to generate one complete dataset for Y N 20×40 , the Bayesian SVD methods is 12.8 and 3.7 times faster than MICE and missMDA, respectively. This shows that when the auto-correlations between MCMC iterations is low, the Bayesian SVD methods can be computationally efficient with small and medium sized matrices. Simulations Using Real Data Matrix In Section 3 and 4 the mean matrix was simulated based on specific rank of the matrix and the singular values followed a predefined Uniform distribution on the Steifel manifold. Real datasets may not follow this matrix construction. To examine the performance of the proposed Bayesian SVD model and compare it to the four imputation methods described in Section 4.3, we used the UCI breast cancer Wisconsin (Diagnostic) dataset (Dua and Graff 2017). This dataset includes 30 features computed from a digitized image of a fine needle aspirate of a breast mass. These features describe the characteristics of the cell nuclei present in the image. A total of 569 images are included in the dataset, and it has been widely used in the machine learning literature as an example data-set for model testing (Mangasarian and Wolberg 1990). To generate a matrix with relatively small number of cases, we randomly sample 50 out of the 212 malignant breast cancer cell nuclei. We define these samples as the mean matrix, M. To generate the matrix Y we perturbed the sampled M matrix of 50 rows and 30 columns by adding a random error distributed as N(0,1) to each cell. The missing data indicator matrix W are generated based on the three missing data mechanisms described in Section 4.2 such that on average 15% of the cells are missing. Similar to Section 4, multiple imputation Bayesian SVD with double exponential prior distribution was implemented with k = min{50, 30} − 2 = 28 and with L = 10 4 MCMC iterations. For MICE, 100 complete datasets were generated, and variables in the conditional models were included if they have marginal correlations with the outcome variable that are greater than or equal to 0.3. For missMDA, 100 complete datasets were generated, and the rank of the matrix was estimated using crossvalidation. For Amelia, 100 complete datasets were generated. For Marginal Mean, MICE, missMDA, and Amelia, we estimated the M matrix assuming that the rank is 28. Table 5 shows MSE W and MSE over 100 replications. Generally, the Bayesian SVD multiple imputation method has the smallest MSE W and MSE . For NIG_CORR, missMDA has smaller MSE W , but Bayesian SVD has smaller MSE This suggests that even when the mean matrix is not generated under the Bayesian SVD model assumptions, imputations for missing values using the Bayesian SVD model is generally more precise than marginal mean imputation, MICE, Amelia, and missMDA. Discussion We propose a new joint modeling approach for multiple imputation that is based on the SVD components of a data matrix. The method does not rely on the assumption that the rows of the datasets are independent, and it can be applied to datasets where the number of variables is larger than the number of cases. Simulations suggest that the proposed method results in more precise estimations for the missing values as well as the column-wise correlation of the data matrix compared to exisiting imputation procedures under various missing data mechanisms. 
In simulation analyses, we examined datasets with more rows than columns, datasets with more columns than rows, and datasets with equal number of rows and columns. Existing methods to impute continuous data such as the multivariate normal model and fully conditional specification based on random forests require strong prior distributions to perform model estimation when the number of cases is smaller than or equal to the number of variables. In Amelia, which is an R software that uses the multivariate normal distribution, the Ridge prior distribution is assumed when many columns are collinear. MICE, which is a FCS approach, requires that the predictors of the univariate models be prespecified for every univariate model. The proposed Bayesian SVD method does not require such specifications, because it relies on a shrinkage prior distribution for the singular values. The proposed method relies on a full Bayesian representation of the SVD components of the data matrix. The MCMC scheme uses a rejection-sampling method to sample from the von-Mises Fisher distribution, and it can become computationally intensive when the dimension of the data matrix increases (Hoff 2009). One possible way to reduce computational complexity is to assume that the mean matrix has lower rank. However, this may result in biased imputed values. Because the imputation process is only implemented once, this computational complexity should be less of a concern. When researchers encounter computational constraints, we suggest to select k < min{m, n}− 2 when both dimensions of the data matrix are larger than a 100. This reduction is reasonable in many cases because lowdimensional signal matrix occurs frequently in practice (Hoff 2007;Josse and Husson 2016;Hastie, Tibshirani, and Wainwright 2019). In the low-rank matrix recovery algorithm, the Ridge penalty on the singular values have been proposed. In our simulations, we did not identify significant differences between the Ridge prior distribution and the Double Exponential prior distribution. However, in any specific application, these could have significant effect on the imputation process. Selection of a prior distribution should be based on prior knowledge. When such prior knowledge is unavailable, we propose to examine the performance of the imputation using posterior predictive checks for both prior distributions and select the model that has best performance. Future extension may use Bayesian Model Averaging to obtain better average predictive performance (Hoeting et al. 1999). In many real applications, missing data occurs in both quantitative and categorical variables. This may limit the usefulness of the proposed method. However, there are multiple cases where the proposed method can be applied to impute missing data. For example, one limitation of the generalized location model is that as the number of discrete categories increases, there are less observations to estimate the covariance matrix of the multivariate Normal component. To address this issue and restrict the number of estimated parameters, multivariate linear regression constraints are commonly applied (Schafer 1997). These constraints impose significant restriction on the distribution of the continuous variables given the categorical variables. Instead, the proposed Bayesian SVD method can be used to impute multiple continuous variables with small number of observations while relying on weaker constraints. Another example arises in longitudinal or clustered studies. 
These studies result in possible correlation across individuals and variables. The proposed Bayesian SVD method can be applied to correlated data without the need to specify an exact correlation structure. When there is a natural multilevel structure, a possible extension of the Bayesian SVD model is to partition the variability into the between and within groups and preforming SVD within each component (Husson et al. 2019). In conclusion, we proposed a novel method to impute missing continuous data based on the Bayesian SVD model. We described a new prior distribution for the model, and show that this Bayesian SVD model is more accurate and precise in imputing missing values compared to existing methods. However, these benefits may require increased computational complexity.
10,710
2022-07-29T00:00:00.000
[ "Mathematics" ]
Genetically Engineered Foods and Moral Absolutism: A Representative Study from Germany There is an ongoing debate about genetic engineering (GE) in food production. Supporters argue that it makes crops more resilient to stresses, such as drought or pests, and should be considered by researchers as a technology to address issues of global food security, whereas opponents put forward that GE crops serve only the economic interests of transnational agrifood-firms and have not yet delivered on their promises to address food shortage and nutrient supply. To address discourse failure regarding the GE debate, research needs to understand better what drives the divergent positions and which moral attitudes fuel the mental models of GE supporters and opponents. Hence, this study investigates moral attitudes regarding GE opposition and support in Germany. Results show that GE opponents are significantly more absolutist than supporters and significantly less likely to hold outcome-based views. Furthermore, GE opponents are more willing to donate for preventing GE admission than supporters are willing to donate for promoting GE admission. Our results shed light on why the divide between opponents and supporters in the German GE debate could remain stark and stable for so long. Introduction Environmental policy challenges, such as increasingly extreme weather conditions, has led to the new EU Commission's stipulation that the European Union should become climate-neutral by 2050.This requires a profound transformation of agriculture, which can only be implemented with innovative, efficient, and sustainable technologies and processes.Consequently, the genetic engineering (GE) of goods has become a source of controversy (for an overview, see Genetic Literacy Project, 2021).One side argues that GE contributes to making our situation worse but the other side claims that GE is key to overcoming societal challenges. 1ore specifically, non-governmental organizations (NGOs) concerned about GE foodswarn of potential health damages related to GE (GM Watch, 2020), such as those caused by toxins or allergen substances (Garden Organic, 2021;Debating Europe, 2021).GE proponents, on the other hand, argue that, through GE, healthier foods can be developed, such as the famous golden rice (Golden Rice Project, 2021). Recent research shows that GE innovations can address vitamin deficiencies and play an important role in the defeat of famines and diseases (Rauner, 2017;Kohli & Dupont-Inglis, 2020).More recently developed traits include increased lycopene content in tomatoes (Wang et al., 2019;Ku & Ha, 2020).Heidi Godman of Harvard Medical School argues that these new properties bring health benefits, such as reduced risks of cancer or stroke (2012).By contrast, NGOs often report that GE crops are detrimental for the environment, for example through the risk of contamination or increased pesticide use (Friends of the Earth Europe, 2021), which harm animals, plants, and the ecosystem as a whole (Testbiotech, 2021;GeneWatch, 2021).GE supporting organizations retort that the use of GE has been found to reduce pesticide use (GMO Answers, 2020).Additionally, GE is said to provide further environmental benefits, such as products with longer shelf life to reduce waste (Debating Europe, 2021), increased soil-compatibility of crops (Parrott, 2018), and protection of biodiversity (Bayer, 2021). 
Generally, the European GE debate revolves around economic risks and benefits. Opponents of the technology worry about economic consequences of its products, such as high costs, which are said to affect small farmers, particularly in poor countries, who suffer disproportionately (Cotter et al., 2015). This argument is often accompanied by a worry of large corporations having too much power (Voelker, 2020). On the other side, research reports higher crop yields through GE, which particularly benefit small farmers in developing countries (Klümper & Qaim, 2014). Similarly, the European public is divided on the topic. In a special edition of the Eurobarometer, published by the European Commission (2013), 61% of participants felt uneasy about GE foods. Although this number has since dropped significantly, another Eurobarometer on Food Safety in 2019 showed that 27% of participants still see GE for food production as their main topic of concern (European Commission, 2019). Generally, the positions taken in the public debate either favor or reject GE, because the outcomes that both sides expect from this technology seem diametrically opposed and, thus, appear to be mutually exclusive. For example, as mentioned above, although proponents proclaim health benefits from GE foods, the opposition expects GE foods to be detrimental to human health. Consequently, the demands regarding this technology appear similarly incompatible: GE opposition calls for a continued ban on GE, or at least an extremely strict regulation (Die Grünen/EFA, 2016), but proponents call for a widespread approval of GE, or at least more open regulations (Albert, 2020). These opposing positions have not yet been bridged in the European debate, causing a lack of regulation of state-of-the-art technologies, to society's detriment. Apart from the fact-based dissent, GE positions appear to be based on deeply held beliefs. Regarding GE opponents, these beliefs have been described as morally absolutist (Scott et al., 2016; Fernbach et al., 2019). These findings indicate that the debate may also fail due to irreconcilable ideological divisions. It is important to note that the term "moral absolutism" is used in different contexts. Importantly, it can also be used in distinction to moral relativism (Gowans, 2021) to describe the view that there exists non-subjective moral truth independent of time and circumstances. This is not the way that moral absolutism is understood here. In our context, it describes a categorical conviction that GE is intrinsically morally good or bad irrespective of its specific consequences. In this sense, moral absolutists with respect to GE are chronically insensitive to empirical evidence concerning its actual effects. It is the aim of this article to provide research that helps to understand better what causes this discourse failure regarding GE. We investigate moral attitudes relating to GE for food production. Our main objective is to shed light on the potential role of moral absolutism and principle- or outcome-based moral attitudes in preventing constructive debates on GE.
This article proceeds as follows. In "Perception as Tradeoff within the Debate on Genetic Engineering", we elaborate on the perception as a tradeoff that characterizes the European GE debate and the problems that come with it. In "Moral Absolutism Aggravates Perception as Tradeoff", we explicate the concept of moral absolutism, which might constitute an obstacle for a fruitful debate. "Study Design" outlines the design of our representative empirical study to investigate moral absolutism of GE supporters and opponents with a representative German sample, and "Results" discusses the results and concludes the article. Perception as Tradeoff Within the Debate on Genetic Engineering Figure 1 illustrates the juxtaposed demands of GE opponents and supporters. The axes show the debating parties' interests. For example, GE opponents demand a ban of GE because it could be a risk to human health. GE supporters demand approval of GE because they hold that GE could be beneficial to human health. Although both sides seem to officially share the common ends of human health and environmental integrity, their positions on GE technology seem intransigent. This may be due to fundamentally opposed beliefs about the actual health effects of GE, or because one or both groups only use this argument to cover up their absolutist attitude toward the technology. Consequently, potential debate outcomes are only thought of as being located along the graph. It thus appears that if the debate moves in the direction of the interests of one party, it necessarily moves away from those of the other party. The situation is perceived as a problem in which one party can only be better off at the expense of the other. This perception of a tradeoff is dominating the debate (Pies et al., 2017, 2021). Consequently, the public debate is conducted as if, ultimately, the winner takes all. This perception impedes the development of mutually beneficial agreements and, thus, leads to a blocked debate (Pies, 2009). As long as the debate revolves around such perceived tradeoffs, solutions are hard to find. Emotionalized Debating Fosters Perceptions as Tradeoff In addition to these opposing views, the controversial debate is characterized by emotional campaigning on both sides. For example, Cotter et al. (2015) accused GE manufacturers of deliberately putting farmers into a dependency that forces them to buy ever more expensive seeds with ever more expensive pesticides, which ultimately drives them into ruin. Relatedly, using pejorative names and imagery, such as "Frankenfood" and "fish tomatoes," has been a common tool in anti-GE campaigning and for the media to grab attention (see, e.g., Hellsten, 2003; Spiegel International, 2009; GMO Awareness, 2021). [Fig. 1: Interests in the GE debate are perceived as diametrically opposed (perception as tradeoff); adapted from Pies et al., 2021.] Conversely, on the support side, strong wording is used at times. In 2016, 110 Nobel laureates called on Greenpeace in a public letter to refrain from campaigns against GE (Roberts, 2016). Currently, 158 Nobel laureates have signed. In this letter, the signatories charge anti-GE organizations with "denying the facts" and "misrepresenting their risks, benefits, and impacts." They ask Greenpeace specifically to "cease and desist in its campaign against Golden Rice." In this context, they call Greenpeace's campaigning against GE in general and against Golden Rice in particular a "crime against humanity" (ibid.).
This perception of a tradeoff is exacerbated by debating strategies that are not aimed at finding common ground, but at portraying the opponent as someone who is acting in bad faith. Hence, debating is not meant to convince one's opposition by arguments but aims at a third-party audience that the speakers intend to persuade. For example, molecular biologist Bock (2015, p. 4) speaks in one of his essays of "systematic self-deception, hypocrisy and mendacity […], which unfortunately seems to become a habit in our political landscape and for which the handling of the topic 'genetic engineering' has become almost symptomatic" (own translation). The geneticist Nellen (2018) spoke of "hysteria" and "ignorance" in one of his articles. Similarly, Szibor (2013) complained that all fears and frustrations of this world would be projected into green GE. These examples show that some scientists argue emotionally and, thus, sharply criticize the behavior of anti-GE organizations. The Consequences: Discourse Blocks Distort Policy-making Such obstacles to a constructive debate are problematic because they may have adverse effects on policy-making. The legislation on GE in Germany is a case in point: this law restricts GE research and development with almost prohibitively high regulations and safety requirements, which make it nearly impossible to develop GE products (Leopoldina, 2019, 2020). This legislation was passed in 1990. Since then, the regulations on cultivation and distribution of GE plants for human consumption have not been significantly revised to account for more recent scientific evidence and newer technological developments. Additionally, a ruling by the European Court of Justice (ECJ) on the recently developed and more advanced genome editing technology (CRISPR/Cas9) in summer 2018 decided to regulate this new technology as strictly as the earlier GE technologies (Callaway, 2018). As a consequence of those policies, apart from one genetically modified potato plant (MON18), GE crops are virtually nonexistent in the European Union (Die Bundesregierung, 2021). Applied research and industrial research in biotechnology are leaving the European Union. For example, in 2012, BASF moved its GE unit to the United States (Zeit online et al., 2012). According to the German National Academy of Sciences Leopoldina, the European Union's regulatory framework on GE cannot be scientifically justified (Dederer, 2020). Thus, they demanded a thorough renewal of this framework (Leopoldina et al., 2019). The Academy also called for European legislation on GE to be updated to reflect the current state of scientific opinion. The failure of the legislative process to keep up with the scientific consensus is driven, at least in part, by a distorted public debate. Dysfunctional debates have consequences for policy-making, as well as for individual behavior. According to Pies (2009), public debates influence policy-making by putting pressure on policymakers to placate public opinion. Ideally, discourse is a competition of ideas, in which the idea that creates the most appropriate balance between various stakeholder demands wins. However, distorted debates are reflected in the legislative process. This means that adopted policies may result in institutional frameworks that are detrimental to society (Pies et al., 2017). As a consequence, innovations might be hampered and, thus, cannot be used to society's advantage (van Eenennaam et al., 2021; Pies et al., 2017).
Moral Absolutism Aggravates Perception as Tradeoff The Obstacle: Debating Parties See GE as a Goal, Not as a Potential Means So far, the GE debate has not transitioned from perception as tradeoff towards an open-ended search for solutions that make all sides better off. On the contrary, as described earlier, GE supporters and opponents defend their own positions with emotional arguments. Moreover, their argumentation strategies aim to convince others (often third parties) of their own positions instead of contributing to finding mutually accepted solutions. Thus, instead of being perceived as a potential means to reach a common goal, GE is handled as the main goal itself. Banning or approving GE has become the central interest within the debate. This tendency is reflected in the recent European Court of Justice (ECJ) ruling on GE: in present regulation, the focus is on the technology used for breeding rather than on the product (Leopoldina et al., 2019, p. 5). In this regard, GE is not treated as a means for certain outcomes, or products, but as the central topic of interest. Here then, a means is normatively elevated to a moral end in itself (Pies, 2017). For one party, a prohibition of GE for food production has become the debate's central moral goal, and for the other party, it is an approval of GE. Therefore, the result is a moral conflict. The problem with conflicting moral goals is that finding a compromise is even more unlikely because such moral goals can result in strong moral convictions that additionally impede constructive debates, as explained in the following section. Absolute Moral Convictions May Lead to Failed Debates The drivers of discourse failure can also be studied on a psychological level by looking at discourse participants' mindsets. Research in moral psychology investigates which cognitive mechanisms and mental models cause people to hold their beliefs as absolute, thereby shutting them off against new information and against opposing views. Beliefs about right or wrong can become moral convictions, such as: abortion is wrong; nuclear energy is dangerous and should be prohibited; or genetically modifying organisms for food production is wrong. Moral convictions are defined as "the subjective belief that something is fundamentally right or wrong" (Bauman & Skitka, 2009; italics added). The concept of moral convictions investigated in the psychological literature can be linked to the philosophical concept of moral absolutism (see, e.g., Jackson & Smith, 2006; Rachels, 1970). Moral absolutism is the view that holds some actions are intrinsically right or wrong irrespective of their consequences (McConnell, 1981). In other versions, the absolutism does not apply to all but to some actions, which should be absolutely prohibited (Rachels, 1970). An example is the reference to divine laws that condemn certain kinds of behavior under all circumstances. In this sense, moral absolutism is an extreme form of nonconsequentialism, because consequences are strictly disregarded. This is not true for most nonconsequentialist theories. Whereas most nonconsequentialist views hold that "the moral status of an action is not determined solely by its consequences," the absolutist "maintains that certain actions are always wrong, regardless of the consequences of not performing them" (McConnell, 1981, p.
287). McConnell (1981), thus, distinguishes between nonconsequentialist moral theories that are absolutist (complete disregard of consequences) and others that merely reject the exclusive consideration of consequences when assessing a given action. Specifically, one could say that an agent S is a moral absolutist with respect to an action A if and only if S holds that A is right (or wrong), and that the rightness (or wrongness) of A is entirely independent of the consequences of A. In the context of new technology evaluation, moral attitudes play an important role. If a technology is rejected solely on principled grounds, it is much harder to engage holders of this view in the debate. Fact-based arguments on the value of a technology only speak to people who do not reject the idea that a technology's value is at least co-determined by its consequences. Misselhorn, for instance, argued that, as one cannot rule out that autonomous cars might run into moral dilemmas, the technology should be considered with moral skepticism, because dilemmatic decisions should not be taken by automata irrespective of the benefits that they may generally bring in terms of traffic safety (Dörhöfer, 2018). To study people's moral views, the famous trolley case (Foot, 1967) has been used in empirical studies to find out whether participants prefer a rule-based (i.e., nonconsequentialist) or an outcome-based (i.e., consequentialist) moral approach. The thought experiment reveals a dilemma between good consequences (e.g., saving five human lives but causing one person to die) and a profound moral principle (e.g., no act is permissible that causes harm or kills a human being). Participants who find the act that saves the lives morally permissible are thus identified as outcome-minded, whereas those who find the act impermissible are identified as rule-minded. This way of distinguishing participants in behavioral experiments has revealed systematically different behavioral patterns and moral attitudes (Cornelissen et al., 2013; Ostermaier & Uhl, 2017). Thus far, empirical research has investigated the moral attitudes of GE opponents. Scott et al. (2016) found that roughly 70% of people with anti-GE beliefs qualify as moral absolutists. Furthermore, Fernbach et al. (2019) showed that extreme GE opponents tend to think they know more about GE foods than others do but, on average, know less when tested on genetics. Study Design For data collection, we conducted an online survey through the German online panel provider GapFish. Our sample is representative of the German population according to age, gender, income, and education. After excluding participants based on an attention check, our analysis includes complete responses from 636 participants. The study took approximately 15 min to complete. After providing informed consent and demographic information, participants indicated their overall attitudes toward GE for food production (see the Measures section). Depending on their responses, participants received one of two versions of the questionnaire, one tailored to GE opposition and the other tailored to GE support. First, they were asked to select an NGO to which they may want to donate. This NGO either supports the ban of GE plants or the admission of GE plants, respectively. Second, participants had to state whether they would like to donate or not.
In the second part of the survey, participants had to evaluate items on moral absolutism regarding GE (described below). We also asked participants to which degree they generally find NGO activities and donating to be effective. In addition, participants had to indicate how important they find spirituality and religion for their personal lives and were asked to rate the standard trolley case morally (Foot, 1967). The survey continued with three questions of the standard cognitive reflection task (Shane, 2005), an elicitation of participants' political orientation, and three exploratory open questions about naturalness and sanctity. The study ended with an attention check and the opportunity to give feedback. Below, we describe our main measures in more detail. GE Attitude Participants were asked to indicate their attitude toward GE ("Which statement is closest to your position toward genetically engineered plants for human consumption?") on a six-point Likert scale, anchored by "I am strongly against it" and "I am strongly in favor of it." Participants who selected one of the first three Likert points (i.e., ranging from "I am strongly against it" to "I tend towards opposing it") were subsequently provided the survey version for GE opponents. Respectively, participants who selected one of the last three Likert points (i.e., ranging from "I tend towards supporting it" to "I am strongly in favor of it") were subsequently provided the survey version for GE supporters. Moral Absolutism Moral absolutism was assessed by adopting three agree/disagree statements from previous research (Baron & Spranca, 1997; Scott et al., 2016). For GE opponents, these were as follows. (a) "The GE of plants for human consumption should be prohibited. This should apply regardless of how great the benefits and how small the risks of genetic engineering are." (b) "If the genetic engineering of plants for human consumption is only approved on a restricted basis, this is just as wrong as unrestricted approval. The extent of the approval does not matter." (c) "Approval of genetic engineering of plants for human consumption would be wrong even in a country where everyone thinks approval would be right." For GE supporters, these were as follows. (a) "Genetic engineering of plants for human consumption should be allowed, regardless of how great the risks and how small the benefits of genetic engineering." (b) "If the genetic engineering of plants for human consumption is partially banned, this is just as wrong as a complete ban. The extent of the ban does not matter." (c) "Banning genetic engineering of plants for human consumption would be wrong even in a country where everyone thinks the ban would be right." Willingness to Donate Participants were asked to select one NGO to which they would potentially donate. For GE opposition, they could select from the following German NGOs: Gene-ethical Network, Interest Group for GE-free Sowing, GE-free Regions, and Alliance for GE-free Agriculture. For GE support, subjects could select from the following German NGOs: Transparency Genetic Engineering, World Health Organization, Forum Green Rationality, and Innoplanta. Participants could also select "I find them all equally bad" or "I find them all equally good," respectively. Participants then had to state, hypothetically, whether they would like to donate 5 Euros to their previously selected NGO.
Trolley Problem This measure assessed whether participants decided based on consequentialist or rule-based theory.Describing the standard version of the trolley problem, participants were given a scenario in which five workers would die through an approaching train if no action was taken (i.e., changing the switch) versus one worker would die if action was taken.Participants then had to select whether they found it morally acceptable to change the switch. Cognitive Reflection Task Three cognitive reflection tasks were adapted from Shane (2005).These tasks recorded whether participants tended to reflect on given problems or rather decided intuitively.The first one read, "A racket and a ball cost a total of 1.10 Euros.The racket costs 1.00 euro more than the ball.How much does the ball cost?"The second read, "If it takes 5 machines 5 minutes to make 5 devices, how long would it take 100 machines to make 100 devices?"The third read, "In a lake there is a small area covered with lily pads.Every day the area doubles.If it takes 48 days to cover the entire area of the lake with water lilies, how long would it take to cover half the area of the lake with water lilies?"Participants then had to provide their solutions in free text boxes. Main Results Of our 636 survey participants, 484 (76.1%) can be identified as holding at least moderately negative attitudes toward GE (i.e., moderately oppose, oppose, or strongly oppose).A minority of 152 (23.9%) can be identified as holding at least moderately positive attitudes toward GE (i.e., those who moderately support, support, or strongly support).In the following, we refer to the former group as "opponents" and to the latter as "supporters."It is noteworthy that opponents express their preference against GE more strongly than supporters express their preference for GE.The median answer of the opponents is that they oppose GE, but the median answer of the supporters is that they moderately support them.Panel 1 of Table 1 provides an overview of the numbers and proportions of participants that moderately oppose (or support), oppose (or support) with medium intensity, and strongly oppose (or support) the GE of crops for human consumption. We first investigate whether supporters and opponents show different levels of agreement with statements that express moral absolutism.It turns out that opponents agree on average with 2.29 (SD = 1.02) of the three absolutist statements, but supporters agree on average with only 1.42 (SD = 1.05) of the three absolutist statements.This difference is statistically highly significant (p < 0.001, M.W.U Test).One might suspect that this effect is merely driven by the fact described above: the opponents' preference against GE is stronger than the supporters' preference for GE.Therefore, we compare the agreement with the absolutist statements for each of the three given levels of preference intensity separately.This demonstrates that we observe a highly significant difference in the agreement with the absolutist statements between opponents and supporters for each of the three levels of preference intensity.The results are summarized in Panel 2 of Table 1. 
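For readers who want to reproduce this style of group comparison, the following is a small Python sketch (assuming scipy is available) of the two tests used in this study: a Mann-Whitney U test on per-participant absolutism scores and Fisher's exact test on a 2x2 table of binary choices. The score arrays are made-up placeholders, not the study's raw data; the 2x2 counts use the trolley-choice numbers reported in the next paragraph.

import numpy as np
from scipy import stats

# Placeholder absolutism scores (number of absolutist statements agreed with,
# 0-3 per participant); these arrays are invented for illustration and are NOT
# the study's data.
opponent_scores = np.array([3, 2, 3, 1, 2, 3, 0, 2, 3, 2])
supporter_scores = np.array([1, 0, 2, 1, 0, 2, 1, 1, 0, 2])

# Mann-Whitney U test, as used for the group comparison of absolutism scores
u_stat, p_mwu = stats.mannwhitneyu(opponent_scores, supporter_scores,
                                   alternative="two-sided")

# Fisher's exact test on a 2x2 table: willing vs. not willing to pull the
# lever, using the counts reported in the Results below
table = [[322, 484 - 322],   # opponents
         [120, 152 - 120]]   # supporters
odds_ratio, p_fisher = stats.fisher_exact(table)
print(p_mwu, p_fisher)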
The literature suggests that moral absolutism is related to nonconsequentialist or deontological ethical thinking. Therefore, we check whether GE opponents and supporters express a different willingness to pull the lever in the trolley problem. Pulling the lever implies the outcome-based choice of deviating the trolley to the sidetrack and, thus, intentionally killing one person to save five. Not pulling the lever implies the principle-based choice of refusing to kill a person intentionally, irrespective of some greater good. Indeed, we find that only 322 of 484 GE opponents (66.5%) are willing to pull the lever, but 120 of 152 supporters (78.9%) are willing to do so (p = 0.003, Fisher's Exact Test). This result suggests that opponents are indeed less likely to make an outcome-based choice in the trolley dilemma than supporters are. One might expect that a higher conviction for one's cause also reflects a higher commitment to make personal sacrifices for one's cause. Therefore, we offered opponents a list of NGOs that were committed to preventing GE, and supporters were offered a list of NGOs that were committed to promoting GE. We asked either group to choose their preferred NGO. Later in the survey, we asked participants for their willingness to donate 5 Euros to the NGO that they had previously selected. We find that 168 of the 484 (34.7%) opponents state a willingness to donate to their cause of preventing GE, while only 37 of the 152 (24.3%) supporters state a willingness to donate to their cause of promoting GE (p = 0.017, Fisher's Exact Test). This implies that opponents of GE express a higher commitment to their cause in monetary terms than supporters do. Notice that this effect could also be based on opponents' relatively stronger trust in the efficacy of "their" NGOs. Therefore, we asked opponents (supporters) whether they believed, first, that NGOs lobbying for banning (promoting) GE were effective and, second, whether they considered small donations to these NGOs to be effective. Participants expressed their approval of the statements that NGOs committed to their respective causes are effective and that small donations to them would be effective on Likert scales from 1 (fully agree) to 7 (do not agree at all). Indeed, opponents have a greater trust in the efficacy of "their" NGOs than supporters do (3.05 vs. 3.37, p = 0.009, M.W.U. Test), as well as in the efficacy of small donations to their respective cause (3.36 vs. 3.72, p = 0.012, M.W.U. Test). This greater trust can also be explained against the background of the current regulations in the European Union. An almost complete ban on GE for food production can be interpreted as a success of anti-GE NGOs. Finally, we checked for potential differences in the levels of cognitive reflection shown by opponents and supporters of GE. To this aim, we compared the correct solutions to the three cognitive reflection tasks. The average numbers of tasks correctly solved by opponents and supporters were similarly low (0.86 vs. 0.82, p = 0.393, M.W.U. Test). This suggests that the two groups do not differ in their levels of cognitive reflection. Analysis of Demographic Differences In our representative German sample, opponents are on average approximately 5 years older than supporters (45.0 vs. 39.8, p < 0.001, M.W.U. Test) and more likely to be female (55.4% vs. 39.5%, p < 0.001, Fisher's Exact Test). In terms of political preferences, fewer of the opponents are willing to vote for one of the parties currently represented in the German Bundestag (24.5% vs.
12.5%, p = 0.002, Fisher's Exact Test). Participants were also asked for their agreement with the statements that spirituality and religion play an important part in their lives (1 = fully agree, 7 = do not agree at all). Participants tend to ascribe a generally low importance to spirituality and religion. The importance that opponents and supporters ascribe to spirituality does not differ significantly (4.72 vs. 4.54, p = 0.266, M.W.U. Test). Opponents, however, assign an even lower importance to the role that religion plays in their lives than supporters do (4.89 vs. 4.47, p = 0.039, M.W.U. Test). Discussion and Conclusion Our study investigates the prevalence of absolutist moral attitudes in GE supporters and opponents that may contribute to the failed public debate on this technology. We find that both camps of our representative German sample show moral absolutism in their GE attitudes, yet moral absolutism is more pronounced among opponents than supporters. This effect remains if we control for the strength of the conviction. These findings are further strengthened by participants' answers to the trolley problem. Here, we find that GE opponents are more likely to give a principle-based answer than GE supporters are. In the literature, some principle-based (or nonconsequentialist) moral views have been described as absolutist, which is in line with what we find in our data. GE opponents also exhibit a higher willingness to donate 5 Euros to their cause compared to GE supporters. This effect might be driven by stronger trust in the effectiveness of those NGOs, which might be attributed to the successful anti-GE campaigns of NGOs. It remains an open question whether the lack of willingness to donate by GE supporters is mainly driven by their lack of trust in the respective NGOs' effectiveness to change current regulation. Other reasons, such as the belief that a change in regulation has very poor chances of success in Europe, or a lack of interest in the topic strong enough to make a donation seem worthwhile, might also play a role. Our findings may inspire more empirical inquiries into questions on the ethics of technology. Moral intuitions have the adaptive advantage of detecting violations of moral values in social behavior (e.g., if a perpetrator harms someone). Yet, morality may also play an ambivalent role. Greene (2014) argues that although our moral intuitions are well-equipped to react to issues that have been familiar to humanity for a long time, this adaptation cannot be assumed regarding technologies that emerged only in the most recent human history. If Greene is correct, it would be worthwhile to investigate more systematically whether absolutist or consequentialist responses of participants with regard to technologies are asymmetrically based on intuitive or deliberative reasoning. Relevant implications for the ethics of technology arise if our cognitive mechanisms have evolved to focus on harm that might be caused by something happening instead of harm that might be caused by this something being prevented from happening. This phenomenon has been described as omission bias (Baron & Ritov, 2004). It should be noted that evolution need not be understood only genetically. For instance, Gintis (2011) and Gintis et al.
(2012) suggest that a gene-culture coevolution determines basic principles of human morality.In any case, this cognitive mechanism would lead to a skepticism that may transcend the necessary caution toward new technology and may inhibit societal benefits that can be brought about by the introduction of that technology.Our findings suggest that ethical research accompanying new technologies should consider this understated aspect by stressing the point that engaging, as well as forgoing, a technology has far reaching societal consequences and the potential of harm (Deutsch, 2011). In a democratic society, open debate is the prerequisite to successful regulation, and failed debates should be prevented.Our findings indicate that moral absolutism is one driver that complicates GE debates because this moral attitude leads the debating parties to perceive attitudinally dissimilar others as adversaries (i.e., the interests of the debating parties are perceived within a tradeoff).This empirically supported diagnosis implies that such debating failures can be overcome if the perception is changed, and thus, the issue is no longer perceived within a win-lose paradigm but as a search for solutions of mutual improvement (Pies, 2009).Technically speaking, the tradeoff line-according to which one party's win is the other one's loss-is abandoned; instead, both parties search for solutions that make them both better (see Fig. 2).The goal of the debating parties is no longer to realize predetermined goals at the expense of other parties, but rather to engage in a search for a new goal that can be shared by the other party.Naturally, this is a tedious process in which, at times, only incremental improvements might be achieved, but in a liberal democracy, it remains the way to achieve societal progress.Within the GE debate, this paradigm shift would mean that the debating parties no longer focus on mere means, i.e., banning or admitting GE crops to the European market, but widen their perspective to the goals that should be reached by these means. Specifically, there is a divide between those who focus on the effects on the ecosystems and those who focus on farming efficiency.Consequently, the interests of these two types of agriculture-conventional and organic-appear to clash.On the one hand, organic agriculture is more concerned with ecological farming methods but lacks efficiency (Meemken & Qaim, 2018, p. 57).Conventional farming, on the other hand, is more efficient but lacks adaptive farming methods (Leopoldina et al., 2019, p. 11).However, considering the ever-increasing necessity for sustainable production, both types of agriculture need to move toward one another; conventional agriculture needs to become more ecological, and organic farming needs to become more efficient (Pies et al., 2021).Focusing on the common goal to make agriculture more sustainable opens up room for mutual betterment.For these purposes, GE can be a means to reach a common goal: GE opens up opportunities for both, for example, more ecological farming through reduced pesticide use and more efficiency through higher yields (Ahmed et al., 2020, p. 1). Funding Open Access funding enabled and organized by Projekt DEAL.We thank the German Research Foundation for funding through Research Unit FOR 2569 'Farmland Markets-Efficiency and Regulation' under grant number 31737455. 
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
7,973
2023-09-06T00:00:00.000
[ "Environmental Science", "Agricultural and Food Sciences", "Philosophy" ]
Measurement of Upsilon production in 7 TeV pp collisions at ATLAS Using 1.8 fb-1 of pp collisions at a center-of-mass energy of 7 TeV recorded by the ATLAS detector at the Large Hadron Collider, we present measurements of the production cross sections of Upsilon(1S,2S,3S) mesons. Upsilon mesons are reconstructed using the di-muon decay mode. Total production cross sections for p_T<70 GeV and in the rapidity interval |Upsilon|<2.25 are measured to be 8.01+-0.02+-0.36+-0.31 nb, 2.05+-0.01+-0.12+-0.08 nb, 0.92+-0.01+-0.07+-0.04 nb respectively, with uncertainties separated into statistical, systematic, and luminosity measurement effects. In addition, differential cross section times di-muon branching fractions for Upsilon(1S), Upsilon(2S), and Upsilon(3S) as a function of Upsilon transverse momentum p_T and rapidity are presented. These cross sections are obtained assuming unpolarized production. If the production polarization is fully transverse or longitudinal with no azimuthal dependence in the helicity frame the cross section may vary by approximately +-20%. If a non-trivial azimuthal dependence is considered, integrated cross sections may be significantly enhanced by a factor of two or more. We compare our results to several theoretical models of Upsilon meson production, finding that none provide an accurate description of our data over the full range of Upsilon transverse momenta accessible with this dataset. I. INTRODUCTION Since the discovery of the J/ψ and Υ mesons [1,2] the study of heavy quark-antiquark systems has provided valuable input for our understanding of Quantum Chromodynamics (QCD).However, despite being one of the simplest systems in QCD it has proven difficult historically to describe the production properties of these states adequately.Results on J/ψ and Υ hadroproduction and polarization [3][4][5][6][7][8][9][10][11][12] exhibit inconsistencies between measurements and theoretical predictions [13]. Measurements at the Large Hadron Collider (LHC) of differential production spectra of various charmonium and bottomonium states together with measurement of their spin-alignments, prompt double-quarkonia production, and production of quarkonia in association with photons, vector bosons or open heavy-flavor final states will allow discrimination between different theoretical approaches based on color-singlet corrections [14][15][16], coloroctet terms [17], the k T -factorization approach [18] and other production models [19], and provide additional input toward an improved understanding of quarkonium hadroproduction. Studies of bottomonium production complement concurrent studies of charmonium systems due to the larger mass of the bottom quark compared to the charm quark, allowing more dependable theoretical calculations than in the charmonium family, which suffer from poor perturbative convergence [17].Extension of cross-section measurements to higher meson transverse momenta provides valuable input to improvements in the theoretical description since in this regime different processes can dominate and, experimentally, the impact of spin-alignment uncertainties are mitigated. 
Taking advantage of the large increase in integrated luminosity delivered by the LHC in 2011, this paper updates a previous measurement [8] and reports the Υ (1S) cross section presented differentially in two intervals of absolute Υ rapidity [20], |y Υ |, and fifty intervals of Υ transverse momentum, p Υ T , and as a p T -integrated result in forty-five bins of absolute Υ rapidity.In addition, new measurements of the equivalent Υ (2S) and Υ (3S) differential spectra and their production ratios relative to the Υ (1S) are reported. Υ production can proceed directly, where the Υ meson of interest is produced in the hard interaction, or via the production of an excited state which subsequently decays.This so-called "feed-down" contribution complicates the theoretical description of quarkonium production as calculation of color-singlet P -wave and higher orbital angular momentum quarkonium production suffers from the presence of infrared divergences [21].From the experimental perspective, separation of direct and feeddown contributions is hindered by the small mass splitting between the bottomonium states, which impedes the detection of additional decay products that would indicate indirect production.Contributions from feed-down vary between the Υ (1S), Υ (2S) and Υ (3S) states due to the changing presence of various kinematically-allowed decays and influence the inclusive production rate.Study of the Υ production ratios as a function of kinematic variables as presented here thus provides an indirect but precise measure of these feed-down contributions. This new analysis extends the p T range of our previous cross-section result [8] to 70 GeV, at which new contributions to Υ production [15,16] such as from associated Υ + bb production may play a more important role and the impact of the dependence of the production cross section on the spin-alignment of the Υ is relatively small.Both the fiducial cross section, measured in the kinematic region with muon transverse momentum p µ T > 4 GeV and muon pseudorapidity |η µ | < 2.3, and the corrected cross section, which is defined in this paper as the cross section in the p T -η phase space of the Υ corrected for the acceptance of the decay products to the full Υ production phase space, are reported.In the former case, the results have no dependence on assumptions about Υ spin alignment, while the latter measurements are more easily compared to model predictions and to the results of other experiments. II. 
THE ATLAS DETECTOR The ATLAS detector [22] is composed of an inner tracking system, calorimeters, and a muon spectrometer.The inner detector directly surrounds the interaction point and consists of a silicon pixel detector, a silicon microstrip detector, and a transition radiation tracker, all immersed in a 2 T axial magnetic field.It covers the pseudorapidity [20] range |η| < 2.5 and is enclosed by a calorimeter system containing electromagnetic and hadronic sections.Surrounding the calorimeters is the large muon spectrometer built with three air-core toroids.This spectrometer is equipped with precision detectors (monitored drift tubes and cathode strip chambers) that provide precise measurements in the bending plane within the pseudorapidity range |η| < 2.7.In addition, resistive plate and thin gap chambers with fast response times are used primarily to construct muon triggers in the ranges |η| < 1.05 and 1.05 < |η| < 2.4 respectively, but are also used to provide position measurements in the non-bending plane and to improve pattern recognition and track reconstruction.Momentum measurements in the muon spectrometer are based on track segments formed in at least two of the three precision chamber planes. The ATLAS detector employs a three-level trigger system [23], which reduces the 20 MHz proton bunch collision rate to the several hundred Hz transfer rate to mass storage.The Level-1 muon trigger searches for hit coincidences between different muon trigger detector layers inside pre-programmed geometrical windows that bound the path of triggered muons of given transverse momentum and provide a rough estimate of its position within the pseudorapidity range |η| < 2.4.At Level-1, muon candidates are reported in "regions of interest" (RoIs).Only a single muon can be associated with a given RoI of spatial extent ∆φ × ∆η ≈ 0.1 × 0.1.This limitation has a small effect on the trigger efficiency for Υ mesons, corrected for in the analysis using a data-driven method based on analysis of J/ψ → µ + µ − and Υ → µ + µ − decays.The Level-1 trigger is followed in sequence by two subsequent higher-level, software-based trigger selection stages.Muon candidates reconstructed at these higher levels incorporate, with increasing precision, information from both the muon spectrometer and the inner detector and reach position and momentum resolution close to that provided by the offline muon reconstruction. III. DATASET AND EVENT SELECTION Data for this study were collected during the 2011 LHC proton-proton running period between March and August using a trigger that requires the presence of two muon candidates with opposite charges that are subject to a fit constraining them to a common vertex while taking into account track parameter uncertainties.A very loose selection on vertex χ 2 which is fully efficient for signal candidates, was imposed to ensure proper fit convergence, as well as the requirement of opposite charge and that p µ T > 4 GeV and |η µ | < 2.3.This trigger was largely unprescaled and collected data at a rate of approximately 20 Hz in this period of data-taking. 
Events are required to contain at least one primary vertex candidate that has at least five tracks with p T > 0.4 GeV, and at least two muons identified by associating candidates found in the muon spectrometer with tracks reconstructed in the inner detector [8,22]. Multiple scattering in the calorimeters and toroids of the ATLAS detector degrades the muon spectrometer resolution for low energy particles. As the majority of muons selected for this analysis have low momentum, we assign values for parameters such as p T and η to the muons based on track fits using inner detector information only. To ensure accurate inner detector measurements, each muon track must contain at least six silicon microstrip detector hits and at least one pixel detector hit. Muon candidates passing these criteria are required to have p µ T > 4 GeV and |η µ | < 2.3 and a successful fit to a common vertex. Good spatial matching (∆R ≡ √[(∆φ)² + (∆η)²] < 0.01) between the muon candidate in the offline reconstruction and in the trigger used to select the event is required to facilitate data-driven estimates of the di-muon trigger efficiency. Furthermore, both muons forming an Υ candidate must be associated to their trigger-level candidates in this manner. In this way, the efficiency of the trigger requirements on the di-muon candidate is incorporated into the trigger efficiency correction. All di-muon candidates passing these criteria are retained for analysis. The distribution of the invariant mass of the µ + µ − system is shown in Fig. 1 for the selected di-muon candidates. As is apparent from the plot, the mass resolution (120 MeV) for di-muon candidates detected in the central region of the detector (|y µµ | < 1.2) is significantly better than the resolution (214 MeV) for those candidates falling into the forward region (1.2 ≤ |y µµ | < 2.25). A total of 3.9 × 10 6 and 2.3 × 10 6 candidates with 8 < m µµ < 11.5 GeV are reconstructed in the central and forward regions, respectively. [Fig. 1 caption: di-muon invariant mass distribution (in GeV) with the background-only fit (dashed curve) and the total signal-plus-background shape (solid curve); the background is modelled by a fourth-order polynomial and each signal peak by a single Gaussian; the fitted mass resolutions of the three signal peaks, determined from the fit with a common resolution parameter scaling with invariant mass, are also quoted.] IV.
CROSS SECTION DETERMINATION Differential Υ cross sections are measured according to the relation: where Br(Υ → µ + µ − ) represents the appropriate branching fraction of the Υ (nS) to di-muons, Ldt is the integrated luminosity, ∆p T and ∆|y| are the bin sizes in Υ transverse momentum and rapidity, respectively, and N Υ is the corrected number of observed Υ (1S), Υ (2S), or Υ (3S) mesons in a bin.Corrections are applied to account for selection efficiencies, bin migration effects due to finite detector resolution, and, in the case of corrected cross-section measurements, acceptance.Determination of the cross sections proceeds through several steps.Firstly, a weight is determined for each selected di-muon candidate equal to the inverse of the total efficiency for the candidate.Secondly, a fit is performed to the distribution of weighted events binned in di-muon invariant mass, m µµ , to determine the number of Υ (nS) mesons (with n = 1,2,3) produced in each (p µµ T , y µµ ) bin.Thirdly, these values are corrected for (p µµ T , y µµ ) bin migrations.Finally, the differential cross section multiplied by the Υ → µ + µ − branching fraction is calculated for each state using the integrated luminosity and the p T and y bin widths as in Eq. (1). The weight, w, for each Υ candidate includes the fraction of produced Υ → µ + µ − decays with both muons falling into the kinematic region p µ T > 4 GeV and |η µ | < 2.3 (the acceptance, A, is used only in calculating corrected cross sections), the probability that a candidate falling within the acceptance passes the offline reconstruction requirements (the reconstruction efficiency, ε reco ), and the probability that a reconstructed event passes the trigger selection (the trigger efficiency, ε trig ).The weights assigned to a given candidate when calculating the fiducial (w fid ) and corrected (w tot ) cross sections are then given by: A. Acceptance The kinematic acceptance A(p T , y) is the probability that the muons from an Υ with transverse momentum p T and rapidity y fall into the fiducial volume of the detector defined by the p µ T > 4 GeV and |η µ | < 2.3 selection applied to each muon in the di-muon pair.In order to calculate the acceptance as a function of the rapidity and transverse momentum of each of the Υ states, taking into account possible angular dependences in their decays, we use an analytic formula describing the decay of Υ states in their decay frame [24], The helicity frame is used, where θ * is the polar angle between the µ + momentum in the Υ decay frame and the direction of the Υ momentum in the laboratory frame.The corresponding azimuthal angle φ * is the angle between the quarkonium production plane (defined in the quarkonium decay frame by the two momenta of the incoming protons in that frame) and the quarkonium decay plane in the lab frame. 
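The explicit formulas referenced in this section (the cross-section relation of Eq. (1), the candidate weights, and the decay angular distribution) appear to have been dropped during text extraction. A plausible LaTeX reconstruction, based on the definitions given in the surrounding text and, for the angular distribution, on the standard helicity-frame parameterization, is:

% Differential cross section times di-muon branching fraction (Eq. (1)),
% reconstructed from the definitions given in the text
\frac{\mathrm{d}^{2}\sigma(pp\to\Upsilon(nS))}{\mathrm{d}p_{T}\,\mathrm{d}y}
  \cdot Br(\Upsilon(nS)\to\mu^{+}\mu^{-})
  = \frac{N_{\Upsilon(nS)}}{\int\!\mathcal{L}\,\mathrm{d}t\;\Delta p_{T}\,\Delta|y|}

% Per-candidate weights for the fiducial and corrected cross sections
w_{\mathrm{fid}}^{-1} = \varepsilon_{\mathrm{reco}}\,\varepsilon_{\mathrm{trig}},
\qquad
w_{\mathrm{tot}}^{-1} = \mathcal{A}\,\varepsilon_{\mathrm{reco}}\,\varepsilon_{\mathrm{trig}}

% Helicity-frame angular distribution assumed for the acceptance maps
\frac{\mathrm{d}N}{\mathrm{d}\cos\theta^{*}\,\mathrm{d}\phi^{*}} \propto
  1 + \lambda_{\theta}\cos^{2}\theta^{*}
    + \lambda_{\phi}\sin^{2}\theta^{*}\cos 2\phi^{*}
    + \lambda_{\theta\phi}\sin 2\theta^{*}\cos\phi^{*}

These expressions are reconstructions consistent with the quantities defined here, not verbatim copies of the published equations.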
The λ i coefficients parameterize the spin-alignment state of the Υ .In some parts of the phase space the acceptance may depend on these parameters quite strongly.We have identified five extreme cases that lead to the largest possible variations of acceptance within the phase space of this measurement and define an envelope in which the results may vary under any physically-allowed spin-alignment assumption: isotropic distribution independent of θ * and φ * (λ θ = λ φ = λ θφ = 0, labeled FLAT); longitudinal alignment (λ θ = −1, λ φ = λ θφ = 0, labeled LONG); and three types of transverse alignment (λ θ = +1, λ φ = λ θφ = 0, labeled T +0; λ θ = +1, λ φ = +1, λ θφ = 0, labeled T ++; λ θ = +1, λ φ = −1, λ θφ = 0, labeled T +−).The central values of our measurements are derived under the isotropic production assumption.The spread in corrected cross-section results derived under all five assumptions is used to quantify the full envelope of possible variations of the result due to spinalignment.Some constraints on physically-allowed λ i combinations exist [24,25].In particular, λ θφ must be zero for λ θ = ±1 and any λ φ , and |λ θφ | ≤ 0.5 for λ θ = λ φ = 0. Acceptance corrections with non-zero λ θφ were found to lead to corrections with smaller or equal variations from the isotropic scenario than any parameter-space scenarios on the λ θ -λ φ plane.Acceptance weights for each polarization scenario, including those with non-zero λ θφ can be found in hepdata [26]. Figure 2 presents the two-dimensional acceptance map for Υ (1S), as a function of the Υ transverse momentum and absolute rapidity in the unpolarized (FLAT) acceptance scenario as well as the maximum relative variation (ratio of largest to smallest acceptance correction) of the acceptance maps across all spin-alignment scenarios. Acceptance is significantly decreased at very high rapidities due to the muon pseudorapidity requirement of |η µ | < 2.3.When Υ are produced at rest, both muons are likely to be reconstructed in the muon spectrometer on opposite sides of the detector.When the Υ has a sufficient boost, the two muons are more likely to be found in the same side of the detector and the probability that at least one muon is below the required p µ T threshold increases, leading to a drop in acceptance.At higher p T the Υ imparts greater momentum to the two muons, the likelihood of a muon being below threshold decreases and the acceptance asymptotically approaches 100%.As Υ transverse momentum increases the variation of the acceptance corrections between spin-alignment scenarios decreases, becoming 10% at the largest p T ranges studied.Similar behaviours are observed in the acceptance maps for the Υ (2S) and Υ (3S). This spin-alignment systematic uncertainty only applies to the corrected cross-section measurements.For fiducial cross sections, measured in the kinematic region p µ T > 4 GeV and |η µ | < 2.3, we do not have to correct our results to the full phase space of produced Υ mesons.Spin-alignment can thus affect only our estimates of the reconstruction and trigger efficiencies.Since we measure these in many bins of muon p T and η, effects from differing distributions of events over individual bins due to different spin-alignment assumptions are negligible compared to other sources of systematic uncertainty. B. 
Reconstruction and trigger efficiencies The efficiency of the offline reconstruction criteria for events with muons within the fiducial region is given by: In this equation, q represents the charge of the muon and the identities of the two muons are labeled with indices 1, 2. The efficiency of the track selection criteria, ε trk , for tracks originating from real muons is determined in Ref. [7] using data collected in 2010.Due to the presence of additional pp interactions in the same and neighbouring bunch crossings in 2011 data the systematic uncertainty on the tracking efficiency has increased, with ε trk assessed to be 99 ± 1.0% over the whole kinematic range. The efficiency to reconstruct a muon, ε µ , is derived using a tag-and-probe method on J/ψ → µ + µ − data.The tag muon corresponds to a muon candidate with p µ T > 4 GeV and |η µ | < 2.4, and must have fired a singlemuon trigger in the event, as required by the trigger matching algorithms.The probe track is only required to pass the inner detector track quality, p T , and η cuts and be consistent with having the same vertex as the identified tag muon.This technique provides a sample of muon candidates unbiased with respect to both trigger and offline reconstruction, and with favorable signal to background ratio. The muon reconstruction efficiency is then derived in two-dimensional p T -η bins (fourteen muon p T intervals and twenty-six charge-signed muon pseudorapidity [q • η] intervals).The ratio of the fitted J/ψ → µ + µ − signal yield for those probe tracks identified as muons to the fitted signal yield for all probe tracks in these doubledifferential intervals is identified as the single-muon reconstruction efficiency. Due to the toroidal magnetic field, muons with positive (negative) charge are bent towards larger (smaller) pseudorapidity.This introduces a charge dependence in the muon reconstruction efficiency.We calculate efficiencies as a function of charge-signed pseudorapidity as negative muons at positive rapidities are affected in the same manner as positive muons in negative rapidities.The charge dependence is particularly noticeable at very large |η| where the muon can be bent outside of the geometrical acceptance of the detector.In particular at low p T , where the muons of particular charge may be bent away from (rather than toward for the opposing charge) the middle/outer spectrometer planes, they will not be identified as muons. The efficiency of the di-muon trigger to select events that have passed the offline selection criteria, ε trig , is also calculated from data.This can be factorized into three terms: where ε RoI is the efficiency of the trigger system to find an RoI for a single muon with transverse momentum, p µ T , and charge-signed pseudorapidity, q • |η µ |, and c µµ is a correction for effects related to the di-muon elements of the trigger.This correction accounts for the di-muon vertex and opposite charge requirements, and for loss of efficiency in the di-muon trigger if the two muons are close enough together to register only as a single RoI. 
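The factorized efficiency expressions referenced above also appear to have been lost in extraction. A plausible reconstruction from the definitions in this section (the two muons labelled 1 and 2, each with charge q) is:

\varepsilon_{\mathrm{reco}} = \varepsilon_{\mathrm{trk}}^{2}\,
  \varepsilon_{\mu}\!\left(p_{T}^{\mu_{1}}, q_{1}\cdot\eta^{\mu_{1}}\right)\,
  \varepsilon_{\mu}\!\left(p_{T}^{\mu_{2}}, q_{2}\cdot\eta^{\mu_{2}}\right)

\varepsilon_{\mathrm{trig}} =
  \varepsilon_{\mathrm{RoI}}\!\left(p_{T}^{\mu_{1}}, q_{1}\cdot\eta^{\mu_{1}}\right)\,
  \varepsilon_{\mathrm{RoI}}\!\left(p_{T}^{\mu_{2}}, q_{2}\cdot\eta^{\mu_{2}}\right)\,
  c_{\mu\mu}

Here the squared track efficiency accounts for the two inner-detector tracks, and c_µµ is the di-muon trigger correction described in the following paragraph; the exact published forms may differ in notation.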
The di-muon correction, c µµ , itself consists of two components each evaluated in three separate regions of di-muon rapidity: barrel ( 2), and endcap (1.2 ≤ |y µµ | < 2.25), to account for the different behaviors of these corrections in these regions.The correction c a is due to the effect of vertex and opposite charge requirements on the trigger and is determined from the efficiency at large di-muon angular separations.No difference was observed from deriving the result as a function of ∆η and ∆φ separately rather than versus ∆R.The asymptotic values are found using the ratio of candidate J/ψ → µ + µ − decays selected by the standard di-muon trigger to those selected by a similar di-muon trigger that makes no charge or vertex requirements.These values are found to be 99.1 ± 0.4%, 97.5 ± 0.9%, 95.2 ± 0.4% in the barrel, transition, and endcap regions, respectively.We extract this dependence on ∆R in the same three regions of di-muon rapidity as used for c a from a sample of offline reconstructed J/ψ → µ + µ − and Υ → µ + µ − candidates with p µ T2 >8 GeV selected using a single-muon trigger with a threshold of 18 GeV.The 8 GeV requirement on the lower p T muon is made to ensure that the efficiency for the trigger system to identify RoIs compatible with muons of p T > 4 GeV reached its plateau value.The ∆R dependence of c ∆R is then extracted from the fraction of fitted di-muon candidates in a ∆R interval within this control sample selected using auxiliary triggers that additionally pass the di-muon trigger used in this analysis. The final component of the di-muon trigger efficiency, ε RoI , represents the single-muon trigger efficiency with a threshold of p µ T > 4 GeV.This is measured using wellreconstructed J/ψ → µµ candidates in data that pass a single-muon trigger (with a threshold of 18 GeV).The ratio of the yield of J/ψ candidates (determined by fitting the invariant mass distributions) that pass both this single-muon trigger and the 4 GeV p µ T threshold di-muon trigger used in this analysis, to the yield of J/ψ candidates that pass the single-muon trigger is identified as the single-muon trigger efficiency ε RoI in the p µ T and q•|η µ | interval considered.In each case, the reconstructed muon(s) are matched to the muon(s) that triggered the event for each of the single or di-muon triggers.The number of candidates passing the di-muon trigger is then further corrected by c µµ for di-muon correlation effects [27]. The individual and overall efficiencies and weight corrections, calculated using the methods described above, are shown as functions of p µµ T and |y µµ | in Fig. 3 for the Υ (1S). C. Extracting the number of Υ mesons The number of produced Υ (nS) mesons used in our cross-section determination is found by fitting signal and background functions to the m µµ spectrum of weighted Υ candidates.The use of per-candidate weights rather than average weights allows us to correct for acceptance and efficiency without introduction of biases associated with the use of average values for these quantities.We determine the Υ differential cross sections separately for each spin-alignment scenario.We perform least squares fits to m µµ histograms filled using the weights, w, for each candidate in bins of di-muon transverse momentum, p µµ T , and rapidity, y µµ .The form of the χ 2 for each (p µµ T , |y µµ |) bin is: where N i represents the number of Υ (nS) candidates in bin i. 
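The explicit χ² expression referenced above was likewise dropped in extraction. A plausible reconstruction, consistent with the weighted-histogram fit described here and with the four-component prediction detailed in the next paragraph, is:

\chi^{2} = \sum_{i}\frac{\left(N_{i} - N_{i}^{\mathrm{pred}}(m_{i})\right)^{2}}{\sigma_{i}^{2}},
\qquad
N_{i}^{\mathrm{pred}}(m_{i}) = \sum_{X\in\{1S,\,2S,\,3S,\,\mathrm{bkg}\}} f_{X}\, F_{X}(m_{i})

with N_i the weighted number of candidates in mass bin i and σ_i the statistical uncertainty on that weighted bin content; this is a sketch of the likely form, not the verbatim published expression.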
Predictions for each bin are constructed using four sources of di-muon candidates (the three Υ (nS) signals and the background), where the f X represent fit function parameters and the F X are normalized probability density functions. In order to avoid bias due to the choice of fit model, several parameterizations of signal and background, which describe the data well, are used. Each of the three Υ resonances is parameterized by single Gaussian, double Gaussian, or Crystal Ball functions, chosen to provide a reasonable description of the experimental mass resolution and energy-loss effects that dominate the observed signal line-shapes. The background parameterizations vary with di-muon p T . At low p µµ T , and for all rapidity bins, an error function multiplied by either a second-order polynomial, or a second-order polynomial plus an exponential, is used to model m µµ turn-on effects accurately. At mid p µµ T , a second-order polynomial, or a second-order polynomial plus an exponential, is adequate to describe the shape of the background under the Υ peaks. At high p µµ T , a first-order polynomial or a first-order polynomial plus an exponential is sufficient to describe the background contribution. Average fitted values for the numbers of Υ (nS) mesons (N 1S , N 2S , N 3S ) taken over all combinations of signal and background models are used in the extraction of cross sections, while the maximum deviation of any fit result from the average is used as an estimate of the fit-model systematic uncertainty.

In the fits, N 1S , N 2S , N 3S , the peak mass of the Υ (1S) meson (M 1S ), parameters describing the shape of the Υ → µ + µ − mass distribution, and the background parameters are allowed to vary freely, while M 2S and M 3S are fixed relative to M 1S using measured mass splittings [28], and the line-shape width parameters σ 2S and σ 3S are related linearly in m µµ to σ 1S . The χ 2 -probabilities from the fits are uniformly distributed, indicating that our signal and background parameterizations provide good descriptions of the data. Figure 4 shows the invariant mass distribution of the fitted signal region for representative low, mid, and high Υ p T intervals, in both the central and forward rapidity regions.

In order to account for bin migrations due to finite detector resolution, corrections in p µµ T are derived by first fitting a smooth analytic function to the background-subtracted p µµ T spectra of Υ events in data. This fitted distribution is deconvolved with a Gaussian distribution using a Gauss-Hermite quadrature integration technique, with a p T resolution derived from the fitted invariant mass and muon angular resolutions in data. From the distributions with and without convolution, bin-by-bin migration corrections are derived. These corrections are found to be within 0.5% of unity for central rapidities and within 1% of unity for forward rapidities. The number of efficiency-corrected Υ → µ + µ − decays extracted from the fits in each (p µµ T , y µµ ) bin is corrected for the difference between true and reconstructed values of the di-muon p T and rapidity. Using a similar technique, bin-migration corrections as functions of y µµ are found to be negligible.
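The per-candidate weighting used throughout this section can be sketched as follows. This is an illustrative Python sketch and not the analysis code; the function names, the simple √(Σw²) uncertainty, and the assumption that the efficiency and acceptance factors are looked up from the maps of the previous subsections are ours.

import numpy as np

def candidate_weight(acc, eff_trk, eff_mu1, eff_mu2, eff_trig, fiducial=False):
    # w = 1 / (A * eps_trk^2 * eps_mu1 * eps_mu2 * eps_trig);
    # for the fiducial cross sections the acceptance A is left out (set to 1).
    A = 1.0 if fiducial else acc
    return 1.0 / (A * eff_trk**2 * eff_mu1 * eff_mu2 * eff_trig)

def fill_weighted_mass_histogram(masses, weights, edges):
    # Weighted m_mumu histogram for one (pT, |y|) analysis bin, together with
    # the per-bin sqrt(sum of w^2) that sets the statistical uncertainty used in the fit.
    hist, _ = np.histogram(masses, bins=edges, weights=weights)
    sumw2, _ = np.histogram(masses, bins=edges, weights=np.square(weights))
    return hist, np.sqrt(sumw2)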
V. SYSTEMATIC UNCERTAINTIES

We consider the following sources of systematic uncertainty on the Υ (nS) differential cross sections: luminosity determination, reconstruction and trigger efficiencies, migration between p µµ T and |y µµ | bins due to resolution, acceptance corrections, and the background and signal fit models used. The range of these uncertainties for the three Υ states is summarized in Table I and their breakdown into each source is given for the corrected cross-section analysis in Figs. 5-7. The relative luminosity uncertainty of 3.9% is described in more detail in Ref. [29]. Other sources of systematic uncertainty are discussed below. As the statistical components of the uncertainties associated with the determination of ε reco and ε trig are dominant, the corresponding uncertainties on the cross sections are derived from the statistics of the control samples used to extract them, using a series of pseudo-experiments in which the weights used for each Υ candidate in data are randomly varied according to the uncertainties on the efficiency maps. Systematic uncertainties associated with the fit model used to extract the number of Υ (nS) decays from our di-muon data sample are quantified by taking the largest deviation of the fitted values of N nS found in the six possible model combinations from the average value. Systematic effects due to differences in the underlying kinematic distributions of the control sample and the data distributions are found to be negligible due to the fine differential binning of the efficiency and acceptance corrections.

Uncertainties associated with the acceptance correction include statistical uncertainties on the determination of the correction in fine p T and rapidity bins. This constitutes approximately a 0.5% uncertainty across the measured spectrum. A shift in the interaction point along the beamline axis, z, influences the acceptance, particularly at large rapidity. To estimate the impact of this effect, the acceptances were recalculated with shifts in z by ±62 mm, corresponding to one Gaussian standard deviation of the vertex z position distribution in the analyzed data, and the variation in the measured cross sections was calculated as a function of Υ p T and rapidity. These variations result in uncertainties on the acceptance corrections of 0.4-0.7% as a function of p T , and 0.6-1.5% as a function of rapidity, growing toward larger rapidities and lower transverse momenta. Our estimate of the correction to the number of Υ → µ + µ − decays fitted in each (p µµ T , y µµ ) bin for differences between reconstructed and true values of p µµ T due to bin migration depends on a knowledge of the di-muon p T resolution. Allowing this resolution to vary within its uncertainty results in a negligible change in the Υ (nS) cross sections.

As mentioned earlier, Υ spin-alignment effects have an impact on the determination of cross sections extrapolated to full phase space. As in the ATLAS measurement of J/ψ production [7], we quote an uncertainty due to Υ spin-alignment by comparing results obtained using the unpolarized assumption (λ θ = λ φ = λ θφ = 0) to those derived under the other four extreme parameter combinations. The size of the possible variation in each bin is taken to be the largest positive and negative deviation from the unpolarized baseline result. We provide the cross-section results for each of the spin-alignment scenarios along with tabulated values [26] of the weight corrections so that the corrected data can be expressed in terms of any spin-alignment scenario.
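The pseudo-experiment propagation of the efficiency-related uncertainties can be sketched as below. This is an illustrative simplification with assumed names: here each candidate is fluctuated independently, whereas in the real procedure the efficiency maps themselves are varied, so that candidates sharing a map bin move together.

import numpy as np

def efficiency_uncertainty(weights_nominal, eff, eff_err, n_toys=500, seed=1):
    # Relative spread of the corrected yield when the per-candidate efficiency
    # is fluctuated within its uncertainty (w ~ 1/eff, so w_toy = w * eff/eff_toy).
    rng = np.random.default_rng(seed)
    weights_nominal = np.asarray(weights_nominal, dtype=float)
    eff = np.asarray(eff, dtype=float)
    eff_err = np.asarray(eff_err, dtype=float)
    nominal_yield = weights_nominal.sum()
    toy_yields = np.empty(n_toys)
    for i in range(n_toys):
        eff_toy = np.clip(rng.normal(eff, eff_err), 1e-3, 1.0)
        toy_yields[i] = np.sum(weights_nominal * eff / eff_toy)
    return float(np.std(toy_yields) / nominal_yield)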
VI. RESULTS AND DISCUSSION

We measure the differential cross section multiplied by the di-muon branching fractions of Υ (1S), Υ (2S), and Υ (3S) mesons as a function of Υ transverse momentum and rapidity, both in a fiducial region defined by p µ T > 4 GeV, |η µ | < 2.3 (free from spin-alignment uncertainties), and corrected back to the full muon decay phase space with |y Υ | < 2.25. We additionally present measurements of the production cross sections of the Υ (2S) and Υ (3S) relative to the Υ (1S) as a function of Υ (1S) transverse momentum and rapidity. Tabulated results of all measurements presented in this paper are available in HepData [26].

A. Fiducial cross sections

Differential cross sections multiplied by the Υ → µ + µ − branching fractions, d 2 σ/dp T dy × Br(Υ → µ + µ − ), are calculated within the fiducial acceptance of our analysis (p µ T > 4 GeV, |η µ | < 2.3) using the results of fits to di-muon candidates in data corrected, candidate by candidate, for efficiencies using the fiducial weights, w fid , defined in Eq. (2). These differential fiducial cross sections, along with total uncertainties, are shown in Figs. 8 and 9 and span four orders of magnitude across the p T range studied.

Integrating over all p Υ T and both rapidity bins, we find cross sections within the fiducial acceptance of the detector as shown in Table II. The results for the Υ (1S) are consistent with our previous measurement [8], extend to significantly higher p Υ T , and have increased precision. Presenting the results in a restricted kinematic phase space removes any uncertainty due to the spin-alignment of quarkonia. This allows unambiguous comparison of differential spectra with any theoretical approach that can provide predictions with kinematic restrictions applied to the decay products of the Υ . NLO color-singlet [14] predictions [30] have previously been compared to differential Υ (1S) fiducial production cross sections [8] and were found to underestimate the measured production rates by approximately an order of magnitude and to not reflect the p T dependence of the data. This is not surprising, as it has been known [16] for some time that higher-order corrections are both large and necessary in order to adequately describe quarkonium production at high p T with color-singlet terms. At this time, no color-singlet calculations beyond NLO are available for quarkonium production measurements quoted in a restricted muon acceptance, nor are Color Octet [17] or Color Evaporation [19] approaches currently able to account for the kinematics of leptons from the quarkonium decay. The fiducial measurements presented here are precise and free from any assumptions on the angular dependencies of the di-muon system, accurately reflecting the production dynamics in proton-proton collisions at √s = 7 TeV.

B. Corrected cross sections

We also calculate differential cross sections multiplied by the Υ → µ + µ − branching fractions extrapolated to the full muon phase space within |y Υ | < 2.25. For these corrected cross sections, the results of fits to di-muon candidates in data are corrected, candidate by candidate, for efficiencies using the total weights, w tot , defined in Eq. (2). Results are shown for the isotropic spin-alignment scenario in Figs. 10 and 11 as a function of Υ p T and y for all three states and, integrated over all p Υ T and both rapidity bins, in Table III.
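Schematically (an illustrative form; the normalization conventions follow the text above), both the fiducial and the corrected differential cross sections are obtained from the weighted fitted yields as

\frac{d^{2}\sigma}{dp_{\rm T}\,dy}\times {\rm Br}(\Upsilon\to\mu^{+}\mu^{-}) \;\approx\; \frac{N^{w}_{\Upsilon(n{\rm S})}}{\mathcal{L}_{\rm int}\,\Delta p_{\rm T}\,\Delta y},

where N w is the fitted yield in the (p T , y) bin obtained from the weighted histograms (built with w fid for the fiducial and w tot for the corrected results), L int is the integrated luminosity, and Δp T and Δy are the bin widths.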
Our results are consistent with measurements by CMS [6] and, in the small region of rapidity overlap, 2.0 < |y| < 2.25, with LHCb [11]. These results allow us to test phenomenological models of Υ production not just at low p T (for complementarity with results from the Tevatron experiments) but in a newly-probed region of significantly boosted Υ , where higher-order contributions become particularly important.

Figures 12-17 show the differential cross sections as a function of p T and rapidity for each of the Υ states in comparison with theoretical predictions. The effects of varying the spin-alignment assumptions from the nominal assumption of isotropic muon angular distributions, independent of θ * and φ * , are indicated by a shaded band. Clearly, spin-alignment has a large effect on the Υ production cross sections, especially at low p Υ T , and in particular if there is a non-trivial azimuthal component to the spin-alignment. New results [4,12] on Υ spin-alignment from CDF and CMS suggest that the spin-alignment is consistent with unpolarized production. Our central assumption of isotropic Υ decays is consistent with these results. Nevertheless, as these spin-alignment measurements are made at different center-of-mass energies or in a restricted phase space in both p Υ T and rapidity with respect to the measurements presented here, we provide the results under a variety of polarization scenarios so that the impact of spin-alignment on the corrected cross sections can be quantified across the full range of study. The contributions of the five polarization scenarios can be seen in the lower panes of each plot, where the ratio of the differential cross section under these spin-alignment assumptions to the unpolarized scenario is shown. Across the whole p T range studied, the envelope is bounded from above by the T ++ (λ θ = +1, λ φ = +1, λ θφ = 0) scenario with a maximal φ * variation. From below, the cross-section envelope is bounded by fully longitudinal spin-alignment at very low p T , with the T +− (λ θ = +1, λ φ = −1, λ θφ = 0) scenario resulting in the largest downward variation at p T ≳ 4 GeV. In this measurement we extend the p T range above 45 GeV, where the maximal possible impact due to the unknown spin-alignment of Υ is below ±10%. This is significantly smaller than the theoretical uncertainties and is of similar magnitude to current experimental uncertainties. As such, this region will offer a precision environment to compare future theoretical studies of Υ production to data.
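For orientation, the scenarios above correspond to choices of the coefficients in the commonly used parameterization of the di-muon decay angular distribution,

\frac{dN}{d\Omega} \;\propto\; 1 + \lambda_{\theta}\cos^{2}\theta^{*} + \lambda_{\phi}\sin^{2}\theta^{*}\cos 2\phi^{*} + \lambda_{\theta\phi}\sin 2\theta^{*}\cos\phi^{*},

so that, for example, the isotropic scenario has λ θ = λ φ = λ θφ = 0 and the T ++ scenario has λ θ = λ φ = +1 with λ θφ = 0.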
In Figs. 12-14 a comparison is also made to two theoretical predictions of Υ production. The first [16] is a QCD-based calculation using the color-singlet mechanism [14], referred to as NNLO* CSM; it presumes that Υ meson production occurs via a color-singlet state and includes full corrections up to next-to-leading order (NLO), as well as some of the most important next-to-next-to-leading-order (NNLO) terms. This inclusion significantly modifies the prediction. The partial nature of the higher-order calculation limits the applicability of the calculation to values above a particular Υ p T threshold and increases the sensitivity of the prediction to the choice of renormalization and factorization scales. The second prediction, known as the Color Evaporation Model [19,31], labeled as CEM, is a phenomenological model for inclusive Υ production based on quark-hadron duality. This model assumes that any heavy QQ̄ pair produced in the initial hard scattering evolves to a quarkonium state if its mass is below the threshold of open heavy-flavor meson pairs. Predictions of the CEM model involve a single constant that must be determined from corrected cross-section measurements for each quarkonium state and use a b-quark mass of 4.75 GeV. As in the case of the CSM, divergences in the predictions restrict the applicability of the model at low Υ p T . Uncertainties from factorization and renormalization scales are estimated by varying the scales independently up and down by a factor of two, and additional uncertainties are estimated by varying the b-quark mass [32].

Color-singlet calculations at NLO/NNLO* predict a largely longitudinal polarization of direct Υ , particularly at high p T (although the effect of feed-down introduces large uncertainties to this prediction). The Color Evaporation Model offers no explicit prediction of the spin-alignment evolution of Υ , but the nature of the model suggests that no strong polarization should be observed, as no single production mechanism dominates.

As can be seen in Figs. 12-14, the two models provide quite different descriptions of Υ production. Predictions from the CSM are for direct Υ production only, and so do not account for Υ production that arises from feed-down from the production of higher Υ states or from radiative decays of the χ bJ (nP). From previous measurements [33] the contribution of feed-down to Υ (1S) production is known to be approximately 50%, but the p T dependence of the feed-down is not well known and cannot be reliably predicted, so no explicit correction is made to the CSM predictions shown. No correction is needed for the CEM, as this is already an inclusive calculation. Feed-down contributions are expected to similarly contribute to Υ (2S) production, but no measurement or reliable prediction for the relative contribution exists, so we do not apply a correction to the direct Υ (2S) CSM predictions either. For Υ (3S) there is no feed-down from higher Υ states. Recently, the ATLAS experiment discovered [34] the existence of a state or states, interpreted as the χ bJ (3P), below the BB̄ threshold that are expected to have a significant branching fraction for radiative decays into Υ (3S) + γ and thus induce a feed-down contribution to Υ (3S) (and other Υ states). As the relative production and decay rates of these states are as yet unknown, no correction is applied to the CSM predictions for Υ (3S) production either.
For each of the three Υ (nS) states, the NNLO* CSM predictions (considering also the additional normalization uncertainty due to the poorly known contributions from feed-down) fit our data well in the moderate p Υ T region, but exhibit a steeper p T dependence than seen in data. The predictions therefore significantly underestimate the cross section at high p T . Theoretical developments in the prediction of feed-down contributions may improve this description. CEM predictions appear to show a better match with data at high p Υ T . These predictions underestimate the rate (favouring a smaller choice of renormalization/factorization scale or a larger b-quark mass) and have problems in modeling the shape of the spectrum, particularly at lower p T . Shape discrepancies cannot be accommodated within the uncertainties quoted, as changes in the scale choice introduce correlated changes in the prediction as a function of p T . Figures 15 to 17 show the variation of the production cross section as a function of absolute Υ rapidity, integrated across all p T . The dependence on rapidity is relatively flat in the interval of rapidities in the ATLAS acceptance. Variations between spin-alignment scenarios lead largely to a change in the normalization of the distributions, with little variation in shape except at high rapidity. There, the T ++ scenario leads to an increase in the cross section with increasing rapidity, while the fully longitudinal scenario leads to a drop in cross section at high rapidity (|y| ≳ 1.7).

C. Cross section ratios

From our differential cross-section measurements, we explore the p T and rapidity dependence of Υ (2S) and Υ (3S) production relative to the Υ (1S) by deriving the corresponding cross-section ratios for |y Υ | < 2.25, with n = 2, 3. Such observables are sensitive to the magnitude and kinematic dependencies of feed-down contributions between the three Υ states. Results of the differential cross-section ratio measurements are presented in Figs. 18 and 19. These measurements are made under the assumption of unpolarized Υ (nS) mesons, and take into account statistical correlations between the fitted numbers of Υ (1S), Υ (2S) and Υ (3S) mesons. Systematic uncertainties are estimated by varying acceptance, efficiency, and fit-model assumptions coherently in the numerator and denominator when calculating the ratios, thereby partially canceling uncertainties in the ratio. Luminosity uncertainties cancel entirely.

The measured Υ (nS)/Υ (1S) ratios are relatively constant in the 0 < p T < 5 GeV interval, at ∼ 20% and ∼ 7% for the Υ (2S) and Υ (3S), respectively. At higher p T a significant and steady rise in the relative production rates of higher Υ states is apparent, in agreement with measurements by CMS [6]. However, at the larger p Υ T values (above p Υ T of 30-40 GeV) accessible for the first time with these measurements, evidence of a saturation of this rise is apparent, suggesting that we are probing a regime where direct production dominates over contributions from the decays of excited states. In contrast, the rapidity dependence of these production ratios is quite flat across the full |y| < 2.25 rapidity interval.
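Written out (an illustrative form consistent with the description above), the ratios are

R_{n{\rm S}/1{\rm S}}(p_{\rm T}, y) \;=\; \frac{d^{2}\sigma(\Upsilon(n{\rm S}))/dp_{\rm T}\,dy \times {\rm Br}(\Upsilon(n{\rm S})\to\mu^{+}\mu^{-})}{d^{2}\sigma(\Upsilon(1{\rm S}))/dp_{\rm T}\,dy \times {\rm Br}(\Upsilon(1{\rm S})\to\mu^{+}\mu^{-})}, \qquad n = 2, 3,

evaluated for |y Υ | < 2.25.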
Higher-order color-singlet calculations are not currently able to predict the evolution of these production ratios in p Υ T due to the significant feed-down contributions. At leading order in the quark velocity in the perturbative expansion, the production ratio of the direct contributions is proportional to the ratio of the squares of the magnitudes of the wavefunction at the origin (or the partial decay widths), multiplied by the branching fractions to di-muons for each of the states in question. The predicted ratios are then 36% for Υ (2S)/Υ (1S) and 29% for Υ (3S)/Υ (1S) for direct production. At high p T , where the measured production ratio plateaus, the values are somewhat higher than these predictions, suggesting that there is some enhancement (with respect to the simple picture above) of the Υ (2S) and Υ (3S) production relative to Υ (1S) production.

We compare our differential cross-section results to predictions from two theoretical approaches describing Υ production. Our measurements find that both the NNLO* CSM and the CEM predictions have some problems in describing the normalization and shape of the differential spectra. In particular, NNLO* dramatically underestimates the rate at high transverse momenta, where the data tend to agree better with the CEM. The inclusion of P-wave feed-down contributions in the theoretical calculation may help to improve the description. Large scale uncertainties in these predictions allow possible contributions from color-octet terms to contribute to the production rate in addition to singlet diagrams. The differential production ratios indicate that the increase in the production of higher Υ states as a function of p Υ T relative to the Υ (1S) observed previously begins to saturate at 30-40 GeV. Above ∼ 40 GeV, the envelope of possible variations in the differential cross sections due to spin-alignment is reduced to below ±10%. This, along with the expected reduction in feed-down contributions, results in a relatively well-controlled region in which to study quarkonium production without the dominant experimental and theoretical effects that complicate such studies at lower p T .

FIG. 1: The di-muon invariant mass spectrum for events used in this analysis. Separate spectra are shown for those events with the di-muon candidate (left) in the central region of the detector (|y µµ | < 1.2) and (right) in the forward region (1.2 ≤ |y µµ | < 2.25). Overlaid are individual shapes of the fitted Υ (nS) signals (shaded regions), the background-only fit (dashed curve) and the total signal plus background shape (solid curve). The background shape is modelled here by a fourth-order polynomial and the signal peaks are each modelled by a single Gaussian. Also quoted are the fitted mass resolutions of each of the three signal peaks, determined from the fit with a common resolution parameter scaling with invariant mass.

FIG. 2: (Top) Acceptance as a function of Υ pT and rapidity for the Υ (1S) in the unpolarized acceptance scenario; (bottom) the ratio of largest to smallest acceptance correction between extreme spin-alignment scenarios.

FIG. 3: Efficiency and acceptance corrections contributing to the overall fiducial and corrected efficiencies (and the inverse of the efficiency, the weight) as a function of the Υ (1S) p µµ T in the central (top) and forward (middle) regions of the detector and as a function of |y µµ | (bottom).
FIG. 4: Fits to the efficiency-corrected m µµ spectra for candidates in the low (0.5-1.0 GeV; top), mid (20-21 GeV; middle), and high (40-45 GeV; bottom) p µµ T intervals, for central (left) and forward (right) di-muon rapidities. Fit results are shown for the simplest fit model considered: a single Gaussian signal plus either a second-order polynomial multiplied by an error function (low pT), a second-order polynomial (mid pT), or a first-order polynomial (high pT) background parameterization.

FIG. 5: Sources of statistical and systematic uncertainty on the corrected Υ (nS) production cross-section measurements in the central rapidity region. A common luminosity uncertainty of ±3.9% is not included.

FIG. 6: Sources of statistical and systematic uncertainty on the corrected Υ (nS) production cross-section measurements in the forward rapidity region. A common luminosity uncertainty of ±3.9% is not included.

FIG. 7: Sources of statistical and systematic uncertainty on the corrected Υ (nS) production cross-section measurements as a function of Υ rapidity. A common luminosity uncertainty of ±3.9% is not included.

FIG. 10: Differential cross sections multiplied by the di-muon branching fraction, for Υ (1S), Υ (2S) and Υ (3S) production extrapolated to the full phase space for the (top) |y Υ | < 1.2 and (bottom) 1.2 ≤ |y Υ | < 2.25 rapidity intervals. Points with error bars indicate the results of the measurements with total statistical and systematic errors. Results are shown assuming an isotropic spin-alignment scenario.

FIG. 11: Differential cross sections multiplied by the di-muon branching fraction, for Υ (1S), Υ (2S) and Υ (3S) production extrapolated to the full phase space, pT-integrated, as a function of absolute Υ rapidity. Points with error bars indicate the results of the measurements with total statistical and systematic errors. Results are shown assuming an isotropic spin-alignment scenario.

FIG. 12: Differential cross sections multiplied by the di-muon branching fraction, d 2 σ/dpTdy × Br(Υ → µ + µ − ), for Υ (1S) production extrapolated to the full phase space for (top) central and (bottom) forward rapidities. Points with error bars indicate results of the measurements with total (statistical and systematic) uncertainties. The maximal envelope of variation of the result due to spin-alignment uncertainty is indicated by the solid band. Also shown are predictions of direct production with the NNLO* Color Singlet Mechanism (CSM) and inclusive predictions from the Color Evaporation Model (CEM). These theory predictions are shown as a ratio to the data in the lower panes for CEM (middle) and CSM (bottom), along with detail of the variations of the cross-section measurement under the four anisotropic spin-alignment scenarios as a ratio to the nominal data.

FIG. 15: Differential cross sections multiplied by the di-muon branching fraction, dσ/dy × Br(Υ → µ + µ − ), for Υ (1S) production extrapolated to the full phase space. Points with error bars indicate results of the measurements with total (statistical and systematic) uncertainties. The maximal envelope of variation of the result due to spin-alignment uncertainty is indicated by the solid band. The variations of the cross-section measurement under the four anisotropic spin-alignment scenarios are shown in the lower pane as ratios to the unpolarized scenario.
FIG. 18: Ratios of differential Υ (2S)/Υ (1S) and Υ (3S)/Υ (1S) cross sections multiplied by the di-muon branching fractions as a function of di-muon rapidity. Points with error bars indicate results of the measurements with statistical uncertainties, while shaded areas correspond to total uncertainties on each point, including systematic effects but excluding spin-alignment effects.

FIG. 19: Ratios of differential Υ (2S)/Υ (1S) and Υ (3S)/Υ (1S) cross sections multiplied by the di-muon branching fractions versus Υ pT in the (top) central and (bottom) forward rapidity regions. Points with error bars indicate results of the measurements with statistical uncertainties, while shaded areas correspond to total uncertainties on each point, including systematic effects but excluding spin-alignment effects.

TABLE I: Summary of statistical and systematic uncertainties on the cross-section measurements. The systematic uncertainty due to the acceptance determination applies only to the corrected cross-section measurements and does not include possible variation of the result due to spin-alignment. Values quoted refer to the range of uncertainties over p µµ T in each of the y µµ regions considered.

TABLE II: Integrated fiducial cross-section measurements for Υ (nS). Uncertainties quoted represent statistical, systematic, and luminosity terms, respectively.

TABLE III: Corrected cross-section measurements in the isotropic spin-alignment scenario. Uncertainties quoted represent statistical, systematic, and luminosity terms, respectively.
11,586.4
2012-11-30T00:00:00.000
[ "Physics" ]
Stimulus–effect relations for left ventricular growth obtained with a simple multi-scale model: the influence of hemodynamic feedback

Cardiac growth is an important mechanism for the human body to respond to changes in blood flow demand. Being able to predict the development of chronic growth is clinically relevant, but so far models to predict growth have not reached consensus on the stimulus–effect relation. In a previously published study, we modeled cardiac and hemodynamic function through a lumped-parameter approach. We evaluated cardiac growth in response to valve disease using various stimulus–effect relations and observed an unphysiological decline in pump function. Here we extend that model with a model of hemodynamic feedback that maintains mean arterial pressure and cardiac output through adaptation of peripheral resistance and circulatory unstressed volume. With the combined model, we obtain stable growth and restoration of pump function for most growth laws. We conclude that a mixed combination of stress and strain stimuli to drive cardiac growth is most promising, since it (1) reproduces clinical observations on cardiac growth well, (2) requires only a small, clinically realistic adaptation of the properties of the circulatory system and (3) is robust in the sense that results were fairly insensitive to the exact choice of the chosen mechanics loading measure. This finding may be used to guide the choice of growth laws in more complex finite element models of cardiac growth, suitable for predicting the response to spatially varying changes in tissue load. Eventually, the current model may form a basis for a tool to predict patient-specific growth in response to spatially homogeneous changes in tissue load, since it is computationally inexpensive.

Introduction

The capability of the human body to maintain an adequate level of oxygen delivery to the organs is fundamental for survival. The body can rely on several complex mechanisms to achieve this goal. Cardiac growth is the main mechanism of response to chronic changes in blood flow demand, induced for example in the growing body. An in-depth review of the cardiovascular adaptations from fetus to adolescence can be found in Dallaire and Sarkola (2018). Cardiac growth, although essential, can evolve into a maladaptive process if the growth stimulus is severe or abruptly applied, leading to a pathological type of growth (Grossman 1980). A disease capable of altering either the preload or afterload of the cardiovascular system, like for instance any valve disease, can promote an abnormal type of growth. Left ventricular hypertrophy has been related to an adverse prognosis during long-term follow-ups, increasing the chance of mortality (Gosse 2005;Muiesan et al. 2004;Pierdomenico et al. 2011;Selmeryd et al. 2014;Spirito et al. 2000;Tuseth et al. 2010). Moreover, although cardiac growth phenotypes are well characterized (Dweck et al. 2012;Ganau et al. 1992;Rodrigues et al. 2016), the relation between the growth stimulus and the long-term effects on the cardiovascular system is still not completely clear. Being able to predict changes in left ventricular size and shape will not only increase the knowledge on cardiac growth, but might also help patient prognosis and guide the treatment of choice. So far several models of cardiac growth (Arts et al. 2005;Göktepe et al. 2010;Humphrey and Rajagopal 2002;Kerckhoffs et al. 2012b;Kroon et al.
2009;Lin and Taber 1995;Taber 1998) have been proposed, along with reviews on the state of the art (Bovendeerd 2012;Witzenburg and Holmes 2017); however, the nature of the growth stimulus is still under debate. In a recent paper (Rondanina and Bovendeerd 2020), we studied growth of the left ventricle (LV) using a simple multiscale model. We designed a growth law capable of coupling changes in tissue mechanical load, identified as growth stimuli, into a volumetric change, expressed by LV wall and cavity volume. We explored several choices and combinations of growth stimuli, both stress based and strain based, with the aim to investigate the stimulus-effect relation. We investigated growth in response to three cases of valve disease, aortic stenosis (AS), aortic regurgitation (AR) and mitral regurgitation (MR). Although we were able to achieve stable end growth states, in most cases we obtained a drastic decrease in cardiac output (CO) and mean arterial pressure (MAP) between 20 and 40%. Even though valve pathologies might decrease cardiac function (Goodman et al. 1974;Kamperidis et al. 2015), there is evidence that mean arterial pressure and cardiac output can be maintained at a normal level (Cowley Jr 1992;Guyton 1967;Kainuma et al. 2011;Lorsomradee et al. 2007). If we accept as healthy a cardiac index of about 2.9 l/min/m 2 (Ganau et al. 1992;Huang et al. 2011;Wisenbaugh et al. 1984) and a MAP of 100 mmHg (Remmen et al. 2005;Rongen et al. 1995), these values are often within the reported ranges for patients having AS (Lloyd et al. 2017;Rajani et al. 2010), AR (Greenberg et al. 1981;Röthlisberger et al. 1993) or MR (Kainuma et al. 2011). However, some studies report a clear decrease in the hemodynamic function (Goodman et al. 1974;Kamperidis et al. 2015;Martinez et al. 2012;Wisenbaugh et al. 1984). This might be caused by an incomplete hemodynamic feedback or by the incapability of the body to cope the disease severity. In this study, we aim to extend our previous model of cardiac growth with a hemodynamic feedback mechanism which acts upon the circulatory system in order to restore homeostatic levels of pressure and flow. Such mechanisms are known to act on the short term and the long term (Dampney et al. 2002;Hall 2015). Short-term regulation includes feedback processes which can be triggered rapidly, with baroreceptors (Kirchheim 1976), chemoreceptors (Guyenet and Koshiya 1995) and humoral responses (Goodwin et al. 1972;Hilton 1975). The baroreflex feedback is an important short-term mechanism, through which cardiac properties (contractility, heart rate) and vascular properties (peripheral resistance, venous tone) are adapted to maintain mean arterial pressure (Folkow 1978;Guyton 1981;Secomb and Pries 2011). Fluid exchange between the vascular and interstitial space, driven by hemodynamic and osmotic pressure, in combination with neurohumoral control of renal function is known to control vascular volume on the time scale of hours to days. On an even longer timescale, cardiac adaptation in terms of contractility is taken over by growth, while heart rate remains normal (Akinboboye et al. 2004;Ganau et al. 1992;Seldrum et al. 2018). Vascular adaptation is realized through persistent changes in stressed blood volume and systemic vascular resistance (Cowley Jr 1992;Guyton 1981;Jacobsohn et al. 1997;Secomb and Pries 2011). 
In line with the approach in our previous work (Rondanina and Bovendeerd 2020), we aim for a phenomenological description of the cardiovascular system adaptations on the long term. We follow previous studies which suggest how vasculature resistance and blood volume can be adapted to regulate the mean arterial pressure (MAP) (Cowley Jr 1992;Guyton 1981;Osborn 2005) and cardiac output (CO) (Guyton et al. 1955;Jacobsohn et al. 1997). CO is an important determinant of the amount of oxygen supplied to the vital organs, while the MAP is the driving force behind CO. Our model aims to recover the CO by updating the afterload of the system, represented by the systemic resistance, while MAP is restored with a change in the preload, described by the stressed blood volume. The scope of this study is to reevaluate the relation between a growth stimulus and its effects at organ and tissue levels in the presence of the hemodynamic feedback. As in our previous study, we test the model in case of three valve diseases: AS, AR and MR. We evaluate the obtained growth in terms of the left ventricular end diastolic volume index (EDVI), left ventricular mass index (MI) and relative wall thickness (RWT).

Methods

In this work, we extend the approach proposed in Rondanina and Bovendeerd (2020). More specifically, we extend the three submodels for left ventricular (LV) mechanics, systemic circulation and cardiac growth with a fourth submodel for hemodynamic feedback.

Left ventricular mechanics model

To describe left ventricular mechanics, we use the one-fiber model (Arts et al. 1991;Bovendeerd et al. 2006), which couples the mechanics at the organ level, identified by left ventricular cavity pressure p cav and cavity volume V cav , with the mechanics at tissue level, described by myofiber stress σ f and sarcomere length l s . In the corresponding relations, l s,0 is the sarcomere length at zero pressure, V cav,0 represents the unstressed cavity volume, V wall represents the wall volume and λ f is the fiber stretch ratio. Myofiber stress σ f consists of an active component, which depends on l s and the time elapsed after activation, and a passive component, which depends on λ f . A full description of the model can be found in Bovendeerd et al. (2006) and Rondanina and Bovendeerd (2020).

Systemic circulation model

The systemic circulation is described by a lumped parameter model (Fig. 1) which interacts with the LV mechanics model. The arteries (A) and the veins (V) are each modeled by a resistance R, a capacitor C and an inertance L in series, while the peripheral vessels are approximated by a single resistance. The pressure drop Δp over each resistance, capacitor and inertance is defined by the element relations of Eq. 2, where q is the flow through each segment (R and L) while V C and V C,0 are the stressed and unstressed volumes that a vessel can accommodate. According to Eqs. 2a and 2c, the arterial flow q A and the venous flow q V follow from the pressure differences between the LV and the arterial and venous compartments; here p LV is the LV cavity pressure, p A and p V are the arterial and venous pressure, R A and R V are the arterial and venous resistance, and L A and L V are the arterial and venous inertance, respectively. The aortic valve (AV) and mitral valve (MV) are approximated as diodes whose function is regulated by the parameters k AV and k MV . For a healthy AV, k AV is equal to 1 when p LV is higher than p A ; otherwise, it has a value of 10 6 . Similarly, for a healthy MV, k MV is equal to 1 when p V is higher than p LV ; otherwise, it has a value of 10 6 . The peripheral flow q P is described with Eq. 2a, with R P representing the resistance generated by all the peripheral vessels.
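As a guide to the structure of Eqs. 2-5, a plausible reconstruction consistent with the description above (in particular, the placement of the valve parameters k AV and k MV as multipliers of the series resistances is our assumption, not necessarily the exact published form) reads:

\Delta p_{R} = q\,R, \qquad V_{C} = V_{C,0} + C\,\Delta p_{C}, \qquad \Delta p_{L} = L\,\frac{{\rm d}q}{{\rm d}t},

L_{A}\frac{{\rm d}q_{A}}{{\rm d}t} = p_{LV} - p_{A} - k_{AV}\,R_{A}\,q_{A}, \qquad
L_{V}\frac{{\rm d}q_{V}}{{\rm d}t} = p_{V} - p_{LV} - k_{MV}\,R_{V}\,q_{V}, \qquad
q_{P} = \frac{p_{A} - p_{V}}{R_{P}}.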
Moreover, we compute the cardiac output (CO) as the average of q P over a complete cardiac cycle. Pressure levels in the model depend on the total stressed blood volume V sb , which identifies the amount of blood volume exceeding the sum of all unstressed blood volumes (Eq. 6), where the summation includes the zero-pressure volumes of the arteries (V A,0 ), veins (V V,0 ) and ventricle (V cav,0 ). Moreover, V sb is also related to the mean circulatory filling pressure p mc (Eq. 7), where we neglected the compliance of the LV. In turn, p mc is an important determinant of LV filling pressure, and hence of the LV filling volume. An increase in the filling volume will cause an increase in the sarcomere stretch, which in turn will increase the sarcomere active stress. With a higher active stress, the ventricle will develop a higher systolic pressure, which will eventually increase the MAP and CO.

Fig. 1 Lumped parameter model of the circulation, with mitral valve (MV), aortic valve (AV), venous and arterial resistance (R V and R A ), compliance (C V and C A ) and inertance (L V and L A ), peripheral resistance (R P ) and venous, arterial and peripheral flows (q V , q A , q P ). This model is coupled with the one-fiber model of left ventricular (LV) mechanics, from which we obtain myofiber stress (σ f ) and sarcomere length (l s ).

Growth model

Based on our previous work (Rondanina and Bovendeerd 2020), we define the growth stimulus to measure a difference in the sarcomere mechanics between the current state and the homeostatic state (hom). A generic growth stimulus S σ;ε is designed to be a function of a stress loading measure L σ or a strain loading measure L ε . We investigate two types of stress stimuli, based on the mean (Eq. 9a) and maximum stress (Eq. 9b) over the cardiac cycle of length T cyc . As strain stimuli, we consider the sarcomere strain amplitude (Eq. 10a) and the maximum strain (Eq. 10b). The growth stimulus is then converted into growth of the wall volume V wall and the unstressed cavity volume V cav,0 according to a growth law for stress-based stimuli (Eq. 11) and one for strain-based stimuli (Eq. 12), in which τ grw is the growth time constant. The sign in Eqs. 11 and 12 is related to the chosen L σ;ε and is defined such that any divergence from the homeostatic state of Eq. 8 is correctly balanced by a change in V wall and V cav,0 . The reader might refer to our previous manuscript (Rondanina and Bovendeerd 2020) for an in-depth discussion of this model. The combination of four growth stimuli (Eqs. 9-10) and two growth laws (Eqs. 11a and 12a for V wall , Eqs. 11b and 12b for V cav,0 ) results in sixteen possible combinations that can be evaluated, see Table 1. The four cases in which a strain stimulus drives both cavity and wall growth are labeled as 'strain-only' cases. Similarly, we identify four 'stress-only' cases. The remaining eight cases involve both stress and strain stimuli and are labeled as 'mixed' cases. As in our previous study (Rondanina and Bovendeerd 2020), we found that switching the stimuli for cavity and wall growth did not affect the final grown state. Hence, we evaluate only four cases.
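A minimal sketch of the structure of Eqs. 8-12, under the assumption that the stimulus is the relative deviation of a loading measure from its homeostatic value (the exact definitions and sign conventions are those of Rondanina and Bovendeerd 2020):

S_{\sigma;\varepsilon} = \frac{L_{\sigma;\varepsilon} - L_{\sigma;\varepsilon,{\rm hom}}}{L_{\sigma;\varepsilon,{\rm hom}}}, \qquad
L_{\sigma}^{\rm avg} = \frac{1}{T_{\rm cyc}}\int_{0}^{T_{\rm cyc}}\sigma_{f}\,{\rm d}t, \qquad
L_{\sigma}^{\rm max} = \max_{t}\,\sigma_{f},

\frac{{\rm d}V_{\rm wall}}{{\rm d}t} = \pm\,\frac{V_{\rm wall}}{\tau_{\rm grw}}\,S, \qquad
\frac{{\rm d}V_{{\rm cav},0}}{{\rm d}t} = \pm\,\frac{V_{{\rm cav},0}}{\tau_{\rm grw}}\,S,

with analogous strain measures L ε amp and L ε max built from the sarcomere length l s , and with the ± sign chosen per stimulus as described above.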
Hemodynamic feedback model

The hemodynamic feedback is designed to maintain mean arterial pressure (MAP) and cardiac output (CO) at the homeostatic level. To achieve this goal, the peripheral resistance (R P ) and the stressed blood volume (V sb ) are updated according to two differential equations (Eq. 13), with feedback time constant τ hem . The first equation simply expresses that, given a constant MAP, a drop in CO may be compensated for by a drop in R P (Eq. 5). The second equation is based on the Frank-Starling law: an increase in V sb will increase p mc (Eq. 7) and eventually it will increase the MAP and CO.

Parameter settings and simulations performed

Homeostatic state Settings of the model parameters are based on our previous work (Rondanina and Bovendeerd 2020) and are listed in Table 2. As a first step, we simulate a normal cardiac cycle, from which we extract homeostatic settings of the growth stimuli L σ,hom and L ε,hom (Eq. 8) and of the hemodynamic feedback control CO hom and MAP hom (Eq. 13). For all simulations, we consider the cardiac cycle (T cyc ) to last 800 ms.

Acute state Second, we introduce three types of valve disease as model perturbation. We simulate AS with a threefold increase of k AV during forward flow (p LV > p A ) (Eq. 3) (Roger et al. 1997). AR and MR are simulated by a decrease of k AV from 10 6 to 6, when p LV < p A (Eq. 3), and a decrease of k MV from 10 6 to 30, when p LV < p V (Eq. 4), to obtain a regurgitant fraction close to 0.6 (Kleaveland et al. 1988;Nakano et al. 1991;Wisenbaugh et al. 1984).

Growth and hemodynamic feedback The valve diseases bring the model into a new mechanical loading state in which the growth stimuli of Eq. 8 and the hemodynamic stimuli of Eq. 13 are no longer equal to zero. As a result, the cardiac volumes will change according to Eqs. 11 and 12 to restore the myocardial tissue load L σ and/or L ε according to the considered growth stimulus (Table 1). In the presence of hemodynamic feedback, the circulatory parameters R P and V sb will also change to recover the hemodynamic function, represented by CO hom and MAP hom , according to Eq. 13. We analyze our results for the case of growth only, indicated by G, and the combination of growth and hemodynamic feedback, indicated by GH. We assume that cardiac growth, since it requires a volumetric structural change, is a slower process than the hemodynamic feedback. For this reason, the constant τ grw is set to 32 ms and τ hem to 16 ms, making the hemodynamic feedback twice as fast as the cardiac growth.

Model evaluation We quantify cardiac growth with the LV end diastolic volume index (EDVI), the LV mass index (MI) and the relative wall thickness (RWT). The EDVI and MI are defined as the end diastolic volume (V max cav ) and the LV mass divided by the body surface area, which is set to 2 m 2 (Lang et al. 2015), while RWT is computed as the ratio between wall thickness and cavity radius, both at end diastole. Following the classification proposed by Gaasch and Zile (2011), we identify dilated configurations, having EDVI higher than 79 ml/m 2 , and hypertrophic cases, with MI higher than 105 g/m 2 . Moreover, we identify the geometry as eccentric if RWT is lower than 0.32, normal if RWT is between 0.32 and 0.42, and concentric if RWT is higher than 0.42.
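The indexes just defined can be sketched as below. This is an illustrative Python sketch; the spherical equivalent geometry used to convert volumes into a wall thickness and cavity radius, and the myocardial density value, are assumptions made for this example rather than quantities taken from the model description.

import math

RHO_MYO = 1.05   # g/ml, assumed myocardial tissue density
BSA = 2.0        # m^2, body surface area used in this study

def cardiac_indexes(v_cav_ed_ml, v_wall_ml):
    # EDVI [ml/m^2], MI [g/m^2] and RWT [-] at end diastole, with the
    # classification thresholds of Gaasch and Zile (2011) applied.
    edvi = v_cav_ed_ml / BSA
    mi = RHO_MYO * v_wall_ml / BSA
    r_cav = (3.0 * v_cav_ed_ml / (4.0 * math.pi)) ** (1.0 / 3.0)      # cavity radius (cm)
    r_out = (3.0 * (v_cav_ed_ml + v_wall_ml) / (4.0 * math.pi)) ** (1.0 / 3.0)
    rwt = (r_out - r_cav) / r_cav                                      # wall thickness / cavity radius
    geometry = "eccentric" if rwt < 0.32 else ("concentric" if rwt > 0.42 else "normal")
    return {"EDVI": edvi, "MI": mi, "RWT": rwt,
            "dilated": edvi > 79.0, "hypertrophic": mi > 105.0, "geometry": geometry}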
To evaluate the models, we compare simulation results with clinical data. Data obtained from Guzzetti et al. (2019), Seldrum et al. (2018) and Wisenbaugh et al. (1984) are presented as the mean and standard deviation of the cardiac indexes EDVI, MI and RWT, as shown in the left panels of Figs. 3, 5 and 7. Data from Barbieri et al. (2019a, b) are presented in terms of clinical occurrence; see the right panels of the same figures.

Results

As we adopted model parameter settings for the healthy state from Rondanina and Bovendeerd (2020), we find the same homeostatic state, identified by a cardiac output (CO hom ) of 5.2 l/min and a mean arterial pressure (MAP hom ) of 12.2 kPa. Maximum and minimum cavity volume (V max cav and V min cav ) are 154 ml and 87 ml, respectively, and the maximum LV pressure (p max cav ) is 18.2 kPa. These values lead to a homeostatic state characterized by local tissue loads L σ avg,hom of 19.2 kPa, L σ max,hom of 59.3 kPa, L ε amp,hom of 0.12 and L ε max,hom of 0.17.

Aortic stenosis

Acute state In the acute state, AS leads to a decrease in MAP and CO of around 20%, as shown in Fig. 2. Despite the decrease in MAP, p max cav is increased to 21.3 kPa, due to the increased pressure drop over the stenotic valve. At tissue level, this increase is reflected in a positive value of both stress stimuli. V max cav remains about the same at 163 ml, but V min cav increases to 108 ml, causing L ε max to remain close to zero, but L ε amp to decrease.

Growth only cases With growth only and no hemodynamic feedback, indicated by G in Fig. 2, the strain-only case 1-2, with L ε amp driving wall growth and L ε max driving cavity growth (see Table 1), displays a decrease of V wall towards zero and an unbounded increase of V cav,0 . For these volumes, the model of hemodynamics could not be solved. The other strain-only cases show stable growth, where the controlled strain measure is fully restored and the remaining stress and strain stimuli are decreased with respect to their values in the acute case. In the stress-only cases, model 4-3, with L σ max driving wall growth and L σ avg driving cavity growth, did not yield stable growth, mostly due to an unbounded increase of V cav,0 . The other cases show stable growth, where the controlled stress measure is fully restored and the remaining stress and strain stimuli are small. In the mixed cases, the controlled L σ and L ε are restored to their homeostatic levels, while the other stimuli tend to be reduced as well. LV wall volume decreases in most of the 10 stable cases, and the cavity volume decreases as well, except for the strain-only case 2-2.

Growth and hemodynamic feedback While local tissue load is restored in the stable growth only cases, according to the controlled stimulus, LV hemodynamic function is not. Adding hemodynamic feedback, as indicated by GH in Fig. 2, leads to restoration of hemodynamic function in all 10 stable cases, identified by S CO = S MAP = 0 (Fig. 2, right panel). The hemodynamic feedback does not solve the instabilities in the growth only models 1-2 and 4-3. For case 1-2, the influence of the hemodynamic feedback is not significant. For case 4-3, the change in hemodynamic parameters (peripheral resistance R P and stressed blood volume V sb ) allows us to simulate more growth steps, but both cavity and wall volume display unbounded growth eventually. In the strain-only case 1-1, restoration of hemodynamic function is achieved by large changes (more than 50%) in the hemodynamic parameters. For all the remaining cases, the changes are within 20%. Regarding the cardiac volumetric change (V wall and V cav,0 ), the strain-only case 2-2 converges at an increase of 300% for both volumes. For all the remaining cases, cavity volume decreases while wall volume increases, with changes being below 50%.
Eventually adding hemodynamic feedback tends to increase the non-controlled stress and strain stimuli in strain-driven growth. For the stress-driven and mixed cases, non-controlled stimuli remain fairly constant. Comparison with clinical data In Fig. 3, we compare model output with clinical data. The left panels show how clinical data are characterized by a decrease in end diastolic volume index EDVI and an increase in relative wall thickness RWT, while left ventricular mass index MI shows no significant change (Guzzetti et al. 2019). EDVI and MI are predicted fairly well in all simulations, except for the strain-based model 2-2. RWT is generally underestimated without hemodynamic feedback, but improves when adding it. Strain-based models 1-1 and 2-2 do not yield realistic results for RWT. The right side of Fig. 3 show clinical data on EDVI, MI and RWT in terms of prevalence in the patient population (Barbieri et al. 2019a). It shows that growth upon AS is most clearly apparent in MI and RWT, while not reflected at all in EDVI. Again observations on RWT are captured best by the stress-only and mixed models along with the strain only model for case 2-1, especially with the addition of hemodynamic feedback. Aortic regurgitation Acute State In the acute case, AR leads toward a decrease in MAP and CO around 20% (Fig. 4, right). The regurgitant valve causes an increase of V max cav to 180 ml, while V min cav decreases to 83 ml causing both strain stimuli to increase. The minor drop in p max cav to 17 kPa causes both stress stimuli to remain close to zero (Fig. 4, left panel). Growth only cases With strain-only feedback, the case 1-2 does not converge due to a decrease of V wall toward zero and an unbounded increase of V cav,0 . In the cases driven by one stimulus only (1-1 and 2-2), the non-controlled stimuli tend to increase. Case 2-1 causes all stimuli to approach zero. In the stress-only feedback, the case 4-3 does not converge due to a decrease of LV volumes toward zero. The remaining cases show stable growth, with the controlled stress measure fully recovered while the strain stimuli remain almost unchanged compared with the acute state. In the mixed cases, the controlled stimuli return to zero, while the others are close to zero. Strain-only and mixed cases have an increase in V wall and V cav,0 while with the stress-only cases we do not obtain significant changes, except with case 3-4 which is also characterized by an increased V wall and V cav,0 . In Fig. (4,right panel), we see how the strain-only case 2-1 and the mixed cases have a recovered hemodynamic function. In the other models, hemodynamic function is still decreased. Growth and hemodynamic feedback Hemodynamic function is restored in all stable growth cases upon adding hemodynamic feedback. The hemodynamic feedback however does not solve the instabilities in the growth only models 1-2 and 4-3, which are characterized by a similar divergence as observed for the growth only cases. In the cases where hemodynamic function was restored already in the growthonly cases, changes in circulatory parameters R P and V sb are about zero. Cases 2-2 and 3-4, that already showed improvement in hemodynamic function in the growth-only situation, require small changes in R P and V sb . The remaining cases 1-1, 3-3 and 4-4 require changes inR P and V sb of 15-30%. 
Regarding LV volumetric growth, we observe an increase in V wall and V cav,0 for all 10 stable cases, except for a decrease in V cav,0 obtained with the stress-only stimuli for cases 3-3 and 4-4.

Comparison with clinical data The left panel of Fig. 5 shows that clinical data are characterized by an increase in end diastolic volume index EDVI and left ventricular mass index MI, with a decrease of the relative wall thickness RWT (Seldrum et al. 2018;Wisenbaugh et al. 1984). In the strain-only models, these observations are best captured in case 1-1. In the stress-only models, adding hemodynamic feedback improves the results for MI but worsens those for RWT, while the EDVI remains almost unchanged. The mixed models show good agreement for EDVI, but overestimate MI and fail to predict the decrease in RWT. The right side of Fig. 5 shows clinical data on EDVI, MI and RWT in terms of prevalence in the patient population (Barbieri et al. 2019b). It shows that the results of all growth models agree with the clinical observations that EDVI and MI are increased, indicating dilated hypertrophic hearts. As RWT shows no significant clinical pattern, it cannot be used to judge the quality of the growth models.

Fig. 3 Aortic stenosis (AS) case for the acute state (Acute), the growth only cases (G) and the cases with growth and hemodynamic feedback (GH). Results are grouped by a strain stimulus only, a stress stimulus only, and a mixed stimulus of both stress and strain. For ease of notation, on the horizontal axis the four stimuli are denoted by: (1) sarcomere strain amplitude S ε amp , (2) maximum strain S ε max , (3) average sarcomere stress S σ avg and (4) maximum stress S σ max , see also Table 1. The figure shows the left ventricular end diastolic volume index (EDVI), mass index (MI) and relative wall thickness (RWT). On the three panels, patient data are presented as mean with standard deviation (Guzzetti et al. 2019), while on the right side patient data are represented as clinical occurrence in percentage (Barbieri et al. 2019a). The left ventricle is considered dilated if EDVI > 79 ml/m 2 , hypertrophic if MI > 105 g/m 2 , and with an eccentric geometry, with RWT < 0.32, normal geometry, with 0.32 ≤ RWT ≤ 0.42, and concentric geometry, with RWT > 0.42 (Gaasch and Zile 2011). The dashed lines identify the homeostatic level.

Mitral regurgitation

Acute state In the acute state, MR leads to a decrease in MAP and CO of about 20%, as shown in Fig. 6. The backflow through the mitral valve causes a decrease in p max cav to 15 kPa and in V min cav to 56 ml, causing negative stress stimuli and a positive L ε amp . Since V max cav remains approximately the same, L ε max is about zero.

Growth only cases The strain-only case 1-2 does not converge because of a steep increase of V wall . The other strain-only cases show stable growth, with the controlled strain measure fully recovered. The remaining strain and stress stimuli are close to the homeostatic level for cases 1-1 and 2-1, but remain unchanged for case 2-2. In the stress-only cases, model 4-3 did not yield stable growth due to a decrease of both volumes towards zero. The remaining cases show stable growth, with the controlled stress measure fully recovered and the remaining stress stimulus close to the homeostatic level, while all remaining strain stimuli are positive. Regarding the mixed cases, the controlled stimuli are fully recovered, with the remaining stimuli close to the homeostatic state.
With the stress-only models, we observe a general decrease of V wall and an increase of V cav,0 , while for the strain-only and mixed cases we find an increase of both volumes.

Growth and hemodynamic feedback The hemodynamic feedback, however, does not solve the instabilities in the growth only models 1-2 and 4-3, which are characterized by a similar change in volume as in the growth only cases. Restoration of MAP and CO through hemodynamic feedback (GH) requires changes in R P and V sb below 3% for the strain-only and mixed cases, and more pronounced changes in the stress-only cases, reaching up to almost 50% for cases 3-3 and 4-4. Adding hemodynamic feedback causes an increase of V wall in most converged cases, except for case 1-1. V cav,0 increases with cases 2-2 and 3-4, decreases with 3-3 and 4-4, and remains almost unchanged for the remaining cases.

Comparison with clinical data Clinical data in the left panels of Fig. 7 show an increase in end diastolic volume index EDVI and mass index MI, while the relative wall thickness RWT tends to decrease (Seldrum et al. 2018). The observations on EDVI are captured by all the 10 converged simulations. For MI, the addition of hemodynamic feedback helps only the stress models 3-3 and 3-4, while it causes an overestimation for the strain-only and mixed models. The right side of Fig. 7 shows clinical data on EDVI, MI and RWT in terms of prevalence in the patient population (Barbieri et al. 2019a, b). It shows that growth upon MR is most clearly apparent in MI and EDVI. For these indexes, adding the hemodynamic feedback improves the results.

Discussion

Cardiac growth is one of the mechanisms through which the heart responds to changes in preload and afterload. In a previous study (Rondanina and Bovendeerd 2020), we simulated growth in response to valve disease for several combinations of stress- and strain-based stimuli. In most cases, we observed a decrease in hemodynamic function, expressed in terms of mean arterial pressure (MAP) and cardiac output (CO), between 20 and 40%. In the current study, we evaluate the hypothesis that such a decrease is counteracted by an adaptive response of the circulatory system.

Considerations on the methods Hemodynamic regulation is a complex process which involves short- and long-term mechanisms to maintain blood supply, and consequently oxygen delivery, at an adequate level. It involves hormone synthesis along with the activity of the sympathetic nervous system (Cowley Jr 1992;Dampney et al. 2002;Guyenet 2006;Hall 2015). There is evidence in the literature that both MAP and CO are regulated by an adaptation of vasculature resistance R P and blood volume V sb (Cowley Jr 1992;Guyton 1967, 1981;Jacobsohn et al. 1997). In our hemodynamic regulation model, we indeed control MAP and CO through changes in R P and V sb , but we do not aim for a detailed description of the influence of the nervous system. Regarding the speed of growth and hemodynamic feedback (Eqs. 11 and 13), we reasoned that the body will react first to a change in hemodynamic load, with the hormonal and neural response causing vasodilation or vasoconstriction of the peripheral arteries, and hence a change in R P , or changes in renal function, affecting V sb . Cardiac growth would occur on a longer timescale in case of a persisting change in load.
Fig. 5 Results for the aortic regurgitation (AR) case, presented according to the format in Fig. 3. On the three panels, patient data are identified as mean with standard deviation, with data collected from Wisenbaugh et al. (1984) for the end diastolic volume index EDVI and the mass index MI, while Seldrum et al. (2018) is used for the relative wall thickness RWT. On the right side, patient data are represented as clinical occurrence in percentage (Barbieri et al. 2019b).

In line with this reasoning, our hemodynamic feedback constant is chosen smaller than the growth constant. The actual values are chosen in order to limit simulation times. Obviously, the real timescale would be much longer, presumably on the order of months. As shown in our previous work (Rondanina and Bovendeerd 2020), these constants might affect the time course of changes in circulatory and cardiac parameters, but they do not interfere with the final state of the model. We verified this by varying the ratio of the growth constant to the feedback constant over the range 1/16 to 16. We employ a phenomenological growth law, which is common in many growth models (Witzenburg and Holmes 2017). Such models assume that fiber stress or strain (or both) can be sensed by cardiomyocytes, and that these cells respond by growth along or perpendicular to the fiber direction. They do not address the actual processes at the (sub-)cellular level. The simplification at this level makes it computationally feasible to evaluate the effect of growth at organ level and to even include adaptation of the circulatory system. In comparison with finite element (FE) models, our model lacks the ability to describe spatially varying growth in response to spatially varying changes in myocardial load, as induced for example by myocardial infarction or conduction disorders. As an advantage, we avoid the numerical problems that may arise in FE models, typically related to distortion of elements during growth or uncertainty on boundary conditions (van Osta et al. 2019). Thus we are better able to test the intrinsic stability of a potential growth law. In addition, the computational load of our model is orders of magnitude less than that of FE models, allowing a quick evaluation of different types of growth laws and offering more potential for eventual use in the clinic. It has not yet been established which stimulus is most representative for cardiac growth. In the literature, several models have been proposed with a growth law based on a single stimulus (Kroon et al. 2009) or on multiple stimuli (Arts et al. 1994, 2005; Kerckhoffs et al. 2012b; Taber 1998). In general, these stimuli are either stress-based or strain-based (Bovendeerd 2012; Witzenburg and Holmes 2017), although a mixed stress-strain stimulus has been used as well (Taber and Chabert 2002). Often growth is driven by a stress stimulus upon pressure overload and a strain stimulus during volume overload (Göktepe et al. 2010). In our model, too, we investigate a mixed stress-strain stimulus. We note that stress and strain are linked through constitutive equations, but that the equation for active stress is time dependent. Hence, a full recovery of stress or strain to the homeostatic state does not necessarily imply a recovery of its counterpart; in particular, a complete recovery of the strain level does not necessarily mean a recovery of the stress level.

Considerations on the results
The growth-only cases in general cause a decrease in hemodynamic function identical to the one found in our previous study (Rondanina and Bovendeerd 2020).
The addition of the hemodynamic feedback caused hemodynamic function to be restored to its homeostatic level in all 10 stable stimulus combinations out of the 12 combinations tested. To assess whether the changes in R_P and V_sb are realistic, we first address clinical observations. The reported range of R_P for control cases is between 134.6 ± 29.9 kPa ms/ml and 169.5 ± 34.5 kPa ms/ml (Ganau et al. 1992; Huang et al. 2011; Remmen et al. 2005). For AS, it is between 118.2 ± 14.3 kPa ms/ml and 194.2 ± 60.3 kPa ms/ml (Friedrich et al. 1994; Lloyd et al. 2017; Rajani et al. 2010). For AR, it is between 126.4 ± 11.2 kPa ms/ml and 169.5 ± 29.8 kPa ms/ml. Finally, for MR it is between 147.0 ± 31.0 kPa ms/ml and 159.0 ± 34.0 kPa ms/ml. These data suggest that R_P stays within the normal range for the various valve pathologies. As an indicator of V_sb, we can use the mean circulatory pressure p_mc (Eq. 7), which has a normal value of 2.93 ± 1.07 kPa (Lorsomradee et al. 2007). In this case, we observe a general increase of p_mc for AS (Carroll et al. 1992; Lloyd et al. 2017; Martinez et al. 2012) and MR (Kainuma et al. 2011), with values from 2.93 ± 0.93 kPa to 5.33 ± 1.33 kPa, but not for AR (Greenberg et al. 1981; Lorsomradee et al. 2007), where values span from 2.53 ± 0.53 kPa to 2.93 ± 0.67 kPa. With respect to the change in cardiac indexes EDVI, MI and RWT, we note that the clinical data considered for Figs. 3, 5 and 7 are in general agreement. Differences occur with respect to EDVI and MI for AS, as well as RWT for MR and AR. These differences might be caused by differences between the underlying patient populations, which led to secondary effects.

Fig. 7 Results for the mitral regurgitation (MR) case, presented according to the format in Fig. 3. On the three panels, patient data are identified as mean with standard deviation, with data collected from Wisenbaugh et al. (1984) for the end diastolic volume index EDVI and the mass index MI, while Seldrum et al. (2018) is used for the relative wall thickness RWT. On the right side, patient data are represented as clinical occurrence in percentage (Barbieri et al. 2019a, b).

Due to a lack of clinical occurrence data for severe MR, in Fig. 7 both Barbieri et al. (2019a) and Barbieri et al. (2019b) are considered. The resulting clinical occurrence refers to moderate MR cases in the presence of a severe AS or AR. With the strain-based growth laws, case 2-1 performed best. In line with experimental observations, changes in R_P and V_sb are small. The cardiac indexes EDVI, MI and RWT are predicted well, except for an overestimation of MI in MR. Case 2-2 yields small changes in R_P and V_sb as well, but EDVI and MI are overestimated in both AS and AR. Case 1-1 requires unrealistically large changes in R_P and V_sb in AS and AR, whereas MI is severely overestimated in MR. Finally, case 1-2 did not converge at all. For the stress-based growth laws, case 3-4 performed best. Changes in R_P and V_sb are small and the cardiac indexes are predicted well, except for a large RWT in AR. Cases 3-3 and 4-4 show unrealistic changes in R_P and V_sb during AR and MR. Finally, case 4-3 did not converge at all. For the mixed stress-strain cases, we first note that the final state for the LV and the circulation is independent of the way the growth stimuli are applied, as was also observed in our previous study. The results of all mixed simulations are similar. Changes in R_P and V_sb are small, in line with experimental observations.
Also changes in cardiac indexes match experimental observations, except for an overestimation of RWT and MI in AR, and an overestimation of MI in MR. In this respect, adding hemodynamic feedback improved prediction of RWT in AS, but worsened prediction of MI in MR. Still, the overall affect of adding hemodynamic feedback in the mixed models is positive, as it restores hemodynamic function to normal, physiologically realistic levels, in particular in the AS and MR scenarios. Since the mixed models are less dependent on the precise nature of the stimulus and because the true nature of the growth stimulus is not known yet, we think that these models are most promising for future research. We note that the comparison of model results with clinical data is not trivial. The amount of change in cardiac indexes and hemodynamic parameters obviously depends on the severity of the disease. We model the AS through a threefold increase of aortic resistance, while AR and MR are characterized by a regurgitant fraction close to 0.6. We verified that a different level of severity did not affect the type of hypertrophy, even though it leads toward a different ending state. While the isolated perturbation in the model facilitates our analysis, at the same time it might not be representative for real clinical cases, where the valve disease might progress and secondary pathologies might play a role. Comparison with other models In the literature, the majority of the studies on modeling growth focus on LV geometry but pay less attention to the circulation. Arts et al. (2005) proposed a model of hemodynamic control in which the blood volume and the peripheral pulmonary resistance were adapted to simulate pressure control. Moreover, the geometry of the vessels was also changed to sustain changes in blood flow. Later, Kerckhoffs et al. (2012a) adopted this model to simulate a left bundle branch block, in which also the cardiac output was regulated by peripheral resistance. Along with these parameters, other candidates for the hemodynamic feedback are the arterial and venous compliance ( C A and C V ), the LV elastance and the heart rate (Beard et al. 2013;Witzenburg and Holmes 2019). Regarding the heart rate, we maintain this parameter constant. We hypothesize that a change of heart rate might be interpreted as an incomplete hemodynamic adaptation rather than a direct consequence of the studied disease. Moreover, in literature we did not find any significant correlation between heart rate and valve disease (Akinboboye et al. 2004;Seldrum et al. 2018). Eventually updates in C A and C V affect cardiac function in a similar manner as an update in V sb : They change the mean circulatory filling pressure (Eq. 7) and affect cardiac function through the Frank-Starling effect. Our analysis is similar to the one proposed by Witzenburg and Holmes (2018) for AS and MR. These authors also combined lumped parameters models of left ventricular and circulatory mechanics with a phenomenologic growth law. They fitted circulatory and growth law parameters to match results from hemodynamic overload studies in dogs and tested to what extent the resulting model predicted growth in independent studies of hemodynamic overload. 
They describe LV mechanics with a time-varying elastance model, that does not allow for an easy relation between constitutive properties at organ level (describing pressure-volume relations through compartmental parameters 'A,' 'B,' 'E' and ' V 0 ') and tissue level (describing stress-strain relations through material parameters 'a, ' 'b' and 'e'). This relation occurs more naturally in the onefiber model that we use in our study, as shown in Eq. 1a. Consequently, growth-induced changes in cavity and wall volume are also reflected in the LV pressure-volume behavior more naturally. This model also enables computation of local tissue load, with the limitation that fiber stress and strain should be considered as representative spatially averaged values. Hence, it is possible to establish a natural stimulus-effect relation, from tissue load to change in cardiac size. Considering the circulatory system, Witzenburg and Holmes (2018) match acute hemodynamic data from the experiments and prescribe the evolution of resistance R P and the degree of mitral valve regurgitation. In our approach, we prescribe a constant valve pathology and adapt R P and V sb according to our hemodynamic feedback model. Interestingly, Witzenburg and Holmes (2018) find that matching acute changes in hemodynamics is more important than matching the subsequent evolution, suggesting that this evolution involves minor changes as compared to the acute changes. This observation matches with clinical data and supports our selection of the most promising models on the basis of minor changes in R P and V sb . Considering the growth law, Witzenburg and Holmes (2018) investigate one option, considered most promising in an earlier study (Witzenburg and Holmes 2017). In this model, an increase in maximum circumferential strain results in an increase in cavity volume and an increase in maximum radial strain results in an increase in wall volume. In our model, we do not consider maximum radial strain, or its surrogate, minimum fiber strain. The option resembling the one in Witzenburg and Holmes (2018) best would be the strain-based model 2-1 with maximum fiber strain driving wall growth and strain amplitude driving cavity growth. Indeed, we find that this model performs well in the case of AS and MR, investigated by Witzenburg and Holmes (2018). However, our models with a mixed stimulus perform equally well. This confirms the more general conclusion of Witzenburg and Holmes (2017), that the most promising growth laws employ multiple inputs. Limitations and outlook An important limitation of our study is that we considered two strain stimuli and two stress stimuli only. It would be interesting to extend the analysis to more stimuli. For example, minimum sarcomere length could be used as an alternative strain stimulus, to enable better comparison with the study of Witzenburg and Holmes (2018). Our analysis could also be extended to other cardiac conditions, for example the growth of the athlete's heart where presumably cardiac growth occurs homogeneously throughout the wall. As addressed above, to assess growth in conditions that involve spatially varying changes in tissue load, the step towards a finite element model should be made. The findings of our current study might be used to guide the choice of the growth model in the finite element model. 
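Returning briefly to the one-fiber relation mentioned above (Eq. 1a is not reproduced in this excerpt): a commonly used form of the one-fiber model, in the spirit of Arts and co-workers, relates LV cavity pressure to a representative fiber stress purely through the cavity and wall volumes, for example

p_cav = (sigma_f / 3) * ln(1 + V_wall / V_cav),

or in LaTeX notation

\[
  p_{\mathrm{cav}} \;=\; \frac{\sigma_f}{3}\,\ln\!\left(1 + \frac{V_{\mathrm{wall}}}{V_{\mathrm{cav}}}\right).
\]

Whether this is the exact form used as Eq. 1a in the paper cannot be verified from this excerpt, but a relation of this type makes explicit how growth-induced changes in V_wall and V_cav feed directly into the pressure-volume behaviour; for thin walls it reduces to a Laplace-like proportionality between sigma_f and p_cav V_cav / V_wall.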
Finally, the current model may already form a basis for a tool to predict patient-specific growth in response to spatially homogeneous changes in tissue load, since it is computationally inexpensive. As a first step towards this goal, the model should be tested on its ability to predict growth in individual rather than generic cases, similar to the approach followed by Witzenburg and Holmes (2019). Finally, we focused on growth models that resulted in a stable ending state. While a stable state may be expected to exist clinically for minor valve pathologies, it is unclear whether it would exist for the degree of valve dysfunction used in our simulations. Such data are unavailable since, in the clinical case, potential unbound growth would probably be prevented by valve replacement. Despite these considerations we think the proposed analysis still offers valuable points of reflection. Conclusion We investigated cardiac growth and circulatory adaptation in response to three valve diseases (aortic stenosis, aortic regurgitation and mitral regurgitation). We integrated a lumped multiscale model of LV mechanics and a lumped model of circulatory hemodynamics with a model for tissue growth and hemodynamic feedback. Our study shows the importance of coupling growth with hemodynamic feedback. With our model, we succeeded in restoring the homeostatic state at circulatory level, characterized by pressure and flow, and at tissue level, expressed in various combinations of stress and strain. The results obtained by using a combination of stress and strain stimuli to drive cardiac growth (1) matched clinical observations on cardiac growth well, (2) required only a small, clinically realistic adaptation of the properties of the circulatory system and (3) were fairly insensitive to the exact choice of the chosen mechanics loading measure. Thus, this study suggests to model cardiac growth using a mixed stress-strain stimulus as input, to maintain homoeostatic tissue load, in combination with a model of hemodynamic feedback to maintain cardiac pump function.
11,029.2
2020-05-01T00:00:00.000
[ "Medicine", "Engineering" ]
Non-cooperative and cooperative solutions of government subsidy on public transportation

The paper deals with two models of government subsidy given to a public transport operator: (i) a subsidy for buying buses from an appointed public transport manufacturer, and (ii) a subsidy for reimbursing a reduced ticket price for passengers. The models are developed to determine the maximum profit for both the public transport operator and the manufacturer. Since we consider two parties – the public transport operator and the manufacturer of the buses – we use a game-theoretic approach and consider non-cooperative and cooperative solutions. Furthermore, since the bus is repairable, we consider virtual age to model preventive maintenance and minimal repair to model corrective maintenance. We analyse both types of subsidy model and give some numerical examples which show the effects of the different subsidies on the profit of the operator and the manufacturer. The results of the numerical examples indicate that reducing the ticket price would give a higher profit both to the operator and the manufacturer.

Introduction
The excessive use of natural resources is becoming prevalent nowadays. It is reported that, due to the over-use and over-consumption of the planet's natural resources, the future sustainability of these vital resources is in question. The Living Planet report stated that human consumption of natural resources is far above the capability of the Earth to replenish them: it is more than 30% higher than the Earth can replenish each year. This undoubtedly leads to deforestation, degraded soils, polluted air and water, the collapse of many commercial fisheries and the extinction of other species (The Guardian, Oct 2008). The world mostly agrees that something needs to be done to resolve this problem. The 12th goal of the UNDP Sustainable Development Goals agenda points out that "[a]chieving economic growth and sustainable development requires that we urgently reduce our ecological footprint by changing the way we produce and consume goods and resources" (UNDP, 2017). A large share of energy consumption goes to the transportation sector. Much of the unwanted consumption in transportation is due to severe traffic congestion. In many urban areas, severe traffic congestion has been blamed for causing air pollution. It is also regarded as hindering economic activity to some extent and wasting energy. According to a report of Indonesia's transportation ministry, vehicles contribute 70% of the air pollutants in Jakarta. In 2009 alone there were 9.9 million vehicles in Jakarta, which made air pollution in the city even worse. In order to reduce traffic congestion, the Indonesian government has arranged a subsidy policy in the public transportation sector. In general, there should be an optimal subsidy scheme to reduce emissions in urban areas. Some methods for designing optimal subsidy schemes in complex urban areas are readily available (Qin and Zhang, 2015).
A subsidy is often defined as a form of financial aid or support extended to an economic sector with the aim of promoting economic and social policy [1]. The economic sector could be an institution, business, or individual; and the form of the subsidy could be direct, such as cash grants and interest-free loans, or indirect, such as tax breaks, insurance, low-interest loans, accelerated depreciation, rent rebates, and reimbursement [2,3]. A subsidy is given by a government to an economic sector, or even to a public service institution, whenever there are foreseeable benefits generated by that sector or institution, directly or indirectly. The benefits might be economic, health-related, environmental, or of any other recognized kind. The American Public Transportation Association explored the health impacts of public transportation and found six benefits compared to the use of individual vehicles: it makes users more physically active, it is safer than individual vehicles, it reduces stress, it keeps the air cleaner, it saves users money, and it provides access to essential needs later in life (http://transloc.com/6-health-benefits-of-publictransportation/). Considering the many benefits of public transportation, the Indonesian government has supported public bus services in several cities, including Makassar (liputan6.com, 2016). This has been done to reduce the use of private or individual vehicles as the transportation mode in the cities. The reduction of the use of private or individual vehicles is expected to reduce the severe level of traffic congestion. Early literature in public transport research has shown that the lack of an efficient pricing scheme is one of the factors responsible for severe traffic congestion in urban areas (Jackson, 1975). Hence, another way to increase the number of public transport passengers is to design an efficient pricing scheme, e.g. by reducing ticket prices. The rationale is that a lower public transport ticket price will discourage the use of private and individual vehicles, and hence favour the use of public transport (Parry and Small, 2009). Levinson and King (2013) pointed out that subsidy should be considered in two ways – capital subsidy and operating subsidy. They argued that these two are related, although different enough that they should be considered separately. Providing assets (such as giving new buses) is the former form of subsidy, while reimbursing a reduced ticket price to the operator is regarded as the latter form. In line with this argument, the present paper deals with models of government subsidy given to Damri with the aim of increasing the use of buses as the main mode of transportation. Two subsidy models will be studied, i.e. a subsidy given to Damri for buying buses and a subsidy given to Damri for reimbursing the reduced ticket price. Considering a leader-follower relationship between Damri and the manufacturer of the buses, we consider non-cooperative and cooperative solutions in order to maximize profit. In this scheme, Damri acts as the leader, which has the first opportunity in devising a policy to maximize profit. The manufacturer, as a follower, maximizes profit based on the policy that Damri has chosen. This paper is organized as follows. Section 2 gives the model formulation and Section 3 derives the solution and analyzes the model. Section 4 gives a numerical example to see which model is better, and Section 5 concludes.
Mathematical Model
The following notation will be used in the model formulation:
q : number of passengers per bus per year
Q : total number of passengers per year
n : bus demand
PM : preventive maintenance
CM : corrective maintenance
K : bus operating period (years)
N : number of PM actions per bus
δ : degree of repair
λ0(t) : failure rate without PM
λ(t) : failure rate after PM
p : ticket price per passenger
Cm : expected total cost of CM
Cf : cost per CM
Cp : expected total cost of PM
Cr : manufacturer's production cost
τ : PM period
u : subsidy amount per year
w : bus price
Ψd : operator's profit
Ψm : manufacturer's profit
Ψ : total profit

To formulate the model we assume that the wholesale bus purchase price is determined by the manufacturer and the ticket price is determined by Damri. It is also assumed that every failed item of the bus needs only minimal repair, so that the repair time is negligible. In the following sections we define the operator revenue model, the operator expenses model, the PM and CM cost models, and other related concepts needed subsequently.

Operator Revenue Model
Under ceteris paribus conditions, the law of demand states that if the product price increases then demand decreases, and if the product price decreases then demand increases. In this case, if the number of passengers per year is q and the ticket price is p, then assuming a linear relationship gives a demand function in which q decreases linearly with p. The number of buses per year, n, is obtained by dividing q by the bus capacity m, so that n = q/m. Keeping the number of passengers per year constant, the total number of passengers follows, and Damri's revenue Rd is obtained by multiplying the total number of passengers by the ticket price. In this paper, we use two government subsidy models: first, a subsidy for buying buses from the manufacturer, and second, a subsidy for reducing the ticket price. For simplicity, we use the index 1 for the first model and the index 2 for the second model. For the first model, the subsidy does not influence the demand function. For the second model, the subsidy amount u does influence the demand function, since demand depends on the effectively reduced ticket price. Thus, based on equation (3), we obtain the revenue for both models, where the first row is for the first model and the second row is for the second model; the number of buses for the first and second models is likewise given by the first and second rows of the corresponding formula.

Operator Expenses Model
Damri's main expenses are the cost of preventive maintenance (PM), the cost of corrective maintenance (CM), and the cost of purchasing the buses. Maintenance is intended to reduce the failure intensity. In this paper, we use a two-parameter Weibull failure intensity function λ0(t) with scale parameter α = (2/θ)^(1/2) and shape parameter β = 2; thus the baseline failure intensity is linear in time, λ0(t) = θ t. The following subsections derive the cost functions for PM and CM, respectively.

Preventive Maintenance Cost Model
Let the public transport operator undertake N PM actions in a K-year period; the time interval between two PM actions, τ, then follows from dividing the operating period evenly over the PM schedule. After N PM actions, the failure intensity function λ(t) of the bus is obtained from the baseline intensity by reducing the accumulated (virtual) age according to the degree of repair δ.

Corrective Maintenance Cost Model
While operating, a bus may fail at a random time. When failures occur, the bus needs to be repaired. Every failure is assumed to be minimally repaired, so that the failure intensity immediately after a repair is the same as just before the failure happened.
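Because the displayed maintenance formulas did not survive extraction, the following sketch shows how the cost ingredients described above could be computed under explicitly assumed forms: a linear baseline intensity λ0(t) = θ t (the Weibull case with β = 2 derived above), an evenly spaced PM schedule τ = K/(N+1), a virtual-age rescaling by the degree of repair δ at each PM, and an expected CM cost equal to Cf times the expected number of minimal repairs (the integral of the intensity). The interval formula, the virtual-age rule and all numbers are assumptions for illustration, not the paper's equations.

```python
def expected_cm_cost(theta, K, N, delta, C_f):
    """Expected corrective-maintenance cost over a K-year horizon with N PMs.

    Assumptions (illustrative, not taken from the paper):
      * baseline failure intensity lambda0(t) = theta * t  (Weibull, beta = 2)
      * PMs are evenly spaced: tau = K / (N + 1)
      * each PM rescales the virtual age v -> delta * v  (delta = 0: perfect repair,
        delta = 1: PM has no effect)
      * failures between PMs are minimally repaired at cost C_f each
    """
    tau = K / (N + 1)
    v = 0.0                       # virtual age right after the latest PM
    expected_failures = 0.0
    for _ in range(N + 1):        # N + 1 operating intervals of length tau
        # integral of theta * (v + s) ds over s in [0, tau]
        expected_failures += theta * (v * tau + 0.5 * tau * tau)
        v = delta * (v + tau)     # PM reduces the accumulated virtual age
    return C_f * expected_failures

def expected_pm_cost(N, cost_per_pm):
    """Total PM cost: N actions at an assumed fixed cost per PM."""
    return N * cost_per_pm

# Example with made-up numbers: theta = 0.4 failures/year^2, 5-year horizon,
# 4 PMs, delta = 0.3, repair cost 2.0, PM cost 1.5 per action.
print(expected_cm_cost(theta=0.4, K=5, N=4, delta=0.3, C_f=2.0))  # ~3.23
print(expected_pm_cost(N=4, cost_per_pm=1.5))                      # 6.0
```

In this setup, a lower δ (closer to a perfect repair) shrinks the expected number of failures and hence the CM cost, at the price of whatever the PM actions cost, which is exactly the trade-off the profit functions below balance.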
If the cost of each corrective repair is Cf, then the expected total cost of CM over the K-year operating period, Cm, follows from multiplying Cf by the expected number of failures, i.e. the integral of the failure intensity over the operating period.

Bus Purchase Price
Another expense for the public transport operator (Damri) is the bus purchase price w. For the first subsidy model, Damri receives a government subsidy for purchasing the buses, so that Damri only needs to pay a reduced price w1 per bus. For the second model, Damri pays the full price w2.

Operator Profit Function
The profit function is the difference between the revenue (4) and the expenses for PM (6), CM (7), and bus purchases. Damri's profit function for the first subsidy model is given by equation (8), and analogously for the second model.

Manufacturer Profit Function
Given the production cost per bus, the manufacturer's profit function for the first model follows as the revenue from bus sales minus the production cost (9), and analogously for the second model.

The Optimal Solution

Non-Cooperative Solution
In the non-cooperative solution, the manufacturer acts as a leader and sets its decision policy first. Damri then acts as a follower and determines its profit-maximizing policy based on the manufacturer's policy. In the first subsidy model, we determine the ticket price p1 that maximizes the profit function (8) by differentiating, which yields (12). We substitute (12) into (9) to obtain (13). To get the manufacturer's maximum profit, we determine w1 such that the derivative of (13) vanishes, giving (14). Next we substitute (14) into (13) to obtain the manufacturer's maximum profit. Damri's optimal ticket price is obtained by substituting (14) into (12), giving (15); substituting (15) into (8) then yields Damri's maximum profit. Analogously, we can find the maximum profits of Damri and the manufacturer for the second model.

Cooperative Solution
In the cooperative solution, we consider the sum of Damri's and the manufacturer's profit functions. For the first subsidy model, the total profit is given by (16); the maximum total profit is obtained by differentiating this sum with respect to p1, which yields (17). By substituting (17) into (16), the maximum total profit of Damri and the manufacturer is obtained. Analogously, the cooperative solution for the second subsidy model can be derived. It is straightforward to obtain the following propositions for these maximum profits. To illustrate the propositions we give the following numerical example.

Numerical Example
As an example, we take data for the number of passengers per year q, the ticket price p, and the subsidy amount per year u as in Table 1. From the examples, we can clearly see that the profit from the cooperative solution is higher than from the non-cooperative one. In the cooperative solution, as the subsidy gets bigger, the first model becomes preferable to the second one.

Conclusion
We have studied two mathematical models of government subsidy to public transportation. In this case we applied the theory to the Indonesian bus public transport agency, i.e.
the DAMRI. The subsidy is intended to increase people's interest in using the bus as their main mode of transportation. This is done as a government attempt to reduce the severity of traffic congestion, which has a damaging effect on the environment. We analysed two different kinds of subsidy: (i) a subsidy for purchasing buses from an appointed public transport manufacturer, and (ii) a subsidy for reimbursing the reduced ticket price for passengers. Numerical examples show that: a) PM reduces the number of failures, which keeps the buses in service for a longer time; b) the cooperative solution gives a higher profit for both the public transport operator and the manufacturer; c) from the manufacturer's point of view, the government subsidy for purchasing buses is better; d) from the public transport operator's point of view, the government subsidy for reimbursing the reduced ticket price for passengers is better.

Proposition 1: In both subsidy models, the cooperative solution gives a higher total profit of Damri and the manufacturer than the non-cooperative solution.
Proposition 2: The degree of repair and the number of PM actions that give the optimum profit satisfy the corresponding optimality conditions, with N an integer; the value δ = 0 means that Damri performs a perfect-repair PM.

The following figures show the results for 3 and 5 years of bus operation, with D1/D2 indicating the operator's profit for subsidy models 1/2 and M1/M2 indicating the manufacturer's profit for subsidy models 1/2.

H.06.1.401516/2017 and A.K.S. acknowledges support from Universitas Padjadjaran for funding parts of this work through the Academic Leadership Grant (ALG) scheme with contract number 855/UN6.3.1/PL/2017.

Table 1. Data for the number of passengers per year q, the ticket price p, and the subsidy amount per year u.
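As a rough numerical illustration of the non-cooperative versus cooperative comparison discussed above, the sketch below re-derives the two solutions for an assumed linear demand q(p) = a − b·p, with the manufacturer acting as the Stackelberg leader as in the solution section. Maintenance costs, subsidies and the actual data of Table 1 are omitted or invented; none of the formulas or numbers here are taken from the paper.

```python
def demand(p, a=1000.0, b=40.0):
    return max(a - b * p, 0.0)               # assumed linear demand

def operator_profit(p, w, a=1000.0, b=40.0, m=50.0):
    q = demand(p, a, b)
    return q * p - (q / m) * w               # fare revenue minus bus purchases (q/m buses)

def manufacturer_profit(p, w, a=1000.0, b=40.0, m=50.0, c=300.0):
    return (demand(p, a, b) / m) * (w - c)   # margin on each bus sold

def best_response_price(w, a=1000.0, b=40.0, m=50.0):
    return 0.5 * (a / b + w / m)             # maximiser of operator_profit for a given w

def non_cooperative(a=1000.0, b=40.0, m=50.0, c=300.0):
    """Manufacturer (leader) scans its wholesale price, anticipating Damri's
    best-response ticket price, and keeps the most profitable choice."""
    best = None
    w = c
    while w <= 2000.0:
        p = best_response_price(w, a, b, m)
        pm = manufacturer_profit(p, w, a, b, m, c)
        if best is None or pm > best[0]:
            best = (pm, operator_profit(p, w, a, b, m), w, p)
        w += 1.0
    return best

def cooperative(a=1000.0, b=40.0, m=50.0, c=300.0):
    """Joint profit q(p) * (p - c/m) is maximised directly over the ticket price."""
    p = 0.5 * (a / b + c / m)
    return demand(p, a, b) * (p - c / m), p

pm, pd, w_star, p_star = non_cooperative()
total_coop, p_coop = cooperative()
print("non-cooperative total profit:", round(pm + pd, 1))    # ~2707.5 with these inputs
print("cooperative total profit:    ", round(total_coop, 1))  # ~3610.0 with these inputs
```

With these made-up numbers the cooperative total profit exceeds the non-cooperative one, mirroring the qualitative conclusion reported above; the gap is the familiar double-marginalization effect of the leader-follower setting.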
3,273.2
2018-01-01T00:00:00.000
[ "Economics" ]
BOUNDARY STABILIZATION OF THE NAVIER-STOKES EQUATIONS WITH FEEDBACK CONTROLLER VIA A GALERKIN METHOD In this work we study the exponential stabilization of the two and three-dimensional Navier-Stokes equations in a bounded domain Ω, around a given steady-state flow, by means of a boundary control. In order to determine a feedback law, we consider an extended system coupling the Navier-Stokes equations with an equation satisfied by the control on the domain boundary. While most traditional approaches apply a feedback controller via an algebraic Riccati equation, the Stokes-Oseen operator or extension operators, a Galerkin method is proposed instead in this study. The Galerkin method permits to construct a stabilizing boundary control and by using energy a priori estimation technics, the exponential decay is obtained. A compactness result then allows us to pass to the limit in the system satisfied by the approximated solutions. The resulting feedback control is proven to be globally exponentially stabilizing the steady states of the two and three-dimensional Navier-Stokes equations. Introduction. Let Ω be a bounded and connected domain in R d (d = 2, 3), with a boundary Γ of class C 2 , and composed of two connected components Γ l and Γ b such that Γ = Γ l ∪ Γ b , in order to impose two different boundary conditions specified in (1).In particular, the boundary Γ b is the part of Γ, where a Dirichlet boundary control in feedback form has to be determined. We denote by • | • and • = • L 2 (Ω) , the scalar product and norm in L 2 (Ω), respectively.Moreover, if u ∈ L 2 (Ω) is such that ∇ • u ∈ L 2 (Ω), then we denote the normal trace of u in H − 1 2 (Γ) by u • n, where n denotes the unit outer normal vector to Γ. We consider a stationary motion of an incompressible fluid described by the velocity and pressure couple (v s , q s ), which is the solution to the stationary Navier-Stokes equations In this setting, ν > 0 is the viscosity, f s is a function in L 2 (Ω), v b belongs to V 3 2 (Γ) defined as V 3 2 (Γ) = u ∈ H 3/2 (Γ) : Γ u • n dζ = 0 .Recall [17] that a solution (v s , q s ) to ( 1) is known to exist in H 2 (Ω) × (H 1 (Ω) ∩ L 2 0 (Ω)).For T > 0 fixed, let Q = [0, T [×Ω, Σ l = [0, T [×Γ l and Σ b = [0, T [×Γ b and consider a trajectory (u, q) solution of the non stationary Navier-Stokes equations with x = (x, y, z) if d = 3.Consequently, the couple (v = u − v s , p = q − q s ) satisfies the following non stationary system in Ω. ( In order to stabilize the unsteady solution u of (2), for a prescribed rate of decrease σ > 0, we need to find a control u b such that the components v of the solution (v, ∇p) to the boundary value problem (3) satisfies the exponential decay: for a constant C > 0 independent of v 0 (x).It's worth noticing that, in the present paper, we let C = 1. The control u b (t) is called a feedback if there exists a mapping F : and the corresponding feedback law in (5) is pointwise in time.However, the feedback law may be chosen in a different manner, for example as where F 0 is a mapping belonging to L(X(Ω), U(Γ b )), but in that case, the feedback law is not pointwise in time.The spaces X(Ω) and U(Γ b ) will be defined accordingly. 
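The displayed decay estimate announced here as (4) and the feedback form (5) do not appear in this excerpt; judging from the surrounding text (and from the restatement of the bound later in the introduction), a plausible reconstruction is

\[
  \|v(t,\cdot)\|_{L^2(\Omega)} \;\le\; C\, e^{-\sigma t}\, \|v_0\|_{L^2(\Omega)}, \qquad t \ge 0,
\]

with C > 0 independent of v_0 (and C = 1 in this paper), together with a feedback law that is pointwise in time,

\[
  u_b(t) \;=\; F\bigl(v(t)\bigr), \qquad F : X(\Omega) \to U(\Gamma_b).
\]

The alternative, non-pointwise form (6) involving the operator F_0 in L(X(Ω), U(Γ_b)) cannot be reconstructed reliably from this excerpt.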
Pointwise feedback laws are usually needed in engineering applications as they are more robust with respect to perturbations in the models.Different approaches have been pursued in the past, which first determine a linear feedback law by solving a linear control problem for the linearized system of equations (for example the Oseen system) and then use this linear feedback law in order to stabilize the original non linear system (for example the Navier-Stokes system).In such a framework, several significant questions have to be addressed.First, do we obtain a pointwise feedback law able to stabilize the linearized system?Secondly, by assuming that F is a pointwise (in time) feedback law able to stabilize the linear system in X(Ω), does F also stabilize the nonlinear system for v 0 (x) in a subspace of {u ∈ L 2 (Ω) : ∇ • u = 0}, with v 0 (x) small enough ?Finally, assuming that the existence of a feedback law stabilizing the linear system is proved, is it possible to obtain a well posed equation characterizing F , for example a Riccati equation, which can be numerically solved by classical methods? These questions of stabilizing the Navier-Stokes equations with a boundary control have been first addressed by A.V. Fursikov in [14,15], where stability results for the two and three-dimensional Navier-Stokes equations are proved by employing an extension operator.With an adequate extension procedure for the initial velocity condition v 0 (x) in (3), which requires the knowledge of the eigenfunctions and the eigenvalues of the Oseen operator, the author obtains a boundary control of the form with k ≥ 1.However, if the feedback controls are well characterized, the corresponding laws are not pointwise in time. In [24], as far as the two-dimensional case is concerned, J.-P.Raymond has obtained boundary feedback control laws, pointwise in time, where the feedback controller is determined by solving an algebraic Riccati equation obtained via the solution of an optimal control problem with where 0 < ε < 1/4 and m ∈ C 2 (Γ).Unfortunately, the three-dimensional case is more demanding in terms of velocity regularity, as explained in [23], and it cannot be treated in the same manner as the two-dimensional case.Indeed, in the three-dimensional case the feedback controller needs to satisfy F (v) belonging to H 1/4+ε/2 (0, ∞; L 2 (Γ)) with 1/2 ≤ ε, and in the particular case 1/2 < ε, the space , implying that the velocity v has to satisfy the initial compatibility condition v 0 | Γ = F (v 0 ).This is the reason why the feedback law used in [24] cannot be employed in the three-dimensional case, and why this difficulty has been overcome in [23] by introducing a time dependent feedback law in an initial transitory time interval.In order to obtain a stabilization result via the Riccati approach, particular spaces of initial conditions have to be employed that are given in [3]. 
The study, performed in [23], also improves in some way the results obtained in [8,9], where a tangential boundary stabilization of two and three-dimensional Navier-Stokes equations is employed with both Riccati-based and spectral-based (tangential) feedback controllers.In [9], for the three-dimensional case which is highly demanding in terms of velocity regularity, the existence of boundary feedback laws, pointwise in time, is established by solving an optimal control problem with a cost functional involving the L 2 (0, ∞; H 3/2+ε (Ω)) norm of the velocity field, for some 0 < ε small enough.However, such a feedback law cannot be characterized by a well posed Riccati equation, as shown in [9], and the numerical calculation of the feedback control thus becomes problematic.In [23], for the three-dimensional Navier-Stokes system, J.-P.Raymond chooses a functional involving a very weak norm of the state variable which leads to a well posed Riccati equation. Recall in [23], a time dependent feedback law in an initial transitory time interval was introduced.As mentioned in [2], the problem of finding a time independent feedback controller satisfying v 0 | Γ = F (v 0 ), for a sufficiently large class of initial conditions v 0 , is not obvious.This problem has been examined in [2] for the two and three-dimensional case, and it has led to search for solutions u b satisfying an extended system composed of the evolution system coupled with the original Navier-Stokes equations, where the feedback controller F now acts on the pair (v, u b ) and ∆ B is the vector-valued Laplace Beltrami operator.The space X(Ω) is now defined as , the oprerator F is found from a well-posed Riccati equation and the controller u b , localized on an arbitrary small part of Γ, can be obtained. In the purpose of stabilizing the Navier-Stokes equations around a stationary state, the feedback control laws are determined by solving a Riccati equation in most of the studies cited above [2,3,7,8,9,23,24], except in the Fursikov's papers [14,15].The Riccati equation is obtained via the solution of an optimal control problem and it is stated in a space of infinite dimension.Although our study is only concerned with the construction of boundary controllers, the Riccati approach described above, stated in a space of infinite dimension, applies as well to the case of internal control [5,11]. In the case the feedback controller lies in an infinite-dimensional space, an optimal control problem has to be solved, involving the minimization of an objective functional.In practice, the control is calculated through approximation via the solution of an algebraic Riccati equation, which is computationally expensive.Consequently, the use of finite-dimensional controllers may be more appropriate to stabilize the Navier-Stokes equations.Such an approach is performed in [10], in the case of an internal control, and in [1,7,8,9,22], in the case of a boundary control.Recall the Riccati equation is stated in a space of infinite dimension in [7,8,9].In [1,10,22], the authors search for a boundary control u b of finite dimension of the form where (ϕ j ) j=1,2,3,...,N is a finite-dimensional basis obtained from the eigenfunctions of some operator and ū = (u 1 , u 2 , u 3 , . . 
., u N ) is a control function expressed with a feedback formulation.In [22], where d = 2, the feedback control is obtained from the solution of a finite-dimensional Riccati equation stated in R n c ×n c , where n c is the dimension of the unstable space of the Oseen operator.The same approach is then extended in [1] for the three-dimensional case.However, in [10,22] the minimal value of N is a priori unknown while in [1], N is greater or equal to the maximum of the geometric multiplicities of the unstable modes of the Oseen operator.Finally, finitedimensional stabilizing feedback laws of the form of ( 7) are obtained in [6] and [4], in the case of internal and boundary control, respectively.Instead of employing the Riccati approach, a stochastic-based stabilization technique is employed in [6] which avoids the difficult computation problems related to infinite-dimensional Riccati equations.The procedure employed in [4] ressembles the form of stabilizing noise controllers designed in [6].In all the above-mentioned studies, a linear feedback law is first determined by solving a linear control problem for the linearized system of equations and then this linear feedback is used in order to stabilize the original non linear system.However, such a procedure imposes to choose the initial velocity small enough.Further, the employed methods (e.g. the Riccati approach) require to search for the control u b and the initial condition in sufficiently regular spaces, depending on whether d = 2 or d = 3.For example, in [4, Theorem 2.3], we have in the case d = 2 and, for v 0 ∈ X(Ω), with v 0 X(Ω) < ρ and ρ sufficiently small, the function v satisfies the following stability estimate v X(Ω) ≤ Ce −σt v 0 X(Ω) , for all t ≥ 0 and for some σ > 0, but the value of C is not precisely given.Note that, in the case d = 3, no control is proposed in [4] to stabilize the non linear Navier-Stokes equations.Further, in [1, Theorem 2], we have , where P is the Leray projector, and , and stability estimates are also obtained. In this paper, a new approach is proposed.Instead of obtaining the feedback law by first solving a linear control problem for the linearized system of equations, eventually via the resolution of a Riccati equation, an extended system is considered.Indeed, in (3) the boundary control u b is rewritten on the form The quantity α(t) is a priori unknown.In order to stabilize (3), with u b = α(t)g(x) on Σ b , by employing energy a priori estimation technics, the quantity α(t) is found to satisfy the relation where f is a polynomial in α(t) of degree 2. Note that α(t) depends nonlinearly on v and hence α(t), which reads α(v(t)), satisfies a nonlinear feedback law.Such a feedback, pointwise in time, ressembles to (5) but the mapping F is nonlinear here. The system (3) is then extended by adding (10), and the extended system, namely (3) and (10) with u b = α(t)g(x) on Σ b , is then solved in order to determined α(t), leading to the determination of the boundary control u b .Such a boundary representation of u b is also employed in [21] in the two-dimensional case, where a linear feedback control dα(t)/dt is obtained via the solution of a Riccati equation stated in a space of infinite dimension.In the present paper, however, the quantities α(t), and hence u b , are computed at the discrete level.Further, contrary to (7) and [21], where u j (t), j = 1, 2, 3, . . 
., N , and dα(t)/dt, respectively, are linear feedbacks, α(t) is nonlinear here and it is thus calculated through a Galerkin procedure instead of being the solution of a finite-dimensional Riccati equation, for example. Note that the Galerkin procedure first consists of building a sequence of approximated solutions via an adequate Galerkin basis.Because the energy bounds are not sufficient to pass to the limit in the weak formulation, additional bounds are obtained.A compactness result then permits to pass to the limit in the system satisfied by the approximated solution, leading to the existence of at least one weak solution.Such a procedure relies on technics previously introduced in [19], but it is worth to note that the work performed in [19] is not related to a stabilization problem. The approach proposed in this paper has several advantages.First, the stabilization result in (4), i.e. v(t, x) ≤ C e −σt v 0 (x) , for t ∈ (0, ∞), is obtained with C = 1 and for an arbitrary initial data v 0 belonging to implying less regularity on v 0 than in the case of the previous studies cited above, for example see (9).Further, the regularity results are independent of d and they are thus obtained in the two and three-dimensional case as well. The paper is organized as follows.In section 2, the notations and mathematical preliminaries are introduced.The stabilization problem is formulated in Section 3, and the existence of the solution of the nonlinear Navier-Stokes system is established and the existence analysis is carried out by applying the Galerkin method.Finally, some concluding remarks complete the study in Section 4. Notation and Preliminaries. 2.1.Function Spaces.Several spaces of free divergence functions are now introduced: ) be the space of trace functions that, if extended by zero over Γ, belongs to , the solution of (3) coupled with (10) is searched in The following lemma [19], will be used in the sequel. Lemma 2.2.There exists a constant C b > 0 such that, for all (v, α) ∈ W (Q), we have We now define an Hilbertian basis for the space W (Q). 2.2.An Hilbertian basis for the space W (Q). Let {z j , λ j , j = 1, 2, 3, • • • } be the eigenfunctions and eigenvalues of the following spectral problem for the Stokes operator: As shown in [25], 0 , and {z j } forms an orthonormal basis in V 0 (Ω) verifying: The space W (Q), defined in (14), is then rewritten as where w satisfies the following system Since g satisfy Γ b g • n dζ = 0, system (19) hence admits a unique solution (w, q) ∈ V(Ω) × L 2 0 (Ω), where L 2 0 (Ω) is the pressure space with zero mean value: Note that the existence and uniqueness of (w, q) in ( 19) can be deduced from [25]. 2.3.Linear Forms.In order to define a weak form of the Navier-Stokes equations, we introduce the continuous bilinear forms and the trilinear form: By integration by parts, the following properties hold true Thanks to Hölder inequality, we obtain 3. Stability Result. 
3.1.The stabilization Problem.In order to stabilize the non stationary Navier-Stokes System (3), we choose to search the solution v in the form v = z+αw, where z ∈ V 0 (Ω), and α and w satisfy ( 10) and ( 19), respectively.We then have v = αg on Γ b as z = 0 on Γ.Consequently, the state (v, p) satisfies the following extended coupled system: where with σ 0 > 0 is a constant, λ 1 is the smallest positive eigenvalue of ( 16) and Recall that α is a priori unknown and thanks to (23-f), it satisfies a nonlinear feedback law leading to search for α(v(t)).Because (23-f) is independent of x, α(v(t)) is a function of t only.For the sake of simplicity, α(v(t)) is written α in the sequel. 3.2.The variational formulation.We first state to consider the variational formulation of the extended Navier-Stokes system. Note that the rate of decrease σ(t) depends on the control α and σ 0 may be regarded as an accelerator. Proof.Let us begin with the proof of the stability estimates followed by the existence result. A priori estimates. Taking Let us estimate the terms in the left-hand side of (30).According to ( 20)-( 22), we obtain Using ( 24) and ( 31)-( 33) in (30), leads to 1 2 Due to (19), we have ∇w, ∇z = 0 and from (34) we deduce 1 2 Since and using v = z + αw, we obtain from (35) 1 2 For all σ 1 such that 0 < σ 1 ≤ σ = νλ 1 − ∇v s ∞ , we have 1 2 and omitting the second term in the left hand side of (37) leads to Multiplying ( 38) by e 2σ(t) , where σ(t) = σ 1 t + σ 0 t 0 α 2 (s)ds ≥ 0, we obtain d dt e 2σ(t) v 2 ≤ 0 and consequently, By omitting the third term in the left hand side of (37) we deduce 1 2 and integrating from 0 to t yields Since v = z + αw, we substitute w 2 α 2 + 2α w, z = v 2 − z 2 in the two last terms in the right hand side of (34), and this leads to 1 2 Integrating (41) from 0 to t yields and employing (39) and (40) we obtain Because σ(t) = σ 1 t + σ 0 t 0 α 2 (s)ds, we have σ(t) ≥ σ 1 t, and hence Therefore, we obtain the a priori estimate 3.4.Existence.The proof of the existence follows a standard procedure.In a first step a sequence of approximate solutions using a Galerkin method is built.A compactness result from [20] allows us to pass to the limit in the system satisfied by the approximated solutions. 3.4.1.The Galerkin Method.For all m ∈ N, we define the space W m as: where w 0 = w and φ im w i and we define the following finite-dimensional problem where δ ij defined the Kronecker symbol and Moreover this solution satisfies : where C is a positive constant independent of m. Proof.We rewrite (44) in terms of the unknown φ im , i = 0 • • • m, and we obtain Because the matrix with elements w i , w j (0 ≤ i, j ≤ m) is nonsingular, (47) reduces to a nonlinear system with constant coefficients where X ij , Y ijk , Z ij , ∈ R.Then, there exists T m (0 < T m ≤ T ) such that the nonlinear differential system (48) has a maximal solution defined on some interval [0, T m ]. In order to show that T m is independent of m, it is sufficient to verify the boundedness of φ im , and hence the boundedness of the L 2 -norm of v m independently of m.Following the same procedure as for the derivation of the a priori estimates (39) and (43), yields Consequently, according to (49-a), we obtain T m = T . 
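The displayed inequalities in the a priori estimates above did not survive extraction, but the key decay step around (38)-(39) is determined by the surrounding text. With σ(t) = σ₁ t + σ₀ ∫₀ᵗ α²(s) ds as defined above, inequality (38) must have the form

\[
  \frac{d}{dt}\|v\|^2 + 2\bigl(\sigma_1 + \sigma_0\,\alpha^2(t)\bigr)\,\|v\|^2 \;\le\; 0,
\]

so that multiplying by e^{2σ(t)} indeed gives \(\frac{d}{dt}\bigl(e^{2\sigma(t)}\|v\|^2\bigr) \le 0\), and hence the bound (39):

\[
  \|v(t)\| \;\le\; e^{-\sigma(t)}\,\|v_0\| \;\le\; e^{-\sigma_1 t}\,\|v_0\|, \qquad t \ge 0,
\]

since σ(t) ≥ σ₁ t. The exact intermediate constants in (30)-(37) cannot be checked from this excerpt, but the discrete a priori estimates (49) mirror this bound at the Galerkin level.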
Moreover, a consequence of the a priori estimates (49) is that (v m ) m is bounded in L 2 (0, T ; V(Ω)) and L ∞ (0, T ; H(Ω)).Therefore, for a subsequence of v m (still denoted by v m ), the estimates in (49) yield the following weak convergences as m tends to ∞ : Nevertheless, the convergences in (50) are not sufficient to pass to the limit in the weak formulation (44), because of the presence of the convection term.Consequently, we need to obtain additional bounds in order to utilize the compactness theory on the sequence of approximated solution (v m ) m . Additional bounds. As in [20], let us assume that B 0 , B and B 1 are three Hilbert spaces such that Let us recall the following identity about the Fourier transform of differential operators: , for a given γ > 0, and let us define the space . We also define H γ (0, T ; B 0 , B 1 ), as the space of functions obtained by restriction to [0, T ] of functions of H γ (R; B 0 , B 1 ).Further, we recall the following result [20]: Lemma 3.4.Let B 0 , B and B 1 be three Hilbert spaces such that B 0 ⊂ B ⊂ B 1 and B 0 is compactly embedded in B. Then for all γ > 0, the injection H γ (0, T ; B 0 , B 1 ) → L 2 (0, T ; B) is compact. For small enough ε, this lemma is used later with The main result of the present section, based on utilizing Lemma 3.4, is furnished by the following lemma: We denote by vm the extension of v m by zero 0 for t < 0 and t > T , and v m the Fourier transform with respect to time of vm .It is classical that since vm has two discontinuities at 0 and T , in the distributional sense, the derivative of vm is given by where δ 0 , δ T are Dirac distributions at 0 and T , and BOUNDARY FEEDBACK STABILIZATION OF THE NAVIER-STOKES 13 After a Fourier transformation, (51) gives where v m and u m denote the Fourier transforms of vm and ūm respectively.Since we already know that v m is uniformly bounded in L 2 (0, T, V(Ω)), it remains to prove that We have that vm satisfies where ).We now apply the Fourier transform to the equation (53) and take ( v m , φ 0m ) as a test function, it yields 2iπτ where G m , G 0 m , G 1 m and H m are respectively the Fourier transform with respect to time of G m , G 0 m , G 1 m and H m .Note that where F m is the Fourier transform with respect to time of φ 0m v m 2 .Thanks to lemma 2.2, we have By using (55) in (54) and taking the imaginary part of (54) leads to Note that in the sequel, C stands for different positive constants.We now prove that the right hand side of (56) is bounded.First, we have and thanks to the energy estimate (49) satisfied by v m , G m and G s m remain bounded in L 1 (R; V (Ω)) and the functions G m , G s m are bounded in L ∞ (R; V (Ω)).Consequently, we have and the second line of ( 56) is hence bounded.We now show that the first four terms in the right hand side of (56) are bounded. where C stands for different positive constants.For 0 < γ < 1 4 , we now estimate the norm Now, applying Lemmas 3.4 and 3.5, there is a subsequence of (v m ) m∈N which converges strongly in L 2 (0, T, H(Ω)). 
3.4.3.Passage to the limit.The compactness result obtained in the previous section implies the following strong convergence (at least for a subsequence of v m still denoted v m ) v m → v strongly in L 2 (0, T ; L 2 (Ω)).This convergence result together with (50) enable us to pass to the limit in the following weak formulation, obtained from (44) by multiplication by ϕ ∈ D(]0, T [) and integration by parts with respect to time Using the weak estimates (50) leads to for the linear terms.Further, since v m converges to v in L 2 (0, T ; V(Ω)) weakly, and in L 2 (0, T ; L 2 (Ω)) strongly, we can pass to the limit in the nonlinear term to obtain Using Lemma 2.2 and according to (49-a), φ 0m ∈ L ∞ (0, T ).Then for a subsequence of φ 0m (still denoted by φ 0m ): As far as the right hand side of (61) is concerned.Let us notice that the convergence of v m in L 2 ([0, T ] × Ω) implies its convergence in L 1 (0, T ; L 2 (Ω)).Hence Due to lemma 2.2, we have and φ 0m is then a Cauchy sequence in L 1 (0, T ) and Further, according to (63) we have φ 0 = α ∈ L ∞ (0, T ) from [12,Proposition II.1.26]. Since v m and φ 0m are bounded in L ∞ (0, T ), using (64) and ( 65) we obtain from [12, Corollaire II.1.24],for all p ∈]1, +∞[.Now we can pass to the limit in the following terms: Passing to the limit in (61) then gives for all v = v j , ∀j = 0, 1, 2, • • • , m.By linearity, equation (69) holds true for all v combination of finite v j and by density, for any element of W (Q). Finally, it remains to retrieve the stabilized problem (23), which requires to prove the existence of pressure. 4. Concluding remarks.In this work the exponential stabilization of the two and three-dimensional Navier-Stokes equations in a bounded domain is studied around a given steady-state flow, using a boundary feedback control.In order to determine a feedback law, an extended system coupling the Navier-Stokes equations with an equation satisfied by the control on the domain boundary is considered.We first assume that on Σ b (a part of the domain boundary), the trace of the fluid velocity is proportional to a given velocity profile g.The proportionality coefficient α measures the velocity flux at the interface, it is an unknown of the problem and is written in feedback form.By using the Galerkin method, α is determined such that the Dirichlet boundary control u b = αg is satisfied on Σ b , and the stabilizing boundary control is built.The resulting nonlinear feedback control is proven to be globally exponentially stabilizing the steady states of the two and three-dimensional Navier-Stokes equations.This feedback control was shown to guarantee global stability in the L 2 -norm.Finally, in order to take into account (23-f) in the variational formulation, the test functions, for example v, need to be written on the form v = αg.This requires to construct a finite-element basis which allows such a requirement and hence at least one element of the basis, for example w, such that w = g on Γ b .A number Thanks to lemma 2.2 and estimate (49), φ 2 0m and F m = φ 0m v m 2 are bounded in L 1 (R), and hence φ 2 0m and F m are bounded in L ∞ (R) with: sup Thanks to the energy estimate (49-a) satisfied by v m , we have v m (T ) ≤ C and v m (0) ≤ C. Inequation (56) thus finally reduces to
6,617.8
2014-02-01T00:00:00.000
[ "Engineering", "Mathematics" ]
Astaxanthin Pretreatment Attenuates Hepatic Ischemia Reperfusion-Induced Apoptosis and Autophagy via the ROS/MAPK Pathway in Mice Background: Hepatic ischemia reperfusion (IR) is an important issue in complex liver resection and liver transplantation. The aim of the present study was to determine the protective effect of astaxanthin (ASX), an antioxidant, on hepatic IR injury via the reactive oxygen species/mitogen-activated protein kinase (ROS/MAPK) pathway. Methods: Mice were randomized into a sham, IR, ASX or IR + ASX group. The mice received ASX at different doses (30 mg/kg or 60 mg/kg) for 14 days. Serum and tissue samples at 2 h, 8 h and 24 h after abdominal surgery were collected to assess alanine aminotransferase (ALT), aspartate aminotransferase (AST), inflammation factors, ROS, and key proteins in the MAPK family. Results: ASX reduced the release of ROS and cytokines leading to inhibition of apoptosis and autophagy via down-regulation of the activated phosphorylation of related proteins in the MAPK family, such as P38 MAPK, JNK and ERK in this model of hepatic IR injury. Conclusion: Apoptosis and autophagy caused by hepatic IR injury were inhibited by ASX following a reduction in the release of ROS and inflammatory cytokines, and the relationship between the two may be associated with the inactivation of the MAPK family. Introduction Hepatic ischemia reperfusion (IR) injury generally occurs in hemorrhagic shock, liver transplantation and other medical conditions and is a pathophysiological process influencing liver function after hepatic resection and severe trauma [1,2]. Whether hepatic IR injury can be effectively avoided is important in limiting progress in hepatobiliary surgery. Though the concept of IR was first proposed in 1960 by Jennings [3], there are still no effective prevention methods due to its complicated mechanism, although studies have shown that its development is related to liver Kupffer cells, reactive oxygen species (ROS), calcium overload, and inflammatory cytokines [4][5][6]. Research demonstrated that Toll-like receptor (TLR) stimulation by IR may play an important role in the development of new therapeutic strategies in liver Kupffer cells [7][8][9]. Therefore, the therapy of hepatic IR has become a focus of attention in the medical community. Ischemia reperfusion is a multifactorial process and includes major oxidative stress induced by ischemia and hypoxia [10]. Under normal physiological conditions, the production and elimination of oxygen free bases is dynamically balanced. However, endothelial cells and Kupffer cells activated by oxidative stress can generate large numbers of ROS through nicotinamide adenine dinucleotide phosphate (NADPH) in the membrane, and ROS damage liver cells by changing the permeability of the cell membrane, causing lipid peroxidation or directly increasing neutrophil microcirculation [10][11][12]. In addition, cytokines such as TNF-α and IL-6, released by activated Kupffer cells and aggregated neutrophils, also play a key role in IR injury [13]. TNF-α promotes swelling of endothelial cells to activate ROS, while IL-6 can induce hepatocyte injury to produce C-reactive protein, α-trypsin and fibrinogen which are associated with the MAPK family, and the PI3K/Akt and HMGB1 pathways [14][15][16]. 
Kohli and colleagues found that the death of liver sinusoidal endothelial cells and hepatocytes was by apoptosis, and intrinsic apoptosis induced by the explosion of ROS and inflammatory cytokines resulted in a mitochondrial energy metabolism disorder which may lead to hepatic IR injury [2,17]. However, the ratio of pro-apoptotic Bax and anti-apoptotic Bcl-2, located in the outer membrane, in mitochondrial apoptosis determines cell survival and death by controlling the opening and closing of the mitochondrial permeability transition pore (MPTP), as described by Selzner, Imahashi and colleagues [18,19]. Autophagy, another type of programmed cell death, has significant differences in biochemical pathways and morphology compared with apoptosis. Recent studies have found that autophagy can be regulated by a molecular mechanism and coordination with apoptosis to promote cell death [20]. Forty years ago, Sybers and colleagues first observed an increase in autophagic vacuoles following IR [21]. In addition, autophagy induced by cardiac ischemia and strengthened by reperfusion injury was found in rabbits by Decker et al. [22]. These studies provided a strong basis for the role of autophagy in hepatic IR. In 1998, Liang et al. demonstrated that elimination of the autophagy gene, Beclin-1, could inhibit the replication of the Sindbis virus when it was combined with Bcl-2, the decline of which promoted autophagy [23]. Therefore, the mechanism of hepatic IR injury may be further elucidated by evaluating the interaction between autophagy and apoptosis. As autophagy and apoptosis are induced by the release of ROS, this is important in the IR injury process, and scavenging ROS may be a target when screening drugs. Astaxanthin (3,3′-dihydroxy-β, β′-carotene-4,4′-dione, ASX), a carotenoid which is found in fresh-water microalgae and marine organisms including phytoplankton and fish, has higher antioxidant activity than lutein, α-carotene and β-carotene [24]. The anti-lipid oxidation, anti-inflammation, anti-diabetes and anti-cancer effects of ASX have aroused increased attention both nationally and internationally [25][26][27]. Research has shown that ASX can exert strong antioxidant activity by quenching singlet oxygen and purifying free radicals in ulcer and hepatic stellate cells [28,29]. In alveolar epithelial cells, ASX blocked ROS generation and dose-dependently inhibited apoptosis through a mitochondrial signaling pathway, as shown by Song and colleagues [30]. ASX treatment has been shown to have therapeutic properties, as U937 cells were protected from oxidative stress caused by lipopolysaccharide thus reducing ROS production [31]. In other system diseases, tissues and cells, ASX also effectively resisted oxidative stress via the inhibition of ROS release caused by UVB in human keratinocytes, neutrophils treated with free fatty acids and high glucose, and by alloxan in diabetic rats [32][33][34]. With respect to IR, Lu, Shen and Lauver showed the protective effects of ASX on brain and myocardial injury following IR [35][36][37]. However, in hepatocellular injury following IR, Gulten et al. found that ASX treatment significantly decreased the conversion of xanthine dehydrogenase (XDH) to xanthine oxidase (XO), which reduced the level of oxidative stress [38]. In another study, Ping et al. illustrated that cryptotanshinone alleviated liver IR injury via anti-mitochondrial apoptosis [39]. 
It is unknown whether ASX can protect hepatocytes by directly reducing ROS and the pathways mediating the interaction between ROS, apoptosis and autophagy. The aim of the present study was to determine the protective mechanism of ASX on hepatic IR injury. It is hypothesized that ASX pretreatment can attenuate ROS production induced by IR to down-regulate apoptosis and autophagy and achieve liver function protection and rapid injury recovery. ASX Had No Effect on Normal Liver Tissue Before validation of the protective effects of ASX on hepatic IR injury, we first determined the influence of ASX on normal liver tissue. The same number of mice were given the same volume of saline, olive oil or ASX (30 mg/kg or 60 mg/kg) for 14 days, respectively. Serum and liver tissues were obtained from the mice to examine liver function, pathology and markers related to damage (Bcl-2, Bax, Beclin-1 and LC3) using biochemical methods, PCR detection and western blot. The results showed that the expression of serum liver enzymes in the four groups was close to normal levels ( Figure 1A), while at the gene and protein levels, differences in relevant autophagic and apoptotic indicators were not obvious ( Figures 1B,C). Finally, we identified pathological and morphological changes in the ASX groups. The cell structures showed small disturbances, which may have been due to the influence of drug metabolism, and liver function and protein expression in the related pathways showed no significant effects ( Figure 1D). Thus, ASX and olive oil had no significant effect on normal liver tissue. ASX Pretreatment Ameliorates Hepatic IR Injury Including Liver Enzymes and Pathology ALT and AST are important indicators of liver dysfunction and are found in the hepatocyte cytoplasm. When necrosis occurs in liver cells, ALT and AST may rise rapidly and are sensitive measurements. In our experiment, 2 h, 8 h and 24 h were the main observation time points for measuring ALT, AST and pathological changes. The results showed that the enzymes in the IR group were significantly increased compared with the sham group at each point, and this was particularly evident at 8 h. Following ASX treatment (30 mg/kg or 60 mg/kg), ALT and AST decreased in a dose-dependent manner ( Figure 2A). These findings suggested that ASX can protect liver function and this effect was dependent on dose. To illustrate this, we used HE staining to observe pathological changes in liver tissue sections. At 2 h, only a small amount of nuclear enrichment with no significant necrosis was observed in the IR group. However, disorganized liver form, complete destruction of cell structure, disordered lobular structure and marked hepatocyte necrosis were seen at 8 h and 24 h. The low ASX concentration (30 mg/kg) reduced the necrotic area, while the high ASX concentration (60 mg/kg) showed a greater protective effect ( Figure 2B). ASX Reduced the Release of Inflammatory Factors Including TNF-α and IL-6 Ischemia reperfusion of tissues can promote the release of oxygen free radicals and related inflammatory factors (TNF-α, IL-6) which aggravate the injury of endothelial cells leading to liver microcirculation disturbance via chemotaxis [2]. Therefore, we selected ASX at the dose of 60 mg/kg to determine TNF-α and IL-6 in the serum and tissues of mice, respectively, using ELISA, real time PCR, western blot and histochemical staining techniques. The results showed that the expression of TNF-α and IL-6 increased in the IR group not only in plasma but in the tissues. 
The ELISA results on serum levels demonstrated that TNF-α and IL-6 showed an increasing trend and peaked at 8 h. ASX reduced the release of TNF-α and IL-6 at each time point ( Figure 3A). Figure 3B,C shows the gene and protein levels of TNF-α and IL-6. Compared with the sham group, the gene and protein levels of TNF-α and IL-6 increased in the IR group with the greatest effect of ASX seen at 8 h. We selected tissue slices at the 8 h period for immunohistochemical staining. The integral optical density of yellow granules was used to show severity, which was consistent with the mRNA and protein expression described above ( Figure 3D). The results for plasma content, mRNA levels, protein expression and tissue staining provided strong evidence that ASX inhibited the production of inflammatory cytokines, such as TNF-α and IL-6, in serum and tissue. ASX Alleviated Apoptosis and Autophagy by Reducing the Bax/Bcl-2 Ratio Autophagy and apoptosis are preconditions which cause necrosis of the liver. Interrelation and distinction between them commonly play a key role in the ischemia reperfusion process. Therefore, we amplified the gene and assessed the protein expression of Bcl-2 and Bax during apoptosis, and Beclin-1 and LC3 during autophagy. ASX promoted the level of anti-apoptotic Bcl-2 but inhibited the pro-apoptotic Bax, leading to a decline in the ratio of Bax and Bcl-2 as shown by real-time PCR and western blot. Autophagy-related proteins, such as Beclin-1 and LC3, significantly increased due to disordered energy metabolism after IR and ASX reduced the damage caused by autophagy-induced cell necrosis ( Figure 4A,B). These results were verified by histochemical staining ( Figure 4C). In addition, detection of autophagosomes was carried out by electron microscopy ( Figure 4D). Compared with the sham group, the autophagic vacuoles of the IR group were markedly increased. Following ASX administration, the agglutinated chromatin, damaged mitochondria and autophagy corpuscles were not easily seen. Taken together, these results demonstrated that ASX inhibited apoptosis and autophagy to protect hepatocytes from necrosis. ASX Attenuates ROS/MAPK Signal Pathways by Inhibiting the Phosphorylation of P38 MAPK, ERK and JNK Mitogen-activated protein kinase (MAPK) is expressed in all eukaryotic cells including P38 MAPK, ERK and JNK subgroups that control cell growth, differentiation, apoptosis and stress reactions to the environment [13]. The production of reactive oxygen species (ROS) after IR injury and the oxidative stress reaction are essential for apoptosis. We used the fluorescent probe, DHE, histochemical staining, and western blot to detect ROS and proteins related to the MAPK family. As shown in Figure 5A, ROS were dyed bright red by DHE compared with the sham group and showed strong expression in the IR group. After ASX treatment, reductions in bright red dot-like substances indicated less ROS production. In addition, P38 MAPK, ERK, JNK and their phosphorylation levels were measured to calculate the proportion of protein phosphorylation ( Figure 5B). The results showed that the phosphorylation level significantly increased after IR, which was consistent with the expression of ROS. Following ASX administration, a downward trend in p-P38 MAPK, p-ERK and p-JNK expression in liver tissue was observed, which was consistent with immunohistochemical staining ( Figure 5C). 
These results suggested that ASX inhibited ROS generation and the phosphorylation of key proteins related to the MAPK family, which may be an essential pathway in mitigating cell apoptosis and autophagy. Discussion The mechanism of hepatic IR has not been clarified due to its complexity, but increasing evidence shows that the production of ROS and inflammatory cytokines are key factors in inducing liver damage [40]. ASX, a natural antioxidant from marine organisms, may improve the outcome of liver transplantation. We demonstrated that ASX and its solvent had no negative effects on liver function and pathology in normal liver tissue. Thus, the effect of ASX on ROS and cytokines induced by hepatic IR injury requires further investigation. The sharp rise in ALT and AST is an important manifestation of our successful IR model. Serum levels of liver enzymes declined significantly, which was consistent with structural disorders, and HE staining showed the presence of necrosis. We selected an ASX dose of 60 mg/kg to assess TNF-α and IL-6 which mediate cell apoptosis and necrosis at the gene and protein levels. ASX inhibited both TNF-α and IL-6, which has been described by previous researchers. Kupffer cells, known as hepatic macrophages, are activated by IR to produce ROS, TNF-α, IL-6 and other highly reactive molecules that stimulate T and B cells and mediate the adhesion of white blood cells, platelets and liver sinusoidal endothelial cells, causing aggravation of the hepatic microcirculation [41,42]. In addition, the activation of NF-κB by TNF-α also increased inflammation leading to injury [43]. These effects resulted in ALT and AST being released from ruptured cell cytoplasm, and their plasma levels corresponded with the extent of cellular damage [44]. ASX can scavenge radicals at the outer and inner parts of the cell membrane, due to its unique molecular structure with hydroxyl and keto moieties on each ionone ring [45,46]. ASX provides protection by reducing oxygen free radicals and pro-inflammatory factors. However, the pathways which play a role in ASX hepatocyte protection by avoiding oxidative stress have not yet been clarified. Mitogen-activated protein kinase (MAPK), originally purified from fat cells by Ray and Sturgill in 1988, has been described as four distinct subgroups: P38 MAPK, extracellular signal-regulated kinase (ERK), c-jun N-terminal kinase (JNK) and big MAPK 1 (BMK1) [47,48]. The first three subgroups are usually activated concurrently in animals. In resting cells, the MAPK family, distributed in the cytoplasm, is activated by phosphorylation or inactivated by MAPK phosphatases (MKPs), a class of dual-specificity phosphatases (DSPs) as negative regulators in MAPK pathway, in order to migrate to the nucleus and cell membrane, resulting in the regulation of gene transcription [49,50]. P38 MAPK is an important component of the MAPK family which can be activated by extracellular stress, including oxidative stress, pro-inflammatory cytokines, ultraviolet radiation, and heat shock [14,[51][52][53]. P38 activation also stimulates monocyte-macrophages to produce TNF-α, IL-6 as well as increased nitric oxide (NO) accelerated ROS. Inhibition of P38 has a significant protective effect on multiple systems. Kim et al. showed that ASX inhibited apoptotic cell death in neural progenitor cells via modulation of the P38 and MEK signaling pathways [51]. Systemic inhibition of P38 MAPK weakened ongoing pulmonary inflammation, as described by Nick and colleagues [53]. 
Another effective advantage of MAPK inhibition has been shown in type II diabetes, Crohn's disease, acute colitis and other systems [52,54,55]. In hepatic IR injury, P38 MAPK in addition to ERK and JNK participates in the injury induced by ROS and cytokines. Koike and Hashimoto reduced IR injury following heart, lung and liver transplantations by adding P38 MAPK inhibitors [56,57]. In our study, we also conducted comprehensive examinations of mRNA and protein levels. The results showed that activated phosphorylated P38 MAPK (p-P38 MAPK) increased due to ROS and cytokines caused by IR, and its phosphorylation reduced after ASX treatment. This shows that ASX inhibited the P38 MAPK pathway by reducing ROS and TNF-α. Activation levels of ERK and JNK have also been consistently validated in previous studies [13,14,[58][59][60]. We consider that ASX weakened phosphorylation of the MAPK family and provided protection by scavenging ROS and inactivating Kupffer cells which release inflammatory factors. How does inhibition of P38 MAPK protect the liver from damage? Studies have found that P38 not only enhanced cleaved caspase-8 mediated apoptosis induced by TNF-α, but also had an effect on the Bcl-2 family resulting in increased intrinsic apoptosis [47]. The release of ROS and cytokines hindered mitochondrial energy metabolism, and stimulated related enzymes that could phosphorylate P38 MAPK. Phosphorylated P38 MAPK transferred Bax located in the cytoplasm of normal hepatocytes to mitochondria, forming dimers that had a direct influence on the membrane permeability transition pore (MPTP) [61,62]. Open MPTPs triggered mitochondrial permeability transition (MPT) to release cytochrome C and then activate caspase-9 resulting in apoptosis. ERK and JNK pathways activated by TNF-α and IL-6 can phosphorylate Bcl-2 (inactive form) to prevent inhibition of cell apoptosis [63,64]. These three pathways eventually interact with each other to activate cleaved caspase-3, leading to liver cell apoptosis [65]. The effect of ASX on the elimination of ROS and TNF-α was evaluated by PCR, western blot and immunohistochemistry staining. The key proteins in the MAPK family, such as P38, ERK, JNK, Bcl-2, Bax, and Caspases, were determined and their expression levels assessed. In addition, autophagy was found to be involved in hepatic IR injury. A reduction in related oxidative stress factors decreased the activity of Bax in mitochondrial transfer, which mediated free active Bcl-2. Bcl-2 containing Bcl-2 homology 3 (BH3)-only domains can be combined with Beclin-1 to inhibit autophagy [66]. Inhibition of autophagy limited the conversion of LC3-I to LC3-II and the formation of autophagosomes and P62, and the autophagy-related transporter combined with mature LC3 via its special ubiquitin-binding domains was detected, as shown in Figure 6 [67]. Thus, we observed increased Beclin-1 and LC3 and reduced P62 in the ASX-treated group. Figure 6. Mechanism of action of astaxanthin. In hepatic IR, astaxanthin weakened phosphorylation of the MAPK family and provided protection by scavenging ROS and inactivating Kupffer cells which release inflammatory factors. MAPK pathways activated by TNF-α and IL-6 can not only activate capase-8 but also phosphorylate Bcl-2 to induce caspase-9 activation. Inactive Bcl-2 released Beclin-1 that enhanced autophagy. Thus, astaxanthin inhibits the release of ROS and cytokines during hepatic IR to offer protection by inhibiting apoptosis and autophagy. 
In summary, we preliminarily found that activation of the MAPK family mediated by ROS and cytokines which induced apoptosis and autophagy could be inhibited by ASX during hepatic IR injury in mice. The link between related drugs and pathways requires further investigation in order to identify safer and effective therapies for hepatic IR injury due to its complex mechanism. Animals All experimental protocols carried out on mice were approved by the Animal Care and Use Committee at Shanghai Tongji University. Male Balb/C mice weighing 20-25 g (6-8 weeks old) were housed in a clean room at a constant temperature (22-25 °C) and a 12 h:12 h light-dark cycle. The mice were purchased from Shanghai SLAC Laboratory Animal Co. Ltd. (Shanghai, China) and given food and water ad libitum. The study was performed in accordance with the standards established by the recommendations of the Guide for the National Science Council of the Republic of China. Experimental Design The mice were randomly distributed to one of six groups as follows: Group I, control group (n = 8): mice received saline by gavage Group II, oil group (n = 8): mice received olive oil by gavage Group III, ASX group (n = 16): mice received ASX (30 mg/kg or 60 mg/kg) by gavage Group IV, sham group (n = 18): mice received olive oil by gavage without IR Group V, IR group (n = 18): mice received olive oil by gavage before IR Group VI, IR + ASX group (n = 36): mice received ASX (30 mg/kg or 60 mg/kg) by gavage before IR ASX was dissolved in olive oil to obtain a suspension and stored in the dark at 4 °C. The mice were given saline, olive oil or ASX for 14 consecutive days. All mice in Group I, II, III and IV were killed after 14 days, while, six mice were randomly selected from Group IV, V and VI (6 mice in each dose) were sacrificed at 2 h, 8 h or 24 h after IR. Serum and liver tissues were obtained for further experiments. Mouse Model of Hepatic Ischemia-Reperfusion Injury A mouse model of 70% hepatic warm ischemia was established according to a previously reported method [68]. Mice were fasted for 24 h before surgery, but allowed water. The mice were anesthetized by an intraperitoneal standard dosage (1.25%) of sodium pentobarbital (Nembutal, St. Louis, MO, USA). The abdominal viscera were identified after entering the intra-abdominal cavity along the medioventral line. The middle and left hepatic lobes, and then, the first porta hepatis were dissociated from adjacent structures using wet cotton swabs. Hepatic ischemia was achieved by occluding the hepatic artery, portal vein and bile duct using vascular clamps to obtain 70% tissue ischemia. When the hepatic lobe showed a color change from crimson to light red, the incision was covered with humid saline gauze for 60 min. The abdominal incision was then sewn without vascular clamps. To maintain a constant body temperature, we applied an animal body temperature maintenance instrument (ZS Dichuang, Beijing, China). ALT and AST Enzyme-Activity and Cytokine Measurements Sera were isolated by centrifuging at 4500× g for 10 min after storing for 4-5 h at 4 °C. Fifty microliters aliquots of serum were placed in Eppendorf (EP) tubes and stored at −80 °C. Serum ALT and AST were measured by an automated chemical analyzer (Olympus AU1000, Tokyo, Japan). The plasma levels of TNF-α and IL-6 were assessed using ELISA kits, according to the manufacturer's protocol. 
Histopathological Evaluation The removed liver tissue was fixed with 4% paraformaldehyde for at least 24 h, and then dehydrated using different concentrations of ethanol. After immersion in xylene, the samples were waxed in paraffin blocks to prevent crystallization. Sections 5 μm thick were cut and stored at room temperature. The sections were stained with hematoxylin-eosin (HE) for observation under a light microscope. Immunohistochemical Staining The slices were dried in a drying oven for 2 h and then immersed in dimethyl benzene for dewaxing. Different concentrations of ethanol were used to dehydrate the prepared sections. After washing with phosphate buffer solution (PBS) three times, antigen retrieval was then performed with citrate buffer by heating to 95 °C for 10 min and cooling to room temperature (four cycles). In order to inhibit endogenous peroxidase activity, we added 3% hydrogen peroxide to the sections at room temperature for 20 min and then blocked them with 5% BSA solution. The liver slices were then incubated with antibodies directed against Bcl-2 (1:100), Bax (1:500), Beclin-1 (1:100), LC3 (1:500), p-JNK (1:50), p-ERK (1:100), p-P38 MAPK (1:50) for 24 h at 4 °C and with a secondary antibody (1:50) for 1 h at 37 °C after washing. A diaminobenzidine (DAB) kit was used to show granular brown substances which were observed and captured by a microscope with a digital camera (Leica Wetzlar, Germany). Image Pro Plus Software 6.0 (Media Cybernetics, Silver Spring, MD, USA) was used to calculate the integrated optical densities (IOD) of the sections. SYBR Green Real-Time Polymerase Chain Reaction (PCR) Approximately 100 mg of tissue were removed from liquid nitrogen and triturated in mortars soaked with diethyl pyrocarbonate (DEPC) treated water. Trizol, chloroform and isopropyl alcohol were added to the tissues for extraction of total RNA. After determination of the purity and concentration, a reverse transcription kit (TaKaRa Biotechnology, Dalian, China) was used to transcribe RNA into cDNA. We used a 10 μL reaction volume containing primers and related enzymes for SYBR Green quantitative real-time PCR using a 7900HT fast real-time PCR system (Applied Biosystems, NewYork, NY, USA), according to the Premix EX Taq protocols (TaKaRa Biotechnology, Dalian, China). Gene expression was calculated on the basis of the solubility curve and the ratios of target genes. All primers used in the experiments are shown in Table 1. Transmission Electron Microscopy Liver tissues were prefixed by immersion in 3% glutaraldehyde and 0.2% mol/L sodium cacodylate for 4 h and fixed in 1% osmium tetroxide for 1 h. Autophagosomes were observed by transmission electron microscopy (LEO 906E, Oberkochen, Germany). Reactive Oxygen Species (ROS) of Liver Tissue Assay The ROS fluorescent probe, dihydroethidium (DHE), can penetrate the cell membrane and is oxidized by ROS to produce a red fluorescence. Fresh liver tissues were fixed with 4% formaldehyde and then dehydrated using 30% sucrose solution overnight at 4 °C. The tissues were embedded using opti-mum cutting temperature compound (OCT) and cut into sections 5 μm thick. After adequate washing, the sections were incubated with DHE (10 μM) for 60 min in the dark. The sections were then washed with PBS three times, for 10 min each time. Red light stimulated by green light was acquired by the fluorescence microscope for calculation of the red area. 
Statistical Analysis All data are presented as the mean ± standard deviation (SD) and analyzed using SPSS 20.0 software (Chicago, IL, USA). Differences among groups were detected by one-way analysis of variance (ANOVA) using the Student-Newman-Keuls (SNK) method. p < 0.05 was considered statistically significant. Conclusions Pretreatment with astaxanthin attenuated hepatic ischemia reperfusion-induced apoptosis and autophagy via the ROS/MAPK pathway in mice, and the relationship between the two may be associated with the reduction of inflammatory cytokines.
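For readers who want to reproduce this kind of analysis outside SPSS, the sketch below shows a one-way ANOVA followed by a post-hoc comparison in Python. The ALT values are hypothetical, and Tukey's HSD is used only as a readily available stand-in for the Student-Newman-Keuls procedure, which SciPy/statsmodels do not provide directly.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical ALT values (U/L) for three groups at one time point
sham = np.array([32, 35, 30, 28, 33, 31])
ir = np.array([410, 380, 455, 390, 420, 405])
ir_asx = np.array([210, 190, 240, 205, 225, 215])

# One-way ANOVA across the three groups
f_stat, p_value = stats.f_oneway(sham, ir, ir_asx)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc pairwise comparison; Tukey HSD is a stand-in for SNK here
values = np.concatenate([sham, ir, ir_asx])
groups = ["sham"] * len(sham) + ["IR"] * len(ir) + ["IR+ASX"] * len(ir_asx)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```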
6,050.8
2015-05-27T00:00:00.000
[ "Biology", "Medicine" ]
Recognition of post-learning alteration of hippocampal ripples by convolutional neural network differs in the wild-type and AD mice Evidence indicates that sharp-wave ripples (SWRs) are primary network events supporting memory processes. However, some studies demonstrate that even after disruption of awake SWRs the animal can still learn spatial task or that SWRs may be not necessary to establish a cognitive map of the environment. Moreover, we have found recently that despite a deficit of sleep SWRs the APP/PS1 mice, a model of Alzheimer’s disease, show undisturbed spatial reference memory. Searching for a learning-related alteration of SWRs that could account for the efficiency of memory in these mice we use convolutional neural networks (CNN) to discriminate pre- and post-learning 256 ms samples of LFP signals, containing individual SWRs. We found that the fraction of samples that were correctly recognized by CNN in majority of discrimination sessions was equal to ~ 50% in the wild-type (WT) and only 14% in APP/PS1 mice. Moreover, removing signals generated in a close vicinity of SWRs significantly diminished the number of such highly recognizable samples in the WT but not in APP/PS1 group. These results indicate that in WT animals a large subset of SWRs and signals generated in their proximity may contain learning-related information whereas such information seem to be limited in the AD mice. Results In earlier work 19 we found a major deficit of SWRs occurring in APP/PS1 mice during slow-wave sleep, before and after learning session of a spatial memory task (Fig. 1a1) although the mice were able to consolidate spatial reference memory in a similar way as WT group (Fig. 1a2). These findings may question the role of SWRs in memory processing in this AD model. Interestingly, whereas SWRs generated before and after learning have different properties in the WT animals, no such differences have been so far found in the APP/PS1 group, further suggesting a lack of their involvement in memory processing. Searching for the putative learning-induced alteration of SWRs in the present study we test the capability of the CNN algorithms to recognize pre-and postlearning SWRs in both groups of animals. We used time windows of LFP signals recorded in CA1, containing SWRs (ripple-centered intervals, RCIs, see Fig. 1b). As explained in the Material and Methods, 2000 RCIs were randomly chosen as a fixed testing pool of RCIs from the data recorded in each experimental condition (i.e. before or after learning) (Fig. 1c). From the remaining RCIs, a training pool was built using 200 randomly chosen RCIs from each animal in each experimental condition (Fig. 1d). After CNN's training, we tested its performance on the testing pool of RCIs (Fig. 1e,f). This procedure was repeated in 10 sessions, in each of which different training pools and the same testing pool were used. In this experimental algorithm, an individual RCI can be correctly or incorrectly classified in each of 10 sessions. A percentage of correct classifications of a given RCI will therefore indicate a probability to be correctly recognized by CNN, which we define as accuracy rate (AR) attributed for a specific RCI. The average AR of RCIs for a given animal was calculated. Moreover, to better characterize the CNN's performance the percentage of highly recognizable RCIs (HR RCIs), that is RCIs correctly classified in at least 8 over 10 sessions, was calculated for each animal. The average AR and HR RCIs were obtained for each group of animals. 
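A minimal sketch of how the accuracy rate (AR) of each RCI and the fraction of highly recognizable RCIs can be computed from the outcomes of the 10 cross-validation sessions; the boolean matrix of per-session results is simulated here, since the actual classification outputs are not given in the text.

```python
import numpy as np

# correct[i, s] = True if RCI i was classified correctly in deep-learning session s
# (hypothetical results for 2000 testing RCIs over 10 sessions)
rng = np.random.default_rng(0)
correct = rng.random((2000, 10)) < 0.66

ar = correct.mean(axis=1) * 100                         # accuracy rate (AR) of each RCI, in %
mean_ar = ar.mean()                                     # average AR for the animal / group
hr_fraction = (correct.sum(axis=1) >= 8).mean() * 100   # highly recognizable RCIs (>= 8 of 10)

print(f"average AR = {mean_ar:.1f}%, HR RCIs = {hr_fraction:.1f}%")
```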
Recognition of a post-learning alteration of RCIs is effective in WT but not APP/PS1 mice. We first tested the capability of the CNN to classify RCIs in the control group of wild-type animals (N = 6). As illustrated in Fig. 2a, CNN was able to correctly identify a large population of RCIs in the testing pools. Indeed, the average AR in the WT group was equal to 66.29%, and 46.39% of RCIs were highly recognizable (Fig. 2b). By contrast to a relatively good ability of the CNN to recognize RCIs in the WT group, the average AR for the APP/PS1 group was practically equal to the chance level (50%, see red line in Fig. 2a) and significantly lower than in WT animals (p = 0.0001, t = 6.0435, t-test, N = 6) (Fig. 2a). Moreover, the percentage of HR RCIs among all tested RCIs was equal only to 13.47% and was also significantly lower than in the WT group (p = 0.0006, t = 4.9811, t-test, N = 6) (Fig. 2b). The results of this deep learning experiment suggested that some learningrelated information might be carried by SWRs in the WT group and therefore recognized by the CNN whereas in the APP/PS1 animals such information might be lacking. The difference in the pre-and post-learning SWRs' intrinsic frequency matters. As previously shown, SWR's intrinsic frequency increased significantly after a learning session in the WT but not in APP/PS1 mice 19 . It was, therefore, possible that the CNN was able to recognize RCIs in WT animals due to the difference in SWR's intrinsic frequency between "before" and "after" learning (cf. yellow and blue bars, respectively, Fig. 3a1-2) that was existing in the training and testing data pools. By contrast, CNN might fail to recognized RCIs in the APP/PS1 group due to an absence of such difference (cf. yellow and blue bars, Fig. 4a1-2). In order to test this hypothesis, we analyzed correlations between a given SWR's intrinsic frequency and AR of RCI containing this SWR. We found a relatively high value of correlation coefficient R 2 between SWR's frequency and AR of RCIs belonging to "before learning" for each WT animal (R 2 ranging from 0.1311 to 0.3685, N = 6) with regression lines showing negative slopes (slopes ranging from -1.28 to -2.61%/Hz, N = 6) (Fig. 3b1). This indicates that the lower the SWR's frequency the higher was the RCI's probability to be identified as belonging to the "before" data set. A reverse tendency was expressed in the set of RCIs recorded "after" learning (slopes ranging from 1.7 to 2.4%/Hz, R 2 ranging from 0.1212 to 0.4117, N = 6): here the higher the frequency of SWR, the higher the probability of the RCI to be correctly recognized (Fig. 3b2). This is illustrated also for the pooled set of RCIs in the WT group (red regression lines, Fig. 3b1-2 and color-coded bars, Fig. 3c1-2), showing that correlation between SWR's frequency and AR of RCI belonging to "before" (slope = -2.03%/Hz, R 2 = 0.3268, see also Fig. 3c1) and "after" learning (slope = 1.84%/Hz, R 2 = 0.3274, Fig. 3c2) was opposite. Altogether, these results indicate that in the WT group the CNN developed a successful strategy of classification of RCIs that was based on the intrinsic frequency of SWRs generated "before" and "after" learning. Such a strategy could not be developed in the APP/PS1 mice since SWRs generated in this group have similar frequency distributions in the two experimental conditions (Fig. 4a1-2). 
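The slopes (in %/Hz) and R² values reported above can be obtained from an ordinary least-squares fit of AR against SWR intrinsic frequency; a generic sketch with hypothetical data follows (the example numbers are illustrative, not the paper's measurements).

```python
import numpy as np

def ar_frequency_regression(freq_hz, ar_percent):
    """Linear fit of accuracy rate (%) against SWR intrinsic frequency (Hz).
    Returns the slope in %/Hz and the coefficient of determination R^2."""
    slope, intercept = np.polyfit(freq_hz, ar_percent, 1)
    predicted = slope * freq_hz + intercept
    ss_res = np.sum((ar_percent - predicted) ** 2)
    ss_tot = np.sum((ar_percent - ar_percent.mean()) ** 2)
    return slope, 1.0 - ss_res / ss_tot

# Hypothetical "after learning" RCIs where higher frequency gives higher AR
rng = np.random.default_rng(1)
freq = rng.normal(165, 10, 500)                                   # SWR intrinsic frequency (Hz)
ar = np.clip(50 + 1.8 * (freq - 165) + rng.normal(0, 15, 500), 0, 100)
print(ar_frequency_regression(freq, ar))
```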
Indeed, all regression lines illustrating the correlation between SWR's frequency and RCI's recognizability for APP/PS1 animals (N = 6) were characterized by a low value of the slope and correlation coefficient both in the set of data corresponding to "before" (slope ranging from − 0.53 to 0.2%/Hz; R 2 ranging from 0.0001 to 0.0362) and "after" learning (slope ranging from − 0.21 to 0.09%/Hz; R 2 ranging from 0.0002 to 0.0168) (Fig. 4b1,b2, respectively). Similar lack of correlation between SWR's frequency and AR of RCI was found also for the pooled set of RCIs in the APP/PS1 group ("before": slope = 0.06%/Hz, R 2 = 0.0004, "after": slope = 0.05%/Hz, R 2 = 0.0003), as illustrated in Fig. 4b1-2 (red regression lines) and in Fig. 4c1 www.nature.com/scientificreports/ In the next step, we asked whether the CNN is able to classify RCIs belonging to WT mice if the training pool ( Fig. 5a1) is modified by requiring equal distribution of SWR's frequency corresponding to "before" and "after" learning (see the area of overlapping blue and yellow bars, Fig. 3a1), similarly as it was expressed in the APP/ PS1 animals. (cf. Figs. 4a1, 5a1). We kept the testing pool unchanged, so SWR's frequency distributions "before" and "after" were different as in the previous deep learning experiment (Fig. 5a2). In this condition, we expected that the CNN's strategy could be based on some (unknown) features of RCIs, possibly related to the processing of the information encoded during learning, that are different from SWRs intrinsic frequency. Successful not frequency-based discrimination strategy in WT mice. As expected, the correlation between the AR of RCI and SWR's frequency declined in this deep learning experiment. Indeed, all regression lines illustrating this relationship for individual animals in the WT group were characterized by low values of slopes and correlation coefficients R 2 for RCIs belonging to "before" (slope ranging from − 1.39 to 0.33%/ Hz, R 2 ranging from 0.00001 to 0.1014) and "after" learning (slope ranging from − 0.17 to 0.76%/Hz, R 2 ranging from 0.0015 to 0.0626) ( Fig. 5b1-2). Also the pooled data set of RCIs did not express any correlation with SWR's frequency in both experimental conditions (see red regression lines, Fig. 5b1-2), as indicated by low values of slopes ("before": = − 0.16%/Hz; "after": = 0.14%/Hz) and correlation coefficients ("before": = 0.0023, "after": = 0.002) as well as by the color-code representation in Fig. 5c1-2. In this deep learning experiment, the CNN performed at the lower level than when trained on the data containing full information of SWRs' frequency, but still relatively well. Indeed, the value of the average AR as well as the percentage of HR RCIs in the WT group decreased only by ~ 5-10% compared to the previous experiment (AR: 66.29 + /− 0.03% (full frequency), 63.58 + /− 0.025% (equalized frequency), p = 0.0615, t = 2.9714, t-test with Bonferroni correction; HR RCIs: 46.39 + /− 5.56% (full frequency), 40.69 + /-4.67% (equalized frequency), p = 0.0485, t = 3.1831, t-test with Bonferroni correction). Importantly, both the value of AR and percentage of HR RCIs were still significantly higher than the performance of the CNN in the APP/PS1 group (AR: p = 0.0004, t = 5.8035; HR RCIs: p = 0.0004, t = 5.8035, t-test with Bonferroni correction) (Fig. 6). The distribution of recognized RCIs differs in the two groups of animals. So far, only highly recognizable RCI, correctly classified in at least 8 sessions, were analyzed. 
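The frequency-equalized training pool used in the second deep learning experiment above could be built, for example, by per-bin subsampling of the "before" and "after" pools; the sketch below is an assumption about the implementation, since the matching procedure is not spelled out in the text.

```python
import numpy as np

def equalize_frequency_pools(before, after, freq_before, freq_after, bins, rng):
    """Subsample two RCI pools so that their SWR-frequency histograms match.
    Within each frequency bin, both conditions keep the same (minimum) count."""
    idx_b = np.digitize(freq_before, bins)
    idx_a = np.digitize(freq_after, bins)
    keep_b, keep_a = [], []
    for b in range(1, len(bins)):
        in_b = np.where(idx_b == b)[0]
        in_a = np.where(idx_a == b)[0]
        n = min(len(in_b), len(in_a))
        keep_b.extend(rng.choice(in_b, n, replace=False))
        keep_a.extend(rng.choice(in_a, n, replace=False))
    keep_b = np.array(keep_b, dtype=int)
    keep_a = np.array(keep_a, dtype=int)
    return before[keep_b], after[keep_a]
```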
To illustrate the CNN's performance in the three deep learning experiments described above more completely, we calculated the percentage of RCIs correctly classified in any number of sessions (from 0, not recognized in any session, to 10, recognized in all sessions) (Fig. 7). As illustrated, the CNN's performance in the two deep learning experiments on the WT group, with full and equalized SWR frequency distributions in the training pool, was similar (see blue solid and dotted lines, respectively, Fig. 7). Indeed, in both experiments the percentage of RCIs correctly classified increased monotonically with the number of sessions considered (n = 0-10, Fig. 7). By contrast, the distribution characterizing the performance of the CNN in the APP/PS1 group had a different shape, suggesting a random process of RCI classification. Indeed, the distribution was fairly symmetrical, with most RCIs being classified correctly 4-6 times (see the maximal value of the red solid curve, Fig. 7), which implies the same number of incorrect classifications. Altogether these results show that the CNN trained on RCIs containing information about the difference in the pre- and post-learning SWRs' intrinsic frequency developed a successful classification strategy that was based on this difference (or on some RCI features whose occurrence was correlated with SWR frequency). When trained on the data set not containing information about the difference in the SWRs' intrinsic frequency between the two experimental conditions, the CNN developed a different, also relatively successful, strategy of RCI classification that was based on some unknown features of the RCIs that differed before and after learning and were independent of SWR frequency. In contrast to the CNN's good and flexible ability to correctly discriminate pre- and post-learning RCIs in the WT group, the CNN failed to classify RCIs belonging to the APP/PS1 animals, which suggested a lack of learning-dependent information in this pool of data. Elimination of the head and tail of RCI has an opposite effect in the two groups of mice. As demonstrated by a recent study, some memory processing may occur in the close vicinity of SWRs (see 4). It was therefore interesting to test whether removing the information carried in the neighborhood of SWRs would affect the CNN's performance. Figure 3. Accuracy rate of RCIs is correlated with the intrinsic frequency of SWRs in the WT mice. The post-learning increase of SWRs' frequency was present both in the training (a1) and the testing pools of RCIs (a2). AR was correlated with SWRs' frequency in individual animals (b1-2) as well as in the pooled set of RCIs (c1-2). Notice a positive correlation between SWR's frequency and AR of RCIs belonging to "after" learning (see regression lines in b2, color bars in c2) and a reverse relationship expressed "before" learning (see regression lines in b1, color bars in c1). Regression lines indicated in black correspond to individual animals, in red to pooled RCIs. The color code is expressed as the percentage of AR (c1-2). These results (Fig. 8b) indicate that in the APP/PS1 group signals occurring prior and posterior to SWRs contained no information that could be useful in the discrimination of pre- and post-learning RCIs, and eliminating them enabled the CNN to improve its classification strategy.
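A sketch of how the "head" and "tail" of each 256 ms RCI could be removed before re-training; whether the out-of-ripple samples were cropped or zeroed, and the exact ripple boundaries used, are not specified in the text, so both variants are shown as assumptions.

```python
import numpy as np

def strip_rci(rci, swr_start, swr_stop, mode="zero"):
    """Remove the pre- and post-SWR portions of a ripple-centered interval.
    rci: 1-D array of LFP samples (e.g., 512 samples = 256 ms at 2 kHz);
    swr_start / swr_stop: sample indices of the detected ripple inside the window."""
    if mode == "zero":                     # keep the window length, blank the surroundings
        out = np.zeros_like(rci)
        out[swr_start:swr_stop] = rci[swr_start:swr_stop]
        return out
    if mode == "crop":                     # keep only the ripple itself
        return rci[swr_start:swr_stop].copy()
    raise ValueError("mode must be 'zero' or 'crop'")
```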
Importantly, while the CNN was analyzing only the content of SWRs, although the resulting AR was similar to the level expressed in APP/PS1 mice (p = 0.076, t = 2.607, t-test with Bonferroni correction) the percentage of HR RCIs was still significantly higher in the WT compared to APP/PS1 animals (p = 0.049, t = 2.869, t-test with Bonferroni correction) (Fig. 8). Moreover, the distribution of correctly classified RCIs did not change considerably and was still gaussian-like shaped in the APP/PS1 group and monotonically increasing with the number of sessions in the WT group (see dashed blue and dashed red lines, respectively, Fig. 7). Discussion In this study we compare the performance of CNN algorithms to discriminate pre-and post-learning RCIs in the two groups of related animals, namely, the APP/PS1 mice which are a model of Alzheimer disease (AD) and their littermate kin (wild-type, WT). We found that whereas CNN efficiently recognized RCIs as belonging to pre-or post-learning sessions in the WT group, it failed to do so in the APP/PS1 mice. Indeed, whereas the mean AR of SWRs belonging to the WT group was equal to 66%, in the APP/PS1 animals it was at the chance level (50%), suggesting that CNN's www.nature.com/scientificreports/ classification process was randomized in this group. This was further supported by a difference between the two groups in the distribution of RCIs correctly recognized in a given number of sessions. Whereas in APP/PS1 animals this distribution had a gaussian-like shape, indicating a random process of RCIs' classification, in the WT group the number of recognized RCIs increased with their recognizability (i.e. with the number of sessions in which they were correctly recognized) (Fig. 7). Consequently, in the WT group, the number of highly recognizable RCIs was equal to almost 50% of tested RCIs whereas in APP/PS1 group this amount was less than 15%. These findings indicated that the CNN developed a successful strategy to discriminate pre-and post-learning RCIs in the WT group. By contrast, it was unable to find learning-related features of RCIs in the APP/PS1 mice, which were nevertheless successful in memorizing a spatial task 19 . However, since the intrinsic frequency of pre-and post-learning SWRs differed in the WT but not APP/PS1 mice we performed another deep learning experiment, in which we tested the CNN's discrimination capability after being trained on RCIs with equalized pre-and post-learning frequency distributions of SWRs. It is noteworthy that the CNN's performance in this experiment was still considerably better than in the APP/PS1 group (Fig. 6). These results demonstrated that the CNN was able to develop two different classification strategies in the WT group, based on RCIs' features which occurrence was correlated, or not correlated, with the SWRs' frequency, depending on the training data pool, but failed to find a successful algorithm allowing to discriminate pre-and post-learning RCIs in the APP/PS1 mice. This reinforces the idea that in the APP/PS1 mice, although the hippocampal neural networks are still able to generate SWRs, these latter seem to be incapable to be altered by learning. Whereas a clear difference in the CNN's ability to discriminate pre-and post-learning RCIs was found between the two groups of animals, reducing the information content of RCIs by removing pre-and post-SWR signals www.nature.com/scientificreports/ had a puzzling effect on the CNN's performance. 
Indeed, in the first two deep learning experiments, the CNN analyzed a full content of RCI of the duration of 256 ms, in which SWR occupied only approximately 25%. This proportion was chosen in order to test a putative contribution of signals generated in the close vicinity of SWR on the CNN's performance. As reported recently, disrupting neural activity following SWRs by light stimulus disrupted hippocampal-dependent memory, suggesting that these signals contribute to spatial learning 21 . Moreover, a cortical-hippocampal-cortical loop of information transmission, possibly involved in memory consolidation, was recently postulated around the time of sleep SWRs, in which patterns of cortical activity was found to be predictable for subsequent SWRs activity, that in turn initiated activation of multiple cortical areas 22,23 . Obviously, cortical signals were not involved in the CNN classification process in our experiments. Interestingly, however, in the WT group, we did found a negative effect of the elimination of CA1 signals proceeding or following SWRs on the CNN's performance, indicating their alteration by a process of learning. By contrast, in APP/PS1 animals such elimination resulted in a slight but significant improvement of the CNN's performance (Fig. 8a). This indicates that the pre-and post-SWRs signals in the APP/PS1 mice were not useful in the CNN classification process which in turn could be upgraded by reducing the amount of information to be processed. Interestingly, removing SWRs from RCIs with the remaining content conserved resulted in the complete incapability of CNN to perform a correct classification in the WT group (AR at the chance level, data not shown). This suggests that in WT, but not APP/PS1 animals, a synergy between SWR per se and surrounding signals occurs and can be recognized by the CNN whereas in APP/PS1 mice only a very content of SWRs preserved a relevant information that can be used in the classification task. In the WT group highly recognizable SWRs, that is SWRs correctly classified in at least 8 of 10 testing sessions and possibly altered by learning, constituted not more than 50% of all the population. There are some factors that may influence this proportion. First, apparently not all SWRs generated after a given learning session must carry information about the session. Indeed, only 10 to maximally 40% of SWRs were found to be involved in a replay of activity patterns representing past experience [24][25][26] . Second, the data pools corresponding to "before" and "after" learning were collected from all 6 days of learning in the 8-arm maze. In days 2-6, "before" learning, the animals were placed in experimental conditions similar to previously experienced, with the same view of the experimental room and the maze. This might activate previously formed memories of a visit in the maze and mark the content of some RCIs, which thus could become somehow similar to those generated "after" learning. Finally, the most striking results obtained in our study is the amount of RCIs that have been correctly recognized using just LFP signals as belonging to pre-or post-learning sessions in the WT group. More than 40% of RCIs were found to be highly recognizable even if the information concerning SWRs' frequency available during CNN's training was not complete and might have been misleading and almost 50%-when the information was fully available. 
Whereas it is hippocampal spiking during SWRs which represents previous experience, no spiking patterns could have been recognized by the CNN in LFP signals recorded in this study and possibly used as a key signature of post-learning RCIs. Therefore there must exist some features of RCIs, different from SWR's intrinsic frequency and spiking patterns, that are profoundly altered after learning sessions in approximately half of RCIs. What these features are and how they are related to the memory processing remains to be revealed in the future study. Material and methods Animals and surgery. The electrophysiological data analyzed in this study were taken from our previous work 19 . The mice used were double transgenic APP/PS1 mice-a mouse model of Alzheimer Disease (AD) combining cognitive and amyloid pathologies starting at 4 months old as previously reported 27 and their wildtype (WT) littermates. The APP/PS1 model resulting from the crossing of 2 lines of commercial simple transgenic mice: APPswe, Tg2576, and PS1dE9 has received ethical authorization # 3804 and 21377 from CEEA50, Bordeaux. The genotypes of animals were controlled by a polymerase chain reaction of tail biopsy. Data were collected from 6 APP/PS1 and 6 WT females (8-9 months old). The animals were held in the animal facility as described in 19. Although electrodes were implanted in different cortices as well as in dorsal hippocamp in the present study only electrophysiological activity recorded in the CA1 area was analyzed. For the implantation stereotaxic surgery under deep isoflurane anesthesia was used. Electrodes, consisting of insulated tungsten wire (diameter 35 μm, California Fine Wires), were implanted using stereotaxic coordinates into the CA1 region of left and right hippocampus (AP: + 2.0 mm, L: −/ + 1.5 mm (left or right hemisphere), VD: − 1.05 mm ), reference and ground electrodes were implanted into the cerebellum. The electromyogram (EMG) electrode was inserted into the neck muscles. The animals were housed individually during 3-4 weeks after surgery before the beginning of recordings and behavioral sessions (see 19 for more details). Experimental procedures complied with official European Guidelines for the care and use of laboratory animals (directive 2010/63/UE) and were approved by the ethical committee of the University of Bordeaux (protocol A50120159 and A16323). Before starting spatial memory tests mice were gradually food restricted to maintain their body weight at 85% of their ad libitum body weight. During the course of experiments, their access to water remained free. All procedures took place during the light cycle. Behavioral experiments. Spatial memory of the animals was tested in an eight-arm radial maze, as described previously 19 . Briefly, to provide a spatial hippocampal-dependent learning task various distal cues were positioned on the walls of the experimental room. Mice were familiarized with the radial maze and its environment during two days of habituation. As illustrated in Fig. 1a1, at the start of each daily experimental procedure, food-restricted mice stayed in the home cage with the connector plugged into the recording system for 90 min. Then, they were disconnected and placed into the maze where 3 arms were baited with food rewards. The trial was ended when all the rewards were eaten. Each animal performed six trials per daily session. 
After www.nature.com/scientificreports/ performing the trials animals were placed back to their home cage where they stayed 90 min connected to the recording system. The same procedure was used for 6 consecutive days. Data acquisition and processing. During the recording session, the mouse head connector was linked to amplifiers by a soft cable allowing free motions of the animal. Behavior was tracked with a video camera. Neurophysiological and EMG signals were acquired at 40 kHz on a 128-channel Plexon system and stored on a PC for offline analysis. Data were down-sampled to 2000 Hz using Matlab's ' decimate' procedure. Identification of REM and slow-wave sleep (SWS) was performed by visual inspection using EMG, spectrograms, delta/theta ratios as well as video-recording, as previously described 19 . Filtering of the signals was performed using Chebyshev Type II filter (order 4). Episodes of SWRs were detected in signals filtered in the 100-250 Hz frequency band. Envelopes of the narrow-band filtered signal, calculated using the Hilbert transform were z-scored. SWRs bouts were identified as epochs in which the envelope exceeded 2 SDs of the signal and reached 5 SDs, with the time points of the 2 SDs-crossing taken as the onset and offset of the SWR. SWRs spaced less than 20 ms apart were merged and longer than 100 ms were discarded. SWRs intrinsic frequency was calculated using a Hilbert transform of the signal. Deep learning experiments. To test the capability of the deep learning algorithms to recognize pre-and post-learning SWRs we used selected time windows of LFP signals recorded in CA1 during SWS, while the animal stayed in the home cage, before and after the session of a spatial memory task. As illustrated in Fig. 1b each selected interval of the duration of 256 ms was centered at a single SWR. These ripple-centered intervals (RCIs) were not overlapping and separated by at least 10 ms from neighboring SWRs. RCIs pool from the data recorded in a given experimental condition (before or after learning) in all animals of a given group (WT or APP/PS1) was used to generate testing and training pools of RCIs (Fig. 1c,d). First, 2000 RCIs were randomly chosen as the testing pool for each experimental condition. From remaining RCIs a specific training data set was extracted using 200 randomly chosen RCIs from each animal in a given experimental conditions (1200 for a group and condition). After 900 epochs of training CNN on this training pool (Fig. 1e), its performance was tested on the testing pool (Fig. 1f). Such a procedure is defined as one deep learning session. Since the training of CNN in one session involved certain random processes in initializing the weights of CNN, collecting RCIs in batches, and updating gradient descents, for one deep learning experiment we performed 10 sessions as random cross-validation, each consisting of the CNN's training with a new set of randomly chosen RCIs and a test on the same fixed data set (Fig. 1f). CNN. The general approach of a classification task in machine learning is to build a model M w with a weight vector w that gives a prediction vector y while receiving an input vector x, that is, y = M w (x) . In our case, the vector x stands for the input RCI, and the vector y is a probability distribution vector indicating the classification of x belonging to "after" learning (AL) or "before" learning (BL). 
A loss function L y,ỹ is chosen to calculate the error between the prediction y and the ground truth ỹ corresponding to the class to which x actually belongs. To minimize the loss, one may search a suitable weight vector w by applying the gradient descent (GD) method to the loss: where t is the iteration index, ∇ w is the gradient operator with respect to the variable w, and α is the learning rate that controls the convergence speed for the iteration. In our work, we chose the CNN algorithm to be the model classifying pre-and post-learning SWRs. Indeed, the CNN performed much better, with a greater than 10% average accuracy rate, than other deep learning algorithms, such as fully connected networks 28,29 or recurrent neural networks( 30-32 ). One could simply construct a CNN model as layers of modules/functions taking the output of the previous layer as its input, i.e., a composition of functions 33 . Accordingly, the CNN we used consists of three kinds of modules, namely, convolutional layers, pooling operations, and fully connected layers (Fig. 9a). Each module is a function taking vectors as its input and output. A convolutional layer acts as following where x i , y i , and w i correspond to the components of the input, output, and weight vectors of the convolutional layer, respectively. The pooling operation we adopt is max pooling (Fig. 9a) which acts as. where x i and y i also stand for the components of the input and output vectors, respectively. The fully connected layer is simply outputting the result of the input vector multiplied by the weight matrix: y k = m x k,m w m . A bias term is usually added to the output of convolutional layers and fully connected layers, and then followed by a rectified linear unit (ReLU) (Fig. 9a) defined by where b k stands for the bias and y k stands for the component of the output vector. These terms make the whole CNN model nonlinear, which is essential to achieve a general classification task. www.nature.com/scientificreports/ In this work, the CNN was designed as 16 convolutional layers in total, with 8 max pooling operations performed between convolutions, 2 fully connected layers equipped with rectifier functions, and 1 fully connected layer equipped with the softmax function (Fig. 9a). The softmax function converts the output vector into a probability distribution vector whose kth component is defined by Figure 9. CNN. (a) The CNN constructed of convolutional layers, max pooling layers, and fully connected layers returns a prediction vector indicating the classification of the input RCI belonging to "after" learning (AL) or "before" learning (BL). (b) The CNN's classification task is composed of the preparation of training data, the training of CNN for classification, and the evaluation of the trained CNN. Training CNN for a batch of RICs requires the calculation of GD of the loss of predicted results, which in turn adjusts CNN's weights and biases to improve the performance of CNN. After one epoch of training throughout all batches of RCIs, we redistribute RCIs randomly into batches and initiate the next generation of batch training. We stop training CNN after 900 epochs and then test its classification accuracy. www.nature.com/scientificreports/ within which the component of the maximal softmax is referred to as the predicted class of the input x. 
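A sketch of the architecture described above (16 convolutional layers, 8 max-pooling steps, two fully connected layers with ReLU and a final softmax layer), written here in PyTorch. Kernel sizes, channel widths and hidden layer sizes are illustrative assumptions, as is the use of CrossEntropyLoss, which combines the softmax with the inner-product loss discussed below; the optimizer step realizes the gradient-descent update w_{t+1} = w_t − α∇_w L described in the text.

```python
import torch
import torch.nn as nn

class RippleCNN(nn.Module):
    """Sketch of a 1-D CNN classifying 256 ms RCIs (512 samples at 2 kHz) as before/after learning."""
    def __init__(self, n_samples=512, channels=16):
        super().__init__()
        layers, in_ch = [], 1
        for _block in range(8):                       # 8 blocks of 2 conv layers + 1 max pool = 16 conv, 8 pools
            for _ in range(2):
                layers += [nn.Conv1d(in_ch, channels, kernel_size=3, padding=1), nn.ReLU()]
                in_ch = channels
            layers.append(nn.MaxPool1d(2))            # halves the temporal length
        self.features = nn.Sequential(*layers)
        flat = channels * (n_samples // 2 ** 8)       # 512 / 256 = 2 samples remain per channel
        self.classifier = nn.Sequential(
            nn.Linear(flat, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 2),                         # logits for "before" / "after" (softmax in the loss)
        )

    def forward(self, x):                             # x: (batch, 1, n_samples)
        return self.classifier(self.features(x).flatten(1))

model = RippleCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

batch = torch.randn(40, 1, 512)                       # batch size of 40, as stated below
labels = torch.randint(0, 2, (40,))
optimizer.zero_grad()
loss = loss_fn(model(batch), labels)
loss.backward()
optimizer.step()                                      # one gradient-descent update of weights and biases
```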
The batch size is equal to 40, and the loss function is defined accordingly in terms of ⟨y_m, ỹ_m⟩, the usual inner product of the ground-truth vector ỹ_m and the probability distribution vector y_m generated from the outcomes of the CNN. The loss function quantifies the performance; the loss for the best recognition of RCIs is equal to 0. The loss is decreased gradually by taking gradient updates on the weights and biases of the CNN, as explained by the flow chart of training the CNN (Fig. 9b). The study is reported in accordance with ARRIVE guidelines.
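A minimal sketch of the ripple-detection procedure described in "Data acquisition and processing" above (Chebyshev type II band-pass at 100-250 Hz, z-scored Hilbert envelope, 2 SD onset/offset with a 5 SD peak requirement, merging of events closer than 20 ms and rejection of events longer than 100 ms). The filter's stop-band attenuation and the way the intrinsic frequency is averaged are assumptions not stated in the text.

```python
import numpy as np
from scipy.signal import cheby2, sosfiltfilt, hilbert

def detect_swrs(lfp, fs=2000.0, low=100.0, high=250.0,
                onset_sd=2.0, peak_sd=5.0, merge_ms=20.0, max_ms=100.0):
    """Detect sharp-wave ripples in a 1-D CA1 LFP trace; return (start, stop) samples and frequencies."""
    # Order-4 Chebyshev type II band-pass (40 dB stop-band attenuation assumed)
    sos = cheby2(4, 40, [low, high], btype='bandpass', fs=fs, output='sos')
    filtered = sosfiltfilt(sos, lfp)
    # Z-scored Hilbert envelope of the ripple-band signal
    envelope = np.abs(hilbert(filtered))
    z = (envelope - envelope.mean()) / envelope.std()
    # Candidate epochs: envelope above the onset threshold
    above = z > onset_sd
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    stops = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        stops = np.r_[stops, len(z)]
    # Keep epochs whose envelope reaches the peak threshold
    events = [(s, e) for s, e in zip(starts, stops) if z[s:e].max() >= peak_sd]
    # Merge events separated by less than merge_ms, discard events longer than max_ms
    merged = []
    for s, e in events:
        if merged and (s - merged[-1][1]) < merge_ms * fs / 1000.0:
            merged[-1] = (merged[-1][0], e)
        else:
            merged.append((s, e))
    merged = [(s, e) for s, e in merged if (e - s) <= max_ms * fs / 1000.0]
    # Intrinsic frequency from the Hilbert instantaneous phase inside each event
    phase = np.unwrap(np.angle(hilbert(filtered)))
    freqs = [np.diff(phase[s:e]).mean() * fs / (2 * np.pi) for s, e in merged if e - s > 1]
    return merged, freqs
```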
7,135.8
2021-10-28T00:00:00.000
[ "Medicine", "Computer Science" ]
Stacked Sparse Auto-Encoders (SSAE) Based Electronic Nose for Chinese Liquors Classification This paper presents a stacked sparse auto-encoder (SSAE) based deep learning method for an electronic nose (e-nose) system to classify different brands of Chinese liquors. It is well known that preprocessing; feature extraction (generation and reduction) are necessary steps in traditional data-processing methods for e-noses. However, these steps are complicated and empirical because there is no uniform rule for choosing appropriate methods from many different options. The main advantage of SSAE is that it can automatically learn features from the original sensor data without the steps of preprocessing and feature extraction; which can greatly simplify data processing procedures for e-noses. To identify different brands of Chinese liquors; an SSAE based multi-layer back propagation neural network (BPNN) is constructed. Seven kinds of strong-flavor Chinese liquors were selected for a self-designed e-nose to test the performance of the proposed method. Experimental results show that the proposed method outperforms the traditional methods. Introduction In recent years, with the improvement of people's living standard, people pay more and more attention to food safety. Chinese liquors industry is a unique traditional light industry in China. However, counterfeit and fake liquor problems have been plaguing the industry and consumers. How to identify Chinese liquors quickly and accurately is an urgent problem to be solved. Traditional testing instruments, such as chromatograph and spectrometer, are expensive, bulky and inconvenient to carry, and difficult to realize rapid detection [1]. Electronic nose (e-nose) is a portable and rapid instrument inspired by olfactory systems of mammals, which has been widely used in food safety, environmental monitoring and disease diagnosis [2][3][4]. Data processing procedure of a traditional e-nose mainly consists of several steps: pre-processing, feature extraction (generation and reduction) and classification. Each step has plenty of optional methods or algorithms, and different choices may lead to different identification results. For example, Jing et al. [5] presented a new combination method for Chinese liquor classification. In the step of feature generation, ten features were selected based on information theory. In the procedure of feature reduction, they have tested performance of two methods: kernel entropy component analysis (KECA) and kernel principle component analysis (KPCA). Finally, they presented a multi-linear classifier and used the back propagation neural network (BPNN) as well as the linear discrimination analysis (LDA) for comparison. More recently, Jia et al. [6] proposed a new hybrid algorithm for Chinese liquors classification, in which man-made features were reduced using a combined KECA-LDA technique and the extreme learning machine (ELM) was applied as a classifier. Before they found this optimal KECA-LDA-ELM combination algorithm, they have tried some other combination algorithms, such as KECA-BPNN, KECA-ELM, and KECA-LDA-BPNN. Obviously, in the traditional design of an e-nose, obtaining optimal combination method is complex and time consuming. The motivation of our work is to simplify the traditional data-processing steps for e-noses using deep learning (DL) techniques. The concept of deep learning (DL) was first presented by Hinton and Bengio in 2006 [7,8], to solve the optimization problems of deep network structure. 
The stacked sparse auto-encoder (SSAE) is a kind of DL model, presented by Hinton and Ranzato et al. [9,10]. In the past few years, DL has been successfully applied to image recognition [11], speech recognition [12], and target recognition [13]. DL techniques break the limitation on the number of layers and overcome the gradient dilution and local minimum problems of traditional neural networks [14,15]. To our knowledge, DL methods have not been widely used for the data processing of e-noses. Längkvist and Loutfi [16] applied deep belief networks (DBN) and the conditional restricted Boltzmann machine (CRBM) in an e-nose to identify specific bacteria in blood and agar. Längkvist et al. [17] used two unsupervised feature learning methods, stacked restricted Boltzmann machines and stacked auto-encoders, for fast classification of meat spoilage markers. More recently, Liu et al. [18] implemented a DL technique to tackle the sensor drift problem and thereby improve the classification performance of a machine olfaction system. As far as we know, there is no report on DL based e-noses for Chinese liquor recognition. In this paper, we present a stacked sparse auto-encoder (SSAE) [19] based DL approach for Chinese liquor classification. We use the SSAE to learn inherent features automatically from the response curves of the gas sensors in an unsupervised manner. After that, the learned features are used to train a BPNN to classify different brands of liquors. The proposed method does not need the steps of preprocessing and feature extraction. In the experimental procedure, seven brands of strong-flavor Chinese liquors were selected to test the effectiveness of the proposed method. The results of the SSAE based method were compared with those of a stacked auto-encoder (SAE) [20] based BPNN [21] and an SSAE based support vector machine (SVM) [22], as well as two kinds of traditional methods. The remainder of this paper is organized as follows: Section 2 presents the method description, including the sparse auto-encoder, the SSAE based BPNN method and the traditional methods applied to the data processing of the e-nose we designed. Section 3 describes the experiments and results. Finally, conclusions are given in Section 4. Sparse Auto-Encoder As an unsupervised learning algorithm, an auto-encoder network consists of three layers: an input layer, a hidden layer and an output layer [23]. It forces the output layer to reproduce the input layer, minimizing the reconstruction error so as to extract the best expression in the hidden layer. The auto-encoder based data processing for e-noses consists of two steps (as shown in Figure 1). Firstly, the original e-nose data x = [x(1), x(2), ..., x(m)]^T is encoded from the input layer to the hidden layer (encoding): y = S(Wx + b), where y = [y(1), y(2), ..., y(n)]^T denotes the feature expression of the hidden layer; m and n are the total numbers of nodes in the input and hidden layers, respectively; W is the weight matrix and b represents the bias vector. S(\cdot) is the sigmoid function, S(x) = 1/(1 + e^{-x}). Secondly, the feature expression y is decoded from the hidden layer to the output layer (decoding): z = S(\widetilde{W}y + \widetilde{b}), which yields the reconstructed vector z = [z(1), z(2), ..., z(m)]^T, where \widetilde{W} and \widetilde{b} denote the decoding weight matrix and bias vector. The sparse auto-encoder [24] is an extension of the auto-encoder which imposes sparsity restrictions on the hidden nodes in order to control the number of activated neurons.
The system complexity and the number of parameters can be reduced because fewer neurons are activated, so the sparse auto-encoder can learn better features [25]. The cost function of the sparse auto-encoder over the entire data set is expressed as J(W, b) = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{2}\lVert z_i - x_i \rVert^2 + \frac{\lambda}{2}\sum_{i,j} w_{ij}^2 + \beta \sum_{j=1}^{n} \mathrm{KL}(\rho \,\Vert\, \hat{\rho}_j). The first term is the squared reconstruction error, indicating the difference between the input and the output. The second term is the weight decay term, used to alleviate the over-fitting problem, where λ is the weight attenuation coefficient and w_ij is the weight connecting input node i and hidden node j. The last term is the sparse penalty term, where ρ stands for the sparse target value, β is the weight of the sparse penalty term, and \hat{\rho}_j is the average activation of hidden unit j. The back-propagation algorithm is adopted to obtain the parameters W and b by minimizing the cost function J [21], where the stochastic gradient descent approach is used for training. The parameters W and b are updated in each iteration as W \leftarrow W - \varepsilon \frac{\partial J}{\partial W}, \; b \leftarrow b - \varepsilon \frac{\partial J}{\partial b}, where ε is the learning rate. A forward propagation pass is used to compute the average activation \hat{\rho}_j and the error, and the back-propagation algorithm is then applied to update the parameters W and b.
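To make the cost function and the update rule above concrete, the following NumPy sketch performs one gradient step of a single sparse auto-encoder. It is a minimal illustration rather than the paper's implementation: the untied encoder/decoder weights, the toy layer sizes, and the hyperparameter values (`lam`, `rho`, `beta`, `eps`, standing for λ, ρ, β and ε) are all assumptions of the example.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def sparse_ae_step(X, W1, b1, W2, b2, lam=1e-4, rho=0.01, beta=0.1, eps=0.1):
    """One stochastic-gradient step of a sparse auto-encoder on a batch X (N x m)."""
    N = X.shape[0]
    Y = sigmoid(X @ W1 + b1)          # encoding: hidden features (N x n)
    Z = sigmoid(Y @ W2 + b2)          # decoding: reconstruction (N x m)

    rho_hat = Y.mean(axis=0)          # average activation of each hidden unit
    # cost = reconstruction error + weight decay + KL sparsity penalty
    cost = (0.5 / N) * np.sum((Z - X) ** 2) \
         + 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2)) \
         + beta * np.sum(rho * np.log(rho / rho_hat)
                         + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

    # back-propagation of the three cost terms
    dZ = (Z - X) / N * Z * (1 - Z)
    sparse_grad = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat)) / N
    dY = (dZ @ W2.T + sparse_grad) * Y * (1 - Y)

    W2 -= eps * (Y.T @ dZ + lam * W2); b2 -= eps * dZ.sum(axis=0)
    W1 -= eps * (X.T @ dY + lam * W1); b1 -= eps * dY.sum(axis=0)
    return cost

m, n = 50, 20                          # toy sizes (the real input has 3640 dimensions)
rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.standard_normal((m, n)), np.zeros(n)
W2, b2 = 0.1 * rng.standard_normal((n, m)), np.zeros(m)
X = rng.random((32, m))
for _ in range(5):
    c = sparse_ae_step(X, W1, b1, W2, b2)
print(round(c, 4))
```

In the SSAE, two such auto-encoders are trained in succession, the hidden activations of the first serving as the input of the second.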
SSAE Based BPNN (SSAE-BPNN) An SSAE is usually built from multiple sparse auto-encoders. Figure 2 shows an SSAE composed of two sparse auto-encoders, where the hidden layer of the first sparse auto-encoder is treated as the input layer of the second. A greedy layer-wise unsupervised algorithm [18] is used to train each sparse auto-encoder independently. After the SSAE has been trained to learn the features, a multi-layer BPNN is used for classification. Instead of directly using the learned features of the SSAE, the parameters of the SSAE are used to initialize the BPNN, which then produces the features for classification. The steps of training the SSAE based BPNN are as follows: (1) Pre-training: the two sparse auto-encoders are trained in succession, and the SSAE parameters (W and b, cf. equations (1) and (2)) are obtained after this step; (2) Constructing the BPNN: the encoder layers of the SSAE are taken as the first three layers of the multi-layer BPNN, and the third layer (the second hidden layer of the BPNN) is connected through Softmax regression to the output layer, which corresponds to the types of Chinese liquors in our study (see Figure 3); (3) Initialization: the network parameters of the first three layers are initialized with the pre-training parameters; (4) Fine-tuning: the forward propagation algorithm is used to train the BPNN, and the parameters are then fine-tuned using the back-propagation algorithm with the labels.
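The four steps can be summarized structurally as in the sketch below, where random arrays stand in for the pre-trained weights of the two sparse auto-encoders; the layer sizes (3640-200-100-7) follow the settings reported later, but the code is only an illustrative skeleton, not the authors' implementation.

```python
import numpy as np

def sigmoid(a): return 1.0 / (1.0 + np.exp(-a))
def softmax(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# (1) Pre-training: in practice W1,b1 and W2,b2 come from the two sparse
#     auto-encoders; random values stand in for them here.
m, h1, h2, n_classes = 3640, 200, 100, 7
W1, b1 = 0.01 * np.random.randn(m, h1), np.zeros(h1)
W2, b2 = 0.01 * np.random.randn(h1, h2), np.zeros(h2)

# (2) Constructing the BPNN: encoder layers plus a Softmax output layer.
W3, b3 = 0.01 * np.random.randn(h2, n_classes), np.zeros(n_classes)

def forward(X):
    H1 = sigmoid(X @ W1 + b1)      # first encoder layer
    H2 = sigmoid(H1 @ W2 + b2)     # second encoder layer
    return softmax(H2 @ W3 + b3)   # class probabilities for the 7 liquors

# (3)+(4) Initialization is already done by reusing W1..b2; fine-tuning would
#     run standard back-propagation with labelled data over all three layers.
X = np.random.randn(5, m)          # 5 placeholder down-sampled response vectors
print(forward(X).shape)            # -> (5, 7)
```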
Data Processing Procedures Based on SSAE-BPNN and Traditional Methods Two traditional methods are used for comparison with the SSAE-BPNN method. The response curves of the ten gas sensors are sampled at a frequency of 100 Hz. The first traditional method was presented by Jing et al. [5]; its data-processing procedure is as follows. Firstly, the original response curves are preprocessed by wavelet denoising filters and conductivity normalization. Secondly, ten features are computed for each response curve: (1) the time at which the response reaches its maximum value; (2) the root mean square; (3) the arithmetic mean; (4) the geometric mean; (5) the harmonic mean; (6) the maximum value of the first-order derivative; (7) the time at which the maximum of the first-order derivative is reached; (8) the average differential; (9) the integral of the response curve up to its maximum value; and (10) the mean curvature. After feature generation, the dimension of the original feature space for the ten response curves is 100, and the dimension is then reduced to 20 using KECA. Finally, the classification of the Chinese liquors is performed using BPNN and SVM. The procedure of the second traditional method is illustrated by the upper part of Figure 4. Firstly, the original response curves of the ten gas sensors are preprocessed using the Savitzky-Golay filter [26]. Secondly, five features are selected for each response curve: (1) the maximum value, (2) the maximum value of the first-order derivative, (3) the response value corresponding to the maximum of the first-order derivative, (4) the minimum value of the first-order derivative, and (5) the maximum value of the second-order derivative. The dimension of the original feature space for the ten response curves is 50, and the number of features is then reduced to 19 using the PCA method [27]. The extracted features are used to classify the Chinese liquors with BPNN and SVM.
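As an illustration of the second traditional pipeline, the hedged sketch below computes the five hand-crafted features for each sensor curve and reduces the resulting 50-dimensional vectors to 19 with PCA. The Savitzky-Golay window length, the random placeholder data and the reduced sample count are assumptions of the example only.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

def five_features(curve, fs=100.0):
    """The five hand-crafted features of the second traditional method
    for a single sensor response curve (1-D array sampled at fs Hz)."""
    smoothed = savgol_filter(curve, window_length=51, polyorder=3)  # window size is an assumption
    d1 = np.gradient(smoothed) * fs           # first-order derivative
    d2 = np.gradient(d1) * fs                 # second-order derivative
    i = int(np.argmax(d1))
    return np.array([
        smoothed.max(),      # (1) maximum response value
        d1.max(),            # (2) maximum of the first-order derivative
        smoothed[i],         # (3) response value at that maximum
        d1.min(),            # (4) minimum of the first-order derivative
        d2.max(),            # (5) maximum of the second-order derivative
    ])

# placeholder recordings (30 samples x 10 sensors) -> 50 features each -> PCA to 19
X_raw = np.random.rand(30, 10, 3640)
X_feat = np.array([np.concatenate([five_features(c) for c in sample]) for sample in X_raw])
X_red = PCA(n_components=19).fit_transform(X_feat)
print(X_red.shape)   # (30, 19); the reduced vectors feed a BPNN or SVM classifier
```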
Feature selection for the traditional methods is usually manual, which is time-consuming and often not well matched to the data. For this reason, we present a deep learning method that learns the features automatically instead of choosing them by hand. Compared with the relatively complicated procedure of the traditional methods (see the upper part of Figure 4), the structure of the proposed SSAE-BPNN method (see the lower part of Figure 4) is quite concise: the SSAE learns features directly from the down-sampled response curves, and the SSAE based BPNN is then applied to classify the Chinese liquors. Experimental Materials Seven kinds of strong-flavor Chinese liquors were used in the experiments: Bianfenghu (BFH), Bainianwanjiu (BNWJ), Hongjinjiu (HJJ), Lanjinjiu (LJJ), Luzhoulaojiao (LZLJ), Mianzhudaqu (MZDQ), and Niulanshan (NLS). Table 1 provides detailed information on the seven kinds of Chinese liquors. Electronic Nose The self-designed e-nose system [28] for Chinese liquor identification consists of three parts: a liquor dynamic evaporation and sampling device, a sensor chamber reaction device, and a control and data acquisition system (shown in Figure 5). A physical picture of the e-nose experimental platform is shown in Figure 6. Through its hardware and software design, the e-nose system realizes an automated sampling scheme with a friendly user interface. After the experimental condition parameters are set and the liquor sample is dropped in, the sampling process runs automatically. The sensor array consists of ten types of metal oxide semiconductor (MOS) sensors of the TGS series (TGS2602, TGS2611, among others). MOS sensor arrays exhibit slow temporal dynamics; a reservoir computing approach could be used to overcome this limitation, allowing chemicals of interest to be identified and quantified continuously and reducing measurement delays [29]. The experimental procedure is as follows: 5 μL of each kind of Chinese liquor was dripped into the inlet of the evaporator chamber using a pipette gun.
Then the evaporator chamber was heated to a constant temperature of 65 °C. After the liquid liquor had evaporated into gas, the sample was drawn into the reaction chamber with an air pump, and the gas came into contact with the sensor array. Finally, the response curves were saved. After each experiment, the evaporator chamber and the reaction chamber were cleaned for 3 min and 1 min, respectively. The experimental process was repeated 30 times for each kind of Chinese liquor, yielding a total of 210 groups of data. Parameters Setting for SSAE-BPNN DeepLearn Toolbox [30] is an open-source software library that includes popular machine learning and artificial intelligence techniques, such as Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), Stacked Auto-encoders (SAE) and Deep Belief Networks (DBN). We used the ANN and SAE algorithms in DeepLearn Toolbox to realize the SSAE-BPNN, and added an SVM algorithm to the SAE in DeepLearn Toolbox to realize the SSAE-SVM. Figure 7 presents the recorded response curves of the ten gas sensors at a sampling frequency of 100 Hz. The recording time for each gas sensor was 364 s, so the initial dimension of the 100 Hz data (364,000) was too large to use as the input of the proposed method. Therefore, the analyzed data were collected at an interval of 1 s, i.e., the recorded response data were down-sampled from 100 Hz to 1 Hz before being fed to the SSAE. As shown in Figure 7, when the data were down-sampled to 1 Hz, the overall trend of the response curves did not change, so the down-sampling had almost no effect on the feature learning. Therefore, the total number of analyzed data points for each experiment was 3640, and we accordingly chose 3640 nodes for the input layer of the constructed SSAE-BPNN.
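A minimal sketch of this down-sampling step, assuming simple decimation (one value per second) and a random placeholder in place of the real recordings:

```python
import numpy as np

fs = 100                                 # original sampling rate (Hz)
t_rec = 364                              # recording time per sensor (s)
curves = np.random.rand(10, fs * t_rec)  # placeholder for ten 100 Hz response curves

# Down-sample each curve from 100 Hz to 1 Hz and concatenate the ten sensors
# into a single 3640-dimensional vector for the SSAE input layer.
down = curves[:, ::fs]                   # shape (10, 364)
x = down.reshape(-1)                     # shape (3640,)
print(x.shape)
```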
To alleviate possible over-fitting caused by the small number of samples, we wanted to use a simple network structure to implement deep learning [30]. To achieve better classification results, we carried out tests with different numbers of hidden layers and hidden-layer nodes for the SSAE. During these tests, the number of nodes in each hidden layer was set to 200, and the numbers of iterations for both the pre-training and fine-tuning procedures were set to 10; the other parameter settings are listed in Table 2. The test results showed that the classification rate cannot be effectively increased by increasing the number of hidden layers. As shown in Figure 8, when the number of hidden layers is set to 3 or 4, the classification accuracies are 91.71% and 91.28%, respectively, both lower than the 93.04% obtained with two hidden layers. Therefore, the number of hidden layers of the SSAE was set to 2 in our subsequent study. Figure 9 illustrates the influence of the number of hidden-layer nodes on the classification accuracy, where the horizontal axis is the number of nodes in the 1st hidden layer and the legend indicates the number of nodes in the 2nd hidden layer; the classification accuracy obtained for each combination of the two hidden-layer sizes is shown. The optimal settings for the numbers of nodes in the 1st and 2nd hidden layers were 200 and 100, respectively. Since seven kinds of Chinese liquors were tested in the experiment, the number of nodes in the output layer was set to 7.
Figure 10 shows the influence of the sparse target value ρ and the sparse penalty weight β on the classification accuracy, from which the optimal settings were determined to be 0.01 and 0.1, respectively. Using a similar selection procedure, we set the learning rates for the SSAE (ε_1) and the BPNN (ε_2) to 0.1 and 1, respectively. All the parameter settings for the SSAE-BPNN are listed in Table 2.
SSAE-BPNN Considering the small number of samples, we adopted cross-validation to evaluate the performance of the proposed SSAE-BPNN and the traditional methods; cross-validation also helps to mitigate over-fitting [31]. The 210 groups of data were randomly divided into ten groups, i.e., ten-fold cross-validation was used. When any one group was used as the testing set, the other nine groups were used as the training set, and the average of the ten classification accuracies was taken as the final result. In this work we adopted 10-fold cross-validation rather than leave-one-out cross-validation because the latter costs more time while giving the same performance. The training process was divided into two steps: the first is an unsupervised pre-training process, and the second is a supervised fine-tuning process. Figure 11 presents the relationship between the number of iterations and the classification accuracy, from which we can see that with 10 iterations the classification accuracy already exceeds 90%. As the number of iterations increases, the accuracy increases slowly; when the number of iterations reaches 90, the highest classification accuracy of 96.67% is obtained. Figure 11. Influence of the number of iterations on the classification accuracy. Figure 12 shows the relationship between the mean squared error (MSE) and the number of iterations during the fine-tuning process; when the number of iterations reaches 90, the MSE converges to 0.
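A sketch of this evaluation protocol, with the SSAE-BPNN training stubbed out; the stratified split (keeping the seven brands balanced across folds) is one reasonable reading of "randomly divided into ten groups" and is an assumption of the example.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.random.rand(210, 3640)            # placeholder feature matrix (210 recordings)
y = np.repeat(np.arange(7), 30)          # 7 liquor brands x 30 repetitions

def train_and_score(X_tr, y_tr, X_te, y_te):
    """Placeholder for SSAE-BPNN training; returns the fold accuracy."""
    return np.mean(np.random.choice(7, size=len(y_te)) == y_te)

accs = []
for tr_idx, te_idx in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    accs.append(train_and_score(X[tr_idx], y[tr_idx], X[te_idx], y[te_idx]))
print("mean accuracy over 10 folds: %.4f" % np.mean(accs))
```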
SSAE-BPNN VS SAE-BPNN and SSAE-SVM In order to demonstrate the effect of sparsity on the feature learning process, the performance of the SAE and the SSAE was compared. As shown in Figure 13 and Table 3, for the same number of iterations the classification accuracy of SAE-BPNN is lower than that of SSAE-BPNN. In addition, the performance of the BPNN and SVM classifiers was compared: an SSAE-SVM was constructed which used the features learned by the SSAE (the output of its second hidden layer) as the input of an SVM classifier. For a fair comparison, the SSAE was kept the same as the one used in the SSAE-BPNN, but the features learned by the SSAE were not fine-tuned during the training of the SVM. The test results show that SSAE-BPNN is superior to SSAE-SVM (see Figure 14 and Table 4). From Tables 3 and 4 we can also see that the cross-validation confidence interval of the SSAE-BPNN is smaller than those of both the SAE-BPNN and the SSAE-SVM, which indicates that the SSAE-BPNN has better classification precision. SSAE-BPNN VS Traditional Methods The classification results are shown in Table 3. The first traditional method (the KECA based one, cf. [5]) had been applied to process the first-generation e-nose data (see Figure 15) and obtained good classification results; however, when it was applied to the second-generation e-nose data (see Figure 7), the results were not satisfactory. Thus, the second traditional method (the PCA based one) was also used for comparison. To give an idea of the data distribution, the projected three-dimensional PCA score plot obtained with the second traditional method is shown in Figure 16. The recognition results of the two traditional methods and the SSAE based method are presented in Table 5. The second, PCA based traditional method is superior to the KECA based method, but it is still inferior to our proposed SSAE-BPNN method. The time consumed by each method is also listed in Table 5.
It can be seen that the traditional methods cost more time because of the manual feature extraction process. Although the SSAE-BPNN costs more time than the SSAE-SVM, the better results justify the extra cost. Figure 15. Response curves of the first-generation e-nose platform. Conclusions A novel deep learning (DL) based data-processing method has been proposed for electronic noses (e-noses) to classify different brands of Chinese liquors. The proposed method uses a stacked sparse auto-encoder (SSAE) to learn features directly from the gas sensors' response data, and the learned parameters are then used to construct a new BPNN, i.e., the SSAE-BPNN. By combining the SSAE with a support vector machine (SVM), an SSAE-SVM is also constructed.
The SSAE based methods do not need the tedious and complicated steps used in traditional methods for e-noses, such as preprocessing and feature extraction (generation and reduction). To verify and compare the classification performance of the proposed method and the traditional methods, seven kinds of strong-flavor Chinese liquors were used as experimental materials with the self-designed e-nose. The results show that the SSAE-BPNN method achieves the highest classification accuracy of 96.67%, which is superior to the results of the SSAE-SVM and the two traditional methods. It is important to emphasize that traditional methods can obtain good classification results by optimizing the data-processing procedure, e.g., by combining feature extraction methods and classifiers. However, because of the many possible combinations, this optimization process is normally empirical and complicated. The proposed SSAE based DL method not only simplifies the data-processing procedure, but also obtains good classification results. In future work, to realize rapid and on-line liquor recognition with e-nose systems, we will try other deep neural network approaches, such as deep reservoir computing, to address the slow temporal dynamics of the sensor arrays.
9,496.2
2017-12-01T00:00:00.000
[ "Computer Science", "Engineering", "Materials Science" ]
A Computationally Efficient Method for Polyphonic Pitch Estimation This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum; this spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the music notes played on commonly used music instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach. I. INTRODUCTION Polyphonic pitch estimation plays an important role in music signal analysis. It can be used for the detection of musically relevant features such as melody and harmony [1]. In the case of content-based music retrieval, the "automatic" extraction of melody information is a crucial element for any music retrieval system [2]. Another potential application is assisting structured audio coding [3,4]. A number of approaches have been proposed in the literature. Klapuri proposed a polyphonic pitch estimation algorithm based on an iterative method [5], which was further explored for music transcription [6]. In this method, the predominant pitch of the concurrent musical sound is estimated first. Then the spectrum of the sound with the predominant pitch is estimated and subtracted from the mixture, and the estimation and subtraction are repeated iteratively on the residual signal. Recognizing a note in note mixtures is a typical pattern recognition problem. Therefore, some approaches transform polyphonic pitch estimation into a pattern recognition problem, which is then solved by employing machine learning methods such as neural networks [7,8] and support vector machines [9,10]. Other methods such as Bayesian inference [11,12,13], sparse coding [14], and non-negative matrix factorization [15] have also been investigated. More detailed reviews of the state of the art in polyphonic pitch estimation can be found in [16]. The aim of this article is to describe a computationally efficient method for polyphonic pitch estimation. The method consists of a time-frequency analysis phase and a post-processing phase, and novel techniques are used in both phases to increase the computational efficiency. In the post-processing phase, neither iterative processing nor machine learning is needed. First, a preliminary estimation finds all possible pitch candidates, which may include extra estimations. Then the incorrect estimations are removed according to the spectral irregularity and knowledge of the harmonic structures. The post-processing phase mainly involves peak-picking, addition and subtraction operations, so its computational overhead is negligible. Accordingly, the computational cost of the method chiefly depends on the time-frequency analysis part. The constant-Q Fast Resonator Time-Frequency Image (RTFI) has been selected as the basic time-frequency analysis tool. The RTFI is employed here mainly because it can be implemented by the simplest filter banks. In addition, fast implementations of such filter banks can further improve the computational efficiency.
As a result, the overall approach runs 3 times faster than real time on a standard PC equipped with a 2.0 GHz Pentium processor. The method was also evaluated in the multiple fundamental frequency frame-level estimation task of MIREX 2007 [17]. The achieved results demonstrate the high performance and computational efficiency of the new approach: the method was the fastest and ranked third in overall performance among the 16 submitted systems. Compared to the state-of-the-art approaches, it is more than 13 times faster, with only slightly worse performance (the accuracy of the state-of-the-art method is 60.5%, whereas our method's accuracy is 58.2%). The paper is organized as follows. Section II briefly introduces a new time-frequency analysis tool called the Resonator Time-Frequency Image (RTFI), and the motivation for selecting Fast RTFI constant-Q analysis. Section III describes the new polyphonic pitch estimation method; notably, Subsection III.C explains the novelty of the proposed method. Section IV describes the experimental setup and reports the performance evaluation, and Subsection IV.F compares the method with the other state-of-the-art methods evaluated in MIREX 2007. Finally, Section V summarizes the main results and discusses possible extensions and future work. A. Frequency-dependent Time-frequency Analysis A Frequency-Dependent Time-Frequency (FDTF) analysis may be defined as FDTF(t, \omega) = \int_{-\infty}^{+\infty} s(\tau)\, w(\tau - t, \omega)\, e^{-j\omega(\tau - t)}\, d\tau. (1) Unlike the STFT, the window function w of an FDTF may depend on the analytical frequency ω. This means that time and frequency resolutions can be tuned according to the analytical frequency. Equation (1) can also be expressed as the convolution FDTF(t, \omega) = s(t) * I(t, \omega), (2) with I(t, \omega) = w(-t, \omega)\, e^{j\omega t}. (3) Equation (1) is more suitable for a transform-based implementation, whereas equation (2) leads to a straightforward implementation as a filter bank whose impulse response functions are given by equation (3). A novel time-frequency representation, known as the Resonator Time-Frequency Image (RTFI), has been developed. Its main feature is that it uses a first-order complex resonator filter bank to implement a frequency-dependent time-frequency analysis. This choice was made because of the flexibility it offers with regard to time and frequency resolution, and the simplicity and computational efficiency of an implementation based on first-order filters. B. Resonator Time-Frequency Image The Resonator Time-Frequency Image (RTFI) can be described as RTFI(t, \omega) = r(\omega) \int_{0}^{t} s(\tau)\, e^{(-r(\omega) + j\omega)(t - \tau)}\, d\tau = s(t) * I_R(t, \omega), (4) with the impulse response I_R(t, \omega) = r(\omega)\, e^{(-r(\omega) + j\omega)t}, \; t \ge 0. (5) In the above equations, I_R denotes the impulse response of the first-order complex resonator filter with oscillation frequency ω, and the factor r(ω) before the integral in equation (4) normalizes the gain of the frequency response when the resonator filter's input frequency equals the oscillation frequency. The decay factor r depends on the frequency ω and determines the exponential window length and thus the time resolution; it also determines the bandwidth (i.e. the frequency resolution).
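As a rough illustration of how such a resonator bank can be realized in discrete time, the sketch below runs one first-order complex filter per analysis frequency over a test tone and reports the energy in dB. The discretization (pole at e^{(-r+jω)/f_s} with the gain normalized at resonance), the constant-Q rule r(ω) = ω/Q, and the coarse one-semitone grid are assumptions of the example, not the paper's exact implementation.

```python
import numpy as np

def rtfi(x, fs, freqs, Q=17.0):
    """Discrete RTFI: one first-order complex resonator per analysis frequency."""
    out = np.empty((len(freqs), len(x)), dtype=complex)
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        r = w / Q                              # decay factor: sets bandwidth and time resolution
        a = np.exp((-r + 1j * w) / fs)         # filter pole
        g = 1.0 - np.exp(-r / fs)              # unit gain at the resonance frequency
        acc = 0.0 + 0.0j
        for k, xk in enumerate(x):             # y[k] = a * y[k-1] + g * x[k]
            acc = a * acc + g * xk
            out[i, k] = acc
    return out

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440.0 * t)                      # 1 s test tone at 440 Hz
freqs = 440.0 * 2.0 ** (np.arange(-12, 13) / 12.0)     # two octaves, 1-semitone grid
E = 10.0 * np.log10(np.abs(rtfi(x, fs, freqs)) ** 2 + 1e-12)
print(freqs[np.argmax(E[:, -1])])                      # the 440 Hz bin dominates
```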
Since the RTFI has a complex spectrum, it may be expressed as RTFI(t, \omega) = A(t, \omega)\, e^{j\varphi(t, \omega)}, where A(t,ω) and φ(t,ω) are real functions. The energy of the signal may then be given by Energy(t, \omega) = A^2(t, \omega). In this work, it is proposed to use a first-order complex resonator digital filter bank to implement a discrete RTFI. To reduce the memory required to store the RTFI values, the RTFI is separated into different time frames and the average RTFI values are calculated in each frame. Finally, the average RTFI energy is used to track the time-frequency characteristics of the music signal. The average RTFI energy spectrum can be expressed as ARTFI(g, f_k) = \mathrm{dB}\!\left( \frac{1}{M} \sum_{n=J_g}^{J_g + M - 1} \lvert RTFI(n, f_k) \rvert^2 \right), (8) where M is the number of samples in the time frame, g is the index of the frame, dB(·) converts the value to decibels, and the ratio of M to the sampling rate is the duration of the frame used in the averaging. RTFI(n, f_k) denotes the value of the discrete RTFI at sampling point n and frequency f_k, and J_g denotes the sample at which frame g begins. C. Multi-resolution Fast RTFI The Fast RTFI is used to reduce redundancy in the computation. In some cases it is not necessary to keep the input sampling frequency for every filter in the filter bank: for the filters with lower center frequencies, the sampling rate can be decreased. In the fast implementation, the filter bank is separated into different octave frequency bands. The inputs of the filters in the same frequency band share the same sampling rate, and the input signal is recursively low-pass filtered and down-sampled by a factor of 2 from the highest to the lowest frequency band, according to the scheme depicted in Figure 1. This section has briefly introduced the basic idea behind RTFI analysis; a more detailed description of the discrete RTFI and its fast implementation can be found in [18,19]. D. Motivation for Selecting Constant-Q Time-Frequency Analysis Resolution is a key factor of any time-frequency analysis. In the following, it is explained why it is reasonable to select a nearly constant-Q resolution for a general-purpose music analysis system. Using the MIDI (Musical Instrument Digital Interface) note numbers, the fundamental frequency and the corresponding partials of a music note k' can be described as f_{k'}(m) = m \cdot 440 \cdot 2^{(k' - 69)/12}, \; m = 1, 2, 3, \ldots (9) Supposing that the energy of every music note is mainly distributed over the first 10 partials, i.e. Energy(f_{k'}(m)) \approx 0 for m ≥ 11, the frequency ratio between the m-th partial of note k' and the fundamental frequency of another note k can be expressed as f_{k'}(m) / f_{k}(1) = m \cdot 2^{(k' - k)/12}. (10) Because 12·log_2(m) is close to an integer for the low partial numbers, this ratio is close to 1 when k ≈ k' + 12·log_2(m); in other words, each of the first 10 partials lies close to the fundamental frequency of another note. Since the fundamental frequencies follow the exponential law (9), most of the energy is concentrated in frequency bins that are evenly spaced on a logarithmic axis. This is the reason why the required resolution is constant-Q.
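The short check below makes this concrete: under the equal-tempered tuning of equation (9) it lists how far each of the first ten partials of one note (C3 is used as an arbitrary example) falls from the nearest note fundamental, measured in semitones.

```python
import math

def midi_freq(k):
    """Equal-tempered fundamental frequency of MIDI note k (A4 = 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((k - 69) / 12.0)

k = 48  # C3
for m in range(1, 11):
    partial = m * midi_freq(k)
    # nearest note on the equal-tempered scale, and the deviation in semitones
    nearest = round(69 + 12 * math.log2(partial / 440.0))
    dev = 12 * math.log2(partial / midi_freq(nearest))
    print(f"partial {m:2d}: {partial:8.1f} Hz ~ note {nearest} ({dev:+.2f} semitones)")
```

All ten partials fall within about a third of a semitone of some note fundamental, which is why bins evenly spaced on the logarithmic axis capture most of the energy.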
E. Motivation for Selecting Fast RTFI to Implement Constant-Q Time-Frequency Analysis The proposed method is mainly used for polyphonic pitch tracking, where a joint time-frequency analysis is needed first. Either a filter bank or a constant-Q transform can be used to compute a constant-Q time-frequency spectrum. As the RTFI is implemented by the simplest possible filter bank, it is faster than any other filter-bank-based implementation. The Fast RTFI can also be compared with transform-based implementations, as follows. To use a constant-Q transform for a joint time-frequency analysis, the time signal needs to be cut into frames and a constant-Q transform performed in each frame [20]. Assuming that the pitch tracker reports pitches every 10 ms, the time interval between two successive frames is set to 10 ms, so for a 1-second signal the constant-Q transform needs to be computed 100 times. The required number of complex multiplications then depends on Q (the constant ratio of frequency to resolution), the sampling rate f_s, the lowest analytical frequency f_min, the number of octave bands N_1, and the number of frequency components per octave band N_2. A fast constant-Q transform has been proposed in [21]; it employs an FFT to calculate the constant-Q transform, which reduces the number of complex multiplications required for the time-frequency analysis of a 1-second signal. For the Fast RTFI analysis of a 1-second signal, each first-order resonator requires roughly one complex multiplication per input sample, and the filters in each successively lower octave band run at half the sampling rate, so the required number of complex multiplications is roughly f_s \cdot N_2 \sum_{j=0}^{N_1 - 1} 2^{-j} \approx 2 N_2 f_s. In the proposed method, the constant-Q factor Q is set to 17, the lowest analysis frequency f_min is 26 Hz, the number of octave bands N_1 is 9, and the number of frequency components in one octave band is 120. Accordingly, for the constant-Q analysis of a 1-second signal, the Fast RTFI implementation needs approximately 240·f_s complex multiplications, the constant-Q transform implementation needs approximately 24900·f_s, and the fast constant-Q transform implementation needs approximately 2000·f_s. The comparison clearly shows that the Fast RTFI implementation is much faster than the transform-based implementations for a constant-Q time-frequency analysis. A. System Overview An overview of the new polyphonic pitch estimation method is shown in Fig. 2. First, the input RTFI average energy spectrum is transformed into a relative energy spectrum: RES(f_k) = ARTFI(f_k) - \frac{1}{M_1} \sum_{i \in W_k} ARTFI(f_i), (13) where ARTFI denotes the input RTFI average energy spectrum, k = 1, 2, 3, … is the frequency index on the logarithmic scale, the second term on the right-hand side is the moving average of ARTFI over a window W_k of M_1 frequency bins centred on k, and M_1 is the length of the window used for the moving average. Similarly, preliminary estimates of the possible multiple pitches are found by a simple peak-picking procedure in a relative pitch energy spectrum, which is obtained from the RTFI average energy spectrum. Then a confidence measure is employed to remove pitch candidates whose harmonic components are not strongly represented. Finally, the pitches are found by investigating the spectral irregularity of the remaining candidates. These five steps are described in detail in the following subsections.
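A small sketch of expression (13); the window length M_1 and the peak-picking threshold are placeholder assumptions for the example.

```python
import numpy as np

def relative_energy_spectrum(artfi_frame, M1=31):
    """RES(f_k) = ARTFI(f_k) minus a moving average of length M1 around bin k.
    artfi_frame: 1-D average energy spectrum (dB) for one frame."""
    kernel = np.ones(M1) / M1
    moving_avg = np.convolve(artfi_frame, kernel, mode="same")
    return artfi_frame - moving_avg

frame = np.random.randn(1080)          # placeholder spectrum on the 0.1-semitone grid
res = relative_energy_spectrum(frame)
peaks = [k for k in range(1, len(res) - 1)
         if res[k] > res[k - 1] and res[k] > res[k + 1] and res[k] > 6.0]  # threshold A1 is an assumption
print(len(peaks))
```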
B. Detailed Description 1) Time-frequency Processing Based on the RTFI Analysis: In the first step, the Fast RTFI is used to analyze the input music signal and to produce a time-frequency energy spectrum. The input is a monaural music signal frame at a sampling rate of 44.1 kHz. All 1080 filters are used; their centre frequencies are set on a logarithmic scale, with a difference of 0.1 semitone between two neighbouring filters, and the analyzed frequency range is from 26 Hz up to 13 kHz. The time-frequency energy spectrum of the input frame is then used to obtain an RTFI average energy spectrum according to equation (8). This RTFI average energy spectrum is used as the only input vector for the later processing. An integer k is used to denote the frequency index on the logarithmic scale, and f_k denotes the corresponding frequency value in Hz; the mapping between k and f_k (equation (14)) is derived from the fundamental frequencies of the musical notes of the Western music scale, with neighbouring indices spaced 0.1 semitone apart. An example of the input RTFI average energy spectrum of a piano note is provided in Figure 3. 2) Extraction of Harmonic Components: In the second step, the input RTFI average energy spectrum is first transformed into the relative energy spectrum according to expression (13). Figure 3 shows the RTFI energy spectrum and its moving average. The relative energy spectrum RES(f_k) measures the energy spectrum at the k-th frequency bin relative to the energy spectrum over a frequency range near the k-th bin. If there is a peak in the relative energy spectrum at the k-th frequency index and the value RES(f_k) is larger than a threshold A_1, it is likely that there is a harmonic component at frequency index k, and the corresponding value RES(f_k) is taken as a measure of confidence in the existence of that harmonic component. 3) Preliminary Estimation of Pitch Candidates: In the third step, based on the harmonic grouping principle, the input RTFI average energy spectrum is transformed into the pitch energy spectrum (PES) and the relative pitch energy spectrum (RPES) as follows: PES(f_k) = \sum_{l=1}^{L} ARTFI(l \cdot f_k), (15) RPES(f_k) = PES(f_k) - \frac{1}{M_2} \sum_{i \in W'_k} PES(f_i), (16) where M_2 is the length of the window W'_k used to calculate the moving average, and L is a parameter that sets how many low harmonic components are considered together as important evidence for determining the existence of a possible pitch. Fig. 3. The input RTFI energy spectrum, moving average and the corresponding relative energy spectrum of a piano polyphonic note consisting of two concurrent notes with fundamental frequencies 82 Hz and 466 Hz. Similar techniques have been proposed for pitch estimation by other researchers. In [22], the authors propose a polyphonic pitch estimation approach based on summing harmonic amplitudes. There are two main differences between the method described in this paper and the approach introduced in [22]. First, the reference approach is based on the STFT spectrum, whereas the proposed method employs an RTFI constant-Q spectrum. Secondly, the reference approach directly sums the harmonic amplitudes and does not use a decibel scale, whereas the new method produces a pitch energy spectrum by summing the harmonic energies on a decibel scale. Our experiments demonstrate that directly summing the harmonic energies on a linear scale yields lower estimation performance.
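A sketch of equations (15) and (16) on the discrete 0.1-semitone grid, where the l-th harmonic of bin k is assumed to lie round(120·log_2 l) bins above k; the values of L and M_2 used below are placeholders, not the tuned settings.

```python
import numpy as np

def pitch_energy_spectrum(artfi_frame, L=6, bins_per_octave=120):
    """PES(f_k): sum of the dB energies at the first L harmonic positions of f_k
    on the logarithmic frequency axis (0.1-semitone grid)."""
    n = len(artfi_frame)
    offsets = np.round(bins_per_octave * np.log2(np.arange(1, L + 1))).astype(int)
    pes = np.zeros(n)
    for k in range(n):
        idx = k + offsets
        if idx[-1] < n:                       # all L harmonics inside the analysed range
            pes[k] = artfi_frame[idx].sum()
    return pes

frame = np.random.randn(1080)                 # placeholder ARTFI frame in dB
pes = pitch_energy_spectrum(frame)
rpes = pes - np.convolve(pes, np.ones(31) / 31, mode="same")   # window M2 = 31 is an assumption
print(int(np.argmax(rpes)))                   # index of the strongest pitch candidate
```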
In practical implementations, instead of using equation (15) directly, the pitch energy spectrum on the decibel scale can easily be approximated (here L is less than 10), and the deviation between the approximate and the ideal values of the pitch energy spectrum can be considered negligible for practical purposes. Two assumptions are made when determining a preliminary estimate of the possible pitches from the relative pitch energy spectrum: if there is a pitch with fundamental frequency f_k in the input signal, there should be a peak centred around the frequency f_k in the relative pitch energy spectrum, and the peak value should exceed a threshold A_2. Both assumptions are consistent with real music examples when a suitable threshold A_2 is selected. Only the low-pitch notes may have very faint first harmonic components that cannot be reliably extracted. Based on these observations, some assumptions concerning the extracted harmonic components can be made to determine whether an extracted pitch is correct. For example, if there is a pitch with a fundamental frequency higher than 82 Hz, either the lowest three harmonic components or the lowest three odd harmonic components of this pitch should all be present among the extracted harmonic components. If there is a pitch with a fundamental frequency lower than 82 Hz, four of the lowest six harmonic components should be present among the extracted harmonic components. In two typical cases, extra estimated pitches can be removed based on the above assumptions. In the first case, the extra pitch estimation is caused by a noise peak in the preliminary pitch estimation. In the second case, the harmonic components of an extra estimated pitch are partly overlapped by the harmonic components of the true pitches; in such a case, the non-overlapped harmonic components become important clues for checking the existence of the extra estimated pitch. If a polyphonic set of notes contains the two concurrent music notes C5 and G5, for example, the fundamental frequency ratio of the two notes is nearly 2/3. It is then probable that there will be an extra pitch estimation on the C4 note, because its even harmonics are overlapped by the harmonics of C5, and the C4 note's third, sixth, ninth, … harmonic components are nearly overlapped by the harmonics of G5. However, C4's first, fifth, and seventh harmonic components are not overlapped, so the extra C4 estimation can easily be identified by checking for the existence of the first harmonic component, based on the above assumption.
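The harmonic-presence rules can be expressed compactly as in the sketch below; the 3% matching tolerance and the toy example data are assumptions of the illustration.

```python
def keep_candidate(f0, present):
    """Apply the harmonic-presence rules to one pitch candidate.
    present(m) -> True if the m-th harmonic of f0 was extracted as a component."""
    if f0 >= 82.0:
        # lowest three harmonics, or lowest three odd harmonics, must all be present
        return all(present(m) for m in (1, 2, 3)) or all(present(m) for m in (1, 3, 5))
    # low pitches: at least four of the lowest six harmonics must be present
    return sum(present(m) for m in range(1, 7)) >= 4

# toy check: a 440 Hz candidate whose 1st, 3rd and 5th harmonics were extracted
extracted = {440.0, 1320.0, 2200.0}
tol = 0.03
hit = lambda m: any(abs(m * 440.0 - f) / (m * 440.0) < tol for f in extracted)
print(keep_candidate(440.0, hit))   # True, via the odd-harmonic rule
```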
The spectral value difference between two neighbouring harmonic components is small and random in most cases. But when a music note with fundamental frequency F_0 is mixed with another note whose fundamental frequency is an integer multiple nF_0, the spectral value of every n-th harmonic component becomes clearly larger than that of the neighbouring harmonic components. If two estimated pitch candidates have fundamental frequencies F_0 and F'_0 (F'_0 ≈ nF_0), with a frequency ratio that is approximately an integer n, the proposed method employs the following two steps to determine whether the higher pitch with fundamental F'_0 actually occurs.

First, the energy spectrum of the first 10n harmonic components of the fundamental frequency F_0 is calculated by an RTFI analysis with uniform resolution. The RTFI average energy spectrum of the harmonic components can be expressed as ARTFI_H(k), k = 1, 2, 3, ..., 10n, where k denotes the harmonic component index.

The second step is composed of the following operations. The Spectral Irregularity SI(n) is calculated from ARTFI_H by comparing the spectral values of every n-th harmonic component with those of its neighbouring harmonic components. According to our observations, if two estimated pitch candidates have fundamental frequencies F_0 and F'_0 with F'_0 ≈ nF_0 and the higher pitch does not occur, then SI(n) is usually small. On the other hand, if the higher pitch does occur, the overlapped harmonic components are often strengthened, so that SI(n) takes a larger value. When SI(n) is smaller than a given threshold, the overlapped higher pitch candidate is removed. The threshold is determined by experiments on a training database. In practical examples, most incorrect extra estimates caused by the overlapping of harmonic components are placed at a low integer multiple of the frequency of the true pitch. Consequently, the method proposed in this paper only considers cases for which the fundamental frequency ratio of two pitch candidates is equal to 2, 3 or 4.

C. Novelty of the Proposed Method

In this subsection, the novelty and promising features of the proposed method are outlined. In the time-frequency processing part, the Fast RTFI constant-Q time-frequency analysis is employed for polyphonic pitch tracking for the first time. As explained in Section II.E, it is much more computationally efficient than other implementations.

In the post-processing phase, the method first estimates pitch candidates by peak-picking from the relative pitch energy spectrum. Since sounds with an integer fundamental frequency ratio produce very similar peak patterns in a pitch energy spectrum, an extra incorrect estimation usually has an integer ratio to the fundamental frequency of an identified pitch. This problem mainly arises from the coinciding frequency partials of Western polyphonic music notes.

The state-of-the-art method solves the problem by employing an iterative estimation and cancellation scheme [5]. The basic idea is to first find a predominant pitch and estimate its spectrum. The estimated spectrum is then cancelled from the mixture to produce a residual signal before the next estimation. The estimation and cancellation are repeated iteratively on the residual signal. The scheme may also involve estimating the polyphony (the number of concurrent notes) of the analyzed sound.
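For completeness, the spectral-irregularity test of step 5 can be sketched as follows. Since the exact SI expression is not reproduced above, the form used here (mean excess of every n-th harmonic component over its immediate neighbours) is an assumption, as are the indexing details and the threshold handling.

    import numpy as np

    def spectral_irregularity(artfi_h, n):
        # Assumed SI form: mean excess (in dB) of every n-th harmonic component
        # over the average of its two neighbouring components.
        # artfi_h[k-1] corresponds to harmonic index k = 1 .. 10*n.
        excess = []
        for k in range(n, 10 * n + 1, n):      # harmonic indices n, 2n, 3n, ...
            idx = k - 1                        # 0-based position
            if 1 <= idx < len(artfi_h) - 1:
                neighbours = 0.5 * (artfi_h[idx - 1] + artfi_h[idx + 1])
                excess.append(artfi_h[idx] - neighbours)
        return float(np.mean(excess)) if excess else 0.0

    def keep_overlapped_candidate(artfi_h, n, si_threshold):
        # Keep the higher candidate (f' ~= n * f0) only if SI exceeds the threshold.
        return spectral_irregularity(artfi_h, n) >= si_threshold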
To solve the problem of coinciding frequency partials, the basic idea of the newly proposed method is completely different from the state-of-the-art approach introduced above. The proposed method provides a much simpler solution and requires neither an iterative procedure nor an estimate of the polyphony. In the new method, the preliminary estimation finds all possible pitch candidates. Pitch candidates are then removed if their harmonic components are not sufficiently represented in the energy spectrum. Finally, if the fundamental frequencies of any two pitch candidates have an integer ratio, the spectral irregularity is calculated to remove the pitch candidate that is considered an erroneous estimation caused by coinciding frequency partials from a lower pitch. By employing these techniques, the proposed method is more computationally efficient while presenting performance comparable with other state-of-the-art methods.

A. Performance Evaluation Criteria

Three criteria were used to evaluate the performance of the polyphonic pitch estimation methods: Precision, Recall, and F-measure. Given a reference fundamental frequency, if there is an estimation that deviates by no more than 3% from the reference fundamental frequency, it is considered a correct detection; otherwise, the reference is counted as a false negative (FN). Any estimation that deviates by more than 3% from all reference fundamental frequencies is considered a false positive (FP). Precision, Recall, and F-measure are defined as

P = N_CD / (N_CD + N_FP),  R = N_CD / (N_CD + N_FN),  F = 2PR / (P + R),

where N_CD, N_FP, and N_FN denote the total numbers of correct detections, false positives and false negatives, and P and R denote the values of precision and recall, respectively. In addition, the Overall Accuracy, as defined in [9], is also used for the performance comparison with other state-of-the-art methods.

B. Setting the Method Parameters

The real performance of an estimation method may be overestimated when its parameters have been optimally selected to fit the test data. To prevent this, separate training and test datasets were constructed. It is quite difficult to record a large number of polyphonic samples from different musical instruments and label their polyphonic content; a preferred approach is to produce the polyphonic samples by mixing real recorded monophonic samples of different music instruments.

In these experiments, two different monophonic sample sets were used to create the training and test datasets. Monophonic sample set I consisted of a total of 755 monophonic samples from 19 different instruments, such as piano, guitar, winds, strings, and brass. To obtain a fairer evaluation of practical cases, monophonic sample set II was used to generate the test dataset. Compared to Set I, the monophonic samples in Set II cover the same types of instrumentation but were played by different performers on instruments from different manufacturers. Set II included 23 different instrument types, with a total of 690 monophonic samples in a five-octave pitch range from 48 Hz to 1500 Hz.
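Returning to the evaluation criteria in Section A above, a minimal sketch of the frame-level scoring rule follows. The 3% tolerance and the P, R, F definitions follow the text; the greedy one-to-one matching between estimates and references is an assumption.

    def evaluate_frame(reference_f0s, estimated_f0s, tol=0.03):
        # An estimate within 3% of a reference f0 counts as a correct detection;
        # unmatched references are false negatives, unmatched estimates false positives.
        remaining = list(estimated_f0s)
        n_cd = 0
        for ref in reference_f0s:
            match = next((e for e in remaining if abs(e - ref) <= tol * ref), None)
            if match is not None:
                n_cd += 1
                remaining.remove(match)
        n_fn = len(reference_f0s) - n_cd
        n_fp = len(remaining)
        precision = n_cd / (n_cd + n_fp) if (n_cd + n_fp) else 0.0
        recall = n_cd / (n_cd + n_fn) if (n_cd + n_fn) else 0.0
        f_measure = (2 * precision * recall / (precision + recall)
                     if (precision + recall) else 0.0)
        return precision, recall, f_measure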
All the monophonic samples in Set I and Set II were selected from the RWC instrument sound database [23]. Every instrument sample was recorded at three levels of dynamics (forte, mezzo, piano) across the total range of that instrument. Since different instruments are generally played at different strengths, the natural amplitudes of the monophonic samples were kept rather than normalized, so that polyphonies were constructed with different energy ratios. A large number of polyphonic samples was generated by randomly mixing these monophonic samples: each polyphonic sample was generated by first selecting an instrument and then a random note from that instrument's playing range. Based on monophonic sample set I, a total of 11,000 polyphonic samples with polyphony from two to six notes were generated for the training dataset. Similarly, monophonic set II was used to generate 11,000 polyphonies for the test dataset. The size of every polyphonic subset in the training and test datasets is described in Table 2. All the following test experiments were performed on the whole test dataset, which was divided into five subsets according to the polyphony of the mixed samples.

The described method has eight parameters: L, M_1, M_2, A_1, A_2 and the thresholds of spectral irregularity. These parameters were tuned on the training dataset, with the candidate values selected by a heuristic method. Table 3 reports the values tried for each parameter. About 15,000 parameter combinations were tried; the values that yielded the best average F-measure on the training dataset were selected, and the parameters were then fixed when the method was evaluated on the test dataset.

C. Performance and Robustness

The method was tested on the test dataset and achieved F-measures of 89%, 87%, 84%, 81%, and 78% on polyphonic mixtures ranging from two to six simultaneous sounds, respectively. To test robustness, pink noise was added to the polyphonic mixtures at different signal-to-noise ratios. The pink noise was generated in the frequency range of 50 Hz to 10 kHz. The signal-to-noise ratio refers to the ratio between the clean input signal power and the added pink noise power. Figure 6 shows the F-measure of the new method at different levels of added pink noise, where an F-measure of 1 indicates optimal performance. In general, the method is robust even at severe noise levels. The tested samples were classified into five sample subsets according to the polyphony of the mixed samples; in Figure 6, for example, the F-measure corresponding to polyphony 2 denotes the F-measure estimated on the subset in which every polyphonic sample consists of a two-note mixture.

D. Comparison Experiments With and Without Relative Spectra

The described method uses the relative spectra (relative energy spectrum and relative pitch energy spectrum). A comparison experiment was made to evaluate how the use of the relative spectra improves the method's performance. The method was tested on every polyphony subset of the test dataset, and the results with and without the relative spectra are reported in Table 4. The results demonstrate that the application of the relative spectra improves the method's performance.
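As a sketch of the noise-robustness setup in Section C above, the snippet below mixes band-limited pink noise into a signal at a target SNR. The synthesis route via spectral shaping of white noise is an assumption, since the text does not specify how its pink noise was generated.

    import numpy as np

    def add_pink_noise(signal, snr_db, fs=44100, f_lo=50.0, f_hi=10000.0, rng=None):
        # Synthesize band-limited pink (1/f power) noise by shaping white noise
        # with a 1/sqrt(f) magnitude envelope, then scale it so that
        # signal_power / noise_power matches the requested SNR in dB.
        rng = np.random.default_rng() if rng is None else rng
        n = len(signal)
        spec = np.fft.rfft(rng.standard_normal(n))
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        shape = np.zeros_like(freqs)
        band = (freqs >= f_lo) & (freqs <= f_hi)
        shape[band] = 1.0 / np.sqrt(freqs[band])
        pink = np.fft.irfft(spec * shape, n=n)

        sig_power = np.mean(signal ** 2)
        noise_power = np.mean(pink ** 2)
        target_noise_power = sig_power / (10.0 ** (snr_db / 10.0))
        pink *= np.sqrt(target_noise_power / noise_power)
        return signal + pink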
E. Tradeoff Between Recall and Precision

Precision is the percentage of the transcribed notes that are correct, and Recall is the percentage of all reference notes that are found. There is an inherent tradeoff between Precision and Recall, and depending on the application, better Precision or better Recall may be preferred. For example, in some music transcription systems extra incorrect estimations in the result are very harmful, so better Precision is preferred. However, if the output will be refined further in combination with higher-level knowledge, better Recall is preferred.

The tradeoff between Precision and Recall can be controlled by adjusting the thresholds A_1, A_2 and the thresholds of spectral irregularity. In this method, harmonic components are extracted from the relative energy spectrum by peak-picking. Although peaks with larger values have a higher probability of representing harmonic components, some large peaks may still represent noise. Thus, only peaks with values larger than the threshold A_1 are considered to represent harmonic components. When A_1 is set to a small value, more true harmonic components may be extracted, but more noise peaks are also incorrectly taken as harmonic components. As a result, more true notes may be found, but incorrect estimations also increase. Therefore, when A_1 is set low, the method achieves better Recall at the cost of lower Precision. Similarly, if the threshold A_2 and the thresholds of spectral irregularity are set low, the estimation tends toward better Recall; otherwise it tends toward better Precision. Figure 7 shows the estimation performance (F-measure, Recall, Precision) of the method with two different parameter sets. Compared with the left image (small parameter values), the Precision shown in the right image (large parameter values) increases at the price of a lower Recall.

F. MIREX 2007 Results: Performance Comparison with Other State-of-the-Art Methods

To compare our technique with other state-of-the-art approaches, the new method was submitted to the multiple fundamental frequency frame-level estimation task of MIREX 2007 [17]. In this evaluation task there were 28 test files, each with a 30-second duration: 20 real recordings and 8 files synthesized from RWC samples. The summary results of the top 8 methods are reported in Table 5. In the evaluation, our method (labeled as team 'ZR') ranked third among the 16 submitted approaches. The difference in results between our method and the best method (team 'RK') was minor, whereas our method was approximately 13 times faster. The algorithm has been implemented as Matlab M-files and MEX-files; the execution time on a 2 GHz Pentium processor is about one third of the duration of a monaural audio recording.
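To visualise the tradeoff described above, one can sweep A_1 and re-score a labelled set of frames. The sketch below assumes a placeholder run_method(audio, a1) standing in for the full estimation pipeline, and reuses a frame-scoring routine such as the one sketched earlier; both are passed in as arguments.

    def precision_recall_curve(frames, a1_values, run_method, evaluate_frame):
        # frames: sequence of (audio_frame, reference_f0s) pairs.
        # Returns a list of (a1, average precision, average recall) tuples.
        frames = list(frames)
        curve = []
        for a1 in a1_values:
            p_sum = r_sum = 0.0
            for audio, reference in frames:
                p, r, _ = evaluate_frame(reference, run_method(audio, a1))
                p_sum += p
                r_sum += r
            curve.append((a1, p_sum / len(frames), r_sum / len(frames)))
        return curve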
V. CONCLUSION AND FUTURE WORK

In this article, a computationally efficient and robust method has been proposed to estimate pitches in real polyphonic music. Compared to the state-of-the-art approach, the proposed method is conceptually simple and much faster, while presenting comparable performance. In the method, the pitch estimation process can be separated into three consecutive stages; to show how each stage improves performance, the method was run on the test dataset and the result of each stage is reported. First, the preliminary estimation aims to find all possible pitch candidates; about 95% of true notes were successfully found. The method then removes the pitch candidates that do not have sufficient harmonic components in the energy spectrum, which improves the overall F-measure from 33% to 63%. Finally, possible remaining ambiguities (such as an integer ratio between fundamental frequencies) are partially resolved by investigating the spectral irregularity; this final stage increases the F-measure from 63% to 83%.

Approximately 30% of all errors are octave errors, and about 18% of all errors are due to confusion between notes with a fundamental frequency ratio of 1/3. These errors are mainly caused by coinciding frequency partials from a lower pitch. This result suggests that the coinciding-partial issue can be further addressed by combining the method with an analysis of temporal features, which have not yet been exploited. Harmonic components from the same instrument sound source often share similar temporal features, such as a common onset time, amplitude modulation and frequency modulation. Harmonically related frequency components with similar temporal features should therefore have a higher probability of representing the same note than those with different temporal features.

For example, a polyphonic note combination may consist of two notes, A3 and A4, where A3 is played by a piano and A4 by a violin. It is very difficult to make a polyphonic estimation in this case, because the harmonic components of A4 are completely overlapped by the even harmonic components of A3. However, such a difficult case may be resolved by using temporal features. As shown in Figure 8, the blue lines denote the first four odd harmonic components of A3, and the red/magenta lines denote the first four even harmonic components of A3. It can be clearly seen that the energy spectra of the first four even harmonic components show different temporal features than those of the first four odd harmonic components. This difference indicates that the even harmonic components are probably shared with another musical note.

The remaining errors are mainly related to the fact that the timbres of the instruments may differ greatly from each other, so assumptions concerning spectral harmonic characteristics are unlikely to suit all instruments. Further improvements could be achieved by developing efficient instrument recognition algorithms: a recognition algorithm could first estimate the dominant instrument type of the music signal being analyzed, and the method could then use the known spectral harmonic characteristics of that instrument type. In this situation, instrument recognition need only be accurate enough to place the musical signal in the class of an instrument with similar spectral harmonic characteristics.
Figure and table captions:
Fig. 2. Overview of the new polyphonic pitch estimation method.
Fig. 4. Relative pitch energy spectrum of a violin example consisting of four concurrent notes with fundamental frequencies 266 Hz, 299 Hz, 353 Hz and 403 Hz.
Fig. 5. RTFI average energy spectrum of the first 30 harmonic components.
Fig. 6. F-measure of test results of the proposed method with a clean signal or various levels of added noise.
Fig. 7. F-measure, Recall and Precision results for the proposed method with different parameters.
Fig. 8. Energy changes of harmonic components of a polyphonic note with the two concurrent notes A3 and A4.
Table 5. Results of the Multiple Fundamental Frequency Frame-Level Estimation Task of MIREX 2007.
7,859.6
2009-01-01T00:00:00.000
[ "Computer Science" ]
Identification of the probability density of the sum of a signal with Gaussian mixture distribution and Gaussian white noise

In this paper, the probability density of a process that is the sum of white noise and a signal with a Gaussian mixture distribution is obtained. Using the estimates of σ², γ_4 and γ_6 and the identification algorithm, the probability density of the payload signal is reconstructed.

Introduction

The problem of analyzing a process comprising signal and noise is common in engineering practice and is most often encountered when solving optimal signal reception tasks [1]. Here, processing of the received signals is typically based on the methods of mathematical statistics. A classic example of an optimum receiver is a receiver of known (deterministic) signals in the presence of white Gaussian noise. The main difficulties arise when the signal is not deterministic but stochastic, particularly if it is necessary not only to detect the signal, but also to determine its characteristics and estimate its parameters [2]. Detecting and estimating the characteristics of fluctuation signals in the presence of noise is one such application. It should be noted that the quality of the solution of this problem depends on the correct choice of signal and noise models. One of the models that describes the distribution of a fluctuation signal is a mixture of distributions [3]; in [4], a mixture of distributions is used to describe the probability density of a speech signal.

In this paper we propose a solution to the problem of obtaining the probability density of the process that is the sum of white Gaussian noise and a signal with a Gaussian mixture distribution, i.e.

x(t) = s(t) + ξ(t),   (1)

where s(t) is the signal and ξ(t) is Gaussian white noise. We also consider the cumulant coefficients of the process (1) and their use for detecting payload signals and recovering their probability density.

To obtain the probability density of the process x(t) we must use the convolution formula, which relates the probability density of a sum to the probability densities of its terms:

W_x(x) = ∫ W_s(s) W_ξ(x − s) ds.   (2)

As the noise ξ(t) we consider a Gaussian process with zero mean, whose probability density is described by

W_ξ(x) = (1 / (√(2π) σ_ξ)) exp(−x² / (2σ_ξ²)).   (3)

Direct evaluation of the probability density (2) is not always a trivial task and, in many cases, an analytical expression cannot be obtained.

The probability density

Let us consider a case where the probability density can be obtained directly from expression (2). Let the distribution of the process s(t) be described by the two-component Gaussian mixture

W_s(x) = (d_1 / (√(2π) σ_1)) exp(−(x − m_1)² / (2σ_1²)) + (d_2 / (√(2π) σ_2)) exp(−(x − m_2)² / (2σ_2²)),   (4)

where the coefficients d_1 and d_2 satisfy the conditions d_1 + d_2 = 1, d_1, d_2 ≥ 0.   (5)

Let us find the probability density of the sum of the process (4) and the Gaussian noise (3) using expression (2). The integral in (2) becomes

W_x(x) = (d_1 / √(2π(σ_1² + σ_ξ²))) exp(−(x − m_1)² / (2(σ_1² + σ_ξ²))) + (d_2 / √(2π(σ_2² + σ_ξ²))) exp(−(x − m_2)² / (2(σ_2² + σ_ξ²))).   (6)

From expression (6) we can conclude that the resulting probability density is again a mixture of Gaussian distributions, and the only difference between (4) and (6) is that the noise variance σ_ξ² is added to the variances of the components in (4). We now generalize expression (6).
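A short numerical check of this result, comparing the closed-form density (6) with a brute-force discretisation of the convolution (2); the component parameters are arbitrary illustrative values.

    import numpy as np
    from scipy.stats import norm

    # Illustrative two-component mixture parameters (arbitrary values)
    d = np.array([0.6, 0.4])          # weights, must sum to 1
    m = np.array([0.0, 0.0])          # component means
    sigma = np.array([0.5, 2.0])      # component standard deviations
    sigma_noise = 1.0                 # Gaussian noise standard deviation

    x = np.linspace(-8, 8, 1601)

    # Density of the sum according to (6): noise variance added per component
    w_sum = sum(d_i * norm.pdf(x, m_i, np.sqrt(s_i**2 + sigma_noise**2))
                for d_i, m_i, s_i in zip(d, m, sigma))

    # Brute-force check: discretised convolution (2) of the mixture with the noise
    s = np.linspace(-8, 8, 1601)
    ds = s[1] - s[0]
    w_signal = sum(d_i * norm.pdf(s, m_i, s_i) for d_i, m_i, s_i in zip(d, m, sigma))
    w_conv = np.array([np.sum(w_signal * norm.pdf(xi - s, 0.0, sigma_noise)) * ds
                       for xi in x])

    print(np.max(np.abs(w_sum - w_conv)))   # small discretisation error expected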
Let the probability density of the process s(t) be represented as a mixture of distributions with an arbitrary number of components,

W_s(x) = Σ_i d_i (1 / (√(2π) σ_i)) exp(−(x − m_i)² / (2σ_i²)),  Σ_i d_i = 1.   (7)

Then the probability density of the sum of this process and Gaussian noise, by analogy with (6), is

W_x(x) = Σ_i d_i (1 / √(2π(σ_i² + σ_ξ²))) exp(−(x − m_i)² / (2(σ_i² + σ_ξ²))).   (8)

Thus, if the probability density of the process is described by a mixture of distributions, then, using expression (8), we can always obtain an expression for the probability density of this process with additive Gaussian noise.

We now consider one of the most important cases of the Gaussian mixture distribution, the unimodal two-component Gaussian mixture. Its probability density is obtained from (4) by setting m_1 and m_2 equal to zero:

W_s(x) = (d_1 / (√(2π) σ_1)) exp(−x² / (2σ_1²)) + (d_2 / (√(2π) σ_2)) exp(−x² / (2σ_2²)).   (9)

It is easy to obtain the mean, variance and cumulant coefficients of such a mixture [5]: the mean is zero, the variance is σ² = d_1σ_1² + d_2σ_2², and the cumulant coefficients are

γ_4 = 3(d_1σ_1⁴ + d_2σ_2⁴)/σ⁴ − 3,  γ_6 = 15(d_1σ_1⁶ + d_2σ_2⁶)/σ⁶ − 45(d_1σ_1⁴ + d_2σ_2⁴)/σ⁴ + 30.   (10)

For illustration, the parameters of distribution (9) were fixed to the values given in (11). Fig. 1 illustrates that with decreasing SNR the distribution of the process tends to a Gaussian distribution.

Cumulant coefficients

Let us consider the cumulant coefficients of the process (1). Since the probability density can be uniquely defined by the infinite series of moments [6], cumulants and, consequently, cumulant coefficients, knowing the cumulant coefficients allows us to obtain the probability density in cases where it cannot be found analytically, as well as to estimate the probability density experimentally using the moment representation.

The cumulants of the distribution (2) are obtained from the logarithm of the characteristic function f(u) [6]. By the properties of the characteristic function, the characteristic function of the sum (1) is the product of the characteristic functions of the signal and the noise, so the cumulants of the process (1) are the sums of the corresponding cumulants of the signal and the noise. Since all cumulants of a Gaussian process above the second order are zero, only the variance of the process (1) differs from that of the signal, while the higher-order cumulants coincide with those of the signal. If the signal-to-noise ratio is defined as q = σ_s²/σ_ξ², then, according to expression (18), the second cumulant of the process (1) can be written as σ_s²(1 + 1/q).

To obtain expressions for the cumulant coefficients of the process (1), we use expressions (18) and (20) and the definition of the cumulant coefficients γ_k = κ_k/σ^k [6]:

γ_k(x) = γ_k(s) · (q/(1 + q))^(k/2).   (21)

Based on expression (21) we can conclude that adding Gaussian noise to the process multiplies the cumulant coefficients of the signal by a factor that depends only on the signal-to-noise ratio.

Detection of the signal

Now let us investigate the possibility of detecting a noise-like signal with probability density (9) in the presence of Gaussian noise, as well as the possibility of reconstructing its probability density. The algorithm for modeling the mixture of distributions (4) is described in detail in [7] and is not repeated here. As the parameters of the signal distribution (9) we take the parameters (11). Note that for all the examples the length of the signal realization is N = 10⁶.

Table 1 presents the simulation results. For each SNR value, the theoretical variance σ² and cumulant coefficients γ_4 and γ_6 were obtained using expressions (10) and (21), together with their estimates computed from the simulated signals. Using the estimates of σ², γ_4 and γ_6 and the identification algorithm proposed in [8], we can reconstruct the probability density of the payload signal. For this purpose, expression (21) was used to recalculate the cumulant coefficients and the variance for each SNR value. The theoretical probability density and the probability densities of the reconstructed signals for different SNR values are compared in Fig. 2. As the reconstruction error δ we used an integral metric comparing the reconstructed density with the theoretical probability density.
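A sketch of estimating the cumulant coefficients from a simulated realisation and undoing the SNR attenuation according to (21). The mixture parameters are arbitrary, and in practice the SNR q would itself have to be estimated rather than computed from the known signal; the identification algorithm of [8] is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative unimodal two-component mixture (zero means) plus Gaussian noise
    d, sig = np.array([0.7, 0.3]), np.array([0.6, 1.8])
    sigma_noise = 1.0
    n = 10**6

    comp = rng.choice(2, size=n, p=d)
    signal = rng.normal(0.0, sig[comp])
    x = signal + rng.normal(0.0, sigma_noise, size=n)

    def gamma_coeffs(y):
        # Sample cumulant coefficients gamma_4 and gamma_6 of a (near) zero-mean sample
        y = y - y.mean()
        m2, m3, m4, m6 = (np.mean(y**k) for k in (2, 3, 4, 6))
        k4 = m4 - 3.0 * m2**2
        k6 = m6 - 15.0 * m4 * m2 - 10.0 * m3**2 + 30.0 * m2**3
        return k4 / m2**2, k6 / m2**3

    g4_x, g6_x = gamma_coeffs(x)

    # Higher-order cumulants are unchanged by the Gaussian noise, so the
    # coefficients shrink by (q/(1+q))^(k/2); invert that factor to recover
    # the signal's own coefficients (q computed from the simulated signal here).
    q = np.var(signal) / sigma_noise**2
    g4_s_est = g4_x / (q / (1 + q))**2
    g6_s_est = g6_x / (q / (1 + q))**3

    print(g4_s_est, g6_s_est)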
Fig. 2 and Table 2 show that for SNR values above 0.1 (−20 dB) the reconstructed probability density almost fully matches the theoretical one; only at an SNR of 0.1 can the identification result be unsatisfactory.

Conclusion

In this paper, a general analytical expression was obtained for the probability density of a process comprising the sum of a signal and noise, where the signal's distribution is a mixture of distributions and the noise is a Gaussian process. Analysis of the cumulant coefficients of the process demonstrated that the cumulant coefficients of the process (1) and the cumulant coefficients of the payload signal differ only by a multiplicative factor determined by the signal-to-noise ratio. This fact allows us to solve the problem of detecting a payload signal in the presence of Gaussian noise and to identify its probability density with a mixture of distributions. The possibility of detecting the payload signal and acquiring its characteristics for different signal-to-noise ratios was demonstrated with numerical modeling.
1,764
2020-01-01T00:00:00.000
[ "Engineering", "Physics", "Computer Science" ]
2H,3H-Decafluoropentane-Based Nanodroplets: New Perspectives for Oxygen Delivery to Hypoxic Cutaneous Tissues

Perfluoropentane (PFP)-based oxygen-loaded nanobubbles (OLNBs) were previously proposed as adjuvant therapeutic tools for pathologies of different etiology sharing hypoxia as a common feature, including cancer, infection, and autoimmunity. Here we introduce a new platform of oxygen nanocarriers, based on 2H,3H-decafluoropentane (DFP) as the core fluorocarbon. These new nanocarriers have been named oxygen-loaded nanodroplets (OLNDs), since DFP is liquid at body temperature, unlike gaseous PFP. Dextran-shelled OLNDs, available either in liquid or gel formulations, display spherical morphology, ~600 nm diameters, anionic charge, good oxygen carrying capacity, and no toxic effects on human keratinocytes after cell internalization. In vitro, OLNDs are more effective in releasing oxygen to hypoxic environments than the former OLNBs, as demonstrated by oxymetry. In vivo, OLNDs effectively enhance oxy-hemoglobin levels, as shown by photoacoustic imaging. Interestingly, ultrasound (US) treatment further improves transdermal oxygen release from OLNDs. Taken together, these data suggest that US-activated, DFP-based OLNDs might be innovative, suitable and cost-effective devices to topically treat hypoxia-associated pathologies of the cutaneous tissues.

Introduction

Hypoxia is a major feature of several skin pathologies of different etiology, such as bedsores, burns, ulcers, diabetes-associated vasculopathies, methicillin-resistant Staphylococcus aureus-infected wounds or even melanomas, all characterized by insufficient oxygen supply to dermal and sub-cutaneous tissues [1-4]. In general, acute and/or mild hypoxia supports adaptation and survival; on the contrary, chronic and/or extreme hypoxia leads to tissue loss. While tumors are metabolically designed to thrive under hypoxic conditions [5], hypoxia in wounds is primarily caused by vascular limitations, further worsened by concomitant conditions (e.g. infection, sympathetic response to pain, hyperthermia, anemia caused by major blood loss, cyanotic heart disease, high altitude), leading to poor healing outcomes [1]. The partial pressure of oxygen in the wound tissue can in principle be increased as an adjunct therapy to trigger healing responses and to boost other concomitant therapeutic interventions, e.g. to improve responsiveness to growth factors and acceptance of grafts [6-9]. Currently, hyperbaric oxygen therapy and topical oxygen therapy are practiced [1,10]. Hyperbaric oxygen therapy effectively bolsters tissue oxygen levels and promotes wound healing under specific conditions; however, it is expensive, uncomfortable and even dangerous due to fire accident risks [11]. Topical oxygen therapy is cheaper and more accessible for in-home use, but so far it inadequately delivers oxygen deep into the skin to fibroblasts, keratinocytes, and inflammatory cells to restore their function [1].
To overcome this problem, intensive research is being pursued to develop new carriers able to release therapeutically significant amounts of oxygen to tissues in an effective and timesustained manner [12,13].Hemoglobin (Hb)-based oxygen carriers have been developed as cell-free suspensions, either encapsulated within vehicles, or complexed with protective enzymes [12,13].Alternative carriers are based on perfluorocarbons, which can carry molecular oxygen without actually binding it, thus favoring gas exchange [14].However, they are not water-miscible, and therefore need to be formulated into emulsions for in vivo use [12,14].Unfortunately, despite attractive characteristics, no perfluorocarbon-based oxygen emulsion is currently approved for clinical uses: some of them, such as Fluosol-DA, have failed due to secondary effects of the surfactants employed, whereas others, such as Oxygent, displayed adverse cerebrovascular effects on cardiopulmonary bypass [14].Among the alternative options currently under investigation, perfluorocarbon-based oxygen-loaded microbubbles (OLMBs) have been reported to deliver clinically relevant oxygen amounts in dosages that are approximately 1/500 of the corresponding quantities of other perfluorocarbon-based oxygen carriers [15].In particular, OLMBs, cored with perfluoropentane (PFP), proved to act as an efficient, biocompatible and stable oxygen delivery system in vitro [16].Formulations were further optimized in order to reach the nanometer size range, and new oxygen-loaded, PFP-cored nanobubbles (OLNBs), both coated with chitosan or dextran, were developed [17,18].In contrast with the micrometer size, which is generally associated to diagnostic purposes, the nanometer size displays several advantages on a therapeutic level.First, in accordance with Laplace's law, the smaller the bubble radius, the higher the oxygen partial pressure.Notably, OLNBs remain relatively stable in water for a long time or rise very slowly, gradually shrink, and finally collapse.This in turn leads to reduced diffusivity of OLNBs that helps to maintain adequate kinetic balance of OLNBs against high internal pressure.When needed, oxygen release from OLNBs can be easily promoted upon complementary ultrasound (US) administration.On the contrary, OLMBs tend to increase in size, rapidly rise, and quickly collapse due to long stagnation and dissolution of inner gases into the surrounding water [19].Therefore, OLNBs appear more clinically feasible to counteract tissue hypoxia than OLMBs.Furthermore, OLNBs are potentially allowed to pass through the nano-sized inter-endothelial gaps of tumor-associated fenestrated capillaries, thus paving the way for potential exploitation in cancer therapy.Finally, nanobubbles have also proven able to carry on molecules other than gaseous oxygen, such as DNA [20], thus suggesting future gene therapy applications [21]. 
The present work aimed at improving gas delivery to hypoxic tissues by developing a new platform of oxygen nanocarriers based on 2H,3H-decafluoropentane (DFP) and prepared in formulations suitable for topical treatment of dermal tissues. Since DFP is liquid at body temperature, unlike gaseous PFP, these nanocarriers were named oxygen-loaded nanodroplets (OLNDs). While OLNDs keep all the advantages of OLNBs (higher oxygen partial pressure than OLMBs; sensitivity to US and the consequent capability to undergo cavitation events; and the ability to pass through the inter-endothelial gaps of fenestrated capillaries), they also display further improvements, appearing more stable and more effective in oxygen storing and releasing, and displaying lower manufacturing costs, no toxicity, and ease of scale-up.

Materials

Unless otherwise stated, the materials employed here were from Sigma-Aldrich (St Louis, MO).

Preparation of OLND, OFND, OLNB, OFNB and OSS formulations

Preparation of liquid formulations. Compositions and structures of all formulations are detailed in Table 1 and schematized in Fig. 1. For oxygen-loaded nanodroplet liquid formulations, 1.5 ml DFP (Fluka, Buchs, Switzerland) along with 0.5 ml polyvinylpyrrolidone (Fluka, Buchs, Switzerland) and 1.8 ml soy lecithin (Degussa, Hamburg, Germany) dissolved in 1% w/v ethanol (Carlo Erba, Milan, Italy) and 0.3% w/v palmitic acid solution (Fluka, Buchs, Switzerland) were homogenized in 30 ml water (preparation A) or phosphate buffered saline (PBS) (preparations C-D) for 2 min at 24000 rpm using an Ultra-Turrax SG215 homogenizer (IKA, Staufen, Germany). Ultrapure water was obtained using a 1-800 Millipore system (Molsheim, France). Thereafter, the solution was saturated with O2 for 2 min. Finally, 1.5 ml dextran (preparations A, C) or fluorescein isothiocyanate (FITC)-labeled dextran (preparation D) solution was added drop-wise while the mixture was homogenized at 13000 rpm for 2 min. For the OLNB water formulation, the protocol developed by Cavalli et al. [18] was applied using PFP as the core fluorocarbon. Oxygen-free nanodroplet (OFND) and nanobubble (OFNB) water formulations were prepared according to the OLND and OLNB protocols without adding O2. For the oxygen-saturated solution (OSS) water formulation, the OLND preparation protocol was applied omitting dextran and DFP addition.

Liquid formulations were characterized for average diameters, polydispersity index, and zeta potential by light scattering, and for oxygen content through a chemical assay (see Materials and Methods). Results are shown as means ± SD from ten preparations (average diameters, polydispersity index, and zeta potential) or three preparations (oxygen content) for each formulation. See also Figs. 1-3 and Table 1 for further detail on OLND and OLNB structure, morphology, size distribution, and formulations.
Sterilization.OLNDs, OFNDs, OLNBs, OFNBs, and OSS were sterilized through UV-C exposure for 20 min.Thereafter, UV-C-treated materials were incubated with cell culture RPMI 1640 medium (Invitrogen, Carlsbad, CA) in a humidified CO 2 /air-incubator at 37°C up to 72 h, not displaying any signs of microbial contamination when checked by optical microscopy.Moreover, UV-C-sterilized O 2 -containing solutions underwent further analyses through O 3 measurement and electron paramagnetic resonance (EPR) spectroscopy (Miniscope 100 EPR spectrometer, Magnettech, Berlin, Germany), showing no O 3 generation and negligible singlet oxygen levels immediately after UV-C exposure. Characterization of nanodroplets and nanobubbles Morphology, average diameters, and shell thickness.The morphology of nanodroplet and nanobubble formulations was determined by transmitting electron microscopy (TEM) and by optical microscopy.TEM analysis was carried out using a Philips CM10 instrument (Philips, Eindhoven, The Netherlands), whereas optical microscopy was carried out using a XDS-3FL microscope (Optika, Ponteranica, Italy).For TEM analysis nanodroplet and nanobubble formulations were dropped onto a Formvar-coated copper grid (Polysciences Europe GmbH, Eppelheim, Germany) and air-dried before observation.Furthermore, average diameters and shell thickness were elaborated from TEM images. Size, particle size distribution, zeta potential, refractive index, viscosity, and shell shear modulus.Sizes, polydispersity indexes, and zeta potentials of nanodroplets and nanobubbles were determined by dynamic light scattering using Delsa Nano C instrument (Beckman Coulter, Brea, CA), displaying a (0.6 nm-7 μm) range for measurements of particle size distribution.Each value reported is the average of three measurements of ten different formulations.The polydispersity index indicates the size distribution within a nanodroplet or nanobubble population.For zeta potential determination, formulation samples were placed into an electrophoretic cell, where an electric field of approximately 30 V/cm was applied.Each sample was analyzed at least in triplicate.The electrophoretic mobility was converted into zeta potential using the Smoluchowski equation [22].The refractive indexes of OLND and OLNB formulations were calculated through a polarizing microscope (Spencer Lens Company, Buffalo, New York).The viscosity and the shell shear modulus were determined through Discovery HR1 Hybrid Rheometer (TA instruments, Milan, Italy). Oxygen content.Immediately after preparation, oxygen content of OLNDs, OLNBs and OSS was evaluated for characterization purposes by adding known amounts of sodium sulfite and measuring generated sodium sulfate, according to the reaction: Afterwards, during all the subsequent experiments, an oxymeter was employed.Stability.The stability of formulations stored at 4°C, 25°C or 37°C was evaluated over time up to 6 months by determining morphology, sizes and zeta potential of nanodroplets and nanobubbles by optical microscopy and light scattering. 
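As an aside on the mobility-to-zeta-potential conversion mentioned above, a minimal numeric sketch of the Smoluchowski relation ζ = ημ/ε is given below; the mobility value and the water constants are illustrative assumptions, not measured data from this study.

    # Smoluchowski relation: zeta = eta * mu / epsilon
    # (valid for thin double layers; values are approximate for water at 25 C)
    eta = 0.89e-3            # dynamic viscosity, Pa*s
    eps_r = 78.5             # relative permittivity of water
    eps0 = 8.854e-12         # vacuum permittivity, F/m
    mu = -2.0e-8             # electrophoretic mobility, m^2/(V*s) (illustrative)

    zeta_volts = eta * mu / (eps_r * eps0)
    print(f"zeta potential: {zeta_volts * 1e3:.1f} mV")   # roughly -26 mV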
Biocompatibility assessment Human keratinocyte cell cultures.HaCaT (Cell Line Service GmbH, Eppelheim, Germany), a long-term cell line of human keratinocytes immortalized from a 62-year old Caucasian male donor [23], was used for assessment of OLND biocompatibility.Cells were grown as adherent monolayers in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum, 100 U/ml penicillin, 100 μg/ml streptomycin (Cambrex Bio Science, Vervies, Belgium) and 2 mM L-glutamine in a humidified CO 2 /air-incubator (Thermo Fisher Scientific Inc., Waltham, MA) at 37°C.Before starting the experiments, cells were washed with PBS, detached with trypsin/ethylenediaminetetraacetic acid (0.05/0.02% v/v), washed with fresh medium and plated at a standard density (10 6 cells/well in 6-well plates) in 2 ml fetal bovine serum-free Panserin 601 medium (PAN Biotech, Aidenbach, Germany) to prevent serum interference in the toxicity assay. Evaluation of OLND uptake by human keratinocytes.HaCaT cells were plated in 24-well plates on glass coverslips and incubated in Panserin 601 medium for 24 h with/without 200 μl FITC-labeled OLNDs in a humidified CO 2 /air-incubator at 37°C.After 4',6-diamidino-2-phenylindole (DAPI) staining to visualize cells nuclei, fluorescence images were acquired by a LSM710 inverted confocal laser scanning microscope (Carl Zeiss, Oberkochen, Germany) equipped with a Plan-Neofluar 63×1.4 oil objective, that allowed a field view of at least 5 cells.Wavelength of 488 nm was used to detect OLNDs, and of 460 nm to detect the labeled nuclei.The acquisition time was 400 ms. OLND cytotoxicity.The potential cytotoxic effects of OLNDs were measured as the release of lactate dehydrogenase (LDH) from HaCaT cells into the extracellular medium.Briefly, cells were incubated in Panserin 601 medium for 24 h in the presence or absence of increasing doses (100-400 μl) of OLNDs, either in normoxic (20% O 2 ) or hypoxic (1% O 2 ) conditions, in a humidified CO 2 /air-incubator at 37°C.Then, 1 ml of cell supernatants was collected and centrifuged at 13000g for 2 min.Cells were washed with fresh medium, detached with trypsin/ethylenediaminetetraacetic acid (0.05/0.02% v/v), washed with PBS, resuspended in 1 ml of TRAP (82.3 mM triethanolamine, pH 7.6), and sonicated on ice with a 10 s burst.5 μl of cell lysates and 50 μl of cell supernatants were diluted with TRAP and supplemented with 0.5 mM sodium pyruvate and 0.25 mM NADH (300 μL as a final volume) to start the reaction.The reaction was followed measuring the absorbance at 340 nm (37°C) with Synergy HT microplate reader (Bio-Tek Instruments, Winooski, VT).Both intracellular and extracellular enzyme activities were expressed as μmol of oxidized NADH/min/well.Finally, cytotoxicity was calculated as the net ratio between extracellular and total (intracellular + extracellular) LDH activities. In vitro determination of oxygen release from OLNDs Oxygen release without ultrasound.The concentration of oxygen released by diffusion from OLND, OLNB and OSS liquid or gel formulations into a hypoxic solution was monitored up to 6 h through Hach Lange LDO oxymeter (Hach Lange, Derio, Spain), displaying an accuracy of 0.01 mg/l.Before each measurement, the oxymeter was calibrated in air, waiting for stable temperature and humidity conditions to be reached. 
Oxygen release with ultrasound and trespassing of skin membranes.To study the ability of US-activated OLND and control formulations to release O 2 through biological membranes, a US probe with a high frequency transducer (f = 2.5 MHz; P = 5 W) was used, combined with a home-made apparatus with two sealed cylindrical chambers (lower chamber: OLND, OFND, OLNB, OFNB or OSS solutions; upper chamber: hypoxic solution) separated by a layer of pig ear skin employed as a model of biological membrane, as previously described [24].The US transducer (f = 2.5 MHz; P = 5 W) was alternatively switched on and off at regular time intervals of 5 min for an overall observational period of 135 min, and oxygen concentration in the recipient chamber was monitored by Hach Lange LDO oxymeter every 45 min.The probe was employed in continuous mode.The wave was sinusoidal.No inertial cavitation was observed at applied settings.Because of the local heating caused by US, the O 2 sensor was positioned laterally in order to prevent possible damage of the oxymeter, whereas the transducer was held in a fixed position, within the donor compartment.The acoustic power of the transducer was determined through a balance's radiation force with a reflecting target, with an uncertainty of 4%.Of note, LDO Hach Lange oxymeter also allowed to measure the temperature of the solution, which never exceeded 30°C in our experiments. In vivo determination of oxygen release from OLNDs Mice.BALB/c mice were bred under specific pathogen-free conditions by Fujifilm Visualsonics (Amsterdam, The Netherlands) or at the Molecular Biotechnology Center (Torino, Italy).Before performing the experiments, healthy mice were shaved locally (abdomens or hind limbs depending on the study, as described in the following paragraphs) and anesthetized by injecting intramuscularly a mixture of tiletamine/zolazepam 20 mg/Kg (Zoletil 100, Carros Cedex, France) and 5 mg/Kg xylazine (Rompun, Bayer, Leverkusen, Germany).All procedures were done in accordance with the EU guidelines and with the approval of the Università di Torino animal care committee (16/03/2011). Measurement of oxy/deoxy-Hb levels without ultrasound (photoacoustic imaging).The shaved hind limbs of nine anesthetized mice were topically treated with OLND, OFND or OSS gel formulations.Before, during and after treatment (10 min), the subcutaneous levels of oxy-and deoxy-Hb were monitored through Vevo LAZR photoacoustic imager (Fujifilm Visualsonics, Amsterdam, the Netherlands) featuring a hybrid US transducer (central f = 21 MHz; spatial resolution = 75 μm). Measurement of tcpO 2 with ultrasound.The shaved abdomens of eight anesthetized mice were topically treated with OLNDs and sonicated using a home-made US equipment (f = 1 MHz; P = 5 W; t = 30 sec).US probe was employed at f = 1 MHz since this is the frequency value routinely employed in clinical practice.Temperature changes after 30 sec US administration were neglectable.Before and after treatment (1 h), the transcutaneous tension of oxygen (tcpO 2 ) was measured through TINA TCM30 oxymeter (Radiometer, Copenhagen, Denmark).All tcpO 2 measurements were taken after physiological stabilization. 
Statistical analysis Every characterization of ten preparations for each formulation was performed in triplicate, and results are shown as means ± SD (light scattering and oxygen measurement) or as a representative image (TEM and optical microscopy, rheological analysis).Data from cell studies are shown as means ± SEM (LDH and MTT) or as a representative image (confocal microscopy) from three independent experiments analyzed in duplicate.Results from in vitro oxygen release studies are shown as a representative image (release without US) or as means ± SD (release with US) from three independent experiments.Results from in vivo oxygen release studies are shown as a representative image (release without US) or as means ± SD (release with US) from eight mice.SD or SEM were used for descriptive or inferential information, respectively (see Cumming et al [25] for an exhaustive review).Data were analyzed for significance by Student's t test (software: Characterization of OLND and control formulations After manufacturing OLNDs and control preparations were characterized for: morphology and shell thickness, by optical microscopy and TEM; size, particle size distribution, polydispersity index and zeta potential, by dynamic light scattering; refractive index by polarizing microscopy; viscosity and shell shear modulus by rheometry; and oxygen content (before and after UV-C sterilization) through a chemical assay.Results are shown in Fig. 2 and Tables 2-3.Both nanodroplets and nanobubbles displayed spherical shapes.All sizes were in the nanometer range, with average diameters ranging from ~490 nm (OLNBs) to ~590 nm (OLNDs) for oxygen-loaded carriers and from ~210 nm (OFNBs) to ~240 nm (OFNDs) for oxygen-free carriers.All diameters were hydrodynamic, sphere equivalent, and were volume weighted.Zeta potentials ranged from ~-27 to ~-25 mV.Refractive indexes were similar for both OLND and OLNB formulations (~1.33).OLND formulation displayed a viscosity value of 1.59188 e-3 PaÁs and a shear modulus value of 5.43 e-2 mPa, calculated at a shear rate value of 150 s -1 .OLNB formulation displayed a viscosity value of 1.94426 e-3 PaÁs and a shear modulus value of 6 e-2 mPa mPa, calculated at a shear rate value of 150 s -1 .OLNDs displayed a good oxygenstoring capacity of about 0.40 mg/ml of oxygen before and after 20-min UV-C sterilization.Such oxygen amount was comparable with that of OLNBs or OSS, thus similar volumes of OLND, OLNB and OSS preparations were further employed in the subsequent experiments. Thereafter, OLND toxicity and cell viability both in normoxic (20% O 2 ) and hypoxic (1% O 2 ) conditions were evaluated by LDH and MTT assays, respectively.As shown in Fig. 4A, increasing volumes of OLND PBS suspensions, ranging from 100 to 400 μl, did not result toxic to cells either in normoxia or hypoxia.Eventually, OLNDs improved keratinocyte viability in either condition of oxygenation (Fig. 4B).In vitro oxygen release from OLNDs OLND, OLNB, and OSS abilities to release oxygen in vitro were comparatively evaluated.Following dissolution of the same amount of nanocarrier-containing water or 2% HEC gel formulatios in a hypoxic solution, OLNDs released larger amounts of oxygen for longer times (up to 6 h) than OLNBs and OSS (Fig. 
5A-B).Of note, the dynamics of oxygen release from OLNDs over time was characterized by two subsequent phases: first, OLNDs tended to agglomerate, thus leading to a plateau in the graph; then, they simultaneously delivered oxygen, thus leading to a great dip-off in the graph.Oxygen release was also compared following sonication.As shown in Fig. 5C-D, US (f = 2.5 MHz; P = 5 W) improved the ability of OLNDs to cross the pig skin membrane and to release oxygen into the hypoxic chamber up to 135 min, and such oxygen release was larger than that from OFND, OLNB, OFNB, and OSS liquid or gel formulations. In vivo oxygen release from OLNDs The skin oxygenation of the shaved hind limbs of nine anesthetized mice topically treated with OLND, OFND or OSS gel formulations was monitored by visualizing the subcutaneous levels of oxy-Hb and deoxy-Hb through photoacoustic imaging before, during and after the treatment (t = 10 min).As shown in Fig. 6, oxy-Hb levels significantly increased for the entire observational period in the animals treated with OLNDs.As expected, OSS induced a high but only transient peak in oxy-Hb, whereas OFNDs did not affect oxy-/deoxy-Hb balances at all.Thereafter, OLND ability to improve tissue oxygenation in vivo upon US treatment was investigated.The shaved abdomens of eight anesthetized mice were topically treated with OLNDs and sonicated for 30 sec (f = 1 MHz; P = 5 W).Skin oxygenation was investigated through transcutaneous oxymetry (Fig. 7: panel A, 0-15 min; panel B, 1 h) before and after the treatment.Basal tcpO 2 values in mice were inhomogeneous, possibly as a consequence of the different level of peripheral vasoconstriction induced by anesthesia.Nevertheless, after topical administration of US-activated OLNDs hypoxic mice displayed larger oxygenation levels in a time-sustained manner up to 1 h. Discussion The major novelty of the present nanocarriers (OLNDs) is the oxygen-storing core structure consisting in DFP.Notably, DFP displays oxygen-solubilizing capabilities as well as PFP [24,[26][27], the most widely used fluorocarbon for oxygenating emulsions and nanobubble formulations [12,14,[17][18].However, in DFP the carbon skeleton is surrounded by ten fluorine and two hydrogen atoms, whereas in PFP it is surrounded by twelve fluorine atoms (see Fig. 
1).Therefore, while binding oxygen molecules, DFP can also establish hydrogen bonds between hydrogen and oxygen atoms in addition to instantaneous dipoles between fluorine and oxygen atoms (as for PFP).The hydrogen bond (5 to 30 kJ/mole) is stronger than a van der Waals interaction, but weaker than covalent or ionic bonds.As such, the presence of DFP in the core makes OLNDs more stable than OLNBs, without compromising their ability to release oxygen.However, unlike PFP, which has a boiling point of 32°C and is gaseous at body temperature, DFP boils at 51°C and is therefore liquid at 37°C.For this reason, the new nanocarriers are actually nanodroplets.On the other hand, dextran was kept as the main constituent of the polysaccharidic shell as for former OLNBs [18], since dextran-based formulations have been extensively tested for biocompatibility [28][29], and dextran-based hydrogels are currently used as matrices in tissue engineering, without showing signs of inflammation in vivo [30].Recent toxicological studies on mechanically processed polysaccharides of different molecular weight showed that dextran, along with the products of its mechano-chemical processing, can be classified as class 4 (low-toxicity) substance [31].OLND properties were challenged by comparison with several control preparations including OLNBs, OFNDs, OFNBs, and OSS.All preparations were manufactured either in liquid (water or PBS solution) or gel (2% w/v HEC) formulations.Following nanocarrier manufacturing, UV-C exposure for 20 min was chosen as a sterilizing procedure, which proved to be effective without significantly inducing O 3 or singlet oxygen generation. OLNDs and control preparations were characterized for morphology, average diameters, shell thickness, particle size distribution, polydispersity index, zeta potential, refractive index, viscosity, shell shear modulus, and oxygen content (before and after UV sterilization).Both OLNDs and OLNBs displayed spherical shapes, nanometric sizes, similar viscosity and shell shear modulus, and negative charges.Interestingly, the diameters of OLNDs and OLNBs resulted increased by almost three times with respect to the oxygen-free formulations.Such an increase can be related to the different solubility of oxygen either in DFP or PFP.Indeed, the presence of oxygen in the core of the formulations can change the interfacial layer structure, modify the surface tension, and lead to different hydrophobicity [26][27].This might also explain the slightly larger size of OLNDs with respect to OLNBs.On the other hand, OLND and OLNB negative charges are a likely consequence of the presence of dextran in the outer shell.Indeed, although dextran is a neutral polymer, it is well known that when it is immerged in a saline solution (such as PBS here) it acquires negative polarity [32].Besides, the zeta potential measures charge repulsion or attraction between particles.Therefore, it is also a fundamental parameter to determine nanoparticle physical stability, with zeta potentials lower than-30 mV or larger than +30 mV being generally required for physical stability of colloid systems [22].Although nanodroplets and nanobubbles displayed zeta potentials slightly larger than-30 mV, our formulations proved to be physically stable over time for the steric repulsion of the polymer chains, as assessed by monitoring their sizes and zeta potential by dynamic light scattering up to 6 months after manufacturing.In addition, nanoparticle charge makes them suitable for topical treatment, enhancing 
their interaction with skin and improving their therapeutic effect on inflamed cutaneous tissues, either without [33] or with concomitant US treatment [34].Interestingly, although cationic nanoparticles are generally preferred for topical treatment due to the anionic nature of the skin [35], some authors have shown that anionic nanoparticles can be more effective [36] and less toxic [37] than the cationic ones. Furthermore, OLND solution displayed a good oxygen-storing capacity (0.4 mg O 2 /ml) either before or after UV-C sterilization.The partial pressure of oxygen (pO2) in dermal wounds ranges from 0 to 10 mm Hg centrally to 60 mm Hg at the periphery, whereas the pO2 in the arterial blood is approximately 100 mm Hg [38].The rate of oxygen consumption in the wound is determined by the availability of oxygen as a substrate and the local metabolic conditions in the wound.This drives home the rationale for providing supplemental oxygen to wounds and explains why wounds that are profoundly ischemic (tcpO 2 < 20 mm Hg) fail to heal and are more prone to infection [38].Interestingly, Davis and colleagues have demonstrated using a pig model that a topically applied perfluorocarbon emulsion with an oxygen concentration of 2 mL O 2 /ml can increase the rate of wound epithelialization, thus providing supporting evidence from a preclinical model that supplementing oxygen delivery using a topical approach can improve healing outcomes [39]. OLND toxicity on human cells was evaluated by testing in vitro cultures of human HaCaT keratinocytes, a cell line immortalized from a 62-year old Caucasian male donor [23].The cell type and the age of the original donor are crucial in the context of the present work, since hypoxia-associated pathologies of dermal tissues are more frequent in the elderly.Moreover, the production of human keratinocyte matrix metalloproteinases, a family of enzymes playing a key role in tissue repair and wound healing mechanisms, was shown to be differentially altered by hypoxia, depending on donor's age [40].Interestingly, OLNDs were avidly internalized by HaCaT cells and did not result toxic for cells, both in normoxic and hypoxic conditions, eventually improving keratinocyte viability. As a next step, the abilities of OLND, OLNB, and OSS liquid or gel formulations to release oxygen in vitro were comparatively evaluated.Intriguingly, OLNDs released larger amounts of oxygen for longer times than OLNBs and OSS.Of note, OLND gel formulation, specifically developed and standardized to allow OLND topical use, displayed lower oxygen levels but faster oxygen release ability than OLND liquid formulation.This appears to be a consequence of the preparation protocol for the gel formulation (see Materials and Methods), which might likely cause a slight loss of oxygen content and lead to the formation of aggregates in the emulsion, thus justifying both lower oxygen levels and faster release dynamics.Nevertheless, those alterations did not affect our purposes, since OLND formulation still appeared more effective than OLNB and OSS formulations.As previously discussed, while binding oxygen molecules, DPF can establish hydrogen bonds, unlike PFP.Therefore, OLND cores likely contain more oxygen than OLNBs, thus justifying why oxygen release from OLNDs resulted higher and longerlasting. 
Oxygen release was also compared following sonication.In fact US is expected to impact on oxygen release kinetics through several mechanisms.Firstly, US can induce bubble formation after acoustic droplet vaporization [41].Under particular gas content and bubble radius conditions, bubble oscillations might lead to a more violent release mechanism due to cavitation, that is the formation, growth, and implosive collapse of bubbles in a liquid [42][43].Finally, US might elicit sonophoresis: indeed, it has been proven that the cellular uptake of drugs and genes is increased when the region of interest is under US administration, especially in the presence of a contrast agent, and such an increased uptake has been attributed to the formation of transient porosities in the cell membrane, which are big enough for the transport of drugs into the cell [44][45][46][47].For instance, preclinical and clinical evidence that combined sonoporation and chemotherapy effectively impede development of pancreatic cancer either in mice or in human patients has become available recently [48][49].According to our results, US effectively improved the ability of OLNDs to release oxygen through a pig skin layer into a hypoxic chamber, with such oxygen release being larger than that from OFND, OLNB, OFNB, and OSS liquid or gel formulations. OLNDs were finally tested in vivo.The skin oxygenation of mice topically treated with OLND, OFND or OSS gel formulations was monitored by visualizing the subcutaneous levels of oxy-Hb and deoxy-Hb through photoacoustic imaging.This innovative hybrid imaging technique, based on the light absorption and the acoustic transmission properties of a tissue slice interrogated by a computed tomography photoacoustic imager [50][51], can quantify the density of tissue chromophores such as oxy-Hb and deoxy-Hb, measuring physiological parameters such as blood oxygen saturation and total Hb concentration [52].According to our results, oxy-Hb levels significantly increased for the entire observational period in the animals treated with OLNDs, whereas OSS induced a high but only transient peak in oxy-Hb and OFNDs did not affect oxy-/deoxy-Hb balances at all.Of note, the fluence level employed for the present photoacoustic measurements were below 20 mJ/cm 2 , in accordance with the ANSI standard of maximum permissible exposure limit to the skin [53].Therefore, the laser used in the photoacoustic system did not cause droplet evaporation, which generally occurs at very high fluence.For example, fluence values for water droplets are ~3 J/cm 2 independent of drop size [54]. 
The ability of OLNDs to improve tissue oxygenation in vivo was also investigated upon US treatment. Transcutaneous oximetry was chosen for this analysis since it measures the transcutaneous oxygen tension (tcpO2) through a non-invasive method which elicits a heating-related vasodilatation, generating fast diffusion of gases from the vessels to an electrode located on the skin. When capillary oxy-Hb dissociation occurs, the oxygen reduction reaction generates a current that is directly proportional to the capillary arterial oxygen pressure. Monitoring tcpO2 is a well-established technique that is also used extensively in clinical practice [2,55]. Basal tcpO2 values in mice were inhomogeneous, possibly as a consequence of the different levels of peripheral vasoconstriction induced by anesthesia [38]. Nevertheless, after topical administration of US-activated OLNDs, hypoxic mice displayed higher oxygenation levels that were sustained over time. Interestingly, US-activated OLNDs did not alter basal tcpO2 in mice that were already normoxic. This appears particularly relevant, since hyperoxia can lead to alterations of cardiac, vascular, and respiratory functions as well as developmental disorders [56][57]. In conclusion, the DFP-based oxygen nanocarriers described here appear to be innovative, promising, non-toxic, and cost-effective therapeutic tools for hypoxia-associated dermal pathologies. Dextran-shelled OLNDs, with an average diameter of 600 nm, negative charge, good oxygen capacity, and no toxic effects on human keratinocytes, can be manufactured in both liquid and gel formulations, the latter being more suitable for topical administration. OLNDs release oxygen to hypoxic media and superficial tissues more effectively than OLNBs and OSS. Sonication further enhances transdermal oxygen delivery from OLNDs, possibly by promoting cavitation and sonophoresis. OLNDs therefore appear to be promising candidates for future preclinical and clinical studies on the treatment of chronic wounds, including bedsores, critical limb ischemia, and diabetic foot. Fig 1. Schematic structure of nanodroplet and nanobubble formulations. The oxygen nanocarriers described in the present work display a core-shell structure. As core fluorocarbon, DFP was employed for OLNDs, whereas PFP was used for OLNBs. Dextran was chosen as the polysaccharide shell molecule for both nanocarriers. In selected experiments, OLNDs were functionalized by conjugation with FITC. All nanocarrier solutions were prepared either in liquid (water or PBS) or gel (2% HEC) formulations. doi:10.1371/journal.pone.0119769.g001 Fig 2. OLND and OLNB morphology, size distribution, and shell shear modulus. OLNDs and OLNBs were checked for morphology by optical microscopy or TEM, for size distribution by light scattering, and for shell shear modulus by rheometry. Results are shown as representative images from ten different preparations. Panel A. OLND image by optical microscopy. Magnification: 60X. Panel B. OLND image by TEM. Magnification: 21000X. Panel C. OLND size distribution. Panel D. OLND flow curve. Panel E. OLNB image by optical microscopy. Magnification: 60X. Panel F. OLNB image by TEM. Magnification: 21000X. Panel G. OLNB size distribution. Panel H. OLNB flow curve. doi:10.1371/journal.pone.0119769.g002
Fig 5. In vitro oxygen release from OLND liquid and gel formulations and US-triggered sonophoresis through skin membranes. Panels A-B. Oxygen release without US. OLND, OLNB and OSS water (A) and 2% HEC gel (B) formulations were monitored for up to 6 h with an oximeter for oxygen delivery by diffusion. Results are shown as a representative image from three independent experiments. Panels C-D. Oxygen release with US. The ability of US to induce sonophoresis and oxygen release from OLND and control water (C) or 2% HEC gel (D) formulations was evaluated for up to 135 min. Changes in oxygen levels in the hypoxic chamber between each time interval (0-45 min; 45-90 min; and 90-135 min) are indicated as ΔO2. Results are shown as means ± SD from three independent experiments. Data were also evaluated for significance by ANOVA. Versus OLND formulation: p < 0.001. doi:10.1371/journal.pone.0119769.g005 Fig 6. Topical treatment with OLND formulation effectively enhances oxy-Hb levels in vivo. The shaved hind limbs of nine anesthetized mice were monitored by photoacoustics for oxy-Hb and deoxy-Hb levels before (0 min, upper row), during (0-10 min, central row) and after (10 min, lower row) topical treatment with OSS (first column), OLND (second column) and OFND (third column) gel formulations. White/red pixels: oxy-Hb; blue pixels: deoxy-Hb. Data are shown as representative images from three independent experiments (three mice per experiment) with similar results. doi:10.1371/journal.pone.0119769.g006 Table 3. Refractive indexes of OLND and control formulations.
7,784
2015-03-17T00:00:00.000
[ "Medicine", "Materials Science" ]
Ridge and Transverse Correlation without Long-Range Longitudinal Correlation A simple phenomenological relationship between the ridge distribution in Δ𝜂 and the single-particle distribution in 𝜂 can be established from the PHOBOS data on both distributions. The implication points to the possibility that it is not necessary to have long-range longitudinal correlation to explain the data. An interpretation of the relationship is then developed, based on the recognition that longitudinal uncertainty of the initial configuration allows for non-Hubble-like expansion at early time. It is shown that the main features of the ridge structure can be explained in a model where transverse correlation stimulated by semihard partons is the principal mechanism. This work is related to the azimuthal anisotropy generated by minijets in Au-Au collisions at 0.2TeV on the one hand and to the ridge structure seen in 𝑝𝑝 collisions at 7TeV on the other hand. Introduction The ridge structure in two-particle correlation has been studied in nuclear collisions at the Relativistic Heavy-Ion Collider (RHIC) for several years [1][2][3][4][5] and has recently also been seen in collisions at the Large Hadron Collider (LHC) [6].The nature of that structure is that it is narrow in Δ (azimuthal angle relative to that of the trigger) but broad in Δ (pseudorapidity relative to the trigger).In [3], the range in Δ is found to be as large as 4.So far there is no consensus on the origin of the ridge formation [7].It has been pointed out that the wide Δ distribution implies longrange correlation [8][9][10].That is, a view based partially on the conventional estimate that the correlation length is about 2 [11].We make here a comparison between the ranges of single-particle distribution and two-particle correlation, using only the experimental data from PHOBOS [3,12].It is found that the large-Δ ridge distribution is related simply to a shift of the inclusive distribution and an integral over the trigger .That is a phenomenological observation without any theoretical input.Any successful model of ridge formation should be able to explain that relationship. There are subtleties about the single-particle distribution for all charges, ch /, that to our knowledge has not been satisfactorily explained in all its details.Since it sums over all charges, hadrons of different types are included, making ch / to be quite different from /, which can be fitted by a Gaussian distribution in with width = 2.27 [13,14].That difference cannot be readily accounted for in any simple hadronization scheme.Fortunately, detailed examination of ch / is not required before we find its relationship to the ridge distribution ch /Δ, since both are for unidentified charged hadrons, and the empirical verification is based on the data from the same experimental group (PHOBOS). 
As a consequence of the phenomenological relationship, we consider the possibility that there is no intrinsic longrange longitudinal correlation apart from what gives rise to the single-particle distribution.We have found that to generate ch /Δ it is only necessary to have transverse correlation at different points in , provided that at early time the small- partons do not expand in Hubble-like manner.If spatial uncertainty of wee partons is allowed at early time, the identification of spatial and momentum rapidities may not be valid near the tip of the forward light cone.Therein lies the origin of transverse correlation due to the possibility of near crossing of soft-and hard-parton trajectories.The energy lost by a hard parton enhances the thermal energies of the medium partons in the vicinity of the hard parton's trajectory.The transverse broadening of any small- parton that passes through the cone of that enhancement leads to measurable effect of the ridge.The parton model that we use does not rely on flux tubes or hydrodynamics. Recently, the existence of ridge has been called into question by investigations on the effect of fluctuations of the initial configurations in heavy-ion collisions [15,16].Using hydrodynamical model and transport theory to relate the eccentricities of the spatial initial state in the transverse plane to the azimuthal momentum anisotropy in the final state, it has been shown that the harmonic coefficients V observed in the data can be understood in terms of such transverse fluctuations [17][18][19][20][21][22][23][24][25].That is, however, only one of the possible interpretations of V .The effect of minijets on the initial configuration can yield similar consequences.Data on two-dimensional (2D) angular correlation with integrated have been analyzed by model fits; it is found that the same-side 2D peak can account for all higher Fourier components with > 2 [26].In [27], it is shown that the data on V can also be well reproduced by taking the minijets into account in the recombination model without the details of hydrodynamics.Here, we raise the issue about the effect of longitudinal fluctuations that seem to be as important as transverse fluctuations but have hardly been investigated. After the phenomenological relationship between ch / Δ and ch / is established in Section 2, we give our interpretation of the phenomenon in Section 3. We show the possibility that the ridge can have the observed properties in the absence of long-range longitudinal correlation.Section 3 includes many subsections in which both longitudinal and transverse aspects of the correlation are examined in the parton model.Connections between what we do here with azimuthal anisotropy generated by minijets in Au-Au collisions at RHIC and with the ridge structure found in collisions at LHC are given in Section 4. Our conclusion is given in Section 5. 
Comparison between Ridge and Inclusive Distributions Our focus is on the PHOBOS data on two-particle correlation measured with a trigger particle having transverse momentum trig > 2.5 GeV/C in Au + Au collisions at √ = 200 GeV [3].The pseudorapidity acceptance of the trigger is 0 < trig < 1.5.The per-trigger ridge yield integrated over |Δ| < 1, denoted by (1/ trig ) ch /Δ, includes all charged hadrons with ≳ 7MeV/c at = 3 and ≳ 35MeV/c at = 0, where the superscript stands for associated particle in the ridge.For simplicity, we use the notation Since all ridge particles are included in the range |Δ| < 1, the Δ dependence of the ridge structure does not show up in the properties of ch /Δ.We have previously studied the Δ dependence of the ridge [28], which will be summarized in Section 3.2.Here we focus on our aim to relate the ridge distribution in Δ to the single-particle distribution in .We first make a phenomenological observation using only PHOBOS data for both distributions.After showing their relationship, we then make an interpretation that does not involve extensive modeling. To do meaningful comparison, it is important to use single-particle distribution, ch /, that has the same kinematical constraints as the ridge distribution.That is, it involves an integration over and a sum over all charged hadrons where ℎ 1 (, ) = ℎ / , and the lower limit of the integration is 35(1 − /3.75)MeV/c in keeping with the acceptance window of [3].The data on (1/ trig ) ch /Δ are for 0-30% centrality.PHOBOS has the appropriate ch / for 0-6%, 6-15%, 15-25%, and 25-35% centralities [12], as shown in Figure 1(a).Thus we average them over those four bins.The result is shown in Figure 1(b) by the small circles for 0-30% centrality.Those points are fitted by the three Gaussian distributions, located at = 0 and ±η, shown by the solid (red) line in that figure with = 468, 0 = 2.69, 1 = 0.31, η = 2.43, 1 = 1.15.The dashed line shows the central Gaussian, while the dash-dotted line shows the two side Gaussians.The purpose of the fit is mainly to give an analytic representation of ch / to be used for comparison with the ridge distribution.Nevertheless, it is useful to point out that the width 0 of the central Gaussian in is larger than the width of the pion -distribution, = 2.27, mentioned in Section 1.The two side Gaussians are undoubtedly related to the production of protons, since BRAHMS data show significant / ratio above = 2 and > 1 GeV/c [29].The value of η in (2) being > 2 is a result of the enhancement by proton production.Any treatment of correlation among charged particles without giving proper attention to the protons is not likely to reproduce the inclusive distribution given by (2), whose width is significantly stretched by the side Gausssians.We now propose the formula where is a parameter that summarizes all the experimental conditions that lead to the magnitude of the ridge distribution measured relative to the single-particle distribution.In particular, does not depend on 1 or 2 ; otherwise, the equation is meaningless in comparing the dependencies.Data are from [12].The (red) line in (b) is a fit using (2), whose first term is represented by the dashed line and the other two terms by the dash-dotted line (color online). 
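Written out explicitly, the triple-Gaussian fit of (2) and the mapping proposed in (3) take the form below; the symbol names used here (η for pseudorapidity, Δη = η2 − η1, N_trig for the number of triggers) are our reading of the text and of the quoted parameters rather than a verbatim transcription of the original equations.

\frac{dN_{ch}}{d\eta} \approx A\Big[e^{-\eta^2/2\sigma_0^2} + a_1\Big(e^{-(\eta-\bar\eta)^2/2\sigma_1^2} + e^{-(\eta+\bar\eta)^2/2\sigma_1^2}\Big)\Big],
\qquad A = 468,\ \sigma_0 = 2.69,\ a_1 = 0.31,\ \bar\eta = 2.43,\ \sigma_1 = 1.15,

\frac{1}{N_{trig}}\frac{dN_{ch}}{d\Delta\eta} = c\int_0^{1.5} d\eta_1\, \frac{dN_{ch}}{d\eta}\Big|_{\eta = \eta_1 + \Delta\eta}.

On this reading, the ridge's extent in Δη is simply the extent of the single-particle distribution broadened by the 1.5-unit trigger acceptance, which is the comparison carried out next.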
There is no theoretical input in (3), except for the question behind the proposal: how much of the Δ distribution can be accounted for by just a mapping of ch / 2 with a shift due to the definition Δ = 2 − 1 , and an integration over 1 due to the trigger acceptance, 0 < 1 < 1.5?Another way of asking the question is how would the range of correlation be affected if the experimental statistics were high enough so that the trigger's range can be very narrow around 1 = 0? The proposed formula in (3) is tested by substituting the fit of ch / according to (2) into the integrand on the right-hand side.The result is shown in Figure 2 with being adjusted to fit the height of the ridge distribution; its value is 4.4 × 10 −4 .The peak in the data around Δ = 0 is, of course, due to the jet component associated with the trigger jet and is not relevant to our comparison here.That component has been studied in the recombination model as a consequence of thermal-shower recombination that can give a good description of the peak both in Δ and Δ [30].For the ridge considered here, it is evident that the large Δ distribution in Figure 2 is well reproduced by (3).Since our concern is to elucidate the implications of the range of Δ, we leave the fluctuation from the flat distribution in the interval −2 < Δ < −1 as an experimental problem.In qualitative terms, the width of the ridge distribution is due partly to the width of ch / and partly to the smearing of 1 , which adds another 1.5 to the width.No intrinsic dynamics of long-range longitudinal correlation has been put in.Note that the center of the plateau in Δ is at −0.75, which is the average of the shift due to 1 being integrated from 0 to 1.5.It suggests that if 1 were fixed at 1 ≈ 0 when abundant data become available, then the width of ch /Δ would be only as wide as that of the single-particle ch /.No theoretical prejudice has influenced these observations. Interpretation of Phenomenological Observation We now consider an interpretation of what (3) implies, given the empirical support for its validity from Figure 2. First, we ask what the implication of the phenomenological observation is in terms of the range of longitudinal correlation.Then we describe a model for ridge formation first for azimuthal dependence at mid-rapidity then for larger pseudorapidity pertinent to the data.The considerations from various perspectives lead to the notion of transverse correlation that will become the core element of our model to explain the ridge phenomenon. Range of Longitudinal Correlation. Since the observed ridge distribution integrates over trigger , we write it as where we exhibit also explicitly the sum over the hadron type of the ridge particle ℎ 2 and the integral over its transverse momentum, denoted by 2 .According to the definition of Advances in High Energy Physics , we can express the per-trigger ridge correlation as where 1 is the transverse momentum of the trigger particle; and in the superscript denote background and ridge, respectively.The jet component in the associated-particle distribution is excluded in (5). On the other hand, with (1) substituted into (3) we have, using 2 and 2 instead of and , Comparing ( 6) to ( 4) we see that the ridge distribution . 
Thus the crux of the relationship between the ridge and inclusive distributions involves the interpretation of where ( 2 ) is the transverse component that contains the explicit exponential behavior of 2 .Although ℎ 2 ( 2 , 2 ) has some mild 2 dependence due mainly to mass effects of ℎ 2 , the average transverse momentum ⟨ 2 ⟩ is determined primarily by the inverse slope and is not dependent on 2 .This is an approximate statement that is based on the BRAHMS data [13,14], which show that ⟨ ⟩ is essentially independent of rapidity.Since serves as the phenomenological bridge between ℎ 2 and ℎ 2 1 , the key question to address is which of the two components, the longitudinal ℎ 2 ( 2 , 2 ) or the transverse ( 2 ), does the two-particle correlation generated by a trigger at 1 exert its most important influence in relating If there is longitudinal correlation from early times as in [8][9][10]31], then its effect must be to convert In that case ( 2 ) is relegated to the secondary role due to radial flow (which is, nevertheless, essential in explaining the Δ restriction as in [9,10,32,33]).On the other hand, if there is no intrinsic long-range longitudinal correlation, then ℎ 2 ( 2 , 2 ) is unaffected, and the ridge can only arise from the change in the transverse component, ( 2 ), due to a hard scattering that leads to the trigger.Without phenomenology one would think that the first option is more reasonable, when |Δ| ∼ 4 is regarded as large, and especially when there is an inclination based on theoretical ideas that prefer the existence of long-range correlation.With the ridge phenomenology described by (3) pointing to direct relevance of ℎ 2 ( 2 , 2 ), the question becomes that of asking: |Δ| is large compared to what?If it is now recognized that |Δ| is not large compared to the 2 range of ℎ 2 1 ( 2 , 2 ) after the widening due to 1 smearing (remarked at the end of the previous section) is taken into account, then the need for a long-range dynamical correlation to account for the structure of ℎ 2 ( 1 , 2 , 2 ) is lost.We describe below a possible explanation based on the second option of no long-range correlation.The key is to accept the suggestion of the data that the unmodified longitudinal component ℎ 2 ( 2 , 2 ) is sufficient. A series of articles have treated the subject of ridge formation in the recombination model [34], beginning with (a) the early observation of pedestal in jet correlation [30,35], to (b) its effects on azimuthal anisotropy of single-particle distribution at mid-rapidity [36,37], and then to (c) the dependence on the azimuthal angle of the trigger relative to the reaction plane [28,[38][39][40].Forward productions in d-Au and Au-Au collisions have also been studied in [41,42].Our consideration here of ridge formation at |Δ| > 2 is an extension of earlier studies with the common theme that ridges are formed as a consequence of energy loss by semihard or hard partons as they traverse the medium.The details involve careful treatment of the hadronization process with attention given to both the longitudinal and transverse components.The dependence has been studied thoroughly in [28,40], and the dependence should take into account of the experimental fact that the / ratio can be large (>2.5) at large [29] so that ℎ 2 ( 2 , 2 ) in ( 7) can be properly reproduced. Azimuthal Dependence of the Ridge. 
We give in this subsection a brief summary of the Δ distribution that we have obtained previously in our treatment of the ridge formation [28].In so doing we also explain more thoroughly an aspect of the basic elements of our model. The tenets of our interpretation of the ridge structure are that its formation is due to (a) the passage of a semihard parton through the medium and (b) the conversion of the energy loss by the parton to the thermal energy of the soft partons in the vicinity of its trajectory.Hadronization of the enhanced thermal partons forms the ridge standing above the background.In [28], we have considered the geometry of the trajectory of a semihard parton traversing the medium in the transverse plane at mid-rapidity, || < 1, taking into account the azimuthal angle of the trajectory that is to be identified with the trigger direction relative to the reaction plane.Along that trajectory, labeled by points (, ) in the transverse plane, the medium expands in the direction (, ).If (, ) is approximately equal to for most of the points (, ) along the trajectory of the semihard parton, then the thermal partons enhanced by successive soft emissions are carried by the flow along in the same direction; the effects reinforce one another and lead to the formation of a ridge in a narrow cone.On the other hand, if the two directions are orthogonal, then the soft partons emitted from the various points along the trajectory are dispersed over a range of surface area, so their hadronization leads to no pronounced effect.These extreme possibilities suggest a correlation function between and , which we assume to have the Gaussian form where the width-squared is a parameter to be determined.This correlation is the central element of our Correlated Emission Model (CEM) [28]. Considerable care is given to the calculation of the observed ridge yield ( ) as a function of .It involves integrations over the path length of the trajectory of the semihard parton and its point of creation in the medium whose density depends on nuclear overlap, and so forth.To compare with the data on ( ), we also have to integrate over all of the ridge particle.It is found that by adjusting the value of it is possible to fit the data on ( ) in the entire range 0 < < /2 for both 0-5% and 20-60% centralities.The value determined is = 0.11, corresponding to a width = √ = 0.34 rad, which is much smaller than the width of the ridge itself, Δ ∼ 1.We have been able to show that using = 0.11 the calculated distribution of the ridge /Δ agrees well with the data.We further made a prediction on the existence of an asymmetry property of the ridge (, ) in its dependence relative to .That prediction was subsequently verified by the STAR data [43,44]. The mechanism for correlation described above will form the basis of transverse correlation when we move away from mid-rapidity to || > 1.It is necessary, however, to start the consideration with a discussion of the forward-moving soft partons relative to the semihard partons at early time. Longitudinal Initial Configuration. 
We now extend the mechanism for ridge formation at mid-rapidity described above to || > 1.Of course, without examining /ΔΔ at |Δ| > 1 one cannot strictly refer to the structure at |Δ| < 1 as ridge, which by definition has a flat distribution in Δ, but is restricted in |Δ|.We have actually considered the Δ behavior before we investigated the Δ structure at a time when the ridge was referred to as pedestal [30].Calculation was done in the framework where the trigger is formed by thermal-shower recombination and the associated particles in the ridge by the recombination of enhanced thermal partons.In view of our present phenomenological finding in Figure 2 and expressed in (3) and ( 6), we reformulate our model here with attention given to the initial configuration relevant to the problem at hand. In Section 3.3 we have discussed the correlation between the semihard parton at and the local flow direction at (, ), expressed in (8) for || < 1.To extend the same mechanism to || > 1, it is important to recognize first that the longitudinal momenta of the hadrons produced outside the mid-rapidity region are not generated by the semihard parton, as it would be ruled out simply by energy conservation.In accordance with the original parton model [45], the right-and left-moving partons in the initial configuration provide the main thrust for forward and backward momenta.To be more quantitatively pertinent to the ridge structure observed in [3], let us recall that the pseudorapidity ranges of the trigger and ridge particles are 0 < trig < 1.5 and −4 < Δ < 2. For the sake of discussing positive momentum fractions, let us reverse the signs of without loss of generality and regard 1 > −1.5 and 2 < 2.5 so that −2 < 2 − 1 < 4. Let us be generous and set 2 < 3; it corresponds to 2 > 0.1.That is, a ridge particle has / = tan 2 > 0.1.Assuming an average ⟨ ⟩ ∼ 0.4 GeV/c implies < 4 GeV/c.The coalescing quarks that form a pion at such a would have on average a longitudinal momentum of < 2 GeV/c (even less for a proton).For √ = 200 GeV, the corresponding momentum fraction of the quarks is 2 /√ < 0.02.Those soft partons do not have very large , being very nearly in the wee region [45].Thus the kinematics of the particles in the ridge does not indicate that the coalescing quarks are very much in the forward (or backward) fragmentation region. For √/2 = 100 GeV, the Lorentz contraction factor is sometimes taken to be ∼ 100, but that corresponds to = 1, where no quarks exist.If we take the average valencequark momentum fraction to be ⟨ val ⟩ ∼ 1/4, then the corresponding is ∼25 and Δ val ∼ 2 / ∼ 0.5 fm, which has a width that is not very thin.When two such slabs overlap in the initial configuration, the wee partons of the Au-Au colliding system can occupy a wider longitudinal space (Δ ∼ 2 fm) of uncertainty due to quantum fluctuations-1 fm on each side of the overlapping slabs consisting of soft parton with much smaller than ⟨ val ⟩.Our point is then that in that space of Δ ∼ 2 fm in the initial configuration quantum fluctuations free us from requiring the soft partons to follow a Hubble-like expansion, that is, the faster partons are on the outer edges of that longitudinal space, right-moving ones on the right side, and left-moving ones on the left.Note that we have this freedom because we have not restricted ourselves to a dynamical picture of flux tube being stretched by receding thin disks, as in [9,10,31]. 
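The kinematic estimates in the preceding paragraph are easy to verify numerically; the short script below reproduces them under the stated assumptions (⟨p_T⟩ ≈ 0.4 GeV/c, √s = 200 GeV, ⟨x_val⟩ ≈ 1/4, and a nuclear diameter of roughly 14 fm for Au, the last being a standard value rather than one quoted in the text).

import math

sqrt_s = 200.0          # GeV, Au-Au at RHIC
eta_max = 3.0           # upper edge taken for ridge particles in the text
mean_pt = 0.4           # GeV/c, assumed average transverse momentum
m_nucleon = 0.94        # GeV
x_val = 0.25            # assumed average valence-quark momentum fraction
diameter_au = 14.0      # fm, approximate Au diameter (2R)

theta = 2.0 * math.atan(math.exp(-eta_max))      # polar angle at eta = 3
p_long_pion = mean_pt / math.tan(theta)          # longitudinal momentum of the pion
p_long_quark = p_long_pion / 2.0                 # per coalescing quark
x_quark = 2.0 * p_long_quark / sqrt_s            # momentum fraction of such a quark

gamma_val = x_val * (sqrt_s / 2.0) / m_nucleon   # boost of a typical valence quark
dz_val = diameter_au / gamma_val                 # contracted longitudinal extent

print(f"theta(eta=3)   ~ {theta:.2f} rad, tan(theta) ~ {math.tan(theta):.2f}")
print(f"p_L(pion)      < {p_long_pion:.1f} GeV/c, x(quark) < {x_quark:.3f}")
print(f"gamma(valence) ~ {gamma_val:.0f}, Delta z_val ~ {dz_val:.1f} fm")

Running this reproduces the numbers quoted above: a polar angle near 0.1 rad at η = 3, pion longitudinal momenta below about 4 GeV/c, quark momentum fractions below about 0.02, and a contracted valence slab of roughly 0.5 fm.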
For a trigger particle to have trig > 2.5 GeV/c, the initiating semihard or hard parton must have > 3 GeV/c and is created at early time.In Figure 3 we show a sketch of the initial configuration in - plane that depicts the relationship among various possible momentum vectors at that time.The horizontal thickness of the shaded region is Δ ∼ 2 fm and the vertical height is 2 ∼ 12 fm for central collisions, thus not to scale.The central slab marked by a darker region of Δ val ∼ 0.5 fm represents the longitudinal extent in which the valence quarks are contracted.The (red) arrow labeled 1 is the semihard parton that initiates the trigger; it starts from inside the narrow slab because the longitudinal momenta of the colliding partons before scattering are high.The two other (blue) arrows labeled by 2 and 2 represent two possible soft partons with ≲ 2 GeV/c, originating from outside the inner slab, since their Δ is larger than Δ val .We place those vectors in such positions to emphasize the possibility that they can originate from the opposite sides of the slab. That is what we mean by expansion at early time that is not of Hubble-type.The conical region (shaded green) around vector 1 represents the vicinity of the trajectory of the semihard parton where the thermal partons are enhanced due to the energy loss by the semihard parton.Note that since the soft partons 2 and 2 have larger Δ than that of the valence quarks, they can cross the conical region, so the transverse components of the soft partons can be broadened by their interaction with the enhanced thermal partons. Transverse Correlation. The discussion above on the space-momentum relationship between the semihard and soft partons at early time in the uncertainty region Δ gives the conceptual basis for our view of how hadrons in the ridge are formed at late time.Our main point about the initial longitudinal uncertainty is that the forward-moving soft partons that eventually hadronize can be influenced by the semihard parton because the soft-parton trajectory starting from the left side of the central slab shown in Figure 3 can traverse the cone of enhanced thermal partons.To be more quantitative we return to the general factorizable form of the single-particle distribution given in (7) where 2 refers to the transverse component of particle 2. The effect of the semihard parton on particle 2 is the transverse broadening of the soft parton 2 in Figure 3, in much the same way that the Cronin effect is conventionally explained in terms of initialstate broadening [46].That is, the 2 dependence is affected if (a) there is a semihard parton 1 , and (b) 2 (and other soft partons not shown in Figure 3) passes through the cone in the vicinity of 1 .We denote the case without the semihard parton by ( 2 ) representing the background, where and the case with semihard parton and with in the vicinity of the cone by where > 0 is a result of the interaction with the enhanced thermal partons.Then the ridge has a transverse component that rises above the background and has the dependence This is the essence of transverse broadening due to the presence of semihard parton.Since the soft partons 2 must pass through the enhanced cone (narrow in ) in order to develop transverse broadening, they contribute to the ridge only within the Δ interval around 1 , discussed in [28]. 
The transverse correlation that we refer to is not what one usually associates with the correlation between hadrons in the fragments of a high- jet.All of those fragments are in a small range of Δ and have transverse-momentum fractions that are correlated.They populate the peak in Figure 2. In our problem about the ridge we have been concerned with the transverse momentum of a particle associated with a trigger outside that peak.The former reveals the effect of the medium on the jet, while the latter reveals the effect of the jet on the medium.That is the basic difference between the jet and ridge components of the associated particles.Since semihard or hard scattering takes place early, transverse broadening can take place for soft partons (the medium) moving through the interaction zone, leading to the ridge structure. It is important to note that although the exponential behaviors of the thermal partons have been parametrized by 0 and , there is no implication that those parameters are conventional temperatures and that hydrodynamics is valid from the beginning of the evolution process to the end.We have referred to as the inverse slope, as is appropriate for an exponential peak at low in any hadron scattering.The word thermal is used in reference to the soft component with the assumption that just before hadronization the bulk partons in the local system has an underlying thermal distribution as opposed to a power-law behaved hard component above the background.We do not assume that the global system is equilibrated at an early universal time and that the whole system can be adequately treated by hydrodynamics without considering the effects of the minijets.Our emphasis on semihard partons as the generators of the ridge and our reliance on non-Hubble-like expansion in the initial longitudinal configuration are features that explicitly depend on the departure from the usual assumptions of global thermalization in hydro calculations. The Ridge. We may now write the per-trigger ridge correlation distribution ℎ 2 ( 1 , 2 , 2 ) that is introduced in ( 4) and ( 5) in the form where, for Δ in the range of the ridge, (Δ, 2 ) may be approximated by ( 2 ) given in (11), that is, As we have seen in Figure 2 and (3) that range of Δ where > 0 is no more than the 2 range of ch / 2 , which in turn is determined by the 2 range of ℎ 2 ( 2 , 2 ) in (12).Thus in practice we may suppress the Δ dependence in (Δ, 2 ). The constant in ( 12) characterizes the magnitude of the ridge, which can depend on many factors that include the fluctuations in the initial configuration, the details of correlation dynamics, the experimental cuts, the Δ interval where the ridge is formed, and the related scheme of background subtraction.Its value (that was not calculated) does not affect the relationship between the dependencies of the two sides of (12).The expression for ( 2 ) in ( 13) was first obtained in [36,37] as a description of the ridge distribution without trigger.It was noted there that ( ) → 0 as → 0 and that / sets the scale for V 2 ( , ) for < 0.5 GeV/c in agreement with the data on it.More recently, a detailed study of V 2 ( , ) and the inclusive distribution has been carried out in [27], where it is found that 0 = 0.245 GeV and = 0.283 GeV, so that = 1.825GeV.Although our conclusion to be drawn below does not depend on the precision of those values, more comments on that subject will be given in Section 4. 
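The statements that the ridge factor vanishes as p_T → 0 and that the combination of T_0 and T quoted above (giving 1.825 GeV) sets the low-p_T scale are consistent with a difference-of-exponentials form; the expression below is our reconstruction based on those statements and on the description of Eqs. (9)-(11), with the labels B and R chosen by us, not a verbatim copy of Eq. (13).

B(p_T) \propto e^{-p_T/T_0}, \qquad R(p_T) \propto e^{-p_T/T} - e^{-p_T/T_0}, \qquad T = T_0 + \Delta T,

R(p_T) \;\longrightarrow\; p_T\,\frac{T - T_0}{T_0\,T} = \frac{p_T}{1.825\ \mathrm{GeV}} \quad (p_T \to 0), \qquad T_0 = 0.245\ \mathrm{GeV},\ T = 0.283\ \mathrm{GeV}.

With these values the low-p_T slope of R indeed scales as p_T divided by T_0 T/(T − T_0) ≈ 1.825 GeV, matching the quoted number.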
Let us give an overview of what we have done.The LHS of ( 14) is the measured ridge distribution in Δ, which is related to the two-particle distribution through the definitions given in ( 4) and (5).Instead of concentrating on and examining the dynamics of long-range longitudinal correlation, we have found through the phenomenological observation made in (3), and thus (6), that the correlation data can largely be understood by focussing on the relation given in (12), where the ridge correlation is expressed in terms of the component in the transverse-momentum part of the single-particle distribution that exhibit the same ( 2 ) behavior at various Δ values in the range where ( 6) is valid without any Δ dependence in the longitudinal component, that is, transverse correlation.Thus the ridge is generated by the same dynamical mechanism at any in the range where single-particle distribution can reach.That mechanism depends on semihard or hard partons (with or without trigger) whose energy loss to the medium leads to transverse broadening of small- partons that encounter the enhanced region of thermal partons. The transverse-momentum distribution of the ridge particles is the same for any , and the range of the ridge is no more than that of the single-particle inclusive distribution because the partonic origin of the longitudinal momentum of any particle is the same. Relationship to Azimuthal Quadrupole at RHIC and the Ridge in 𝑝𝑝 Collisions at LHC Having described how the ridge phenomenon observed by PHOBOS can be understood in terms of transverse correlation without longitudinal correlation, we now solidify that description by connecting the dynamical mechanism to other features observed at RHIC and LHC that exhibit more quantitative behaviors.They are (a) azimuthal quadrupole (usually referred to as elliptic flow in fluid description) generated by minijets in noncentral Au-Au collisions at RHIC and (b) the ridge phenomenon found in collisions at LHC.In establishing the relationship between ( 6) and ( 14), we made the argument that ( 2 )/( 2 ) in ( 14) can be approximated by a constant in the 2 region where the integrand has a maximum.The PHOBOS experiment provides no details about the of the associated particles, since it is integrated over the entire detected region [3].Thus the approximation made cannot be done without some quantitative knowledge of the 2 dependence.In (13) we show the functional form of ( 2 ), while in ( 7) ( 2 ) is given.Their 2 behaviors have been examined in great detail in the study of the spectra and azimuthal anisotropies of pions and protons produced in Au-Au collisions at various centralities [27], without being concerned about correlations.It is therefore important to note here that the subject matter of transverse correlation, discussed in Section 3.4, is intimately related to the , and dependences of single-particle distribution without triggers.The connection between the two is the ridge. 
The basic physical origin of the ridge is the pervasive presence of semihard partons.Whether or not the semihard parton is detected by a trigger, its effect on the single-particle distribution ℎ is always present.Thus in [27] ℎ for hadron ℎ has been written in the form at mid-rapidity, low and impact parameter ℎ ( , , ) = ℎ ( , ) + ℎ ( T , , ) + ℎ ( , , ) , (15) where the three terms correspond to base, ridge, and minijets, respectively.The base, ℎ ( , ), has no dependence; its dependence is where N ℎ ( , ) is a normalization factor for hadron ℎ that depends on the hadronic wave function. ( ) is given in (9).The ridge term has a specific dependence due to semihard partons in the initial configuration and can be shown to account for the observed V 2 ( , ) without using hydrodynamics [27].For our purpose here we mention only Advances in High Energy Physics that after averaging over all the resultant ℎ ( , ) has the form where ( ) is as given in (13).The main point we want to stress is that the per-trigger correlation distribution ℎ 2 ( 1 , 2 , 2 ) discussed in Section 3.5 involves the same (Δ, 2 ) as the ( ) in (17) embedded in the singleparticle distribution ℎ ( , , ).Although (Δ, 2 ) has not been measured directly, the form of ( ) has been tested by the dependence of V 2 ( , ), as described in [27].The values of 0 and determined there lead to our conclusion in Section 3.5 that ( 2 )/( 2 ) inside the integral in ( 14) can be approximated by a constant , since the high- 2 region is suppressed by ℎ 2 1 ( 2 , 2 ).Thus our proposal in (3) is confirmed. The above discussion refers to different aspects of the Au-Au collisions at RHIC.Now, we turn to a different connection between the ridge found at RHIC and the ridge observed in collisions at LHC, which is a study of autocorrelation between two particles produced at 7 TeV without using triggers [6].That connection, as stunning as it was at the time of discovery, provides another quantitative verification of the concept of transverse correlation discussed here. 
In collisions one does not expect even at 7 TeV the formation of a dense system that can be treated by hydrodynamics.Since our approach has been to emphasize that the origin of the ridge is not to be found in hydro flow but in minijet production, it is then very natural to apply our model to collisions at LHC.In [6], reported by CMS, it is found that the two-particle correlation function develops a ridge structure at |Δ| > 2 and that the ridge yield increases significantly with event multiplicity in the region 1 < < 3 GeV/c but not outside that region.That is a direct statement on the dependence of the ridge that is highly relevant to what we have regarded as transverse correlation.Indeed, the problem has been studied in [47], where the two particles at 1 and 2 separated by at least 2 units are treated as longitudinally independent, but transversely correlated in the factorizable form where ( ) is as given in (13).The factorized form in ( 18) is an explicit expression of the assumption that there is no longitudinal correlation, yet there exist correlations between particles produced at widely separated 1 and 2 because their distributions are both enhanced by a common semihard jet [47].The values of 0 and in ( ) are adjusted to fit the data.Excellent results are obtained by virtue of the dependence in (13) that has a peak in just the region where the CMS data show the ridge structure.Thus our approach to the present problem receives strong support from the dual properties that we are able (a) to relate the ridges in these two very different systems and (b) to show that the transverse correlation with the same dependence (except for the numerical values of and 0 ) is responsible for both systems.It is of interest to also remark here about the ridge found in Pb collisions at √ = 5.02 TeV at LHC [48][49][50][51].Comparison between the and Pb collision systems can best be made by examining [6,48], which report data obtained by the same experimental group (CMS) and analyzed in the same way.Indeed, similar properties of the same-side ridge are found in that the associated yields at 2 < |Δ| < 4 are most pronounced in the region 1 < < 2 GeV/c and at high event multiplicities.Since the results are on autocorrelation without trigger, it is not possible to apply the method used in Section 2 to relate single-particle distribution to the Δ dependence of the ridge structure.It should be recognized that in autocorrelation the two particles at 1 and 2 can be separated by |Δ| = 4 but on opposite sides of an undetected semihard jets with 1,2 − jet = ±2 thus not correlated to the jet with a range as long as 4. Note, however, that the dependences of the ridges in the and Pb systems are similar and are consistent with transverse correlation discussed here, as noted in [47].An important difference is the dependence on event multiplicity.The highmultiplicity events in Pb collisions are consequences of particle production in multiple soft proton-nucleon scatterings, whereas in collisions those events arise from rare multiple hard-scattering processes.Thus to understand fully the ridge structure in Pb collisions in the framework of the present approach requires detailed study that has not yet been undertaken. 
Conclusion An issue that this study has brought up is the usage of the word "large" in referring to the range of Δ in the ridge structure.Our phenomenological observation in (3), substantiated by Figures 1 and 2, does not reveal any quantitative definition of what large Δ means.To be able to relate large Δ to dynamical long-range correlation is a worthy theoretical endeavor but more can be added to its phenomenological relevance if it can also elucidate the empirical connection between the two sides of (3). The approach that we have taken involves no long-range longitudinal correlation for the ridge.The observed ridge distribution is interpreted in our approach as being due to transverse correlation with a range in Δ, that is, no more than that of the single-particle distribution.That is, the distributions of the detected hadrons in the ridge have a larger inverse slope than that of the particles outside, which have larger Δ than the ridge width.We have described a partonic basis for how the transverse correlation can arise; it emphasizes the point that without semihard partons there can be no ridge (with or without trigger detection). If a hard (or semihard) scattering is likened to an earthquake, then the ridge is the counterpart of tsunami, and the thermal medium carrying the enhancement is the ocean water.Transverse correlation is the rise in water level at various points along a coast hit by the tsunami.Although the tsunami damage is insensitive to the horizontal separation among the coastal cities, it should not be interpreted as evidence for long-range horizontal (longitudinal) correlation.The buildings in different cities are not horizontally correlated, but their uprooting by vertical displacements is a sign of transverse correlation caused by the tsunami.Similarly, there is transverse correlation at various points in the ridge but no long-range longitudinal correlation.Where the analogy fails, as all analogies do at some point, is that our expanding system illustrated in Figure 3 is not Hubble-like in the initial configuration and that the soft partons must intersect the enhanced cone of the hard parton in order to carry the effect of enhancement at |Δ| > 1.That is where the restriction in Δ enters in the ridge problem.There is no such complication in the earthquake/tsunami example, which is strictly a classical case of wave propagation.Another point where the analogy may be misleading is that in the case of the tsunami the energy of wave propagation is provided entirely by the earthquake.In our problem, the momenta of the forward-moving soft partons are in the initial state whether or not there is a hard (or semihard) scattering.They are the medium; their transverse momenta can be enhanced to form a ridge in the same way that the ocean water can be perturbed by the earthquake to develop a tsunami, whose underlying medium, however, does not expand.Note that in both cases the detection of trigger or earthquake is not essential in assessing the effect of ridge or tsunami.The main point of the analogy is to illustrate the meaning of transverse correlation at separated rapidities without longitudinal correlation (and without suggesting similarity in dynamics). 
A crucial point in our interpretation of the ridge phenomenon is that the quantum fluctuation of the longitudinal coordinates of the initial configuration is important, as illustrated in Figure 3.Because of the possibility that low- partons with positive momenta do not necessarily have to be located on the positive side of the thinner slab to which the high- partons are contracted, the usual approximation that equates spatial rapidity with momentum rapidity should not be extended to the neighborhood of the tip of the forward light cone.Fluctuations of the initial longitudinal configuration are not usually considered.Here we find that longitudinal fluctuation of the initial parton configuration can be the source of the longitudinal structure in the ridge phenomenon.Fluctuations of the initial transverse configuration have been investigated vigorously in recent years, leading to results according to hydrodynamical expansion that have significant phenomenological consequences on the transverse structure quantified by the azimuthal harmonics, one of which being the diminution of the ridge itself.That approach relies heavily on the validity of hydrodynamics, which has not been used here.The relevance of higher-harmonic fluctuations has also been challenged by a study of the -integrated 2D angular correlation [26].The transverse momentum distributions of the base and ridge that we rely on have been studied in detail in connection with V 2 ( ) generated by minijets in Au-Au collisions at RHIC and with ridge formation in collisions at LHC; they all have the same structure. Finally, we return to Figure 2 and note that this investigation was motivated by the observation made on the empirical relationship between the ridge distribution in Δ and the single-particle distribution in shown in that figure .A number of previous studies on the origin of the ridge structure are based on other approaches, the first being by Wong in the momentum-kick model [52,53], followed by others, such as in flux-tube initiated hydrodynamics [54], and especially a large group working in the framework of Color Glass Condensate, as in [10,31,55], which led more recently to [56][57][58] on the ridge formation in and Pb collisions.In all those investigations, the authors focus on different mechanisms that offer various sufficient but not necessary explanations of the ridges.In most of those approaches, the emphases are on long-range correlation, and none of them recognize the relationship exhibited in Figure 2, which should therefore provide a useful constraint on all the models proposed. To sum up our work here, we have two important findings to emphasize.One is the phenomenological relationship between ch /Δ and ch / that shows the absence of necessity for intrinsic long-range correlation in .The other is an interpretation of that relationship in terms of transverse correlation without long-range longitudinal correlation. Figure 1 : Figure 1: Pseudorapidity distribution in Au-Au collisions at √ = 200 GeV for (a) various centrality bins and (b) 0-30% centrality.Data are from[12].The (red) line in (b) is a fit using (2), whose first term is represented by the dashed line and the other two terms by the dash-dotted line (color online). Figure 2 : Figure2: Two-particle correlation of charged particles.Data are from[3] that include both ridge and jet components.The line is a plot according to (3) using distribution from Figure1[12] (color online). 
Figure 3: A sketch of the initial configuration in the x-z plane at early time. The horizontal thickness of the medium is Δz ∼ 2 fm; the inner vertical slab indicates the relative thickness (∼0.5 fm) of the overlapping contracted disks in which the valence quarks are restricted. The red arrow represents a semihard parton surrounded in the medium by a cone of enhanced thermal partons. The blue arrows represent soft partons with momenta ≲ 2 GeV/c that originate from outside the slab and can therefore interact with the cone (color online).
10,505.6
2013-06-05T00:00:00.000
[ "Physics" ]
What is the effect of changing eligibility criteria for disability benefits on employment? A systematic review and meta-analysis of evidence from OECD countries Background Restrictions in the eligibility requirements for disability benefits have been introduced in many countries, on the assumption that this will increase work incentives for people with chronic illness and disabilities. Evidence to support this assumption is unclear, but there is a danger that removal of social protection without increased employment would increase the risk of poverty among disabled people. This paper presents a systematic review of the evidence on the employment effects of changes to eligibility criteria across OECD countries. Methods Systematic review of all empirical studies from OECD countries from 1990 to June 2018 investigating the effect of changes in eligibility requirements and income replacement level of disability benefits on the employment of disabled people. Studies were narratively synthesised, and meta-analysis was performed using meta-regression on all separate results. The systematic review protocol was registered with the Prospective Register of Systematic Reviews (Registration code: PROSPERO 2018 CRD42018103930). Results Seventeen studies from seven countries met the inclusion criteria. Eight investigated an expansion of eligibility criteria and nine a restriction. There were 36 separate results included from the 17 studies. Fourteen examined an expansion of eligibility; six found significantly reduced employment, eight no significant effect and one increased employment. Twenty-two results examined a restriction in eligibility for benefits; three found significantly increased employment, 18 no significant effect and one reduced employment. Meta-regression of all studies produced a relative risk of employment of 1.006 (95% CI 0.999 to 1.014; I² 77%). Conclusions There was no firm evidence that changes in eligibility affected the employment of disabled people. Restricting eligibility therefore has the potential to lead to a growing number of people with health problems who are out of employment and not eligible for adequate social protection, increasing their risk of poverty. Policymakers and researchers need to address the lack of robust evidence for assessing the employment impact of these types of welfare reforms, as well as the potential wider poverty impacts. Publishability: The primary concerns that I have with the paper regard the actual implementation of the meta-analysis. As I detail in my comments below, the narrative of the paper contradicts itself in some places, is less clear than it can (or should) be in others, and overall would benefit greatly from additional polish and attention to detail. I have done my best below to explain these thoughts in relation to where they came up within the current version of the manuscript. Recommendations by Location 1. (Lines 56-57): The logic here is not necessarily true, as it depends on the rates at which the average age and the retirement age are increasing. That's not to say it can't be true, but it's a claim that needs to be backed up, especially if it is forming the core of the research's motivation. 2. (Lines 67-69): These few lines encapsulate a running concern I have with this manuscript that I'll return to throughout this report. At a fundamental level, a meta-analysis should be about combining multiple estimates of some underlying parameter of interest to increase the precision beyond what any single estimate could provide on its own.
The obvious concern being pointed out here is that, for something like disability benefits, other institutional factors matter, including access to health care and other social safety net policies. As a result, it can be challenging to defend the combination of estimates across countries (here across the U.S. and Europe). It is perfectly reasonable to limit a meta-analysis to a specific group of countries, either because the research question being asked focuses on that group or because the best evidence comes from there. However, throughout this manuscript I see comparisons (primarily U.S. vs. other) of estimates that seem to imply to me that the authors themselves see these as two different "groups" of estimates that should not be considered as independent draws from the same underlying treatment effect distribution; these lines are the first instance of this. As a result, it is unclear to me why the authors decide to combine all of the results during their meta-analysis, ignoring these differences and (at least implicitly) assuming that such results are directly comparable. 3. (Lines 74-75): This is related to my point above. It is a good point that, although similar in absolute terms, the effects of expanded eligibility may not simply be -1 times the effects of restricted eligibility. However, in your meta-analysis you effectively treat these as the same in most cases. Your funnel plot combines them and, although you admittedly separate them in Figure 2, I do not see any discussion of why we should believe these effects mirror one another. The empirical evidence I see here seems to imply that they don't look statistically different from one another, implying that the conclusions of the previous literature are unaffected by this beyond having greater imprecision. However, given the work here is still rather imprecise, that does not yield a clear improvement. (Line 81): The analysis here only includes eight countries (out of a possible 37), so this seems to be, at best, a marginal improvement. Furthermore, of those eight, three are represented by a single study. Perhaps these studies included are of a higher quality, but I do not see a compelling argument for quantity of studies as a marked improvement. 5. (Lines 84-85): Why is regional variation within a country (your example is across Canadian provinces) a limitation? I would see a group of studies using cross-region withincountry variation as preferable to a series of cross-country studies. 6. (Lines 96-98): Why do we believe that these estimates across countries are comparable? You've already argued earlier (Lines 67-69) why U.S. studies should be thought of differently from other OECD countries and it appears that Barr et al. (2010) stated this explicitly in their exclusion of such studies. Why the change in stance? I think it's reasonable to be skeptical to assume that U.S./non-U.S. studies are drawing from the same underlying treatment parameter distribution, even conditional on assuming all such estimates are unbiased and causal, so this decision requires motivation and justification. 7. (Lines 123-126): How do you handle the wide variety of age ranges included in these studies? There are a large number of factors here that may influence both the probability of being (or becoming) disabled. You say that the study must incorporate older workers, and while many of these studies focus at or near your age range of 50-65, those that do not will be biased from the effect on older workers towards the average effect for all workers. 
If you believe those effects are the same, you should defend that claim. If you do not believe so, you should discuss how these studies will potentially bias your results as my guess would be the inclusion of younger workers pushes you towards a null effect. Given that's your ultimate conclusion, could be a real concern. 8. (Lines 144-150): With only 17 studies in your final data set, why not apply this doublechecking process to all of them? 9. (Lines 176-179): You criticized the informativeness of papers studying increased access (Lines 74-76) because that's not the direction of most current policy changes. This criticism, of course, has merit. However, here you assume that the effects are reflexive and I am unsure why you have decided to not only back off of their informativeness but make the additional assumption that the effects are 1-for-1 in the opposite direction. 10. (Line 238): Each paper only showed approximately 2 regression estimates? Or have you pared down the estimates in some other manner that you haven't mentioned? It's relatively common to restrict each study to it's main or primary conclusions, so I don't have any concerns regarding that, but an explicit acknowledgment should be included. studies why lump them together in your meta-analysis and ignore the clear cross-country heterogeneity? I understand that statistical power is a classic limitation of this type of work, but these studies don't appear to be fundamentally comparable in this manner and, if they are, it would require discussion and justification. 14. (Lines 329-330): There's a fundamental issue here that I believe the authors need to discuss. Namely, that is the "bite" of these disability policies. It seems most studies find a treatment effect estimate that is in line with the hypothesis that restricted eligibility increases employment and relaxed eligibility decreases employment. However, most studies also find a small effect in absolute terms and this effect is most often statistically insignificant. However, the authors do not discuss the prevalence of disabilities within the broader population. To demonstrate, imagine the "true" RR of relaxing eligibility was a decrease in employment by 1 percentage point for the sample being studied. Most people will not be affected by disabilities, though. So, to calculate the effect of this policy on disabled workers, the estimate needs to be scaled by their prevalence in the sample studied. If 10 percent of the population were affected by this policy change, then the effect on them would be 10 times as large. This is especially important here since, as the authors point out, some of these studies use the population of workers in a given country. As a result, to truly understand the effect of this policy change on affected workers the estimates need to be scaled. This also comes, of course, with the implicit assumption that there are no spillover effects as a result of the policy on non-disabled workers. This would also introduce a potential bias when using the population of workers, but my gut feeling is that this bias is secondary in nature. At minimum, this concern needs to be discussed and descriptive statistics need to be provided giving the reader the potential magnitude and scope of this concern. 15. (Figure 3): It is hard to determine the median value from this figure, but it appears to be to the right of 1.0. As with my comment above, even a small effect in absolute terms could be large when looking more closely at the group of workers actually affected. 17. 
17. (Line 394): What was the result of your FAT? One way to really strengthen the arguments put forth in this manuscript would be to provide direct, quantifiable estimates wherever possible; at present, the manuscript does this inconsistently. (The sketch following these comments illustrates one standard way to report a funnel-asymmetry regression.)

18. (Lines 397-405): I must admit, I am a bit unsure what this paragraph is trying to accomplish. As I read it, it seems to argue against the quality of the studies included in the meta-analysis. That is not to say the points raised here are invalid, but it seems to undermine the idea of performing a meta-analysis with such studies in the first place.

19. (Lines 409-419): It could also be the case that there is an effect but its size is small when examined at the population level. As stated earlier, if this type of claim is going to be made (i.e., that the effects are close to zero), then at the very least the upper end of the confidence interval needs to be reported as the largest effect size the authors are willing to rule out. This also needs to account for the bite of the policy: perhaps the bite is quite large and the effects do not need to be scaled a great deal, but it would still need to be addressed.

Minor recommendations

1. (Throughout): The polish of the writing is quite rough throughout the manuscript and it needs a thorough proofreading.

2. (Figure 3): Figure 3 should have the standard error on the y-axis replaced by the precision (1/SE). That way the graph itself does not need to look any different, but the y-axis can increase from bottom to top rather than decrease. The flipped axis initially confused me as I was looking at it (see the funnel-plot sketch after these comments).

3. (Lines 424-425): I am not completely sold on making generalizations to OECD countries given that only about 20% of its member countries are represented here (eight out of a possible 37) and that there seems to be a great deal of heterogeneity across the U.S./non-U.S. studies.
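To make the prevalence rescaling in comment 14 concrete, here is a minimal sketch; all numbers are hypothetical placeholders rather than figures taken from the manuscript, and the helper name is mine.

```python
# Illustrative only: rescaling a population-level employment effect by the
# prevalence of the affected group (the "bite"), as raised in comment 14.
# All numbers below are hypothetical, not taken from the manuscript.

def effect_on_affected(population_effect_pp: float, prevalence: float) -> float:
    """Convert a population-level effect (in percentage points) into the implied
    effect on the affected group, assuming no spillovers onto unaffected workers."""
    if not 0 < prevalence <= 1:
        raise ValueError("prevalence must lie in (0, 1]")
    return population_effect_pp / prevalence

# A 1 pp drop in employment estimated on the whole working-age population,
# with 10% of that population actually exposed to the eligibility change,
# corresponds to a 10 pp drop among the exposed group.
print(effect_on_affected(population_effect_pp=-1.0, prevalence=0.10))  # -> -10.0
```

The same arithmetic applies to the upper end of any confidence interval the authors choose to report (comment 19).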
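Relatedly, comment 17 and minor recommendation 2 can be illustrated with a short sketch: a funnel plot drawn with precision (1/SE) increasing up the y-axis, and a FAT-PET style regression of each estimate on its standard error, weighted by inverse variance. The data are simulated placeholders; I am not asserting this is the specification the authors used, only one conventional way to report a quantifiable funnel-asymmetry result.

```python
# Hedged sketch: funnel plot with precision (1/SE) on the y-axis, plus a
# FAT-PET regression (effect_i = b0 + b1 * SE_i, weighted by 1/SE_i^2).
# The data below are simulated placeholders, not the manuscript's estimates.
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

rng = np.random.default_rng(0)
k = 40                                    # number of hypothetical estimates
se = rng.uniform(0.02, 0.25, size=k)      # standard errors
effect = rng.normal(0.0, se)              # effects drawn around a true value of 0

# Funnel plot with precision increasing upwards (minor recommendation 2).
plt.scatter(effect, 1.0 / se)
plt.axvline(0.0, linestyle="--")
plt.xlabel("estimated effect")
plt.ylabel("precision (1 / SE)")
plt.title("Funnel plot (precision on the y-axis)")
plt.savefig("funnel_precision.png")

# FAT-PET: the SE coefficient tests funnel asymmetry (FAT); the intercept is
# the precision-effect estimate of the underlying effect (PET).
X = sm.add_constant(se)
fit = sm.WLS(effect, X, weights=1.0 / se**2).fit()
print(fit.summary())
```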
2,840.6
2020-12-01T00:00:00.000
[ "Medicine", "Economics" ]
Grassmann Matrix Quantum Mechanics We explore quantum mechanical theories whose fundamental degrees of freedom are rectangular matrices with Grassmann valued matrix elements. We study particular models where the low energy sector can be described in terms of a bosonic Hermitian matrix quantum mechanics. We describe the classical curved phase space that emerges in the low energy sector. The phase space lives on a compact Kahler manifold parameterized by a complex matrix, of the type discovered some time ago by Berezin. The emergence of a semiclassical bosonic matrix quantum mechanics at low energies requires that the original Grassmann matrices be in the long rectangular limit. We discuss possible holographic interpretations of such matrix models which, by construction, are endowed with a finite dimensional Hilbert space. Introduction Models with matrix like degrees of freedom make numerous appearances throughout physics. Applications range from the study of the spectra of heavy atoms to models of emergent geometry [1,2,3,4,5,6]. In this paper we will concern ourselves with a particular class of quantum mechanical models whose degrees of freedom are purely fermionic rectangular matrices ψ Ai , with A = 1, ..., M and i = 1, ..., N . The matrices transform in the (M, N ) bifundamental representation of a U (M )×SU (N ) symmetry group. In a Lagrangian description of the system, transition amplitudes can be expressed as path integrals over Grassmann valued paths ψ Ai . Grassmann matrices naturally appear as the supersymmetric partners of bosonic Hermitian matrices in supersymmetric matrix quantum mechanical theories such as the low energy worldline dynamics of a stack of N D0-branes in type IIA string theory [3,7] or the Marinari-Parisi matrix model [8]. Our interest is in quantum mechanical models consisting of only the Grassmann matrices. There, it was shown how the problem of Grassmann matrix integrals at large N , M can be expressed as an eigenvalue problem for the composite N × N matrix Φ ij = Aψ iA ψ Aj , which is effectively bosonic. Unlike bosonic matrices, a Grassmann valued matrix cannot be diagonalized and characterized in terms of eigenvalues. Instead, the authors were able to analyze the model by diagonalizing Φ ij . Certain features of the Φ ij integral, such as a contribution to the potential of the form tr log Φ, were shown to be universal and specifically related to the Grassmann nature of the original problem. Along a similar vein, emergent bosonic matrices from spin systems were considered in [12,13]. The models of interest in our work can be viewed as multi-particle quantum mechanical models of fermions which can occupy a finite set of single particle states |A, i, α , labeled by the matrix indices. In particular the Hilbert space is finite dimensional. Fermionic multi-particle models often arise as lattice models in condensed matter physics, where there is typically an assumption about some sort of nearest-neighbour interaction between the fermions reflecting spatial locality. In contrast, the class of models of interest in our paper have no such notion of spatial locality. They are described by actions of the form: The potential V (x) is an N ×N matrix valued function. The index α is an spinor index associated to the d-dimensional rotation group, but we will focus on the particular case of d = 3 and take the σ αβ to be the ordinary Pauli matrices. We will also demand that the potential V (x) be SO(3) invariant. 1 An example of such a model was studied in [14]. 
The objects we wish to understand are path integrals over {ψ α iA (t), ψ α Ai (t)} rather than simple integrals. In particular, we study to what extent the Grassmann matrix models at large N and M can be described in terms of a composite bosonic matrix degree of freedom. We then describe several features of the emergent bosonic matrix quantum mechanical systems. We focus on the case where V (x) is quartic in the Grassmann matrices, but the techniques we develop can be used more generally. As mentioned, our models have a finite dimensional Hilbert space. In this sense they differ from many of the quantum mechanical models studied in the context of holography, such as the D0-brane quantum mechanics or N = 4 super Yang-Mills, where the systems have an infinite space of states, even at finite N . On the other hand, several proposals have been made throughout the literature suggesting that the holographic dual of a de Sitter universe (or at least its static patch) is indeed a system with a finite dimensional Hilbert space [15,16,17,18,19,20]. Our considerations are particularly similar, in spirit, to those of [15,16] where the basic building blocks are also taken to be a large collection of fermionic operators. Part of our motivation is to understand to what extent systems with a finite Hilbert space can give rise to a holographic description with a dual gravitational theory in an appropriate large N type limit. In order for this to be the case, bosonic variables (such as the Hermitean matrices) should emerge from the discrete variables, at least at low energies and in an appropriate large N limit. The models studied in this work serve as toy models where this can be seen explicitly, and we can examine to what extent the bosonic effective degrees of freedom adequately capture the physics and when this description breaks down. The first part of the paper provides a detailed study for the N = 1 case, in which the degrees of freedom are organized as vectors. We derive several results regarding the physics of the effective composite degree of freedomψ α A σ αβ ψ β A . We show to what extent the theory is described by three bosonic degrees of freedom x = (x, y, z) transforming as an SO(3) vector. The Euclidean path integral is expressed as a path integral over x and a low velocity expansion is developed at large M . We study the theories at finite temperature and note a breakdown of the bosonic description at high temperatures. We describe the structure of the emergent classical phase space for the effective bosonic theory, which is the compact Kähler manifold CP 1 . Some of the results in this section have appeared in several contexts (see for example [21,22,25]). However, certain aspects of our treatment are novel and furthermore our treatment naturally generalizes to the matrix case. This is studied in the second part of the paper, where now the effective theory becomes that of three bosonic Hermitian N ×N matrices Σ a ij , with a ∈ {x, y, z}. The matrix Σ a ij transforms in the adjoint of SU (N ) and is an SO(3) vector. The matrix analogue of the emergent classical phase space is identified as a compact Kähler manifold, first introduced by Berezin [26]. The Kähler metric is parameterized by a complex N × N matrix Z ij . We discuss how the Z ij and Z † ij relate to the description of the system in terms of the Σ a ij as well as the original Grassmann matrices. The volume of the Kähler metric computes the dimension of the Hilbert space captured by the (quantized) classical phase space. 
It is shown to precisely match the dimension of the U (M ) invariant Hilbert space of the original Grassmann theory. We end with an outlook discussing speculative connections of our models to holography. Vector model In this section we discuss a quantum mechanical model in which the degrees of freedom are a vector ψ α A of complex Grassmann numbers, with A = 1, . . . , M and α = 1, 2 a spinor index of SU (2), the double cover of the rotational group SO(3). Our system has a 2 2M complex-dimensional Hilbert space of states. The purpose of the section is to analyze a simplified version of the matrix model studied in the next section, which however still retains some of the salient features. We focus on an action with quartic interactions of the specific form: where it is understood that the A and α indices are summed over and the σ a αβ = {σ x αβ , σ y αβ , σ z αβ } are the three Pauli matrices. The model has an SU (2) × U (M ) global symmetry group. The (ψ α A ) ψ α A transform in the (anti-)fundamental representation of U (M ) and SU (2). Upon canonical quantization, the non-vanishing anti-commutation relations between the fermionic operators are given by {ψ α A , ψ β B } = δ αβ δ AB . The SU (2) genera-tors working on these operators are given byĴ a =ψ α A σ a αβ ψ β A /2. The U (M ) generators are given by:Ĵ The T n AB with n > 0 are the traceless generators of SU (M ) subgroup of U (M ), and T 0 AB = δ AB generates the U (1) subgroup of U (M ). c is a normal ordering constant that appears as a possible central extension of the U (1). As expected, [Ĵ n ,Ĵ a ] = 0. We take g > 0 in what follows and measure quantities in units of g so that g = 1. Spectrum The Hamiltonian of the system is proportional to the normal ordered square of the angular momentum operator: wheren ≡ψ α A ψ α A , commutes with theĴ a . If we view the index A as a lattice site, the system above is describing two-body SU (2) spin-spin interactions of spin-1/2 fermions between all M possible lattice sites, each with equal strength. From (2.3), it follows that the the eigenstates |J, m; n can be labeled by their total angular momentum J, their angular momentum m in the z-direction and their eigenvalue n with respect to then operator. The energy of |J, m; n is simply E = −4J(J + 1) + 3n. For M > 1, the ground states |g are the (M + 1) states in the maximally spinning spin-M/2 multiplet, whereas the J = 0 state with n = 2M has maximal energy. We can construct the full Hilbert space by acting with theψ α A operators on the particular J = 0 state |0 , defined to be the state annihilated by all the ψ α A . For instance the ground state with maximal spin-z angular momentum is |M/2, M/2; M = Aψ For each A we have two states with vanishing angular momentum in the zdirection, and a spin-1/2 doublet. The full Hilbert space can thus be written succinctly as H = (0 ⊕ 1/2 ⊕ 0) ⊗M . The degeneracies for a given angular momentum in the z-direction can be obtained from the partition function: From the above partition function, we can also obtain the degeneracies of the multi- plets with total spin J: . Effective theory We would now like to recast the Euclidean path integral of the theory as a Euclidean path integral of a bosonic (mesonic) variable and understand several features of the model in terms of the bosonic degree of freedom. The Euclidean path integral computes features in the low energy sector the system. 
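As a quick consistency check of the spectrum just described, the decomposition H = (0 ⊕ 1/2 ⊕ 0)^{⊗M} can be enumerated directly: each site contributes 2 + q + q^{-1} to the generating function in q^{2m}, the number of spin-J multiplets follows from N(J) = d(2J) - d(2J + 2), the total dimension must come out to 2^{2M}, and a single spin-M/2 multiplet (with M + 1 ground states) sits at the top. The following sketch is illustrative only; the helper names are not from the text.

```python
# Spin content of H = (0 + 1/2 + 0)^{tensor M}: track q^{2m} per site,
# where the four single-site states contribute 2 + q + q^{-1}.
from collections import Counter

def zspin_degeneracies(M: int) -> Counter:
    """Multiplicities d(2m) of the z-angular momentum for M sites."""
    d = Counter({0: 1})
    site = Counter({0: 2, 1: 1, -1: 1})          # exponents are 2m
    for _ in range(M):
        new = Counter()
        for a, ca in d.items():
            for b, cb in site.items():
                new[a + b] += ca * cb
        d = new
    return d

def multiplet_counts(M: int) -> dict:
    """Number of spin-J multiplets, via N(J) = d(2J) - d(2J + 2)."""
    d = zspin_degeneracies(M)
    return {J2 / 2: d[J2] - d[J2 + 2]
            for J2 in range(M, -1, -1) if d[J2] - d[J2 + 2] > 0}

M = 4
d = zspin_degeneracies(M)
print(sum(d.values()) == 2 ** (2 * M))   # full Hilbert space dimension 2^{2M}
print(multiplet_counts(M))               # exactly one spin-M/2 multiplet at the top
```

With these degeneracies in hand, we return to the Euclidean path integral for the low-energy sector.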
For instance, the generating function of vacuum correlation functions is given by: where the Euclidean action S E is obtained from −iS by a Wick rotation t = −iτ . Upon introducing an auxiliary three-vector x and integrating out the Grassmann variables, this can be recast as: where r = |x|. From the partition function we can read off the effective action for the x degree of freedom: As it stands, the above action is highly non-local in τ . We would like to understand under what conditions this action can approximated by a small velocity expansion. Generally speaking there is no a priori reason for this to be the case in a quantum system, given that the spectrum is discrete and one cannot continuously change the kinetic energy. However, one may hope that it would be a valid approximation at large M . We will see that this is the case. Small velocity expansion It is useful to diagonalize the 2 × 2 Hermitian matrix x · σ for each τ . Since the σ are traceless, we take some U ∈ SU (2) such that U † σ · x U = r σ z for each τ . The U matrix is parameterized by a unit vector n = (sin θ cos φ, sin θ sin φ, cos θ). Explicitly: (2.11) It then follows that: Notice that we can transform the above functional determinant under the time reparameterization symmetry 14) The first factor on the right-hand side of (2.14) is independent of U and r and can be absorbed into the overall normalization of the path integral. The above symmetry can therefore be used to set r to a constant in performing a small velocity expansion of the functional determinant. 3 It follows from this that no time derivatives will be generated for r. We expand (2.12) in powers of υ a σ a = i U †U by expanding the logarithm. The zeroth order term is the effective potential governing r. Going to Fourier space, the computation becomes: where we have regulated the ω-integral by differentiating once with respect to r and re-integrating it back while setting the constant of integration to zero. Note that the effective potential is minimized at r = 2M for which V The first order term in the velocity expansion is given by: whereυ a (l) is the Fourier transform of υ a at frequency l. The linear velocity piece kin is the phase picked up by a unit charge moving on the surface of a two-sphere, in the presence of a magnetic monopole of strength M/2 at the origin. Similarly, the quadratic kinetic term is found to be: where in the right-hand side we have expressed the answer in terms of x, but now written in spherical coordinates. The higher order terms can be similarly computed and they contain even powers of time derivatives of the angular variables divided by one less power of r. 4 Denoting the characteristic frequency for some particular motion of θ and φ by ω c , the condition that there is a small derivative expansion is: For r near the minimum of the effective potential, we have ω c M . Hence, for large M there is a parametrically large range of frequencies allowing for a small velocity expansion. 4 In appendix B we consider a modified vector model where the leading kinetic piece is (2.17). Finite temperature As was previously noted, the whereas at small β we have simply the dimension of the Hilbert space: The transition between these two behaviors occurs at β ∼ 1/M . We now consider the finite temperature partition function as a Euclidean path integral over x. We must integrate out the Grassmann numbers with anti-periodic boundary conditions along the thermal circle. 
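As an aside, the regularization step just described (differentiating the ω-integral once with respect to r and re-integrating) and the location of the minimum of the effective potential can be checked numerically. The r²/4 term below reflects an assumed Hubbard-Stratonovich normalization, chosen so that the zero-temperature minimum sits at r = 2M as stated above, and the finite-temperature form anticipates the analysis that follows; both are illustrative reconstructions rather than the text's exact expressions.

```python
# Hedged numerical illustration of the vector-model effective potential (g = 1).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# (i) Regulated omega-integral: d/dr of the fermionic trace-log gives
#     \int d\omega/(2\pi) 2r/(\omega^2 + r^2) = 1, so re-integrating in r
#     yields a contribution to the potential that is linear in r.
r0 = 3.0
val, _ = quad(lambda w: 2.0 * r0 / (w**2 + r0**2) / (2.0 * np.pi), -np.inf, np.inf)
print(np.isclose(val, 1.0))

# (ii) Assumed potential: V(r; beta) = r^2/4 - (2M/beta) log(2 cosh(beta r / 2)),
#      which reduces to r^2/4 - M r (minimum at r = 2M) as beta -> infinity.
M = 50.0

def log2cosh(x: float) -> float:
    ax = abs(x)                               # numerically stable log(2 cosh x)
    return ax + np.log1p(np.exp(-2.0 * ax))

def V_eff(r: float, beta: float) -> float:
    return r**2 / 4.0 - (2.0 * M / beta) * log2cosh(beta * r / 2.0)

for beta in (10.0, 2.0 / M, 0.5 / M):
    res = minimize_scalar(V_eff, bounds=(0.0, 4.0 * M), args=(beta,), method="bounded")
    print(f"beta = {beta:8.4f}  ->  minimizing r = {res.x:7.2f}")
# r sits at 2M = 100 at low temperature, drifts downwards as the temperature
# rises, and collapses towards r = 0 once beta drops below roughly 1/M.
```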
In analogy to previous calculations, we can compute the thermal effective potential. What changes is that the ω-integrals are replaced by sums over the thermal frequencies ω n = 2π(n + 1/2)/β with n ∈ Z. The thermal effective potential thus becomes: As before, the sum has been regulated by differentiating with respect to r. For large β, the minimum of V ef f is at r = 2M as for the zero temperature analysis. We can find the critical point for r in a large β expansion. To first order: From this we see the tendency of r to decrease upon increasing the temperature. At small β, we can Taylor expand: We see that for β 1/M the thermal potential is minimized at r = 0. In figure 2 we show a plot for the values of r minimizing V ef f (β) as we vary β. When r is near zero, we can no longer assume that the kinetic contributions are small and thus our analysis breaks down. This as an indication that the high temperature phase does not have a reliable small velocity description in terms of x. Instead, the correct description requires taking into account the full set of Grassmann degrees of freedom. Bloch coherent state path integral So far we have introduced the variable x as a convenient integration variable to capture correlations in the vacuum state and thermal properties. Here we would like to point out that in a fixed large angular momentum sector, there is some more significance to x. Following Bloch, we define a collection of coherent states built from the state |v , which has the lowest angular momentum in the z-direction and hence is also a minimal energy state. In other words |v = Aψ 2 A |0 . We can act on |v with the spin raising operatorĴ + =Ĵ x + iĴ y to generate states in the maximally spinning multiplet, These states are not orthogonal, but they constitute an over-complete basis of the Hilbert space of the maximally spinning multiplet, The purpose of these states is to describe, with minimal uncertainty, points on the S 2 of spin directions. Indeed, the angular momentum expectation value defines a point on S 2 -through the stereographic projection -with decreasing uncertainty in the One may ask about transition amplitude between two such states: z N |e −iTĤ |z 0 for some given HamiltonianĤ built out of theĴ a . The result is [23,24]: This is the Fubini-Study metric on CP 1 ∼ = S 2 , and we occasionally refer to it as the Bloch sphere. The symplectic form is given by the Kähler form and the large M limit plays the role of the small Planck constant limit. Time evolution of a function A(z,z) in the emergent classical phase space is governed by the Poisson bracket, i.e. (2) symmetry of the original Grassmann model acts on z as: Since the classical phase space has finite volume, we recover the fact that the underlying system has a finite number of ground states. The complex coordinate (z,z) can Matrix model The goal of this section is to analyze a matrix version of the vector model studied above. Given that the model is more complicated, we will not be able to attain as explicit a description, however we will uncover and generalize several of the features found in the vector model. Action and Hamiltonian Our degrees of freedom are now 2M N complex rectangular Grassmann matrices,ψ α iA and ψ α Ai , with A = 1, . . . , M and i = 1, . . . , N . As before, α is an SU (2) spinor index. The dimension of the Hilbert space now becomes 2 2N M . The Grassmann elements obey the anti-commutation relations {ψ α Ai ,ψ β jB } = δ αβ δ ij δ AB . 
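Before specializing to a particular matrix-model action, it is useful to record the standard Bloch (spin) coherent-state expressions underlying the vector-model discussion above; the corresponding displays are garbled in this copy, so the following should be read as a reconstruction assuming the usual conventions for a spin-M/2 multiplet.

```latex
|z\rangle = e^{z \hat J_+}\,|v\rangle , \qquad
\langle z' | z \rangle = \bigl(1 + \bar z' z\bigr)^{M} , \qquad
K(z,\bar z) = M \log\bigl(1 + |z|^{2}\bigr) ,
```
```latex
ds^{2} = \partial_{z}\partial_{\bar z} K \, dz\, d\bar z
       = \frac{M \, dz\, d\bar z}{\bigl(1 + |z|^{2}\bigr)^{2}} , \qquad
\omega = \frac{i M \, dz \wedge d\bar z}{\bigl(1 + |z|^{2}\bigr)^{2}} , \qquad
\frac{1}{2\pi}\int_{\mathbb{CP}^{1}} \omega = M .
```

In these conventions the SU(2) symmetry acts on z by Möbius maps, z → (αz + β)/(-\bar β z + \bar α) with |α|² + |β|² = 1, and the finite symplectic volume M is consistent with the finite number (M + 1) of ground states. The Berezin construction below generalizes these expressions, with z promoted to a complex N × N matrix Z_ij.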
We will focus on the following action: 5 S = dt iψ iA ∂ t ψ Ai + g (ψ iA σ a ψ Aj )(ψ jB σ a ψ Bi ) . We will analyze g > 0 and from now on choose units setting g = 1. Unlike the vector case previously studied, the combinatorial problem of finding the exact spectrum ofĤ seems to be rather difficult and we have not solved it. Instead, we will try to extract information about the low energy sector of the theory by going to an effective description in terms of bosonic matrices. Before doing so, we will establish some further properties about the operator algebra. U (2N ) operator algebra The analogues of the spin operatorsĴ a = Aψ A σ a ψ A /2 studied in the previous section are the U (M ) invariant N ×N spin matrix operators:Ŝ a ij = A (ψ iA σ a ψ Aj )/2. These operators transform as vectors in the three-dimensional real representation of SU (2), as well as in the adjoint of the SU (N ). Introducing an additional operator S 0 ij = A (ψ iA σ 0 ψ Aj )/2, with σ 0 the 2 × 2 identity matrix, we have the following closed operator algebra: The N diagonal components of theŜ a ij generate N copies of the usual su(2) algebra. The above operators can be arranged in a 2N × 2N Hermitian matrix σ µ αβ ⊗Ŝ µ ij (with µ = {0, x, y, z} summed over) and hence they generate a u(2N ) algebra. They Effective theory We introduce three N × N Hermitian bosonic matrices Σ a ij = (Σ x ij , Σ y ij , Σ z ij ). In analogy with the vector case, we introduce them as auxiliary variables which are given on-shell by Σ a ij = 2Ŝ a ij . Upon integrating out the ψ α Ai , the generating function of vacuum correlations of ψ andψ can be expressed as a Euclidean path integral over the Σ ij : where J a ij are sources for theŜ a ij . It is worth noting that, unlike the N = 1 case, thê S a ij no longer commute with the Hamiltonian and thus non-trivial time correlations amongst them may exist. We now proceed to study the validity and properties of the 'small velocity' expansion of det (−∂ τ + R) = exp [Tr log (−∂ τ + R)]. Since R is a 2N × 2N Hermitian matrix, we can diagonalize it as U † RU = λ with λ = diag [λ 1 , . . . , λ 2N ] , U ∈ U (2N ) and λ n ∈ R. Note that due to the tracelessness of R, not all λ n can have the same sign. Similar to the N = 1 case, in the diagonal R frame, we can write the functional determinant as: Tr log (−∂ τ + R) = Tr log −∂ τ − U †U + λ . (3.8) With the above expression we can again use the time reparameterization symmetry to see that the effective action will be independent ofλ n , analogous to how the vector model is independent ofṙ. Using the propagator: we can expand the logarithm in powers of the Hermitian matrix υ = iU †U . Each term in the expansion will be endowed with a U (2N ) symmetry taking U †U → The linear velocity contribution to the effective action is: Theυ(l) is the Fourier transform of υ at frequency l. To define the above ω-integral we have put a cutoff at large ω, performed the exact integration and then taken the large cutoff limit. The kinetic piece containing two time derivatives in U (τ ) is given by: with Λ mn = 1/|λ m − λ n | and the sum running only over the pairs (n, m) for which λ n and λ m have opposite signs. The reason why only pairs of λ m with opposite sign appear in the sum is that the integral appearing in (3.12): vanishes whenever λ n and λ m have the same sign. It is interesting to note that the effective kinetic piece of the theory, and hence what we mean by the dynamical content, depends on the particular distribution of eigenvalues λ n . 
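Two of the statements above are easy to check numerically: the frequency integral behind the quadratic kinetic term vanishes when the two eigenvalues carry the same sign and has magnitude 1/|λ_m - λ_n| when the signs differ, and R, being traceless, always has eigenvalues of both signs. The sketch below uses an assumed propagator normalization 1/(-iω + λ) and random Hermitian Σ^a; it also verifies the simple identity tr R² = 2 tr Σ · Σ used in the next subsection.

```python
# Hedged numerical checks for the matrix-model kinetic term and for R = sigma^a (x) Sigma^a.
import numpy as np
from scipy.integrate import quad

def pair_integral(lam_m: float, lam_n: float) -> float:
    # Real part of \int d\omega/(2\pi) [(-i w + lam_m)(-i w + lam_n)]^{-1};
    # the imaginary part is odd in w and integrates to zero.
    f = lambda w: (1.0 / ((-1j * w + lam_m) * (-1j * w + lam_n))).real / (2.0 * np.pi)
    val, _ = quad(f, -np.inf, np.inf)
    return val

print(abs(pair_integral(2.0, 3.0)) < 1e-6)                 # same sign -> vanishes
print(np.isclose(abs(pair_integral(2.0, -3.0)), 1.0 / 5))  # opposite sign -> 1/|2-(-3)|

# Spectrum and traces of R for three random Hermitian Sigma^a (N = 4).
rng = np.random.default_rng(1)
N = 4
def herm(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2
sigma = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
Sigma = [herm(N) for _ in range(3)]
R = sum(np.kron(s, S) for s, S in zip(sigma, Sigma))
lam = np.linalg.eigvalsh(R)
print(np.isclose(lam.sum(), 0.0))          # tr R = 0, so both signs occur
print(np.isclose(np.trace(R @ R).real,
                 2 * sum(np.trace(S @ S).real for S in Sigma)))   # tr R^2 = 2 tr Sigma.Sigma
```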
Having obtained expressions for the first few velocity dependent terms in the effective action, we can estimate when the low velocity expansion is valid. Denoting the characteristic frequency for some motion as ω c , then in order for S (1) kin to be large compared to S (2) kin one requires: ω c λ n N . (3.14) The factor of N stems from the fact that S (2) kin has an additional matrix index to be summed over that was not present in the vector model previously studied. In what follows we will see that the effective potential is minimized for λ m ∼ M . Thus, in the limit M N , we can have a large range of allowed ω c (in units where g = 1). If instead M does not scale with N and we take the large N limit, the window of allowed ω c shrinks to zero. Since the global symmetry group of the theory, for our choice of Hamiltonian, is Effective potential We would now like to focus on the effective potential V ef f for Σ. In order to compute this we can take Σ to be time independent. V ef f must respect the SU (N ) × SU (2) symmetries. For instance it can contain a piece which is the trace of a function of the SU (2) invariant matrix Σ · Σ. Moreover, when the Σ are diagonal (or when they all commute with each other), it must reproduce N copies of the potential (2.15) we found in the vector model. Finally, the piece of V ef f originating from the functional determinant must scale linearly in Σ. We can write a general expression by noting that: is the characteristic polynomial for matrix R with eigenvalues λ n . We must also take the product over all ω, a procedure which must be regulated. For each λ n , we can express the product over the ω as the exponential of an integral over the logarithm: To define the above integral, 6 we have subtracted the integral of log(ω 2 ). Putting things together: As expected, V ef f is invariant under both the SU (N ) and SU (2) global symmetries. It is instructive to write the 2N × 2N matrix R 2 explicitly: From the above expression, it immediately follows that trR 2 = 2 tr Σ · Σ. However, this does not imply that tr The indices (a, b) run over all distinct pairs of (x, y, z), thus rendering the expression SO(3) invariant. Since the Hermitian matrix Σ · Σ has positive eigenvalues, and the commutator i[Σ a , Σ b ] is Hermitean, we see that non-zero commutations cost potential energy. Thus, at least locally the potential (3.17) is minimized when the Σ mutually commute (which means, in turn, that we can mutually diagonalize the Σ). In this approximation, we can estimate the minimum value of V ef f as the first term in the expansion (3.19). The problem we want to solve becomes a saddle point approximation of the following matrix integral for M N : In order to obtain the saddle point equation for the eigenvalues, we first introduce a delta function δ(ρ − Σ · Σ) and integrate out the Σ, such that we remain with an integral over the N × N Hermitian ρ matrix. Upon diagonalizing ρ, and including the 6 One may be concerned about the discontinuity of the first derivative at λn = 0. However, the expression agrees with what we expect of the determinant ω (1 + λ 2 n /ω 2 ). Namely, it should equal one when λn = 0, it should be symmetric under λn → −λn and have an exponent linear in λn. Moreover, one can check that at any non-zero temperature T for which ω → 2πT (n + 1/2) with n ∈ Z, the kink at λn = 0 smoothens out. Vandermonde contribution, we can obtain the potential for its eigenvalues ρ i ≥ 0. It is convenient at this point to rescale ρ i = M 2ρ i . 
We find: To leading order in a large M expansion (taking M to be much larger than N ) we can considerρ i to be peaked aroundρ i ∼ 4. Expanding aboutρ i = 4 + δ i for small δ i , and keeping the leading term only, we have: There is a slightly more efficient way to see the above. Using the property tr R 2 = 2 tr Σ · Σ we can write the effective potential (3.17) completely in terms of the eigenvalues of R as: Again, at least in the limit M N where we can ignore the effects of the matrix measure, we find V (min) ef f ≈ −M 2 N as before. We now proceed to study the kinetic contribution linear in velocity. 7 We are considering here the situation where both M and N are large but M N . Linear velocity term We consider the linear velocity term for the matrix model. The simplest case occurs when the Σ ij matrix is diagonal, i.e. Σ ij = x i δ ij with i = 1, . . . , N . In this case, we simply find a sum of N terms (one for each x i ) each identical with the vector case. Each will have their own M + 1 lowest Landau levels. Generally, however, the Σ a will not be mutually diagonalizable. Inspired by the expression (2.28), we claim that the linear velocity term is given by: where Z ij is a complex N × N matrix. The stereographic map (2.26) relating z to a point on the Bloch sphere is generalized to: (3.28) In order to verify that Σ a = (Σ a ) † it is useful to take advantage of identities such Berezin coherent states As in the vector case, the matrix action (3.25) can stem from a curved phase space endowed with a Kähler structure. These compact Kähler manifolds were studied extensively by Berezin [26]. The Kähler metric is given by: where c is a normalization constant. The Kähler potential is given by: This potential transforms under the U (2N ) isometry (3.29) as More precisely, what Berezin shows [26] is that there exist a collection of coherent states, analogous to the Bloch coherent states, parameterized by a complex matrix Z ij . Explicitly: where the state |v is the state annihilated by all ψ 1 Ai andψ 2 iA operators. It can be expressed as |v = A,iψ 2 iA |0 , where |0 is the state that is annihilated by all the ψ α Ai operators. Consequently |v is annihilated byŜ − ij . The overlap between two Berezin coherent states is given by: states was computed in [27]. The result reads: We can study the behavior of dim H K in various limits. When N M 1 we find dim H K ∼ 2 2M N to leading order. Thus in this limit, the dimension of the effective Hilbert space closely approximates the full Hilbert space of the original Grassmann where α is fixed in the large N limit, we have: with: Similarly, in the α → ∞ limit, f (α) ∼ log α for which log dim H K ∼ N 2 log M . As Hamiltonian and path integral In the vector case, the HamiltonianĤ (2.3) we studied was constant along the Bloch two-sphere given that all the Bloch coherent states had the same total angular momentum. In this regard our matrix model differs from the vector case. Given our Hamiltonian operator (3.2), the Hamiltonian H[Z, Z † ] ≡ Z|Ĥ|Z † governing time evolution on the emergent classical phase space is found to be: to leading order in M . We have defined: We end with some speculative remarks on this question. Outlook We have discussed systems with a finite dimensional Hilbert space, whose constituents Holographically, large N matrix models might be associated with a gravitational theory. 
For the quantum mechanical model [7] dual to the ten-dimensional geometry near a collection of N D0-branes, one has nine N ×N Hermitian bosonic matrices X I ij and their Fermionic superpartners. The index I is an SO (9) index, corresponding to the rotational symmetry of the eight-sphere in the near horizon of a stack of N D0branes in type IIA string theory. The indices i and j run from 1 to N . The Hilbert space is infinite dimensional and there are states with indefinitely high energy. In these models, the emergent radial direction has been argued to be captured by the energy scale. At high energies, the quantum mechanics is weakly coupled. One manifestation of this, from the bulk viewpoint, is that the size (in the string frame) of the eight-sphere shrinks indefinitely at large radial distances, eventually leading to a stringy geometry. Consider now a system where the spectrum is capped, as occurs in the deep infrared of a CFT living on a spatial sphere (due to the curvature coupling of the fields). In such a situation we expect the emergent sphere to cap off. This is indeed what happens in global anti-de Sitter space where the sphere at fixed r and t smoothly caps off in the deep interior. 8 Consider now the geometry of the static patch of fourdimensional de Sitter space: Notice that the size of the two-sphere resides on a finite interval. It smoothly caps off at r = 0 and is largest at r = 1 where the cosmological horizon resides. If, somehow, r was an emergent holographic direction related to the energy scale [28], then it would seem we have to cap the spectrum both in the infrared as well as the ultraviolet. This would indicate a holographic quantum mechanical dual with a finite number of states [15,16,17,18,19,20], so long as the spectrum is discrete. If moreover we require the holographic model to have a matrix-quantum mechanical sector described by ordinary bosonic matrices, perhaps the systems we have considered above are natural candidates. We postpone the examination of this proposal and the relation to other approaches of de Sitter holography (for an overview see [29]) to future work. A Counting U (M ) gauge invariant states In this appendix we present the derivation of the formula for the dimension of the Hilbert space of two complex Grassmann matrices χ i A and θ i A with indices ranging from i = 1, . . . , N and A = 1, . . . , M . Therefore we consider the action: Integrating out the gauge field gives us M 2 constraints: We define the vacuum state |0 of the theory to be annihilated by all χ and θ operators. Note that it obeys the gauge constraint and is thus gauge invariant. Moreover, acting with gauge invariant operators always increases the energy, hence |0 is unique. We wish to find the thermal partition function and extract the entropy S(T ) at infinite temperature. We can then use the fact that lim T →∞ S(T ) = log dim H to find the dimension of the Hilbert space with a U (M ) singlet constraint imposed. In the absence of the gauge field A t , we would have dim H = 2 2N M . A.1 Euclidean path integral We can compute the thermal partition function as a Euclidean path integral. Wick rotate time t → −iτ such that The Grassmann variables obey anti-periodic boundary conditions around the thermal circle. The Euclidean path integral of interest is: The gauge transformations acting on A τ are given by A τ → U A τ U † + i∂ τ U · U † . Due to the non-contractible thermal circle, we can only fix the gauge up to the holonomy around the thermal circle [30]. 
The Fadeev-Popov procedure in doing so gives us the following action for the (time independent upon gauge fixing) eigenvalues of A τ which we denote α A : We have dropped an overall constant which we must later recover by computing the zero temperature entropy, which should vanish because the ground state is unique. We have yet to calculate the contribution to the action of the fundamental matter fields. We first expand them in a Fourier expansion: χ(τ ) = n∈Z e i2π(n+1/2)τ /β χ n , θ(τ ) = n∈Z e i2π(n+1/2)τ /β θ n . (A.7) Thus we obtain the thermal eigenvalues: λ A n = 2π(n + 1/2)/β + im 1 + α A ,λ A n = 2π(n + 1/2)/β + im 2 − α A . (A.8) The determinant to be evaluated is given by n λ A nλ A n . It is UV divergent. We regulate the logarithm of the determinant by taking two derivatives with respect to m and integrating m twice while setting the integration constants to zero. The result is: n log λ A nλ A n = log cos Our remaining integral becomes (we are rescaling the eigenvalues by a factor of the temperature in obtaining the below formula): Our task has been reduced to solving a multi-variable integral for the N variables B Modified vector model In this appendix we briefly mention a slight modification of the vector model considered in the main body of the text. The degrees of freedom are given by two sets of M complex fermion spinors {ψ α A , θ α A }. We consider the following Euclidean action: (B.1) Following the procedure outlined in the main text, we end up with an effective action for a bosonic three-vector x: The reason for the cancellation is that this model has a Hamiltonian given by the difference in angular momentum. The ground state is given by the configuration where the two angular momenta, whose operators are given byĴ 1 =ψ A σψ A /2 and J 2 =θ A σθ A /2, are anti-aligned. In the language of the charged particle on the twosphere, it is as if we have added a positron on top of the electron, thus canceling the effect of the Lorentz force, leaving an ordinary kinetic term for the bound neutral particle. The configuration space is still parameterized by the angles on a two-sphere. The mass of the neutral particle is twice that of the original one, explaining the 1/4 as opposed to the 1/8 in (B.3). As before, at large M we have a controlled low velocity expansion. At high energies, the two angular momenta can fluctuate independently and this simple picture is lost. A similar modification can be made for the matrix model.
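Returning to the counting problem of Appendix A, the holonomy integral derived there (with χ coupling to +α_A and θ to -α_A) suggests that the dimension of the U(M)-invariant Hilbert space can be written as the Haar average ∫_{U(M)} dU det(1 + U)^N det(1 + U†)^N, which reduces to 2^{2NM} when the projection is dropped. This closed form is our reconstruction rather than a formula quoted from the appendix, so the sketch below should be read as a plausibility check only.

```python
# Monte Carlo estimate of the conjectured singlet count
#   dim H_inv = E_{U ~ Haar(U(M))} [ |det(1 + U)|^{2N} ]   (reconstruction).
import numpy as np
from scipy.stats import unitary_group

def singlet_dim_mc(M: int, N: int, samples: int = 50_000, seed: int = 0) -> float:
    if M == 1:
        # U(1): average over a phase; the exact answer is binomial(2N, N).
        theta = np.random.default_rng(seed).uniform(0.0, 2.0 * np.pi, samples)
        return float(np.mean(np.abs(1.0 + np.exp(1j * theta)) ** (2 * N)))
    Us = unitary_group.rvs(M, size=samples, random_state=seed)
    eye = np.eye(M)
    vals = [abs(np.linalg.det(eye + U)) ** (2 * N) for U in Us]
    return float(np.mean(vals))

print(singlet_dim_mc(M=1, N=2))   # should approach C(4, 2) = 6
print(singlet_dim_mc(M=2, N=2))   # estimate of the U(2)-singlet count
```

For M = 1 the estimate converges to the exact U(1)-projected count, which provides a cheap check of the normalization of the holonomy measure.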
8,926.4
2015-12-11T00:00:00.000
[ "Physics" ]